Food & Paper: I'll be Bach! Modeling Expressive Performance with Machine Learning (Cancino-Chacón)

Postdoctoral researcher Carlos Eduardo Cancino-Chacón is a guest researcher at RITMO from the Austrian Research Institute for Artificial Intelligence. He will give a talk on "Modeling Expressive Performance with Machine Learning".



The way a piece of music is performed is a very important factor influencing our enjoyment of music. In many kinds of music, particularly Western classical music, a good performance is expected to be more than an exact acoustic rendering of the notes in the score. When playing a piece, performers shape various parameters (tempo, timing, dynamics, intonation, articulation, etc.) producing an expressive rendition that brings out dramatic, affective, and emotional qualities that may engage and affect the listeners. 

This talk focuses on a specific thread of research: work on computational music performance models. Computational models are attempts at codifying hypotheses about expressive performance in terms of mathematical formulas or computer programs, so that they can be evaluated in systematic and quantitative ways. Such models can serve at least two purposes: they permit us to systematically study certain hypotheses regarding performance; and they can be used as tools to generate automated or semi-automated performances, in artistic or educational contexts.

In this talk, I focus on data-driven approaches: the model is not constructed manually, based on musical knowledge or hypotheses, but is learned from large collections of real human performances via machine learning algorithms, in particular artificial neural networks. In this way, it is the empirical data that dictate what the model looks like, and an analysis of the learned models can provide interesting insights into the complex relation between score and performance. I will also present recent developments towards integrating this framework into an interactive, real-time accompaniment system.


Carlos Cancino-Chacón is a postdoctoral researcher at the Austrian Research Institute for Artificial Intelligence (OFAI), Vienna, Austria, working in the ERC-funded Con Espressione project, and a guest researcher at RITMO collaborating in the MIRAGE project. His research focuses on studying expressive music performance, music cognition, and music theory with machine learning methods. He pursued a doctoral degree in Computer Science at the Institute of Computational Perception of the Johannes Kepler University Linz, Austria. He received a Master's degree in Electrical Engineering and Audio Engineering from the Graz University of Technology, a degree in Physics from the National Autonomous University of Mexico, and a degree in Piano Performance from the National Conservatory of Music of Mexico.


Published Mar. 27, 2020 3:06 PM - Last modified May 4, 2020 8:26 AM