Teresa Pelinski

Anomaly detection as a means of sensing subtlety and nuance in musical gesture


When

Thematic Session 4: Creativity and Expressivity (Tuesday, 11:05)

Abstract

Novel audio synthesis techniques open a vast space of possibilities for new musical instruments; however, our ability to 'play' them lags behind acoustic instruments. For instance, real-time implementations of neural audio synthesis models typically take the form of plug-ins controlled by knobs and sliders. Another common approach is to map predefined features of the control signals, such as amplitude or velocity, to the model's input parameters. Still, such direct mappings may not reflect the complex cross-couplings between parameters that occur in acoustic instruments. Conversely, deep learning techniques offer an exciting opportunity to achieve complex mappings between gesture and sound. Some approaches to using AI for mapping exist, yet they typically rely on gestural classifiers that standardise the gesture, smoothing out its nuances and its diversity across performances and performers.

We build on the assumption that subtlety and nuance in musical gesture enable the emergence of embodied and tacit knowledge in the performer. Instead of smoothing out the gesture's nuance, we focus on amplifying it. We present a work in progress in which we apply anomaly detection AI techniques to dozens of signals obtained from sensors placed on an instrument. These anomaly signals indicate how the sensor signals deviate from the most likely signal (the model's prediction) as the performer interacts with the instrument. We expect the anomaly signals to reflect salient perceptual aspects of the interaction that predefined features might neglect. The first half of the talk will present the toolchain used to record the dataset of sensor signals, which allows recording signals from dozens of sensors connected to various Bela embedded computers and aligning these signals framewise. This toolchain might be relevant to other researchers interested in running AI models on embedded devices with sensor signals as inputs. The second half will introduce anomaly detection techniques for capturing subtlety and nuance in gesture.
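The abstract does not specify the model used for prediction, so the following is only an illustrative sketch: assuming a simple linear autoregressive predictor stands in for the learned model, an anomaly signal can be computed as the absolute residual between each sensor sample and the model's prediction from the preceding samples. A subtle deviation in an otherwise regular gesture then shows up as a peak in the anomaly signal.

```python
import numpy as np

def fit_ar_predictor(signal, order=4):
    """Fit a linear autoregressive predictor by least squares:
    each sample is predicted from the previous `order` samples."""
    # Lagged design matrix: row j holds signal[j : j + order].
    X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def anomaly_signal(signal, coeffs):
    """Return |actual - predicted| for each sample after the warm-up window."""
    order = len(coeffs)
    X = np.column_stack([signal[i:len(signal) - order + i] for i in range(order)])
    predicted = X @ coeffs
    return np.abs(signal[order:] - predicted)

# Demo with synthetic data (not from the talk's dataset): a smooth
# sinusoidal "gesture" with a brief, subtle deviation in the middle.
t = np.linspace(0, 4 * np.pi, 400)
sensor = np.sin(t)
sensor[200:205] += 0.5  # the deviation the model should not expect

coeffs = fit_ar_predictor(sensor)
anomaly = anomaly_signal(sensor, coeffs)
# The anomaly signal peaks around the deviation and stays near zero
# where the gesture follows its regular pattern.
```

In the work described here the predictor would be a learned model over many sensor channels rather than a per-channel linear fit, but the principle is the same: the anomaly signal is the part of the interaction the model cannot predict.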

Bio

Teresa Pelinski is a PhD researcher at the Augmented Instruments Lab, part of the Centre for Digital Music (C4DM) at Queen Mary University of London. She holds an MSc in Sound and Music Computing (Universitat Pompeu Fabra) and a BSc in Physics (Universidad Autónoma de Madrid). Her research focuses on gestural interaction with digital musical instruments and, in particular, on AI techniques that capture the subtlety and nuance of musical gesture.

Published Oct. 22, 2022 7:39 PM - Last modified Nov. 16, 2022 11:08 AM