Fluid Corpus Manipulation: Creative and Programmatic Approaches to Sound Banks

A two-day workshop on music making with collections of sounds using machine listening and learning

Owen Green and James Bradbury (University of Huddersfield, UK)

The Fluid Corpus Manipulation Project

The Fluid Corpus Manipulation (FluCoMa) project aims to enable and animate the development of the musical potential of signal analysis, machine listening and machine learning technologies. We do this by providing open-source implementations in a range of environments (Max, SuperCollider, Pure Data and the command line), supported by workshops and learning materials, in the hope of seeding a sustainable community for exchange and exploration.

The project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 725899).

A great deal of music making with computers involves processing, combining and playing with recorded sound in various forms, and as storage gets cheaper and archives grow larger, our collections become harder to manage and explore. Meanwhile, advances in signal processing and machine learning show promise for working fluently with these collections, but their musical potential is hard to explore because these technologies have not always been readily available in music software.

These musical questions are not, and cannot be, solely technical. As repeated controversies over the application of machine learning make clear, how such questions are formulated, and by and for whom, will be of lasting cultural consequence. As well as equipping participants with an understanding of what these technologies can do, we also aim to help develop a critical perspective on their possible role in creative practice.

This two-day workshop will give participants a hands-on introduction to these technologies and their possible usefulness, using the Fluid Corpus Manipulation toolkits, available for Max, SuperCollider, Pure Data and the command line. We will explore topics in signal decomposition and analysis, dataset exploration, and machine listening and learning, in relation to the musical tasks of curating and developing a corpus in the context of one's piece, instrument or system.

Aims

Participants will learn:

  • non-trivial aspects of decomposing, segmenting, and describing sounds (sketched in code after this list)
  • how to explore this new functionality as a creative device
  • basic machine listening and data analysis approaches for making sense of the newly segmented and described sounds
  • how (or whether) these tools might play a role in their future practice
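
To give a flavour of the first of these points, the sketch below slices a file at detected onsets and summarises each slice with averaged MFCCs. It is written in Python with librosa purely for illustration (the file path is a placeholder); in the workshop the equivalent steps use the FluCoMa objects inside Max, SuperCollider or Pure Data.

    import numpy as np
    import librosa

    # Load a sound file (the path is a placeholder)
    y, sr = librosa.load("corpus/example.wav", sr=None, mono=True)

    # Segment: detect onsets and use them as slice points
    onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
    bounds = np.concatenate([[0], onsets, [len(y)]])

    # Describe: summarise each slice with its mean MFCCs, a common timbral descriptor
    features = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        if end - start < 512:  # skip slices too short to analyse
            continue
        mfcc = librosa.feature.mfcc(y=y[start:end], sr=sr, n_mfcc=13)
        features.append(mfcc.mean(axis=1))

    features = np.array(features)  # one 13-dimensional point per slice
    print(features.shape)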

This workshop is hands-on: each participant will be experimenting with the toolsets throughout.

Requirements

Participants should bring a portable 64-bit computer (Mac, Windows or Linux) with Max, SuperCollider or Pure Data installed, plus headphones. Experience in one of these environments will be assumed.
 

Sign up here!

Schedule

Day 1

  • A short introduction to contextualise the workshop, lay out the schedule and (hopefully) generate some excitement by showing examples of what’s possible
  • Participants experiment by playing with something pre-built (a similarity-based corpus explorer that produces concatenative variations on sounds; see the sketch after this list)
  • Quick show and tell of people’s experiments
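
For a rough idea of what that pre-built explorer does, here is a self-contained Python sketch, with librosa and scikit-learn standing in for the workshop toolkits and placeholder file names, that replaces each slice of a target sound with the most similar slice from a corpus:

    import numpy as np
    import librosa
    import soundfile as sf
    from sklearn.neighbors import NearestNeighbors

    def slice_and_describe(path, sr=None):
        # Slice a file at onsets; describe each slice with its mean MFCCs
        y, sr = librosa.load(path, sr=sr, mono=True)
        onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
        bounds = np.concatenate([[0], onsets, [len(y)]])
        slices, feats = [], []
        for a, b in zip(bounds[:-1], bounds[1:]):
            if b - a < 512:  # skip slices too short to analyse
                continue
            slices.append(y[a:b])
            feats.append(librosa.feature.mfcc(y=y[a:b], sr=sr, n_mfcc=13).mean(axis=1))
        return slices, np.array(feats), sr

    corpus_slices, corpus_feats, sr = slice_and_describe("corpus.wav")
    target_slices, target_feats, _ = slice_and_describe("target.wav", sr=sr)

    # For each target slice, look up the most similar corpus slice...
    tree = NearestNeighbors(n_neighbors=1).fit(corpus_feats)
    _, idx = tree.kneighbors(target_feats)

    # ...and concatenate the matches, trimmed or zero-padded to keep the target's timing
    out = []
    for i, s in zip(idx[:, 0], target_slices):
        match = corpus_slices[i][: len(s)]
        out.append(np.pad(match, (0, len(s) - len(match))))
    sf.write("variation.wav", np.concatenate(out), sr)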

Overview: we suggest a model for working musically with corpora as an iterative process of selection, exploration and curation (sketched in code after the list below).

  • Representing sounds: time, frequency, descriptors and features
  • Exploring: dimension reduction; similarity
  • Curating: tidying and conditioning data
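
A minimal sketch of this loop, again in Python for illustration, with a random matrix standing in for the descriptors produced by a real analysis:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    # Stand-in for a real descriptor matrix: one 13-dimensional point per corpus slice
    features = np.random.default_rng(0).random((200, 13))

    # Tidying and conditioning: put every descriptor dimension on a comparable scale
    scaled = StandardScaler().fit_transform(features)

    # Exploring: project to two dimensions so the corpus can be browsed on a plane
    # (PCA here keeps the sketch light; other reduction methods such as UMAP also apply)
    embedding = PCA(n_components=2).fit_transform(scaled)

    # Curating: one crude tidying step, dropping outlying slices far from the centre
    dist = np.linalg.norm(scaled - scaled.mean(axis=0), axis=1)
    curated = features[dist < np.percentile(dist, 95)]
    print(embedding.shape, curated.shape)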

Day 2: Working with Neural Networks

  • From parameters to hyperparameters (see the sketch after this list)
  • Finding a workflow
  • Zooming out: Thinking of your piece/instrument/system
  • Key question: generality vs specificity; control vs delegation
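
As a taste of that shift from parameters to hyperparameters, here is a small sketch using scikit-learn's MLPRegressor as a stand-in for the toolkits' neural network objects, with made-up data mapping controller positions to synthesis parameters:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training data: 2D controller positions mapped to 4 synth parameters
    rng = np.random.default_rng(0)
    control_points = rng.random((50, 2))
    synth_params = rng.random((50, 4))

    # The choices here are hyperparameters (layer sizes, activation,
    # learning rate, training length) rather than the mapping itself
    net = MLPRegressor(hidden_layer_sizes=(16, 16),
                       activation="relu",
                       learning_rate_init=0.01,
                       max_iter=2000)
    net.fit(control_points, synth_params)

    # Delegation: the trained network fills in the space between our examples
    print(net.predict([[0.25, 0.75]]))

Whether to hand that in-between space over to a model, and how tightly to specify it, is exactly the generality-versus-specificity, control-versus-delegation question above.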