RITMO Seminar Series: Machine learning as (Meta-) instrument (Rebecca Fiebrink, University of the Arts London)

Dr Rebecca Fiebrink, Reader at the University of the Arts London Creative Computing Institute, will give a seminar on "Machine learning as (Meta-) instrument".


Computer scientists typically think about machine learning as a set of powerful algorithms for modeling data in order to make decisions or predictions, or to better understand some phenomenon. In this talk, I’ll invite you to consider a different perspective, one in which machine learning algorithms function as live and interactive human-machine interfaces, akin to a musical instrument. These “instruments” can support a rich variety of activities, including creative, embodied, and exploratory interactions with computers and media. They can also enable a broader range of people—from software developers to children to music therapists—to create interactive digital systems. Drawing on a decade of research on these topics, I’ll discuss some of our most exciting findings about how machine learning can support human creative practices, for instance by enabling faster prototyping and exploration of new technologies (including by non-programmers), by supporting greater embodied engagement in design, and by changing the ways that creators are able to think about the design process and about themselves. I’ll discuss how these findings inform new ways of thinking about what machine learning is good for, how to make more useful and usable creative machine learning tools, how to teach creative practitioners about machine learning, and what the future of human-computer collaboration might look like.


Dr Rebecca Fiebrink makes new accessible and creative technologies. As a Reader at the Creative Computing Institute at University of the Arts London, her teaching and research focus largely on how machine learning and artificial intelligence can change human creative practices. Fiebrink is the developer of the Wekinator creative machine learning software, which is used around the world by musicians, artists, game designers, and educators. She is the creator of the world’s first online class about machine learning for music and art. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, and designing inclusive machine learning curricula and tools. Dr. Fiebrink previously taught at Goldsmiths, University of London and Princeton University, and she has worked with companies including Microsoft, Smule, and Imagine Research. She holds a PhD in Computer Science from Princeton University.


The lecture will be followed by a panel discussion with three RITMO researchers:

  • Kyrre Glette: Associate professor in informatics working on automatic co-design of robot bodies and behaviors using artificial intelligence methods.
  • Benedikte Wallace: PhD fellow in informatics working on creativity in artificial intelligence, with a particular focus on music and dance.
  • Alexander Refsum Jensenius: Professor of music technology, researching music-related body motion, and how such motion can be used in unconventional music making.


Q&A Session

There wasn't enough time to answer all the questions during the webinar. Rebecca Fiebrink kindly answered the remaining questions in writing.

Balandino Di Donato: Sometimes I struggle teaching my Creative Computing students the process of feature selection, how certain features impact the model ("positively"/"negatively"). Do you have any advice, are there any tools out there that can help CC/artists? I remember the Featurnator, is it still out there?

Rebecca Fiebrink: Yes, this is a tricky thing to teach, I agree. My postdoc Louis McCallum and I built Featurnator to help people explore this better than they can using Wekinator alone, though it’s definitely not an ideal tool yet! We’ve got a paper about this in the upcoming Programming for Moving Bodies workshop if you want to learn more about how we see this challenge, and what we learned from making Featurnator.

Pedro Pablo Lucas Bravo: You mention the possibility of using Wekinator together with Unity. Can it currently be used as a plugin in a multi-platform project, with an in-game interface to the tool for any kind of control? I am thinking of merging visuals and interaction with physics, etc.

Rebecca Fiebrink: Yes — it’s currently still in beta, but we’re making fast progress on it. The ML component just receives data from any game object(s), so you can use whatever control input you want.

Jackson Goode: In your experience showing new users the Wekinator, do they find it challenging to use, or that it doesn't provide enough fine-grained control? And as a follow-up, do you think systems that follow the path of the Wekinator (as a meta-instrument) should further simplify the interface or allow more diverse controls for experimentation?

Rebecca Fiebrink: For new users, this isn’t typically a problem. But for expert/power users, sometimes they definitely find that they have ideas they want to realise where Wekinator doesn’t offer fine-grained enough control. Often this is when they have a specific idea in mind that is actually easily expressed via programming (e.g., they want a linear relationship between some input and output). As for whether new systems should further simplify — I think that’s entirely dependent on the domain & target users.

Francesco Di Maggio: Are you planning to keep optimizing the Wekinator as a space of prototyping, perhaps adding new features and implementing user feedback as well, or move to a new reinvented system?

Rebecca Fiebrink: For the moment I am trying to keep integrating the most important feature requests and bug fixes into Wekinator, but I also want to keep it pretty stable without radical changes to support my teaching and to support current users. (It’s hard to do major changes without disruption as a single developer, especially trying to support cross-platform applications!) When there’s something very specific and new I want to support, I usually find it’s easier to make a new tool, but often use Wekinator for early prototyping of that tool (e.g., the first year of the Sound Control project, we used Wekinator behind the scenes in order to learn about what the Sound Control software should ultimately support and look like).

Bernt Isak Wærstad: Following up on Francesco’s question - have you thought about adding MIDI support to make it even easier for novice users to work with software like Ableton Live, without adding another piece of software in between?

Rebecca Fiebrink: Yes, but this is honestly a pretty low priority. There have been a few users who’ve made good Wekinator->Ableton bridges, and these seem to work well (I have one in my email inbox I need to upload to the website still!). People can also route MIDI through Processing or other programs. It’s pretty tough to decide what would be the “correct” MIDI format for Wekinator to output, so I think this is best decided by users.
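For readers curious what such a bridge involves: Wekinator sends its outputs as OSC messages (by default to port 12000, with the address /wek/outputs), so a Wekinator-to-MIDI bridge essentially decodes those float arguments and rescales them from [0, 1] to the 0–127 MIDI controller range. Below is a minimal, standard-library-only Python sketch of that decoding and scaling step; the function names are illustrative, not part of any Wekinator API, and a real bridge would also need a UDP listener and a MIDI output library.

```python
import struct

def pad4(n: int) -> int:
    """Round up to a multiple of 4 (OSC strings are 4-byte aligned)."""
    return (n + 3) & ~3

def parse_osc_floats(packet: bytes):
    """Parse a flat OSC message whose arguments are all float32.

    Returns (address, [floats]). Handles only the simple message form
    Wekinator emits, e.g. "/wek/outputs" followed by float arguments.
    """
    # Address: null-terminated ASCII string, padded to a 4-byte boundary.
    end = packet.index(b"\x00")
    address = packet[:end].decode("ascii")
    offset = pad4(end + 1)
    # Type tag string, e.g. ",ff" for two float arguments.
    tag_end = packet.index(b"\x00", offset)
    tags = packet[offset:tag_end].decode("ascii")
    offset = pad4(tag_end + 1)
    values = []
    for t in tags[1:]:  # skip the leading ","
        assert t == "f", "this sketch handles float arguments only"
        (v,) = struct.unpack(">f", packet[offset:offset + 4])
        values.append(v)
        offset += 4
    return address, values

def to_midi_cc(value: float) -> int:
    """Map a continuous output in [0, 1] to a MIDI CC value in [0, 127]."""
    return max(0, min(127, round(value * 127)))

# Example: a Wekinator-style message carrying two outputs, 0.5 and 1.0.
msg = (b"/wek/outputs\x00\x00\x00\x00"      # address, padded to 16 bytes
       + b",ff\x00"                          # type tags: two floats
       + struct.pack(">f", 0.5)
       + struct.pack(">f", 1.0))
addr, outs = parse_osc_floats(msg)
cc_values = [to_midi_cc(v) for v in outs]    # [64, 127]
```

As the answer above notes, the hard part is not the decoding but deciding which MIDI messages (which channels, which CC numbers) the outputs should map to — which is exactly why that choice is best left to individual users.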


Please contact Marit Johanne Furunes if you have any questions.

Published Sep. 9, 2020 2:10 PM - Last modified Sep. 27, 2022 3:23 PM