The Dance Jockey project
In the Dance Jockey project, we have used a full-body inertial motion capture system, the Xsens MVN suit, for musical interaction.
A Dance Jockey performance at Mostra UP in Porto, Portugal.
The name Dance Jockey is a wordplay on the well-known term Disc Jockey or DJ. This name is meant to reflect that instead of using discs to perform music, we use dance or full-body motion as the basis for the performance.
The motivation behind this project originated from our experience during typical “interactive computer music performances” where the performers' actions on a stage play a minor role compared to more conventional concert settings, like classical chamber music or rock concerts. Using computers for sound synthesis broadens the performers' possibilities for soundscapes, but it also dislocates the sound from the physicality of the instrument.
We want our performance to be both seen and heard, and for the audience to make connections between the two modalities. In other words, we want to establish strong perceptual connections between action and sound, such that the properties of the sound-producing action somehow match the properties of the resulting sound.
With motion capture (MoCap) systems we are able to measure, with some limitations, the physical properties of our bodies' actions. It should therefore be possible to use this data to create physical relationships between actions and sounds, and a full-body MoCap system lets us explore these relationships extensively. This has been our goal with the Dance Jockey project.
The Controller: Every sensor that can sense some aspect of the physical world can be used as a controller. For this project, we went for a rather big one: the Xsens MVN suit, an inertial sensor-based full-body MoCap system. We find this system to be well suited for exploring full-body musical interaction. See the publications listed below for more details.
Feature extraction: We used our own implementation based on the Xsens MVN software development kit, together with Max/MSP, to extract different features suitable as control signals. See the publications listed below for details.
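Our actual pipeline ran through the Xsens SDK and Max/MSP, but the idea of turning raw MoCap frames into a usable control signal can be sketched in a few lines. The snippet below is an illustrative Python sketch, not our implementation: it computes the speed of one tracked point (say, a hand) between frames and smooths it exponentially so it behaves well as a continuous controller. All names and the frame rate are assumptions.

```python
import math

def speed(prev, curr, dt):
    """Euclidean speed (m/s) of a tracked point between two frames."""
    return math.dist(prev, curr) / dt

class SmoothedSpeed:
    """Exponentially smoothed speed estimate, usable as a control signal.

    Raw frame-to-frame speed is noisy; a one-pole smoother with
    coefficient `alpha` trades responsiveness for stability.
    """
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.value = 0.0
        self.prev = None

    def update(self, position, dt):
        """Feed one (x, y, z) position sample; returns the smoothed speed."""
        if self.prev is not None:
            raw = speed(self.prev, position, dt)
            self.value += self.alpha * (raw - self.value)
        self.prev = position
        return self.value
```

A feature like this would then be scaled to the range a given synthesis parameter expects before being mapped to sound.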
Action-sound mappings: We feel that the typical performance in the genre of novel interfaces has a tendency to fall into a trap, where the wish for an impressive sound design surpasses the need for making an “audience-friendly” performance. In particular, it is easy to overlook that any performance is a kind of communication with the audience, who, for their part, may want to understand some couplings between sounds and actions on stage. In our opinion, this is an essential element of a performance and has been the motivation behind most of our action-sound mappings.
Sound engine: All the sounds for the performance were generated and manipulated in Ableton Live 8, which was controlled via MIDI and Open Sound Control (OSC).
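To give a feel for the OSC side of this control path, here is a minimal, stdlib-only Python sketch of encoding and sending a single-float OSC message over UDP. This is not our performance code: the address, host, and port are hypothetical, and in practice Ableton Live is typically reached through a bridge or remote script rather than raw OSC.

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    """Null-pad to a multiple of 4 bytes, as OSC string encoding requires."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, value: float) -> bytes:
    """Encode a minimal OSC message carrying one 32-bit float argument."""
    return (_pad(address.encode("ascii"))   # OSC address pattern
            + _pad(b",f")                   # type tag string: one float
            + struct.pack(">f", value))     # big-endian float32

def send_control(value, address="/dancejockey/volume",
                 host="127.0.0.1", port=9000):
    """Fire-and-forget UDP send of one OSC float (address/port hypothetical)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(osc_message(address, value), (host, port))
    finally:
        sock.close()
```

A mapped motion feature, once scaled, would be passed to something like `send_control` on every control-rate tick.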
We performed several concerts during the period 2010-2011. These are listed below in chronological order, with video where available. Note that all aspects of the performance were controlled solely by the performer through the Xsens suit.
Department of Musicology, Oslo, Norway, 25 August 2010
Dance Club venue, Gabler, Oslo, Norway, August 2010
VERDIKT, Oslo, Norway, 1 November 2010
Mostra UP, Porto, Portugal, 18-19 March 2011 (two concerts)
NIME 2011, Chateau Neuf, Oslo, Norway, 1 June 2011
Idefestivalen, Oslo, Norway, 17 September 2011
- S. A. Skogstad, K. Nymoen, Y. de Quay, and A. R. Jensenius. OSC Implementation and Evaluation of the Xsens MVN Suit. In Proc. of NIME, 2011.
- S. A. Skogstad, K. Nymoen, and M. Hovin. Comparing Inertial and Optical MoCap Technologies for Synthesis Control. In Proc. of SMC, 2011.
- Y. de Quay, S. A. Skogstad, and A. R. Jensenius. Dance Jockey: Performing Electronic Music by Dancing. Leonardo Music Journal, 2011.
- S. A. Skogstad, K. Nymoen, Y. de Quay, and A. R. Jensenius. Developing the Dance Jockey System for Musical Interaction with the Xsens MVN Suit. In Proc. of NIME, 2012.