The Contingent Events performance is essentially fairly simple: I perform with homemade performance software controlled via a simple MIDI controller. A large red button (or possibly two) is placed in the audience, and audience members are invited to push it whenever they like. Pushing the button instantly and completely changes the nature of the performance system, selecting at random from a wide range of pre-made performance systems: the change might be from a synth-based melodic system to a sample-based granulator, a modular synth environment, an FM feedback system, a physical-model-based instrument, and so on. The MIDI controller is retained as my input device as a performer, but is instantly remapped to provide control over the new system. Any of these systems (there are around 20 at present) could be used individually to create a performance, with enough depth to sustain my interest as a performer (and hopefully the audience's interest too).
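To make the switch-and-remap mechanism concrete, here is a minimal sketch in Python: a button press selects a new system at random and reassigns the controller's CC numbers to that system's parameters. The system names, parameter lists, and CC numbers are hypothetical placeholders for illustration only, not the actual homemade software.

```python
import random

# Each performance system exposes a set of controllable parameters
# (hypothetical names; the real piece has around 20 systems).
SYSTEMS = {
    "melodic_synth":     ["pitch_spread", "filter_cutoff", "env_decay", "delay_mix"],
    "sample_granulator": ["grain_size", "grain_density", "playback_pos", "pitch_shift"],
    "fm_feedback":       ["carrier_ratio", "feedback_amount", "mod_index", "drive"],
    "physical_model":    ["excitation", "damping", "brightness", "coupling"],
}

# CC numbers the MIDI controller's knobs send (hypothetical values).
CONTROLLER_CCS = [20, 21, 22, 23]


class ContingentPerformance:
    def __init__(self):
        self.current_system = None
        self.cc_map = {}      # CC number -> parameter name in the current system
        self.switch_system()  # start on a randomly chosen system

    def switch_system(self):
        """Called when the big red button is pressed: pick a different system
        at random and remap the controller onto its parameters."""
        choices = [name for name in SYSTEMS if name != self.current_system]
        self.current_system = random.choice(choices)
        params = SYSTEMS[self.current_system]
        self.cc_map = dict(zip(CONTROLLER_CCS, params))
        print(f"Switched to {self.current_system}: {self.cc_map}")

    def handle_cc(self, cc_number, value):
        """Route an incoming controller message to whatever parameter the
        current mapping assigns it to."""
        param = self.cc_map.get(cc_number)
        if param is not None:
            print(f"{self.current_system}.{param} <- {value / 127.0:.2f}")


# Simulated performance: the audience presses the button mid-gesture.
perf = ContingentPerformance()
perf.handle_cc(20, 64)    # performer turns a knob
perf.switch_system()      # audience hits the red button
perf.handle_cc(20, 64)    # the same knob now controls a different parameter
```

The point of the sketch is simply that the performer's physical interface never changes; only what it means changes, instantly and without warning.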
The audience therefore have the power to completely alter the trajectory of the performance. This is currently untested, but I would be very interested to see how the audience deal with this responsibility: whether they try to work with me or against me, whether they allow me time to develop material with a given system or relentlessly press the button. They could treat it like channel surfing, treat me as a puppet, or attempt to find opportune musical moments to make the changes.
Crafting Sound continues for the second evening, with performances feeding sounds from Sheffield into self-made, archaic systems and machinery, audiovisual live coding with Sonic Pi, dance choreographed by live-coded JavaScript, and the physical clapping out of binary code to build a pure sine wave.