Excursion Into Mixed Reality - 2009
Every day we interact with the digital world of computers and electronic media. We may consider this realm to be distinct from our experience in the physical world. Excursion Into Mixed Reality explores the permeable boundary between these 'realities' by harnessing physical motion to create meaningful digital representations. The dancer becomes the musical performer of a meta-instrument that extends bodily gestures into the realms of sound and video. Through a form of play, the dancer navigates the constantly shifting and parameterized digital environments that comprise the work. By linking different forms of media, the piece reveals underlying gestural similarities between motion, imagery and sound.
At the heart of the system is a video camera attached to software that tracks the placement of red LEDs worn by the dancer. This tracking software, as well as all of the video output code, was written in the Java-based Processing environment. When the video camera ‘sees’ red locations in a given video frame, it records the X and Y positions of those points and uses that data to draw red circles at those locations. The X and Y positions are also sent to Max/MSP as input to a granular synthesis module. Both the Processing patch and the Max/MSP patch contain many parameters that change throughout the piece based on the section. Some of the video parameters include red threshold (how bright a pixel has to be to be recognized), dot size, dot fill (whether the circles are solid or empty), fade time, whether or not the dots are connected with blue lines, etc. Some of the audio parameters include which sound file is put through granular synthesis, granule pitch, granule length, pan, volume, delay, reverb time, etc.
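The per-frame tracking step can be sketched as follows. This is an illustrative Python sketch, not the piece's actual Processing code; the frame format, function name, and threshold value are assumptions.

```python
def track_red_points(frame, red_threshold=200):
    """Return (x, y) positions of pixels bright enough to count as an LED.

    `frame` is a 2D list of (r, g, b) tuples indexed as frame[y][x];
    `red_threshold` plays the role of the 'red threshold' parameter.
    """
    points = []
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            # A pixel counts as "red" when its red channel is strong
            # and clearly dominates the other channels.
            if r >= red_threshold and r > g and r > b:
                points.append((x, y))
    return points

# A tiny 3x3 test frame: one bright-red "LED" pixel at (1, 1).
frame = [
    [(0, 0, 0), (10, 10, 10), (0, 0, 0)],
    [(0, 0, 0), (255, 40, 30), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 0), (0, 0, 0)],
]
print(track_red_points(frame))  # [(1, 1)]
```

Each detected point would then be drawn as a circle at that location and forwarded to the synthesis side.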
Each section contains different parameter values for sound and video. These values are stored in Max/MSP in a ‘preset’ object. Whenever a section change is triggered, the parameter values continuously ramp to their new values over a specified time. The video parameters are sent from Max/MSP back into the Processing patch. All communication between Max/MSP and Processing uses the MaxLink library.
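The ramping behavior on a section change can be sketched as simple linear interpolation between stored preset values. This is a minimal Python sketch of the idea; the parameter names, values, and ramp time are illustrative assumptions, not the piece's actual Max/MSP preset data.

```python
def ramp(start, target, t, ramp_time):
    """Linearly interpolate from start to target over ramp_time seconds."""
    if t >= ramp_time:
        return target
    return start + (target - start) * (t / ramp_time)

# Each section stores its own parameter values, like a Max/MSP 'preset'.
presets = {
    1: {"dot_size": 4.0, "reverb_time": 0.5},
    2: {"dot_size": 20.0, "reverb_time": 3.0},
}

# Halfway (t = 1.0 s) through a 2-second ramp from section 1 to section 2:
current = {
    name: ramp(presets[1][name], presets[2][name], t=1.0, ramp_time=2.0)
    for name in presets[1]
}
print(current)  # {'dot_size': 12.0, 'reverb_time': 1.75}
```

In the piece itself, the interpolated video values would be streamed from Max/MSP back to Processing on every update rather than computed once.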
Every time the camera recognizes a red pixel there will be accompanying sound and video. While this connection is defined at a low level within each individual video frame, the relation between movement, video and sound remains strong when one looks at larger gestures made by the performer. If the dancer were to wave one LED from the upper left corner down to the bottom right, we would see video data precisely following the movement, creating a spatial representation of the temporal motion that eventually fades away. The accompanying sound, though made up of individual granules, would still sound like one coherent unit, because the granules would be close together both in time and in their positions within the source sound file.
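One plausible mapping from tracked points to grain parameters is sketched below. The specific ranges and mappings (x to read position in the sound file, y to pitch) are assumptions for illustration, not the actual Max/MSP patch; the point is that neighboring screen positions yield neighboring grain parameters, which is why a smooth gesture fuses into one coherent sound.

```python
def point_to_grain(x, y, frame_w=640, frame_h=480,
                   file_len_s=10.0, pitch_range=2.0):
    """Map a tracked point to (file_position_seconds, pitch_ratio).

    Assumed mapping: left edge of the frame reads from the start of the
    source file; the top of the frame plays grains at unison pitch.
    """
    position = (x / frame_w) * file_len_s
    pitch = 1.0 + (y / frame_h) * (pitch_range - 1.0)
    return position, pitch

# Successive points from one smooth gesture produce nearby grains:
gesture = [(100, 100), (110, 105), (120, 110)]
grains = [point_to_grain(x, y) for x, y in gesture]
for pos, pitch in grains:
    print(f"grain at {pos:.3f} s, pitch ratio {pitch:.3f}")
```

Because the read positions and pitches change only slightly from grain to grain, the resulting stream is heard as a single gesture rather than as isolated clicks.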
Finally, it should be noted that the camera recognizes the red of the LEDs, but there is also video data projecting red back into the video camera’s field of vision. Generally, the camera can distinguish between red LEDs and red projection because the LEDs are brighter. However, if the video parameter for ‘red threshold’ is brought lower, the computer will begin to see the red of the dimmer projections. Once this happens, we begin to get a feedback loop, because the camera sees its own projection and projects more red back into the corresponding location. Video data will continue to be produced on the screen even in the dancer’s absence. Thus, the motion of the dancer does not necessarily have a one-to-one correspondence with the video and sound produced.
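The threshold-dependent feedback can be captured in a toy simulation. The brightness values below are assumptions chosen to illustrate the two regimes, not measurements from the piece.

```python
# Assumed brightnesses: the LEDs outshine the projected circles.
LED_BRIGHTNESS = 255
PROJECTION_BRIGHTNESS = 150

def red_detected(led_present, projection_present, threshold):
    """Return True if any red source exceeds the threshold this frame.

    A detection causes red to be projected again in the next frame,
    so a True result sustains the feedback loop.
    """
    if led_present and LED_BRIGHTNESS >= threshold:
        return True
    if projection_present and PROJECTION_BRIGHTNESS >= threshold:
        return True
    return False

# High threshold: once the LED leaves the frame, the dimmer projection
# is ignored and the loop decays.
print(red_detected(led_present=False, projection_present=True,
                   threshold=200))  # False

# Lowered threshold: the projection alone now clears the bar, so the
# camera re-detects its own output and the loop sustains itself.
print(red_detected(led_present=False, projection_present=True,
                   threshold=120))  # True
```

In the high-threshold regime the system is a pure instrument driven by the dancer; in the low-threshold regime it becomes partially autonomous, which is exactly the break in one-to-one correspondence described above.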