We have been asked to create a new multi-touch application for the opera Ah!, created by composer-performer David Rosenboom and poet-writer Martine Bellen, which will be performed at REDCAT September 15-17. The opera will also feature the work of our friends Meason Wiley, Jim Murphy, Dimitri Diakopoulos, and Dr. Ajay Kapur from CalArts' MTIID program. They will be building ten hemisphere speakers (see PLOrk or SLOrk), an MLGI laser controller, and flying robotic drums. The opera will also use a live video feed from the lobby as source material for four projections onto the floor of the performance space.
The Ah! opera is described as follows:
“The AH! opera no-opera is a language/theatrical/musical soundswordsoundsworld experience for an interactive audience of Creative Engagers. AH! opera no-opera integrates sound and word, such that music and libretto are not separate, not different from one another (not one and another at all), born from the origins of both music and language, illuminating their power to both differentiate and join human beings.”
After an initial meeting about the project, we were left with the impression that one of the key ideas behind the opera is the meaning of symbols and words: what is the relationship between signifier and signified? Drawing on the Buddhist Diamond Sutra, these ideas are explored through 13 different stories, interconnected through a story wheel by special shared phrases. With this in mind, we wanted to create an application that would facilitate exploring the libretto while allowing the user to de-contextualize and re-contextualize the text.
The application, called Ah Text!, will be divided into three parts (see video below). First, a "matrix-style" loop shows characters falling down in columns, which then split to reveal phrases from the opera before fading to a white background. Second, a touch on the table during the first part causes the characters to explode and fall to the bottom of the screen, revealing a story wheel menu. Lastly, once inside a particular story, users can drag up windows of text from the libretto and rearrange them on the surface. This last section will also control a physical model of the human voice (designed by Perry Cook), written in ChucK by Ajay Kapur and communicating with Processing via OSC.
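For readers curious about the ChucK-to-Processing link, OSC messages are simple binary packets: a NUL-padded address string, a NUL-padded type-tag string, then big-endian arguments, typically sent over UDP. The sketch below encodes such a packet in plain Python to illustrate the wire format; the address `/voice/pitch` and its float argument are hypothetical examples, not the actual address space used by Ah Text!.

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad an OSC string with NULs to a 4-byte boundary (at least one NUL)."""
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all 32-bit floats."""
    typetags = "," + "f" * len(args)          # e.g. ",f" for one float
    packet = osc_pad(address.encode()) + osc_pad(typetags.encode())
    for a in args:
        packet += struct.pack(">f", a)        # OSC floats are big-endian IEEE 754
    return packet

# Hypothetical control message; a real sender would pass these bytes to
# socket.sendto() aimed at the Processing sketch's OSC listening port.
packet = osc_message("/voice/pitch", 220.0)
```

In practice ChucK's OscSend and Processing's oscP5 library handle this encoding internally; the point here is only that the two environments interoperate because they agree on this byte layout.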