Category Archives: Media Choreography

Orbital Resonance


Orbital Resonance is an exploration into responsive environments and live performance. The performers improvise with sound and movement through breath, voice, and bodily sensors. In the process, they experiment with displacing different internal physiological states of their bodies outward into light and sound, creating an immersive, sensual environment for themselves and for the audience. The larger environment merges the interactions between various elements (audience, performers, light, sound, architecture, sensors) into a unified, existential orbit.

A performance and discussion took place as part of the Topological Media Lab’s Re-Mediation Series on April 23 and 24, 2014, from 5-7pm at the Hexagram Blackbox at Concordia University. The performance was supported by a Hexagram-CIAM Student Grant.

http://orbitalresonance.weebly.com

PROCESS

The project was a collaboration between artist and PhD Humanities candidate Margaret Westby and interactive developer, creative engineer, and MA Special Individualized Program candidate Nikolaos Chandolias, with participation from experimental musician and sound artist Doug Van Nort and performance artist, facilitator, and sociologist of digital technology Anne Goldenberg. Westby and Chandolias participated in every aspect of the process: initial research questions, conceptual and creative content, technological research and development, workshop and rehearsal directives, set-up of space and scenography, organization, promotion, documentation, and all other tasks. Van Nort and Goldenberg collaborated on specific rehearsal days to assist with workshops, conceptual and technological development, and performance creation and dissemination.

The process and methodologies undertaken to create this research-creation project were quite complex. We followed current threads in open source projects (software and movement creation) informed by the DIY (do-it-yourself) ethos, developing new methods of choreography for sonic performative environments and of technological design informed by and for the body. The materials in play were biological sensors, language, movement, gestures, sound, lights, cameras, computers, and various other technological apparatuses. We dove into movement exercises based upon the Skinner Releasing Technique, Viewpoints, contemporary dance, and more current cross-fertilizations in yoga and Open Source Forms (OSF). This combination of kinaesthetic methodologies informed our exploration of integrating both human and nonhuman materials. In addition, experimental music practices, including composer Pauline Oliveros’s Deep Listening and improvisational techniques in both sound and movement, informed our process and creative content. Furthermore, an attuned focus on maintaining a horizontal collaborative spirit was key at all times. This involved continuous discussions around language, teaching and patience across different practices, and a commitment to knowing and accepting limits, whether in the technology or within ourselves.

A performance was presented as a result of a two-month long residency at Concordia’s Blackbox, with the objective of creating a space where we could blur the lines of performer and audience. The immersive, responsive environment invited co-creations to occur between the performers, the space, the spectators and the technology in a non-linear, non-hierarchical and non-dictatorial way.

Alternate Reality: A Pervasive Play Project


The Project

Over the course of 2012-13, Sha Xin Wei (Director of the Topological Media Lab, Canada Research Chair in Media Arts and Sciences, and Associate Professor of Fine Arts and Computer Science at Concordia University, Montreal) will be collaborating with Patrick Jagoda (Assistant Professor of English and Co-editor of Critical Inquiry) on an alternate reality gaming project. The fellowship will begin with a seminar that Sha and Jagoda are co-teaching in fall 2012, in which graduate and undergraduate students from a host of disciplines will collaborate on game design. During the winter of 2013, Sha, Jagoda and a team of collaborators will conclude game design and post-production. In the early spring of ’13, the transmedia game is slated to take place, to be followed by an international practicum on Play as a Mode of Inquiry Nov 1 – 3, ’13.

The interactive production created by Jagoda and Sha belongs to the emerging artistic form of “Alternate Reality Games” or “transmedia games.” Unlike conventional digital games, these creative productions use the real world as their platform and tell a single story across numerous media and technologies. Such transmedia games are distinctive for their tightly networked collaborative communities, player-driven narratives, performance-oriented events, and interpenetration of real and virtual spaces. This project is intended to explore the relations between digital media and space, the affordances of collective storytelling, the generation of new media theory through design, and the development of methodologies for studying the emergent art form of Alternate Reality Games.

[vimeo]https://vimeo.com/72283300[/vimeo]

[vimeo]http://vimeo.com/64941871[/vimeo]

The Course

Course Description: This course, offered in Fall 2012, explores the emerging game genre of “transmedia,” “pervasive,” or “alternate reality” gaming. Transmedia games are not bound by any single medium or hardware system. Conventionally, they use the real world as their primary platform while incorporating text, video, audio, live performance, phone calls, email, websites, and locative technologies. The stories that organize most of these games are nonlinear and broken into discrete pieces that audiences must discover and actively reassemble. The participants who play these games must generally collaborate to solve puzzles. Throughout the quarter, we will approach new media theory through the history, aesthetics, and design of transmedia games. For all of their novelty, these games build on the narrative strategies of novels, the performative role-playing of theater, the branching narratives of electronic literature, the procedural qualities of videogames, and the team dynamics of sports contests. Moreover, their genealogical roots stretch back to a diverse series of gaming practices such as nineteenth-century English “letterboxing,” the Polish tradition of “podchody,” scavenger hunts, assassination games, and pervasive Live Action Role-Playing games. An understanding of these related forms will be critical to our analytical and creative work.

Course requirements include weekly blog entry responses to theoretical readings; an analytical midterm paper; avid engagement in discussion and design; and collaborative participation in a single narrative-based transmedia game project created by the class that will run on campus, in the city of Chicago, and/or online. No preexisting technical expertise is required. Since transmedia games draw on numerous skill sets, students will be able to contribute with a background in any of the following areas: creative writing, literary or media theory, web design, visual art, computer programming, music, and game design.

Project Inventory

a team-taught course (Fall 2012) entitled Transmedia Games: Theory and Design, for graduate and undergraduate students, run through the Department of English and cross-listed in Creative Writing, Cinema & Media Studies, Theater & Performance Studies, and the Department of Visual Arts;
co-presentation on fellowship project in tandem with student performance event (choreographed by Sha’s frequent collaborator Michael Montanaro) at the opening of the Logan Center for the Arts, October 12, 2012;
introductory and recruiting event on December 6, 2012;
residency visits in winter 2013 by Sha and colleagues from the Topological Media Lab to collaborate on game design and post-production with Jagoda and a team of students;
residency visits in spring 2013 by Sha and colleagues from the Topological Media Lab for the collaborative and transmedia game experience with university and non-university participants;
culminating event for The Project on April 25, 2013; and
an international practicum on Play as Mode of Inquiry, Nov 1 – 3, 2013.

EINSTEIN’S DREAM


Overview

Einstein’s Dream is an environment in which visitors encounter performers in responsive fields of video, light, and spatialized sound, in a set of tableaus. Each tableau is inspired by a vignette from Alan Lightman’s novel Einstein’s Dreams, set in Berne, Switzerland, in 1905, Albert Einstein’s annus mirabilis. Or rather, a set of parallel 1905s, each of which is a different kind of time. In one, time slows to a halt as you approach a particular place; in another there is no future; in a third, time sticks and slips; in a fourth, age reverses and what is rotten becomes fresh as time passes.

In one version of this project, a large theatrical space (24m x 20m x 8m) will contain multiple tableaux, each accommodating 6-12 people in a pool of light and sound modulating in concert with activity. Visitors and performers can move from tableau to tableau. The performers’ actions, together with the textures and rhythms of lighting, sound and visitors’ expectations, create different kinds of time poetically related to the novel’s vignettes. As a performer walks from place to place she may drag a pool of conditioning light and sound. The pool mutates or merges into another pool with a different type of time.


Context

One hundred years after the two epochal advents in physics of relativity theory and quantum mechanics, we are still reverberating with the consequences. Einstein’s Dream is not a biography or a didactic allegory, but a poetic exploration of our consciousness in time.
There are many didactic works about the theory of relativity and quantum mechanics, including Einstein’s own popular essays and canonical scientific / philosophical works by Hermann Weyl, Alfred N. Whitehead, and Henri Bergson (Duration and Simultaneity). In the arts, there have been some distinguished works that treat the theories and the theorists in an externalist way, as icons or as social phenomena. But what we propose is to work directly with the spectators’ felt experiences of time.
What Alan Lightman evoked in his novel was a poetic variation around the felt experience of time, not the “actual” physics of time, but alternatives of time, those hypothesized modes of living in time that could have been imagined, or that never were. By foregrounding how time works in these worlds, the vignettes foreground movement, which is the temporalization of the body. These movements are embedded in everyday life, made marvelous by poetic conceits of time. This fits perfectly with both Sha and Montanaro’s own artistic research into the charging of movement and gesture, of finding or evoking marvelous configurations of movement in the everyday.

Einstein’s Dream will create an experimental apparatus for inducing perceptibly different senses of temporal passage, chance and order, mortality and anisotropy — the arrow of time. Our goal is to use the techniques of theater and dance and responsive media to evoke sharply different kinds of temporal experience that the visitors will feel for themselves. Our experimental goal will be to discover ways to not just depict different kinds of temporal processes, but to condition a physical setting to yield in-person experiences of these different ways of being in time.

Einstein Dreams : Scenarios / Zones, Mechanics, Design approaches

(March 2013, updated for Synthesis Workshop February – March 2014, ASU)

[vimeo]https://vimeo.com/77089477[/vimeo]

SCENARIOS

Scatter / gather

Your shadow splits. The shadows run away from you. The shadows quiver with tension & intention.

A follow spot lights you up. Other spots lurk in the shadows, then come after you with persona.

(Use Julian’s rhythm abstractions to record corporeal/ analog movements, and playback. Use analog rhythms as cheap way to get huge variety of very subtle NON-regularity to avoid dead mechanical beats. Also can improvise. )
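Julian’s rhythm abstractions are Max patches; as a hypothetical Python sketch of the same idea (the function names and the jitter amount are invented for illustration), a recorded analog rhythm can be looped with slight per-pass variation so the repeats never collapse into a dead mechanical beat:

```python
import random

def record_taps(intervals):
    """Store inter-onset intervals (in seconds) captured from a live, analog gesture."""
    return list(intervals)

def playback(intervals, cycles=2, jitter=0.01):
    """Loop a recorded rhythm, adding slight variation on every pass
    so repeats stay subtly non-regular rather than mechanically exact."""
    out = []
    for _ in range(cycles):
        for dt in intervals:
            out.append(max(0.0, dt + random.uniform(-jitter, jitter)))
    return out

# A hand-tapped rhythm is uneven by nature; the playback preserves that unevenness.
tapped = record_taps([0.48, 0.52, 0.47, 1.01])
events = playback(tapped, cycles=3)
```

The same loop could be driven live, replaying intervals as they arrive, to improvise against the recorded pattern.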

Freeze

Deposit snapshots of yourself.

Use “flash” timing : charging increase in tension. Snap!

Alternatively: NO tension, just fill zone with flashes.

Use MF VP8 to take webcam images and project them kaleidoscopically throughout zone, with time warp, delay, reversals.

Sutures

The world is fissured, and sutured: as you walk you see/hear into discontiguous parts of the room.

Portals! Use MFortin’s VP8 + Jitter intermediation (e.g. timespace) to introduce time dilation effects.

Motion, oil slick, molasses

Every action is weighted down, slooooooowwwweeedd asymptotically but never quite stilling. Every action causes all the pieces of the world to slide as if on an air hockey table, but powered so they accelerate like crazy. Map the room to a TORUS so the imagery is always visible. Use zeno-divide-in-half or any tunable asymptotic (Max expr object) to decelerate or accelerate.
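The zeno-divide-in-half asymptotic can be sketched outside Max; a minimal Python version (the rate constant is illustrative, and in the Max patch this would live in an expr object):

```python
def zeno_step(x, target, rate=0.5):
    """Move a fixed fraction of the remaining distance toward the target:
    asymptotic deceleration that never quite stills (divide-in-half when rate=0.5)."""
    return x + rate * (target - x)

pos = 0.0
trajectory = [pos]
for _ in range(10):
    pos = zeno_step(pos, 1.0)
    trajectory.append(pos)
# pos creeps toward 1.0 without ever reaching it: after 10 steps, 1 - 0.5**10
```

Raising the rate toward 1 snaps motion to the target; lowering it stretches the approach indefinitely, which is the tunable asymptotic the note asks for.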

Blur wind

Fray actions, images, sounds into noise

(e.g. ye olde jit.streak + feedback example )

Vortex, dizzy

Spin the world — every linear movement or “line” of sound becomes drawn into a giant vortex, that sucks into the earth. OR reverse.

Stepped video-strobe in the concentric rings, tunable by OSC + Midi sliders like everything.

Brittle, crack

Need to step carefully. If not, hear and see pending catastrophe: cracking ice underfoot …

Or sometimes pure acoustic in darkness or whiteout strobe.

Use Navid’s adaptive scaler to shrink sensitivity down to smallest movement causing catastrophe in strobe + massive sound. Use subs to add pre-preparatory sound like Earth grinding her teeth before breaking loose.

Stasis ( hot or cold )

Sitting in the bowl of the desert (Sahara or Himalayas)

No-time sonic field. Noise-field, snow-blindness video (black or white majority), or Bill Viola ultra-slow-mo (better than 25:1 speed reduction, with no motion blur)?

Use heat lamps or fans to subtly add heat or cold?

Repetition

Visual — take video from a given location, but send to multiple locations (using VP8 + repeated stills) or map to OpenGL polygons…

Audio — use OMAX, with coarse grain

Infection / Dark Light

Use video, e.g. use particles — thickened as necessary — as sources of light. Cluster around movement or around bodies’ presence as a source of light.

MECHANICS

IMPORTANT OZONE ARCHITECTURE

(Julian with MF, working with Navid, Jerome, Omar’s instruments): Each of your instruments should expose key parameters to be tunable by OSC + Midi sliders, so someone OTHER than programmer can play with the instruments qualitatively. OSC gives access to handheld MF’s Max/iOS client so we can walk around IN the space under the effects and vary the instruments IN SITU, LIVE.
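The tunable-parameter requirement can be sketched as a registry that maps OSC-style addresses onto instrument parameters. This is a hypothetical Python outline only — the transport layer is omitted and the instrument and parameter names are invented; the real instruments are Max patches reached over OSC and MIDI:

```python
class Instrument:
    """An instrument that exposes key parameters under OSC-style addresses,
    so someone other than the programmer can tune it qualitatively, live."""

    def __init__(self, name, **params):
        self.name = name
        self._params = dict(params)

    def handle(self, address, value):
        # Address pattern assumed here: /<instrument>/<parameter>
        parts = address.strip("/").split("/")
        if len(parts) == 2 and parts[0] == self.name and parts[1] in self._params:
            self._params[parts[1]] = value

    def get(self, name):
        return self._params[name]

vortex = Instrument("vortex", spin_rate=0.2, ring_count=5)
vortex.handle("/vortex/spin_rate", 0.8)  # tuned in situ from a handheld client
```

Because the address space is flat and discoverable, a non-programmer walking the space with a handheld client only needs to know the address, not the patch internals.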

ALSO: Crucial that any TML experimentalist can walk in with her/his laptop, tap into the video feed, emit her own video into the projectors, and control where her video shows up, and with what alpha blend. S/he must be able to do this without Jerome babysitting on call 24 x 7.

Ditto sound & lighting & OSC feed — Julian’s got good design for this. Navid for sensor channels. I hope the sensor channels work transparently on top of Julian’s OSC resource discovery code.

[vimeo]http://www.vimeo.com/77514936[/vimeo]

[vimeo]http://www.vimeo.com/77514935[/vimeo]

[vimeo]http://www.vimeo.com/77514861[/vimeo]
[vimeo]http://www.vimeo.com/77514329[/vimeo]
[vimeo]http://www.vimeo.com/77514328[/vimeo]
[vimeo]http://www.vimeo.com/76984794[/vimeo]
[vimeo]http://www.vimeo.com/74327817[/vimeo]
[vimeo]http://www.vimeo.com/74327393[/vimeo]

Einstein’s Dreams: An Ecological Approach to Media Choreography

There are some basic experimental presumptions that I’d like to try out in the Einstein’s Dreams work. There aren’t many, but they go to the heart of research in how we compose the behavior of rich responsive environments. The TML starts from where most of the world of interactive environments stops.

One of those places is how events evolve. The obvious ways include: timeline (graph, cues), random (stochastic), decision tree (if-then).
But is this all there is? Not in life nor in art.
This is where things stood up to the 1990s, and, if you ask most programmers and conventional time-based media artists, even now.

I started designing responsive environments with a profoundly different approach to media choreography. And that’s been a core part of the TML’s radically different way of making rich responsive environments that are more like ecologies. This approach learns from continuous state evolution characteristic of tangible, physical, ecological systems.

Practically, how do we do this? That’s an open question. The Topological Media Lab is for exploring open questions, rather than producing artwork reproducing convention. And it is not the case that sprinkling on some “AI” will save the day. Given that learning methods such as HMM, PCA, and ICA are all retrospective (Bergson’s critique of mechanism), and given that scores, scripts, clocks, and timelines cage action, we set them aside in favor of techniques that give us the maximum nuance and potential for expressive invention over conditioned space. The most powerful alternative, which we have only begun to exploit, is the dynamical systems approach. Rather than rehashing it, let me attach some references, like the “Theater Without Organs” essay (for artists, writers) and the more precise Ozone ACM Multimedia paper.

Most fundamentally, the ED project is to really push on these fronts:

• Move from time-line, random, or decision-tree logics that are typical of engineered environments to dynamical systems modeled on ecologies. (See pages 16-19 of “Theater Without Organs.”)

• Acting & Articulating vs. Depicting or Allegory

The TML is about making environments that palpably condition experience in definite ways, not displaying representations (pictures) or models of experience. The radical experimental challenge of ED is about inducing rich experiences of dynamics, change, and rhythm, not making an image (representation) of some model of time. The latter is merely allegory. Easy. The former is alchemically transmuted experience. Hard. Given enough skill, making representations is easy. We built the TML, got the ED seed grant, and coordinated the temporality seminars these past 3 years to do something hard: inducing a different mode of temporality — a sense of temporal change.

• Rhythm ≠ Isochrony (regularly periodic)

There are no mathematically regular periods “in nature” — that’s an artifact of delusions imposed by our models of mechanical time — frozen in by computers.

Adrian Freed has a rich way of thinking about this, and a rich way of making things that reflect this.

No matter what “curves” you draw, if the pattern is repeated, then you have imposed an isochronous pattern. So we’ve cheated life by pushing / pumping with an artificial “beat.” Instead, ED includes how pseudo-regularities emerge from the dynamical system.
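One way pseudo-regularity can emerge from a dynamical system rather than being imposed as a beat: an accumulator driven by a slowly varying input fires at intervals that hover around a period but drift with the drive. A hypothetical Python sketch (the drive signal, threshold, and step count are invented for illustration):

```python
import math

def relaxation_pulses(drive, threshold=1.0, steps=200):
    """An accumulator fires when it crosses the threshold and resets; the
    inter-pulse intervals track the (non-constant) drive, so quasi-periodicity
    emerges from the dynamics instead of being imposed as an isochronous beat."""
    level, pulses = 0.0, []
    for t in range(steps):
        level += drive(t)
        if level >= threshold:
            pulses.append(t)
            level = 0.0
    return pulses

# A slowly breathing drive signal (invented for illustration).
pulses = relaxation_pulses(lambda t: 0.1 + 0.05 * math.sin(t / 7.0))
intervals = [b - a for a, b in zip(pulses, pulses[1:])]
# intervals hover around ~10 steps but drift with the drive: pseudo-regular
```

No period is written anywhere in the code; the near-periodicity is an effect of the accumulator's dynamics under its drive.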

• Give up (geometric) time as an independent parameter

Also, over the past few years, as people like Adrian and David Morris’s students came on board, some people have taken up a challenge I posed: to radicalize our notion and use of “time.”

Instead of using time as an independent parameter, in fact, instead of using any parameter as a “clock” driving the event, use our sensors — cameras and mics — to pick up what is happening, and from the contingent action derive the changes of the responsive environment.

100 years ago Bergson insightfully criticized what he called the cinematic conceit of time / temporal experience. (This is part of the point of the Ontogenesis group this past year with Magda, Will, Felix, Liza, Harry, Adrian, and myself.) We don’t need to fall back into those naiveties.

Even more fundamentally, let’s be mindful of Maturana and Varela’s profound observation that “time” is itself just a linguistic description rather than some thing in the stuff of our bodies and the stuff of the world:

Time as a Dimension

Any mode of behavioral distinction between otherwise equivalent interactions, in a domain that has to do with the states of the organism and not with the ambience features which define the interaction, gives rise to a referential dimension as a mode of conduct. This is the case with time. It is sufficient that as a result of an interaction (defined by an ambience configuration) the nervous system should be modified with respect to the specific referential state (emotion of assuredness, for example) which the recurrence of the interaction (regardless of its nature) may generate for otherwise equivalent interactions to cause conducts which distinguish them in a dimension associated with their sequence, and, thus, give rise to a mode of behavior which constitutes the definition and characterization of this dimension. Therefore, sequence as a dimension is defined in the domain of interactions of the organism, not in the operation of the nervous system as a closed neuronal network. Similarly, the behavioral distinction by the observer of sequential states in his recurrent states of nervous activity, as he recursively interacts with them, constitutes the generation of time as a dimension of the descriptive domain. Accordingly, time is a dimension in the domain of descriptions, not a feature of the ambience. (H. Maturana & F. Varela, p 133. Autopoiesis and Cognition. See also Henri Bergson’s example of the arcing arm, Creative Evolution, chap 1.)

Why not tug the sun as a controller rather than passively watch it sail out of reach!

As I said before, I think time is an effect not an “independent parameter.” This permits a more profound interpretation of Lightman’s novel beyond its “time is…” syntax.

• Functional relation ≠> Determinism

A curve f(t), eg f(t) = sin(t), can provide an utterly precise and reproducible result simply because it is a FUNCTION. For example, f(t) could govern the height and intensity of the “sun” in the Blackbox. However f(t) need NOT be fed parameters t1, t2, t3 … in a regularly incremented monotone sequence. There just needs to be a (reversible) function in order to have reproducibility of the event when the action is reproduced.

There is a profound performative difference in live experience between a fixed curve (a graph traced from left to right in order) and an f[t] = Sin[t] ready to be evaluated given any input.

In the example above, “t” is the INDEPENDENT parameter. y = f(t) is the DEPENDENT parameter. In a realistic system, there is no reason to presume that the world runs on only one independent parameter. (the “unidimensionality” fallacy)

• Functions

There is no contradiction with the graphs that J drew. In fact we have used this in many places in the Ozone code for 10 years, in the form of the Max function object (you draw the curve yourself). Jerome could in fact use Morgan and Navid’s function-clocks instead of re-inventing the wheel :). But we called those abstractions “fake state” because we knew that they simply imposed a uni-dimensional sequence on the entire event.

Indeed we can write in any number of FUNCTIONAL, even REVERSIBLE (invertible), relations at the parameter level, yielding an arbitrary number of dimensions of deterministic relation between parameters, e.g. optical flow and number of particles; color of input and wind (potential) force on fluid (MF: red => heat => flow up against gravity); scratchiness of sound and brittleness of floor. ALL of these can be functions of action, and even of each other. That way, the human can perform richly nuanced action, and even drive the event in a fully definite manner, because the parameters are deterministically coupled to action. But the relation is mapped from action in as many dimensions as the instruments can sense (either as raw or cooked sensor channels).
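Such reversible parameter-level couplings can be sketched as paired forward/inverse functions. A minimal Python illustration, where the scale factors and the red => heat => updraft chain are invented numbers following the MF example above:

```python
def make_coupling(scale, offset):
    """A reversible (invertible) linear relation between two parameters."""
    forward = lambda x: scale * x + offset
    inverse = lambda y: (y - offset) / scale
    return forward, inverse

# Invented chain following the MF example: red => heat => upward flow.
heat_from_red, red_from_heat = make_coupling(2.0, 0.0)
updraft_from_heat, _ = make_coupling(0.5, -0.1)

red = 0.8                          # redness of the camera input
heat = heat_from_red(red)          # red => heat
updraft = updraft_from_heat(heat)  # heat => flow up against gravity, ~0.7

# Reversibility: the sensed dimension can be recovered from the media parameter.
assert abs(red_from_heat(heat) - red) < 1e-12
```

Each coupling is one deterministic dimension; composing several of them couples many media parameters to as many sensed dimensions of action as the instruments provide.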

See Ozone documentation by Mani Thivierge on TML Private WIKI for precise description. Since we are short on time, I propose that composers read the Ozone document merely for the notation and the approach. In this workshop, I propose we try only this notation “on paper” as a way of thinking about composing an event. If the composers have time, they are welcome to write state engines, but that is not necessary this round.

• COMPLEXITY vs RICH, COHERENT ACTION

We do not control complexity by imposing a small number of independent parameters. In fact, as long as we can engineer functional relations, then the human and nonhuman agents can drive the event by ACTION. Actions can be compact and coherent — e.g. Everyone huddle together and stay huddled together in one place. OR everyone huddle together but move about in a compact group around the floor. Etc. Even if this maps to multiple parameters there should be no need for us actors / inhabitants to think in terms of parameters as we act.

SCRIPTED CONTROL vs LIVE ACTION

There’s a fundamental difference in attitude between code state as a trace of what’s going on, vs. code as a driver of action.

There are at least three modes of agency: script (machine), human, and medium.

(A) Clock drives event

For example, some software code animates a light simulating the sun rising over the course of a day. The shadow of a pole shortens and lengthens as a function of clock-time.

(B) Human drives event

For example: a human lifts a lantern. An overhead camera sees the shadow of the fixed pole shorten on the floor. Code uses the length of the shadow to move an image of the sun…

A & B may look quite similar. Downstream media code may even be identical, driven by OSC — that is, via a FUNCTIONAL, hence DETERMINISTIC, dependence. But the KEY difference is that A is driven by a clock, while B can be nuanced by LIVE action. The actor can “scrub” through the event by moving his or her arm up or down in any manner.
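The contrast between A and B can be made concrete with a toy mapping (the sun-height function and the sample values are illustrative): the same deterministic function is fed either a monotone clock or values derived from live action, arriving in any order.

```python
import math

def sun_height(t):
    """A deterministic, reproducible mapping: any t yields the same height."""
    return math.sin(t)

# (A) Clock drives the event: t advances in a regular monotone sequence.
clocked = [sun_height(0.1 * i) for i in range(5)]

# (B) Action drives the event: t is derived from live gesture (e.g. a shadow
# length seen by the overhead camera), arriving in any order, even repeating.
gestured = [sun_height(t) for t in (0.4, 0.1, 0.3, 0.1)]

# Reproducing the action reproduces the result, with no clock involved.
assert gestured[1] == gestured[3]
```

The downstream function is identical in both modes; only the provenance of its argument differs, which is exactly the A/B distinction above.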

• (C) MAKING A MEDIUM rather than a movie of a medium’s particular action.

Nothing precludes programming a zone or instrument as a living medium with its own dynamics — think of creating not a movie of a ripple spreading across the floor, but a whole sheet of “water” that ripples in response to any number of fingers or toes or stones doing any action in it.

• An embodied second order EVENT DESIGN METHOD

(inspired by Harry Smoak, Matthew Warne’s Thick/N 2004)

NOT as actual scenography, just as a design method: lay out several ZONES on the floor of the Blackbox, each with its own dynamics. Then we can try walking from zone to zone in many different sequences, to get a feel for what transitions might feel like. Imagine what players / inhabitants should be doing in order for the state of the event to change from zone A to zone B. THEN we can design a state topology of those as POTENTIAL transitions, that actualize only when the inhabitants and the system actually act accordingly (as picked up by the sensors).

(The Topological Media Lab’s Ozone media choreography architecture as coded in Max / C already does this.)
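A minimal sketch of such a state topology of potential transitions, in Python rather than the actual Ozone Max/C implementation (the zone names and the stillness condition are invented for illustration):

```python
class ZoneTopology:
    """Zones with POTENTIAL transitions that actualize only when the
    inhabitants' sensed action satisfies a transition's condition."""

    def __init__(self, start):
        self.zone = start
        self.transitions = {}  # (src, dst) -> predicate over a sensor reading

    def allow(self, src, dst, predicate):
        self.transitions[(src, dst)] = predicate

    def sense(self, reading):
        for (src, dst), pred in self.transitions.items():
            if src == self.zone and pred(reading):
                self.zone = dst
                break

event = ZoneTopology(start="brittle")
# Invented condition: sustained collective stillness carries us into "stasis".
event.allow("brittle", "stasis", lambda r: r["motion"] < 0.05)
event.sense({"motion": 0.4})   # too much movement: the transition stays potential
event.sense({"motion": 0.01})  # stillness sensed: the transition actualizes
```

The topology is laid out in advance, but nothing advances it except sensed action, which is the design method's point: transitions are potentials, not cues.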

REFERENCES

Einstein, Albert. Relativity: The Special and the General Theory.
Frankel, Theodore. Gravitational Curvature: An Introduction to Einstein’s Theory. San Francisco: W. H. Freeman, 1979.
Bergson, Henri. Creative Evolution. Basingstoke; New York: Palgrave Macmillan, 2007.
Bergson, Henri. Duration and Simultaneity. 2nd ed. Manchester: Clinamen Press, 1999.
Broglio, Ron. “Thinking about stuff: Posthumanist phenomenology and cognition.” In Special Issue on Poetic and Speculative Architecture in Public Space, AI & Society 26.2, 2011, pp. 187-192.
Merleau-Ponty, Maurice. Phenomenology of Perception. Tr. Donald A. Landes. Abingdon, Oxon; New York: Routledge, 2011. Print.
Lightman, Alan P. Einstein’s Dreams. New York: Vintage Contemporaries, 2004.
Whitehead, Alfred N. The Principle of Relativity. 1922.
Maturana, Humberto, and Francisco Varela. Autopoiesis and Cognition: The Realization of the Living. Reidel, 1980.
Sha Xin Wei, Michael Fortin, Tim Sutton, and Navid Navab. “Ozone: Continuous state-based media choreography system for live performance.” ACM Multimedia 2010.
Sha Xin Wei. Poiesis and Enchantment in Topological Matter. MIT Press, forthcoming 2013. (Preface and Chapter 1.)

People

Michael Montanaro: Creative direction, art direction and coordination
Sha Xin Wei: Phenomenology of time perception
Jerome Delapierre: Realtime video, visual design, videography and photography
Navid Navab: Realtime sound, sound design, sensor system design
Julian Stein: Realtime lighting, realtime sound processing, photography, videography
Nikolaos Chandolias: Speech and voice processing, natural language processing
Nina Bouchard: Videography and photography
Katerina Lagasse: Event coordination, publicity, videography and photography

Support

Fonds de Recherche Société et Culture (FQRSC) Quebec; Concordia University: Vice Provost Teaching and Learning, Vice President Research and Graduate Studies, Faculty of Fine Arts, Hexagram.

Documentation

Flickr photos | Vimeo videos

TGarden


TGarden is an investigation of how people make sense of and navigate in rich and dynamically evolving media spaces. Given the rise of ubiquitous computing and realtime media synthesis, we’re anticipating the need for coherent yet supple ways for designers to create such complex interactive media spaces and for people to inhabit them.

In a TGarden space, visitors wearing instrumented clothing create and modulate video and sound through their gestures and movements. In effect, visitors write video and sound with their movement.


For 2001-2002, we concentrated on using wireless sensors on the body to track gesture. We built a state evolution system that responds continuously to sensor statistics and synthesizes and marshals media in realtime.

In TGarden spaces, we use a combination of costumes outfitted with sensors, video tracking, realtime sound and video processing, and gestural pattern tracking.

Research concerns include the design of continuously varying narrative spaces, how people improvise meaningful gesture, and factors of tangibility and coherence such as latency and temporal (musical) texture and rhythm. Our goal is to come up with principles of design that should be useful for creating and inhabiting responsive media spaces. This research thread parallels a series of international productions in Europe and the United States.

SIGGRAPH 2000 – New Orleans

http://sponge.org

http://www.f0.am/tgarden

TGarden[TM1]

TGarden is a responsive environment, inspired by calligraphy and scrying. In TGarden, players’ gestures are transformed into generative computer graphics and digital soundscapes, leaving marks and traces in much the same way as a calligrapher would with brushes and ink. When visitors approach the TGarden, they choose from a range of costumes designed to encourage particular kinds of movement: light and voluminous for space-filling, fast movements; tight and restrictive for small, fine gestures; heavy and transparent for slow, meditative actions. In intimate dressing chambers, in addition to the costumes, the players are equipped with accelerometers (sensors able to detect changes in the speed and tilt of movement), an optical device for tracking the players’ position and direction in the space, and a small wearable transmitter that communicates with the software systems “back-stage.”

Once players enter the space, they are left alone to explore the connections between their bodies and the environment. A swiping motion could send an organic-looking, digital shadow smearing across the floor; walking across the room could sound like swimming with a swarm of invisible, but musical creatures. The sonic and visual media are layered in textures and meanings, allowing for various styles and interpretations. Even though simple interactions are easily learned, it takes time to get acquainted with the environment’s own nature. As an apprentice calligrapher must learn to find a balance between the flow of ink, the pressure of the brush and the speed of his gesture, a player in TGarden slowly learns to write, scratch and dig through the media space, to be able to play it as an instrument…

Together with Sponge, we designed and developed several installations over a two-year period between 2000 and 2001, testing them with audiences across Europe and North America.

Information taken from: http://fo.am/tgarden/

Sebald Puppet Theatre


Performed by Mark Sussman, Roberto Rossi, Sarah Chênevert-Beaudoin, Gabe Levine, & Ayesha Hameed
Original performances created by Mark Sussman, Roberto Rossi, Stephen Kaplin, & Jenny Romaine


Directed & designed by Mark Sussman & Roberto Rossi
text adapted from “After Nature,” by W.G. Sebald


A tabletop show, with live and pre-recorded video. A production of Great Small Works, NYC, with the support of the Topological Media Lab, Concordia University; thanks for advice and suggestions to Sha Xin Wei, Michael Montanaro, and Robert Reid.

www.greatsmallworks.org

[wpsgallery]

Ouija



In 2007, based on a series of conversations with Sha Xin Wei about movement, agency, entrainment, and responsivity, Michael Montanaro (Chair of Contemporary Dance) created a set of structured improvisation exercises for dancers working in a responsive media environment in the Hexagram Blackbox.

Assistant choreographer Soo-yeon Cho, seven dancers, realtime media creators from the Topological Media Lab, and collaborating researchers held a series of experiments in structured improvisation exploring the emergence of collective intention in a field of movement. The field of movement included unprepared, everyday “unconscious” movement, pre-conditioned but unrehearsed movement, as well as fully phrased movement. The experiments included dancers and non-dancers, sometimes identified as such, sometimes not. Themes included entrainment, camouflage, calligraphy, and the exchange of initiative and momentum between dancers and media.

[wpsgallery]

TECHNIQUE [SOFTWARE]

All these experimental events lived in a set of responsive substrate media supplied with calligraphic video and gestural sound software instruments, the Oxygen media choreography software system, WYSIWYG’s sounding tapestries, and some proto-jewelry. The realtime media instruments were implemented in Max/MSP/Jitter, with substantial extensions in C.
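To give a flavor of the kind of gestural feature analysis these instruments relied on, here is a minimal, hypothetical sketch: an exponentially smoothed "energy" envelope computed from raw accelerometer samples, which could then drive a sound or video parameter. The class and function names, the smoothing scheme, and the mapping are illustrative assumptions, not the lab's actual Max/MSP/Jitter implementation.

```python
# Hypothetical sketch of gestural feature extraction: smoothing 3-axis
# accelerometer samples into an "energy" envelope for media control.
# All names and constants here are illustrative, not the TML's code.

class GestureEnvelope:
    """Exponentially smoothed magnitude of a 3-axis accelerometer signal."""

    def __init__(self, smoothing=0.9):
        self.smoothing = smoothing  # 0.0 = no memory, closer to 1.0 = slower response
        self.energy = 0.0

    def update(self, ax, ay, az):
        """Feed one accelerometer sample; return the updated energy envelope."""
        magnitude = (ax * ax + ay * ay + az * az) ** 0.5
        self.energy = self.smoothing * self.energy + (1.0 - self.smoothing) * magnitude
        return self.energy


def energy_to_amplitude(energy, ceiling=4.0):
    """Clamp the envelope into a 0..1 control value for a sound instrument."""
    return min(energy / ceiling, 1.0)


if __name__ == "__main__":
    env = GestureEnvelope(smoothing=0.5)
    for sample in [(3, 0, 0), (0, 4, 0), (0, 0, 0)]:
        print(energy_to_amplitude(env.update(*sample)))
```

In a real responsive environment, the envelope's output would be sent as a continuous control stream (for example, over a wireless link to the "back-stage" systems) rather than printed; the smoothing constant trades responsiveness against jitter from sensor noise.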

PEOPLE

Soo-yeon Cho, Choreographer
Prof. Sha Xin Wei, Director

Dancers

Mike Croitoru
Kiani del Valle
Veronique Gaudreau
Rebecca Halls
Marie Laurier
Joannie Pharand
Olivia Foulke
Oxygen
Jean-Sebastien Rousseau, Calligraphic video, videography, visual effects, production
Tim Sutton, Gestural sound design and programming, production
Emmanuel Thivierge, State engine, camera tracking, production
Filip Radonjik, Live ink painting
WYSIWYG
Marguerite Bromley (XS Labs), Tapestry design and weaving
Elliot Sinyor (IDMIL McGill), Tapestry mechatronics
David Gauthier, Tapestry mechatronics
Freida Abtan, Sound design & programming
David Birnbaum (IDMIL McGill), Sound design & programming
Doug van Nort (IDMIL McGill), Gestural motion feature analysis
Josee-Anne Drolet, TML Project Coordinator, production, videography, editing
Harry Smoak, TML Research Coordinator, production support, research advisor
Ma Zhiming, Production

SUPPORT

Special thanks to Faculty Colleagues
Prof. Michael Montanaro, Contemporary Dance, Ouija movement experiment design
Prof. Marcelo Wanderley, IDMIL, McGill University, WYSIWYG gestural control of sound synthesis
Prof. Joey Berzowska, XS Labs, Interactive textiles

Thanks also to affiliates of the TML and the SenseLab for artistic and research support: Michael Fortin, Elena Frantova, Olfa Driss, Rene Sills, Raul Gomez, Paul Melançon, Antoine Blanchet, Younjeong Choi, and Shermine Sawalha.

Frankenstein’s Ghosts



Frankenstein’s Ghosts is a SSHRC-funded research-creation project (2007–2010): a deconstruction, analysis, and exploration of Mary Shelley’s Frankenstein that takes up substantive themes emerging from the novel, such as: What are the boundaries of the human? To what extent do we create ourselves? What is our responsibility toward what we create? What is our responsibility toward the “Other”? What ethical challenges do our present technological advances pose? What is monstrous? And what does it mean to be human? The project brought together the eminent Canadian chamber ensemble Blue Rider, director/choreographer Michael Montanaro, media researchers and artists from Dr. Sha Xin Wei’s Topological Media Lab, dancers, and scholars from religious studies and literary studies into a new sort of ensemble that experimented with new modes of performance practice. Over four years, the media artists, musicians, and dancers developed fresh modes of movement and performance that fused what had previously been largely independent practices.

We used 19th-century lighting techniques and tricks to create shadow images. Real-time video and sound portray shifting realities, memory, and other possible truths. Musically, structured improvisation gives shape to concepts. Movement expresses the need for relationship. Words draw us back and forth from the conscious to the subconscious.

[wpsgallery]

COLLABORATORS

Michael Montanaro, choreographer/director
Sha Xin Wei, topological media
Ann Scowcroft, writer
Jerome DelaPierre, real-time video
Navid Navab, Timothy Sutton, real-time sound
the Blue Rider Music Ensemble
Leal Stellick & Milan Gervais, Emmanuele Calve, Ashley, dancers

REFERENCES

http://www.frankensteinsghosts.com/