Category Archives: Movement

Tabletap


A performance choreographed around a chef and sonified objects: fruit, vegetables, meat, knives, pots and pans, cutting board and table.

[wpsgallery]

Cooking, the most ancient art of transmutation, has over a quarter of a million years become an unremarkable, domestic practice. But in this everyday practice, things perish, transform, and nourish other things. Enchanting the fibers, meats, wood, and metal with sound and painterly light, we stage a performance made from the moves (gestures) of cooking, scripted from the recipes of cuisine both high and humble. Panning features virtuosic chefs who are also movement artists, such as Tony Chong.

Within our responsive scenography system, every cooking process is transformed into an immersive multimedia environment and performance: a multi-sensory experience composed of scent, light, video, sound, movement, and objects. Every process is experienced across many senses at once. The sizzling sound of hot oil and the mouthwatering aroma of onion and garlic hit the audience within an audio-visual thunderstorm. At the very end, the audience is invited to taste a sample of the dish within the accumulated sonic environment.

The acoustic state evolves via transmutations of sound, light, and image in an amalgam not of abstract data but of substances: wood, fire, water, earth, smoke, food, and movement. Panning allows the performers to modulate these transmutations with their fingers, ears, and bodies – the transmutation of movement into sound, of chemical reaction into sound, and of sound into light and image.

Panning is the first in a series of performances exploring how everyday gestures and events can become charged with symbolic intensity.

[vimeo]http://vimeo.com/51474504[/vimeo]

[vimeo]http://vimeo.com/42057497[/vimeo]

Materials:

Self-contained responsive kitchen set embedded into our portable table, 8.1 speaker system, projectors, food

Sebald Puppet Theatre

Performed by Mark Sussman, Roberto Rossi, Sarah Chênevert-Beaudoin, Gabe Levine, & Ayesha Hameed
Original performances created by Mark Sussman, Roberto Rossi, Stephen Kaplin, & Jenny Romaine


Directed & designed by Mark Sussman & Roberto Rossi
Text adapted from “After Nature,” by W.G. Sebald


A tabletop show, with live and pre-recorded video. A production of Great Small Works, NYC, with the support of the Topological Media Lab, Concordia University; thanks for advice and suggestions to Sha Xin Wei, Michael Montanaro, and Robert Reid.

www.greatsmallworks.org

[wpsgallery]

O4

O4 – From 2008 to 2013, the research strand for shaping a responsive environment’s media response to inhabitant activity evolved into a greatly refined and much more powerful software system: the OZONE media choreography framework.

This system allows:

(a) the reading of arbitrary configurations of sensors (including cameras and microphones, but also any array of physical sensors that can be interfaced to a computer through serial inputs);

(b) feature extraction in realtime;

(c) continuous evolution of behavior and orchestration;

(d) mappings to networks of video synthesis computers, realtime sound synthesis computers, theatrical lighting systems, or any electronically or digitally controllable system. (We have controlled, for example, household fans and lamps, and networks of small commercial toys and LEDs.) In brief, the system uses pattern recognition on motion-capture data to animate and mix the motions of virtual puppets, model-free learning using methods from partial differential equations, and computational physics of lattices and dynamical systems.
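To make the data flow concrete, here is a minimal Python sketch of the four stages listed above: serial sensor input, realtime feature extraction, continuous behaviour evolution, and mapping onto controllable outputs. It is purely illustrative; the actual framework is built in Max/MSP/Jitter, and every name here is invented for the example.

```python
# Purely illustrative sketch of the pipeline described in (a)-(d);
# the real framework is implemented in Max/MSP/Jitter.
import serial  # pyserial, assumed available for serial sensor input

def read_sensor_frame(port: serial.Serial, n_channels: int) -> list[float]:
    """(a) Read one frame of comma-separated values from a serial sensor array."""
    line = port.readline().decode(errors="ignore").strip()
    return [float(v) for v in line.split(",") if v][:n_channels]

def extract_features(frame: list[float], prev: list[float]) -> dict:
    """(b) Realtime features: overall energy and frame-to-frame activity."""
    energy = sum(v * v for v in frame) / len(frame)
    activity = sum(abs(a - b) for a, b in zip(frame, prev)) / len(frame)
    return {"energy": energy, "activity": activity}

def evolve_state(state: float, features: dict, decay: float = 0.95) -> float:
    """(c) Continuous evolution of behaviour: a leaky integrator, no discrete cues."""
    return decay * state + (1.0 - decay) * features["activity"]

def map_to_outputs(state: float) -> dict:
    """(d) Map the continuous state onto controllable devices (e.g. a lamp dimmer)."""
    return {"lamp_dimmer": min(1.0, state * 4.0), "sound_gain": state}
```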

LIGHT

[vimeo]https://vimeo.com/88192813[/vimeo]

IPAD CONTROL VIEW


SOUND

VIDEO

TECHNIQUE [SOFTWARE]

The Ozone media choreography system factors into the following set of software abstractions: (1) sensor input conditioning, (2) simple statistics, (3) a continuous state engine governing the behavior of the media engines, (4) realtime video re-synthesis instruments, (5) realtime sound re-synthesis instruments, and (6) animation interfaces to other protocols, such as DMX, custom LED networks, and actuators. The implementation framework is Max/MSP/Jitter, with substantial extensions for custom computational physics, computer vision, and other methods.
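A hypothetical composition of these six abstractions into a single processing chain might look like the following; the stage numbers follow the list above, and every identifier is an assumption of this sketch rather than part of the Max/MSP/Jitter implementation.

```python
from dataclasses import dataclass, field

@dataclass
class OzoneChain:
    state: dict = field(default_factory=lambda: {"intensity": 0.0})

    def condition(self, raw: list[float]) -> list[float]:
        # (1) sensor input conditioning: clamp readings into [0, 1]
        return [min(max(v, 0.0), 1.0) for v in raw]

    def statistics(self, x: list[float]) -> dict:
        # (2) simple statistics over the conditioned frame
        mean = sum(x) / len(x)
        return {"mean": mean, "var": sum((v - mean) ** 2 for v in x) / len(x)}

    def step(self, stats: dict) -> dict:
        # (3) continuous state engine: smooth pursuit of the input statistics
        self.state["intensity"] += 0.1 * (stats["mean"] - self.state["intensity"])
        return self.state

    def render(self) -> dict:
        # (4)-(6) parameters handed to video/sound instruments and DMX/LED outputs
        i = self.state["intensity"]
        return {"video_feedback": i, "grain_density": i, "dmx_master": int(255 * i)}
```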

PEOPLE

Sha Xin Wei, system architecture, experiment design, media choreography
Navid Navab, realtime sound
Julian Stein, realtime lighting
Evan Montpellier, visual programming, state engine

Previous

Harry Smoak, media choreography, lighting, experiment design

Michael Fortin, computational fluid dynamics and video

Morgan Sutherland, state engine, sensor fusion, media choreography, project management

Tyr Umbach, realtime video, state engine

Tim Sutton, realtime sound

Jean-Sebastien Rousseau, realtime video

Delphine Nain, computational fluid dynamics and video
Yon Visell, Emmanuel Thivierge, state engine

Ouija

In 2007, based on a series of conversations with Sha Xin Wei about movement, agency, entrainment, and responsivity, Michael Montanaro (Chair of Contemporary Dance) created a set of structured improvisation exercises for dancers working in a responsive media environment in the Hexagram Blackbox.

Assistant choreographer Soo-yeon Cho, seven dancers, realtime media creators from the Topological Media Lab, and collaborating researchers held a series of experiments in structured improvisation exploring the emergence of collective intention in a field of movement. The field of movement included unprepared everyday “un-conscious” movement, pre-conditioned but unrehearsed movement, as well as fully phrased movement. The experiments included dancers and non-dancers, sometimes identified as such, sometimes not. Themes included entrainment, camouflage, calligraphy, and exchanging initiative and momentum between dancers and media.

[wpsgallery]

TECHNIQUE [SOFTWARE]

All these experimental events lived in a set of responsive substrate media supplied with calligraphic video and gestural sound software instruments, the Oxygen media choreography software system, WYSIWYG’s sounding tapestries, and some proto-jewelry. The realtime media instruments were implemented in Max/MSP/Jitter, with substantial extensions in C.

PEOPLE

Soo-yeon Cho, Choreographer
Prof. Sha Xin Wei, Director

Dancers

Mike Croitoru
Kiani del Valle
Veronique Gaudreau
Rebecca Halls
Marie Laurier
Joannie Pharand
Olivia Foulke
Oxygen
Jean-Sebastien Rousseau, Calligraphic video, videography, visual effects, production
Tim Sutton, Gestural sound design and programming, production
Emmanuel Thivierge, State engine, camera tracking, production
Filip Radonjik, Live ink painting
WYSIWYG
Marguerite Bromley (XS Labs), Tapestry design and weaving
Elliot Sinyor (IDMIL McGill), Tapestry mechatronics
David Gauthier, Tapestry mechatronics
Freida Abtan, Sound design & programming
David Birnbaum (IDMIL McGill), Sound design & programming
Doug van Nort (IDMIL McGill), Gestural motion feature analysis
Josee-Anne Drolet, TML Project Coordinator, production, videography, editing
Harry Smoak, TML Research Coordinator, production support, research advisor
Ma Zhiming, Production

SUPPORT

Special thanks to Faculty Colleagues
Prof. Michael Montanaro, Contemporary Dance, Ouija movement experiment design
Prof. Marcelo Wanderley, IDMIL, McGill University, WYSIWYG gestural control of sound synthesis
Prof. Joey Berzowska, XS Labs, Interactive textiles

Thanks also to affiliates of the TML and the SenseLab for artistic and research support: Michael Fortin, Elena Frantova, Olfa Driss, Rene Sills, Raul Gomez, Paul Melançon, Antoine Blanchet, Younjeong Choi, and Shermine Sawalha.

Meteor Shower

Meteor Shower was initially built as a simple responsive environment; its next incarnation will incorporate state-aware behaviour, and further explore ideas of nature/artifice by building narrative structures involving “lunar characters.”

As a deployable installation, Meteor Shower holds potential as an environment for architectural installations, play spaces, and performance events – it is being designed with such flexibility in mind.

[wpsgallery]

TECHNIQUE [SOFTWARE]

PEOPLE

Sha Xin Wei – concept & meta-physics
Jean-Sébastien Rousseau – video and particle programming
Timothy Sutton – sound design and programming
Emmanuel Thivierge – state evolution and video feature extraction
Louis-Andre Fortin – visual design and programming
Freida Abtan – sound and systems design advisor

Hubbub

[instant_gallery]

Hubbub is one application of TML research treating speech as a computational substance for architectural construction, complementary to its role as a medium of communication.

Success will be measured by the extent to which strangers who revisit a Hubbub space begin to interact with one another socially in ways they otherwise would not. Hubbub is a part of a larger cycle called URBAN EARS, which explores how cities conduct conversations via the architecture of physical and computational matter.

Hubbub installations may be built into a bench, a bus stop, a bar, a cafe, a school courtyard, a plaza, or a park. As you walk by a Hubbub installation, the words you speak will dance in projection across the surfaces according to the energy and prosody of your voice. For example, loud speech produces bold text, whispers faint text. We’ll capitalize on recognition errors to give a playful character to the space.

HYBRID ARCHITECTURE AND HABITATION OF URBAN SPACE

In this street-scale research thread, we investigate how people build, destroy, modify and inhabit city environments using embedded computational systems. The first part of this study is social and historical, employing methods of field observations as well as insights from phenomenological and anthropological studies. We intend to combine this work with insights of colleagues from the domains of urban design and architecture to design computer-mediated, responsive environmental systems for urban space.

The HUBBUB research series presents a foray into this domain of urban responsive architecture. As you walk through a Hubbub space, your speech is picked up by microphones, partially recognized, and converted to text. The associated text is projected onto the walls, furniture, and other surfaces around you as animated glyphs whose dancing motion reflects the energy and prosody of your speech. Hubbub is an investigation of how accidental and non-accidental conversations can take place in public spaces, by means of speech that appears as glyphs projected on public surfaces. The installation takes its meaning from the social space in which it is embedded, so its “function” depends on the site we select. Some of the technical issues concern realtime, speaker-independent, training-free speech recognition; realtime extraction of features from speech data; and multi-variate continuous deformation and animation of glyphs in open-air public display systems, such as projection or LED displays. We will investigate how embedding responsive media such as the Hubbub speech-painting technology, as well as TGarden technologies, into the urban environment can modestly support rich, playful forms of sociality.
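The mapping from vocal features to glyph animation can be suggested by a small sketch. It assumes an RMS energy and a pitch estimate arrive from an upstream analysis stage; all names and thresholds are invented for illustration, not taken from the Hubbub system.

```python
import math

def glyph_params(rms_energy: float, pitch_hz: float, base_size: float = 24.0) -> dict:
    """Map speech energy and prosody onto animation hints for one glyph."""
    loudness = min(1.0, rms_energy * 10.0)             # scaled to 0..1
    return {
        "size": base_size * (1.0 + loudness),           # louder speech, larger glyph
        "weight": "bold" if loudness > 0.6 else "regular",
        "jitter_px": loudness * 5.0,                    # louder speech dances more
        "vertical_drift": math.log2(max(pitch_hz, 50.0) / 220.0),  # rising pitch lifts it
    }
```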

TECHNIQUE [SOFTWARE]

2003 Architecture. We use a custom speech recognizer that handles continuous speech independently of the speaker. This speech recognition application uses the Windows SAPI engine, which allows us to word-spot within a restricted vocabulary and avoid training; this way anyone in a language group can speak freely without first preparing the software system. We have developed a new portable animation system called Commotion, which supports kinetic text animation in general OpenGL and Objective-C GNUstep open-source environments. In parallel we use Max/MSP to perform feature extraction on the speech and use the features to send animation hints to Commotion.
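The word-spotting strategy can be illustrated without reproducing the SAPI engine itself: given a recognizer's raw text output, keep only words from a restricted lexicon and let everything else fall through. The vocabulary below is an invented example, not the production lexicon.

```python
# Toy word-spotting filter over a recognizer's raw text output.
VOCABULARY = {"rain", "city", "light", "voice"}  # example restricted lexicon

def spot_words(recognized_text: str) -> list[str]:
    """Keep only in-vocabulary words; misrecognitions simply fall through,
    which is part of the playful character described above."""
    return [w for w in recognized_text.lower().split() if w in VOCABULARY]
```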

PEOPLE

Vincent Fiano, Commotion animation system.
Stephen Ingram, Word-spotting, grammar-driven speech recognition system.
Graham Coleman, MSP speech feature extraction and Max animation choreography.

Frankenstein’s Ghosts

Frankenstein’s Ghosts is an SSHRC-funded research-creation project (2007-2010): a deconstruction, analysis, and exploration of Mary Shelley’s Frankenstein, pursuing substantive themes emerging from the novel: What are the boundaries of the human? To what extent do we create ourselves? What is our responsibility towards what we create? What is our responsibility towards the “Other”? What ethical challenges do our present technological advances present? What is monstrous? And what does it mean to be human? The project amalgamated the eminent Blue Rider Canadian chamber music ensemble, director/choreographer Michael Montanaro, media researchers and artists from Dr. Sha Xin Wei’s Topological Media Lab, dancers, and scholars from religious studies and literary studies into a new sort of ensemble that experimented with new modes of performance practice. Over four years, the media artists, musicians, and dancers developed fresh modes of movement and performance that fused what had before been largely independent practices.

We use 19th-century lighting techniques and tricks to create shadow images. Real-time video and sound portray shifting realities, memory, and other possible truths. Musically, structured improvisation gives shape to concepts. Movement expresses the need for relationship. Words draw us back and forth from conscious to subconscious.

[wpsgallery]

COLLABORATORS

Michael Montanaro, choreographer/director
Sha Xin Wei, topological media
Ann Sowcroft, writer
Jerome DelaPierre, real-time video
Navid Navab, Timothy Sutton, real-time sound
the Blue Rider Music Ensemble
Leal Stellick & Milan Gervais, Emmanuele Calve, Ashley, dancers

REFERENCES

http://www.frankensteinsghosts.com/

Cosmicomics

Based on previous work with Meteor Shower, Cosmicomics presents a fantastical sky animated by a fusion of lunar dreams inspired by Italo Calvino’s eponymous book of stories, and by the quantum inflationary cosmology developed by Andrei Linde. A large ceiling-mounted display (three plasma displays or a projected screen) opens a window onto a fable of a cosmos, filled with liquid light and sound that dance to movement, epoch, and the alchemical condition of the Moon.

Cosmicomics was presented at Elektra, 9-13 May 2007, Montreal.

[wpsgallery]

TECHNIQUE / SOFTWARE

1 tracking camera
3 tiled plasma screens
Video processing computer
Sound processing computer

Camera-based tracking, motion-feature extraction, media choreography state engine, realtime sound processing, realtime video processing.
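As one illustration, the motion-feature extraction stage can be approximated by frame differencing over the tracking camera's grayscale images; this NumPy sketch is only an analogy to the Jitter-based production pipeline.

```python
import numpy as np

def motion_energy(frame: np.ndarray, prev: np.ndarray) -> float:
    """Mean absolute per-pixel difference between consecutive grayscale frames."""
    return float(np.mean(np.abs(frame.astype(np.float32) - prev.astype(np.float32))))
```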

PEOPLE

Sha Xin Wei – Director, Art Concept
Harry Smoak – Director of production, creative advisor
Jean-Sébastien Rousseau – Video design and Max/Jitter OpenGL programming, Models and special effects video
Timothy Sutton – Sound design and Max/MSP programming

Emmanuel Thivierge – State engine programming, Camera feature extraction
Josée-Anne Drolet – Project Coordinator, Models and special effects video

Olfa Driss – Research, Models and special effects video
Michael Fortin – Graphics programming, OpenGL and optimization