Category Archives: Movement

Orbital Resonance

Orbital Resonance is an exploration into responsive environments and live performance. The performers improvise with sound and movement through breath, voice, and bodily sensors. In the process, the performers experiment with displacing their internal physiological states outward into light and sound, creating an immersive, sensual environment for themselves and for the audience. The larger environment merges the interactions between its various elements (audience, performers, light, sound, architecture, sensors) into a unified, existential orbit.

A performance and discussion took place as part of the Topological Media Lab's Re-Mediation Series on April 23 and 24, 2014, from 5-7pm at the Hexagram Blackbox at Concordia University. The performance was supported by a Hexagram-CIAM Student Grant.

http://orbitalresonance.weebly.com

PROCESS

The project was a collaboration between artist and PhD Humanities candidate Margaret Westby and interactive developer, creative engineer, and MA Special Individualized Program candidate Nikolaos Chandolias, with participation from experimental musician and sound artist Doug Van Nort and performance artist, facilitator, and sociologist of digital technology Anne Goldenberg. Westby and Chandolias participated in every aspect of the process: initial research questions, conceptual and creative content, technological research and development, workshop and rehearsal directives, set-up of space and scenography, organization, promotion, documentation, and all other tasks. Van Nort and Goldenberg collaborated on specific rehearsal days to assist in workshops, conceptual and technological development, and performance creation and dissemination.

The process and methodologies undertaken to create this research-creation project were quite complex. We followed current threads in open source projects (software and movement creation) informed by the DIY (do-it-yourself) ethos, developing new methods of choreography for sonic performative environments and of technological design informed by and for the body. The materials in play were biological sensors, language, movement, gestures, sound, lights, cameras, computers, and various other technological apparatuses. We dove into movement exercises based upon the Skinner Releasing Technique, Viewpoints, contemporary dance, and more current cross-fertilizations in yoga and Open Source Forms (OSF). This combination of kinaesthetic methodologies informed our exploration of integrating both human and nonhuman materials. In addition, experimental music practices, including Deep Listening by composer Pauline Oliveros and improvisational techniques in both sound and movement, informed our process and creative content. Furthermore, an attuned focus on maintaining a horizontal collaborative spirit was key at all times. This involved continuous discussions around language, teaching and patience with different practices, and a willingness to know and accept limits, whether in the technology or within ourselves.

A performance was presented as a result of a two-month long residency at Concordia’s Blackbox, with the objective of creating a space where we could blur the lines of performer and audience. The immersive, responsive environment invited co-creations to occur between the performers, the space, the spectators and the technology in a non-linear, non-hierarchical and non-dictatorial way.

Alternate Reality: A Pervasive Play Project

The Project

Over the course of 2012-13, Sha Xin Wei (Director of the Topological Media Lab, Canada Research Chair in Media Arts and Sciences, and Associate Professor of Fine Arts and Computer Science at Concordia University, Montreal) will be collaborating with Patrick Jagoda (Assistant Professor of English and Co-editor of Critical Inquiry) on an alternate reality gaming project. The fellowship will begin with a seminar that Sha and Jagoda are co-teaching in fall 2012, in which graduate and undergraduate students from a host of disciplines will collaborate on game design. During the winter of 2013, Sha, Jagoda and a team of collaborators will conclude game design and post-production. In the early spring of ’13, the transmedia game is slated to take place, to be followed by an international practicum on Play as a Mode of Inquiry Nov 1 – 3, ’13.

The interactive production created by Jagoda and Sha belongs to the emerging artistic form of “Alternate Reality Games” or “transmedia games.” Unlike conventional digital games, these creative productions use the real world as their platform and tell a single story across numerous media and technologies. Such transmedia games are distinctive for their tightly networked collaborative communities, player-driven narratives, performance-oriented events, and interpenetration of real and virtual spaces. This project is intended to explore the relations between digital media and space, the affordances of collective storytelling, the generation of new media theory through design, and the development of methodologies for studying the emergent art form of Alternate Reality Games.

[vimeo]https://vimeo.com/72283300[/vimeo]

[vimeo]http://vimeo.com/64941871[/vimeo]

The Course

Course Description: This course offered in Fall 2012 explores the emerging game genre of “transmedia,” “pervasive,” or “alternate reality” gaming. Transmedia games are not bound by any single medium or hardware system. Conventionally, they use the real world as their primary platform while incorporating text, video, audio, live performance, phone calls, email, websites, and locative technologies. The stories that organize most of these games are nonlinear and broken into discrete pieces that audiences must discover and actively reassemble. The participants who play these games must generally collaborate to solve puzzles. Throughout the quarter, we will approach new media theory through the history, aesthetics, and design of transmedia games. For all of their novelty, these games build on the narrative strategies of novels, the performative role-playing of theater, the branching narratives of electronic literature, the procedural qualities of videogames, and the team dynamics of sports contests. Moreover, their genealogical roots stretch back to a diverse series of gaming practices such as nineteenth-century English “letterboxing,” the Polish tradition of “podchody,” scavenger hunts, assassination games, and pervasive Live Action Role-Playing games. An understanding of these related forms will be critical to our analytical and creative work.

Course requirements include weekly blog entry responses to theoretical readings; an analytical midterm paper; avid engagement in discussion and design; and collaborative participation in a single narrative-based transmedia game project created by the class that will run on campus, in the city of Chicago, and/or online. No preexisting technical expertise is required. Since transmedia games draw on numerous skill sets, students will be able to contribute with a background in any of the following areas: creative writing, literary or media theory, web design, visual art, computer programming, music, and game design.

Project Inventory

a team-taught course (Fall 2012) entitled Transmedia Games: Theory and Design, for graduate and undergraduate students, run through the Department of English and cross-listed in Creative Writing, Cinema & Media Studies, Theater & Performance Studies, and the Department of Visual Arts;
co-presentation on fellowship project in tandem with student performance event (choreographed by Sha’s frequent collaborator Michael Montanaro) at the opening of the Logan Center for the Arts, October 12, 2012;
introductory and recruiting event on December 6, 2012;
residency visits in winter 2013 by Sha and colleagues from the Topological Media Lab to collaborate on game design and post-production with Jagoda and a team of students;
residency visits in spring 2013 by Sha and colleagues from the Topological Media Lab for the collaborative and transmedia game experience with university and non-university participants;
culminating event for The Project on April 25, 2013; and
an international practicum on Play as a Mode of Inquiry, Nov 1 – 3, 2013.

Story Telling Space


This research project is an in-depth investigation into the realization of a system able to combine gesture and vocal recognition for interactive art, live events, and speech-based expressive applications. Practically, we have created a flexible platform for collaborative and improvisatory storytelling combining voice and movement. Our work advances both conceptual and technical research relating speech, body, and performance using digital technologies and interactive media.

We create an immersive audio-visual Story Telling Room that responds to voice and sound inputs. The challenge is to set up an efficient speech-feature extraction mechanism under the best achievable microphone conditions. Leveraging the TML's realtime media choreography framework, we map speech to a wide variety of media such as animated glyphs, visual graphics, light fields, and soundscapes. Our desiderata for mapping speech prosody information centre on reproducibility, maximum sensitivity, and nil latency. Our purpose is not to duplicate but to supplement and augment the experience of the story as it unfolds between performer and audience in ad hoc, improvised situations using speech and voice for expressive purposes.
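As a rough sketch of what such a mapping can look like (illustrative only; the feature set and parameter names below are assumptions, not the TML pipeline), two classic prosodic features, short-time energy and zero-crossing rate, are extracted per frame and rescaled into normalized control values that a media engine could route to light or sound:

[code language="python"]
# Minimal sketch: extract two prosodic features from one audio frame
# (floats in [-1.0, 1.0]) and normalize them into 0..1 control values.
import math

def prosody_features(frame):
    """Return (energy, zcr) for one window of audio samples."""
    n = len(frame)
    energy = math.sqrt(sum(s * s for s in frame) / n)   # RMS loudness
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0))
    zcr = crossings / (n - 1)    # rough proxy for voiced vs. noisy speech
    return energy, zcr

def to_control(value, lo, hi):
    """Clamp and rescale a feature into a 0..1 media control value."""
    return min(1.0, max(0.0, (value - lo) / (hi - lo)))

# Example: a loud, steady frame pushes brightness up, grain density a little.
frame = [math.sin(i * 0.3) * 0.8 for i in range(512)]
energy, zcr = prosody_features(frame)
brightness = to_control(energy, 0.01, 0.5)      # hypothetical light param
grain_density = to_control(zcr, 0.0, 0.3)       # hypothetical sound param
[/code]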

[wpsgallery]

 

We are exploring the possibilities of Natural Language Processing in the context of live performance. As the performer speaks, the system analyzes the spoken words and, with the help of the Oxford American Writer's Thesaurus (OAWT), each semantically significant lexical unit initiates its own semantic cluster.

As the story is unfolded by the performer, the environment shifts from one state to another according to the sensing data produced by the system's analysis. Furthermore, we are exploring the possibilities of transcribed text from spoken utterances. The spoken words of the performer already carry a communicative value, as they have attached to them a semantic component that has evolved and transformed throughout the history of language. The objective of this prototype is to see what happens when light, imagery, and texture are added to the text, and how this is perceived by the performer.
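A toy illustration of the clustering idea (the tiny synonym table below is a hypothetical stand-in for the OAWT lookup, and the stop-word filter is an assumption): each semantically significant word in the transcript seeds its own cluster, which downstream instruments could render as glyphs, light, or sound.

[code language="python"]
# Illustrative only: a hand-made synonym table stands in for the OAWT.
THESAURUS = {
    "sea":   ["ocean", "deep", "tide"],
    "night": ["dark", "dusk", "nocturne"],
}
STOP_WORDS = {"the", "a", "an", "and", "of", "in", "was"}

def semantic_clusters(transcript):
    clusters = {}
    for word in transcript.lower().split():
        word = word.strip(".,!?")
        if word in STOP_WORDS:
            continue                      # skip non-significant lexical units
        clusters.setdefault(word, set()).update(THESAURUS.get(word, []))
    return clusters

print(semantic_clusters("The sea was dark in the night"))
# {'sea': {'ocean', 'deep', 'tide'}, 'dark': set(), 'night': {'dark', ...}}
[/code]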

Pieces of text, when encountered as audio-visual forms animated by quasi-intelligent dynamics in digital media, become widely perceived as animate entities. People tend to regard animated glyphs as things to be tamed or played with rather than as a functional and abstract system of communicative symbols.

[youtube]http://www.youtube.com/watch?v=tLPw1WjHoic[/youtube] [youtube]http://www.youtube.com/watch?v=TyF_-m6RWkY[/youtube]

TECHNIQUE

We have created two stand-alone Java applications that perform the Speech Recognition and Speech Analysis tasks. For the mapping techniques we partially reused the already existing Ozone state engine, and we created a new state engine in Max/MSP for better results in the context of improvisatory storytelling.

COLLABORATORS

Nikolaos Chandolias, real-time Speech Recognition & Analysis, System Design
Jerome DelaPierre, real-time video
Navid Navab, real-time sound
Julian Stein, real-time lights

Michael Montanaro, choreographer/director
Patricia Duquet, Actress
Sha Xin Wei, Topological Media Lab
Jason Lewis, Obx Labs

MORE INFO

Story Telling Space Documentation

 

EINSTEIN’S DREAM


Overview

Einstein's Dream is an environment in which visitors encounter performers in responsive fields of video, light, and spatialized sound, in a set of tableaux. Each tableau is inspired by a vignette from Alan Lightman's novel Einstein's Dreams, set in Berne, Switzerland, in 1905, the year Albert Einstein published his special theory of relativity. Or rather, a set of parallel 1905s, each of which is a different kind of time. In one, time slows to a halt as you approach a particular place; in another there is no future; in a third, time sticks and slips; in a fourth, age reverses and what is rotten becomes fresh as time passes.

In one version of this project, a large theatrical space (24m x 20m x 8m) will contain multiple tableaux, each accommodating 6-12 people in a pool of light and sound modulating in concert with activity. Visitors and performers can move from tableau to tableau. The performers’ actions, together with the textures and rhythms of lighting, sound and visitors’ expectations, create different kinds of time poetically related to the novel’s vignettes. As a performer walks from place to place she may drag a pool of conditioning light and sound. The pool mutates or merges into another pool with a different type of time.

[nggallery]

Context

One hundred years after the two epochal advents in physics of relativity theory and quantum mechanics, we are still reverberating with their consequences. Einstein's Dream is not a biography or a didactic allegory, but a poetic exploration of our consciousness in time.

There are many didactic works about the theory of relativity and quantum mechanics, including Einstein's own popular essays and canonical scientific and philosophical works by Hermann Weyl, Alfred N. Whitehead, and Henri Bergson (Duration and Simultaneity). In the arts, there have been some distinguished works that treat the theories and the theorists in an externalist way, as icons or as social phenomena. But what we propose is to work directly with the spectators' felt experiences of time.

What Alan Lightman evoked in his novel was a poetic variation around the felt experience of time: not the "actual" physics of time, but alternatives of time, those hypothesized modes of living in time that could have been imagined, or that never were. By foregrounding how time works in these worlds, the vignettes foreground movement, which is the temporalization of the body. These movements are embedded in everyday life, made marvelous by poetic conceits of time. This fits perfectly with Sha's and Montanaro's own artistic research into the charging of movement and gesture, of finding or evoking marvelous configurations of movement in the everyday.

Einstein’s Dream will create an experimental apparatus for inducing perceptibly different senses of temporal passage, chance and order, mortality and anisotropy — the arrow of time. Our goal is to use the techniques of theater and dance and responsive media to evoke sharply different kinds of temporal experience that the visitors will feel for themselves. Our experimental goal will be to discover ways to not just depict different kinds of temporal processes, but to condition a physical setting to yield in-person experiences of these different ways of being in time.

Einstein's Dreams: Scenarios / Zones, Mechanics, Design approaches

(March 2013, updated for Synthesis Workshop February – March 2014, ASU)

[vimeo]https://vimeo.com/77089477[/vimeo]

SCENARIOS

Scatter / gather

Your shadow splits. The shadows run away from you. The shadows quiver with tension & intention.

Follow spot lights you up. Other spots lurk in the shadows, come after you with persona.

(Use Julian's rhythm abstractions to record corporeal/analog movements, and play back. Use analog rhythms as a cheap way to get a huge variety of very subtle NON-regularity and avoid dead mechanical beats. Also can improvise.)

Freeze

Deposit snapshots of yourself.

Use “flash” timing: charging increase in tension. Snap!

Alternatively: NO tension, just fill zone with flashes.

Use MF VP8 to take webcam images and project them kaleidoscopically throughout zone, with time warp, delay, reversals.

Sutures

The world is fissured, and sutured: as you walk you see/hear into discontiguous parts of the room.

Portals! Use MFortin's VP8 + Jitter intermediation (e.g. timespace) to introduce time dilation effects.

Motion, oil slick, molasses

Every action is weighted down, slooooooowwwweeedd asymptotically but never quite stilling. Every action causes all the pieces of the world to slide as if on an air hockey table, but powered so they accelerate like crazy. Map the room to a TORUS so the imagery is always visible. Use zeno-divide-in-half or any tunable asymptotic (Max expr object) to decelerate or accelerate.
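The "zeno-divide-in-half" idea is a one-liner; here is a minimal sketch in Python (in Max it would be a single expr object): each update moves a parameter a fixed fraction of the remaining distance to its target, so motion decelerates asymptotically but never quite stills.

[code language="python"]
def zeno_step(current, target, fraction=0.5):
    """Move a fraction of the remaining distance toward the target."""
    return current + (target - current) * fraction

x = 0.0
for _ in range(8):
    x = zeno_step(x, 1.0)    # 0.5, 0.75, 0.875, ... approaches 1.0 forever
    print(x)
[/code]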

Blur wind

Fray actions, images, sounds into noise

(e.g. ye olde jit.streak + feedback example )

Vortex, dizzy

Spin the world — every linear movement or “line” of sound becomes drawn into a giant vortex, that sucks into the earth. OR reverse.

Stepped video-strobe in the concentric rings, tunable by OSC + Midi sliders like everything.

Brittle, crack

Need to step carefully. If not, hear and see pending catastrophe: cracking ice underfoot …

Or sometimes pure acoustic in darkness or whiteout strobe.

Use Navid’s adaptive scaler to shrink sensitivity down to smallest movement causing catastrophe in strobe + massive sound. Use subs to add pre-preparatory sound like Earth grinding her teeth before breaking loose.

Stasis ( hot or cold )

Sitting in the bowl of the desert (Sahara or Himalayas )

No-time sonic field. Noise-field, snow-blindness video (black or white majority), or Bill Viola ultra-slow-mo (better than 25:1 speed reduction, with no motion blur)?

Use heat lamps or fans, to subtly add heat or cold ?

Repetition

Visual — take video from a given location, but send to multiple locations (using VP8 + repeated stills) or map to OpenGL polygons…

Audio — use OMAX, with coarse grain

Infection / Dark Light

Use video, e.g. use particles — thickened as necessary — as sources of light. Cluster around movement or around bodies' presence as sources of light.

MECHANICS

IMPORTANT OZONE ARCHITECTURE

(Julian with MF, working with Navid, Jerome, Omar's instruments): Each of your instruments should expose key parameters to be tunable by OSC + MIDI sliders, so someone OTHER than the programmer can play with the instruments qualitatively. OSC gives access to handheld MF's Max/iOS client so we can walk around IN the space under the effects and vary the instruments IN SITU, LIVE.
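A minimal sketch of this tuning pattern, written with the python-osc package for illustration (an assumption; the actual instruments run in Max, and the parameter names below are hypothetical): each instrument registers named parameters, and an OSC handler lets anyone with a handheld client retune them live, in situ.

[code language="python"]
# Sketch: expose named instrument parameters over OSC (python-osc assumed).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

params = {"/vortex/spin": 0.2, "/blur/amount": 0.0}   # hypothetical names

def set_param(address, value):
    if address in params:
        params[address] = float(value)   # the instrument reads this each frame
        print(f"{address} -> {value}")

dispatcher = Dispatcher()
for address in params:
    dispatcher.map(address, set_param)

server = BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher)
server.serve_forever()   # a MIDI-slider bridge can send to the same port
[/code]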

ALSO: Crucial that any TML experimentalist can walk in with her/his laptop, tap into the video feed, and emit her own video into the projectors, and control where her video shows up, and with what alpha blend. S/he must be able to do this without Jerome babysitting on call 24 x 7.

Ditto sound & lighting & OSC feed — Julian’s got good design for this. Navid for sensor channels. I hope the sensor channels work transparently on top of Julian’s OSC resource discovery code.

[vimeo]http://www.vimeo.com/77514936[/vimeo]

[vimeo]http://www.vimeo.com/77514935[/vimeo]

[vimeo]http://www.vimeo.com/77514861[/vimeo]
[vimeo]http://www.vimeo.com/77514329[/vimeo]
[vimeo]http://www.vimeo.com/77514328[/vimeo]
[vimeo]http://www.vimeo.com/76984794[/vimeo]
[vimeo]http://www.vimeo.com/74327817[/vimeo]
[vimeo]http://www.vimeo.com/74327393[/vimeo]

Einstein's Dreams: An Ecological Approach to Media Choreography

There are some basic experimental presumptions that I'd like to try out in the Einstein's Dreams work. There aren't many, but they go to the heart of research in how we compose the behavior of rich responsive environments. The TML starts from where most of the world of interactive environments stops.

One of those places is how events evolve. The obvious ways include: timeline (graph, cues), random (stochastic), decision tree (if-then).
But is this all there is? Not in life, nor in art.
This is where things stood up to the 1990s, and even now, if you ask most programmers and conventional time-based media artists.

I started designing responsive environments with a profoundly different approach to media choreography. And that’s been a core part of the TML’s radically different way of making rich responsive environments that are more like ecologies. This approach learns from continuous state evolution characteristic of tangible, physical, ecological systems.

Practically, how do we do this? That's an open question. The Topological Media Lab is for exploring open questions, rather than producing artwork that reproduces convention. And it is not the case that sprinkling on some "AI" will save the day. Given that learning methods such as HMM, PCA, and ICA are all retrospective (Bergson's critique of mechanism), and given that scores, scripts, clocks, and timelines cage action, we set them aside in favor of techniques that give us the maximum nuance and potential for expressive invention over conditioned space. The most powerful alternative, which we've only begun to exploit, is the dynamical systems approach. Rather than rehashing it, let me attach some references, like the "Theater Without Organs" essay (for artists, writers) and the more precise Ozone ACM Multimedia paper.

Most fundamentally, the ED project is to really push on these fronts:

• Move from time-line, random, or decision-tree logics that are typical of engineered environments to dynamical systems modeled on ecologies. (See pages 16-19 of “Theater Without Organs.”)
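To make the contrast concrete, here is a toy sketch (emphatically not the Ozone engine; state names and rates are invented) of continuous state evolution: the environment's state is a vector of weights over media qualities that relaxes toward an attractor selected by sensed activity, rather than jumping on cues or branching on if-thens.

[code language="python"]
# Toy continuous state evolution: no timeline, no cues, no if-then jumps.
STATES = ["calm", "turbulent"]
weights = {"calm": 1.0, "turbulent": 0.0}

def evolve(weights, activity, rate=0.05):
    """Nudge weights toward an attractor set by sensed activity in [0..1]."""
    target = {"calm": 1.0 - activity, "turbulent": activity}
    for s in STATES:
        weights[s] += rate * (target[s] - weights[s])
    return weights

# Media instruments crossfade by these weights each frame, so the room
# drifts between qualities as people act; nothing ever switches abruptly.
for activity in [0.1, 0.1, 0.9, 0.9, 0.9]:
    weights = evolve(weights, activity)
    print(weights)
[/code]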

• Acting & Articulating vs. Depicting or Allegory

The TML is about making environments that palpably condition experience in definite ways, not about displaying representations (pictures) or models of experience. The radical experimental challenge of ED is about inducing rich experiences of dynamics, change, rhythm, not making an image (representation) of some model of time. The latter is merely allegory. Easy. The former is alchemically transmuted experience. Hard. Given enough skill, making representations is easy. We built the TML, got the ED seed grant, and coordinated the temporality seminars these past 3 years to do something hard: inducing a different mode of temporality — a sense of temporal change.

• Rhythm ≠ Isochrony (regularly periodic)

There are no mathematically regular periods “in nature” — that’s an artifact of delusions imposed by our models of mechanical time — frozen in by computers.

Adrian Freed has a rich way of thinking about this, and a rich way of making things that reflect this.

No matter what "curves" you draw, if the pattern is repeated, then you have imposed an isochronous pattern. So we've cheated life by pushing / pumping with an artificial "beat." Instead, ED investigates how pseudo-regularities emerge from the dynamical system.
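One minimal way to get such emergent pseudo-regularity, sketched below under the assumption that two weakly coupled phase oscillators stand in for the system's rhythms: their incommensurate natural rates guarantee the pulse train is felt as roughly periodic yet never exactly repeats.

[code language="python"]
# Pseudo-regular pulses from a dynamical system rather than a fixed clock.
import math

p1, p2 = 0.0, 0.0
w1, w2 = 1.0, math.sqrt(2)        # incommensurate rates: no common period
k = 0.15                          # weak coupling between the oscillators
dt = 0.01
for step in range(3000):
    p1 += dt * (w1 + k * math.sin(p2 - p1))
    p2 += dt * (w2 + k * math.sin(p1 - p2))
    if p1 >= 2 * math.pi:         # "beat" event: trigger sound or light
        p1 -= 2 * math.pi
        print(f"pulse at t={step * dt:.2f}")   # near-regular, never identical
[/code]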

• Give up (geometric) time as an independent parameter

In the past few years, as people like Adrian and David Morris's students came on board, some have taken up a challenge I posed: to radicalize our notion and use of “time.”

Instead of using time as an independent parameter, in fact, instead of using any parameter as a “clock” driving the event, use our sensors — cameras and mics — to pick up what is happening, and from the contingent action derive the changes of the responsive environment.

100 years ago Bergson insightfully criticized what he called the cinematic conceit of time / temporal experience. (This is part of the point of the Ontogenesis group this past year with Magda, Will, Felix, Liza, Harry, Adrian, and myself.) We don’t need to fall back into those naiveties.

Even more fundamentally, let’s be mindful of Maturana and Varela’s profound observation that “time” is itself just a linguistic description rather than some thing in the stuff of our bodies and the stuff of the world:

Time as a Dimension

Any mode of behavioral distinction between otherwise equivalent interactions, in a domain that has to do with the states of the organism and not with the ambience features which define the interaction, gives rise to a referential dimension as a mode of conduct. This is the case with time. It is sufficient that as a result of an interaction (defined by an ambience configuration) the nervous system should be modified with respect to the specific referential state (emotion of assuredness, for example) which the recurrence of the interaction (regardless of its nature) may generate for otherwise equivalent interactions to cause conducts which distinguish them in a dimension associated with their sequence, and, thus, give rise to a mode of behavior which constitutes the definition and characterization of this dimension. Therefore, sequence as a dimension is defined in the domain of interactions of the organism, not in the operation of the nervous system as a closed neuronal network. Similarly, the behavioral distinction by the observer of sequential states in his recurrent states of nervous activity, as he recursively interacts with them, constitutes the generation of time as a dimension of the descriptive domain. Accordingly, time is a dimension in the domain of descriptions, not a feature of the ambience. (H. Maturana & F. Varela, p 133. Autopoiesis and Cognition. See also Henri Bergson’s example of the arcing arm, Creative Evolution, chap 1.)

Why not tug the sun as a controller rather than passively watch it sail out of reach!

As I said before, I think time is an effect, not an “independent parameter.” This permits a more profound interpretation of Lightman's novel beyond its “time is…” syntax.

• Functional relation ⇏ Determinism

A curve f(t), e.g. f(t) = sin(t), can provide an utterly precise and reproducible result simply because it is a FUNCTION. For example, f(t) could govern the height and intensity of the "sun" in the Blackbox. However, f(t) need NOT be fed parameters t1, t2, t3 … in a regularly incremented monotone sequence. There just needs to be a (reversible) function in order to have reproducibility of the event when the action is reproduced.

There is a profound performative difference in live experience between a fixed curve — a graph which is traced from left to right in order — and an f(t) = sin(t) ready to be evaluated given any input.

In the example above, "t" is the INDEPENDENT parameter; y = f(t) is the DEPENDENT parameter. In a realistic system, there is no reason to presume that the world runs on only one independent parameter (the "unidimensionality" fallacy).
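The distinction reduces to a few lines of code. In the sketch below (a toy; the "sun height" role assigned to f is an assumption), the same function gives a reproducible result whether t arrives from a clock or is scrubbed by live action; only the source of t differs.

[code language="python"]
import math

def sun_height(t):
    return math.sin(t)            # functional, hence reproducible

# (A) clock-driven: t increments monotonically
clock_ts = [i * 0.1 for i in range(10)]

# (B) action-driven: t "scrubbed" by an arm moving up, down, and back
action_ts = [0.0, 0.4, 0.9, 0.5, 0.2, 0.7]

for t in clock_ts + action_ts:
    print(round(t, 2), round(sun_height(t), 3))   # same t, same height, either way
[/code]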

• Functions

There is no contradiction with the graphs that J drew. In fact we have used this in many places in Ozone code for 10 years, in the form of the Max function object (you draw the curve yourself). Jerome could in fact use Morgan and Navid's function-clocks instead of re-inventing the wheel :). But we called those abstractions "fake state" because we knew that they simply imposed a uni-dimensional sequence on the entire event.

Indeed we can write in any number of FUNCTIONAL, even REVERSIBLE (invertible), relations at the parameter level, yielding an arbitrary number of dimensions of deterministic relation between parameters, e.g. optical flow and number of particles; color of input and wind (potential) force on fluid (MF: red => heat => flow up against gravity); scratchiness of sound and brittleness of floor. ALL of these can be functions of action, and even of each other. That way, the human can perform richly nuanced action, and even drive the action in a fully definite manner, because the parameters are deterministically coupled to action. But the relation is mapped from action in as many dimensions as the instruments can sense (either as raw or cooked sensor channels).
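A minimal sketch of such a coupling (the particular map and ranges are invented for illustration): two sensed action dimensions drive two media parameters through an invertible map, so action steers the media fully and deterministically, and the map can be run backwards for calibration.

[code language="python"]
# Invertible action-to-media coupling: deterministic in both directions.
def act_to_media(flow, redness):
    particles = 100 + 900 * flow      # optical flow -> particle count
    wind = 2.0 * redness - 1.0        # red hue -> upward "heat" force
    return particles, wind

def media_to_act(particles, wind):    # the inverse map
    return (particles - 100) / 900, (wind + 1.0) / 2.0

p, w = act_to_media(flow=0.4, redness=0.8)
print(media_to_act(p, w))   # (0.4, 0.8): the action is recoverable
[/code]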

See Ozone documentation by Mani Thivierge on TML Private WIKI for precise description. Since we are short on time, I propose that composers read the Ozone document merely for the notation and the approach. In this workshop, I propose we try only this notation “on paper” as a way of thinking about composing an event. If the composers have time, they are welcome to write state engines, but that is not necessary this round.

• COMPLEXITY vs RICH, COHERENT ACTION

We do not control complexity by imposing a small number of independent parameters. In fact, as long as we can engineer functional relations, the human and nonhuman agents can drive the event by ACTION. Actions can be compact and coherent — e.g. everyone huddles together and stays huddled in one place, or everyone huddles together but moves about the floor in a compact group. Even if this maps to multiple parameters, there should be no need for us actors / inhabitants to think in terms of parameters as we act.

SCRIPTED CONTROL vs LIVE ACTION

There’s a fundamental difference in attitude between code state as a trace of what’s going on, vs. code as a driver of action.

There are at least three modes of agency: script (machine), human, and medium.

(A) Clock drives event

For example, some software code animates a light simulating the sun rising in the course of a day. The shadow of a pole shortens and lengthens as a function of clock-time.

(B) Human drives event

For example: a human lifts a lantern. An overhead camera sees the shadow of a fixed pole shorten on the floor. Code uses the length of the shadow to move an image of the sun…

A & B may look quite similar. The downstream media code may even be identical, driven by OSC — that is, via a FUNCTIONAL, hence DETERMINISTIC, dependence. But the KEY difference is that A is driven by a clock, while B can be nuanced by LIVE action. The actor can "scrub" through the event by moving his or her arm up or down in any manner.
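To spell out the difference, here is a toy version of the shadow example (all names and numbers hypothetical): the downstream media code is a single shared function, and only the source of its input distinguishes mode A from mode B.

[code language="python"]
import math

def sun_elevation(shadow_len):
    """Shared downstream code: shadow length of a unit pole -> sun angle."""
    return math.atan(1.0 / max(shadow_len, 1e-6))

# (A) clock drives the event: shadow length scripted as a curve of time.
def shadow_from_clock(t):              # t in seconds since "sunrise"
    return 2.0 - 1.5 * math.sin(math.pi * t / 43200)   # 12-hour arc

# (B) live action drives the event: shadow length measured by the
# overhead camera as a person raises or lowers a lantern.
def shadow_from_camera(blob_length_px, px_per_meter=100):
    return blob_length_px / px_per_meter

print(sun_elevation(shadow_from_clock(21600)))   # mode A at "midday"
print(sun_elevation(shadow_from_camera(120)))    # mode B from live pixels
[/code]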

• (C) MAKING A MEDIUM rather than a movie of a medium’s particular action.

Nothing precludes programming a zone or instrument as a living medium with its own dynamics — think of creating not a movie of a ripple spreading across the floor, but a whole sheet of “water” that ripples in response to any number of fingers or toes or stones doing any action in it.
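A compact sketch of such a medium, assuming a standard damped wave equation on a coarse grid (not any particular TML instrument): the whole sheet ripples in response to any number of simultaneous disturbances, and the ripples spread, reflect, and interfere on their own.

[code language="python"]
# A sheet of "water": damped wave equation on an N x N grid.
N = 64
prev = [[0.0] * N for _ in range(N)]
curr = [[0.0] * N for _ in range(N)]

def poke(x, y, amp=1.0):
    curr[y][x] += amp                 # a finger, toe, or stone lands here

def step(c=0.25, damping=0.995):
    """One leapfrog update of the damped wave equation."""
    global prev, curr
    nxt = [[0.0] * N for _ in range(N)]
    for y in range(1, N - 1):
        for x in range(1, N - 1):
            lap = (curr[y][x - 1] + curr[y][x + 1]
                   + curr[y - 1][x] + curr[y + 1][x] - 4 * curr[y][x])
            nxt[y][x] = (2 * curr[y][x] - prev[y][x] + c * lap) * damping
    prev, curr = curr, nxt

poke(10, 10); poke(40, 30)            # two simultaneous disturbances
for _ in range(100):
    step()                            # ripples spread, reflect, interfere
[/code]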

• An embodied second order EVENT DESIGN METHOD

(inspired by Harry Smoak and Matthew Warne's Thick/N, 2004)

NOT as actual scenography, just as a design method: lay out several ZONES on the floor of the Blackbox, each with its own dynamics. Then we can try walking from zone to zone in many different sequences, to get a feel for what transitions might feel like. Imagine what players / inhabitants should be doing in order for the state of the event to change from zone A to zone B. THEN we can design a state topology of those as POTENTIAL transitions, that actualize only when the inhabitants and the system actually act accordingly (as picked up by the sensors).

(The Topological Media Lab’s Ozone media choreography architecture as coded in Max / C already does this.)
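On paper, such a zone topology can be noted as a graph of potential transitions, each gated by a predicate over sensed action. The sketch below is only a thinking aid (zone names and thresholds invented), not the Ozone engine, which evolves state continuously rather than discretely.

[code language="python"]
# Zones as a graph of POTENTIAL transitions gated by sensed action.
TRANSITIONS = {
    ("freeze", "sutures"):  lambda s: s["motion"] > 0.6,   # burst of motion
    ("sutures", "stasis"):  lambda s: s["motion"] < 0.1,   # stillness
    ("stasis", "freeze"):   lambda s: s["cluster"] > 0.8,  # people huddle
}

def next_zone(zone, sensed):
    for (src, dst), ready in TRANSITIONS.items():
        if src == zone and ready(sensed):
            return dst                # potential transition actualized
    return zone                       # otherwise the zone persists

zone = "freeze"
for sensed in [{"motion": 0.2, "cluster": 0.1},
               {"motion": 0.9, "cluster": 0.1},    # -> sutures
               {"motion": 0.05, "cluster": 0.9}]:  # -> stasis
    zone = next_zone(zone, sensed)
    print(zone)
[/code]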

REFERENCES

Einstein, Albert. Relativity: The Special and the General Theory.
Frankel, Theodore. Gravitational Curvature: An Introduction to Einstein's Theory. San Francisco: W. H. Freeman, 1979.
Bergson, Henri. Creative Evolution. Basingstoke; New York: Palgrave Macmillan, 2007.
Bergson, Henri. Duration and Simultaneity. 2nd ed. Manchester: Clinamen Press, 1999.
Broglio, Ron. "Thinking about stuff: Posthumanist phenomenology and cognition." In Special Issue on Poetic and Speculative Architecture in Public Space, AI & Society 26.2 (2011): 187-192.
Merleau-Ponty, Maurice. Phenomenology of Perception. Tr. Donald A. Landes. Abingdon, Oxon; New York: Routledge, 2011.
Lightman, Alan P. Einstein's Dreams. New York: Vintage Contemporaries, 2004.
Whitehead, Alfred N. The Principle of Relativity. 1922.
Maturana, Humberto, and Francisco Varela. Autopoiesis and Cognition: The Realization of the Living. Dordrecht: Reidel, 1980.
Sha Xin Wei, Michael Fortin, Tim Sutton, and Navid Navab. "Ozone: Continuous state-based media choreography system for live performance." ACM Multimedia 2010.
Sha Xin Wei. Poiesis and Enchantment in Topological Matter. MIT Press, forthcoming 2013. (Preface and Chapter 1.)

People

Michael Montanaro: Creative direction, art direction, and coordination
Sha Xin Wei: Phenomenology of time perception
Jerome Delapierre: Realtime video, visual design, videography, and photography
Navid Navab: Realtime sound, sound design, sensor system design
Julian Stein: Realtime lighting, realtime sound processing, photography, videography
Nikolaos Chandolias: Speech and voice processing, natural language processing
Nina Bouchard: Videography and photography
Katerina Lagasse: Event coordination, publicity, videography, and photography

Support

Fonds de Recherche Société et Culture (FQRSC) Quebec; Concordia University: Vice Provost Teaching and Learning, Vice President Research and Graduate Studies, Faculty of Fine Arts, Hexagram.

Documentation

Flickr photos | Vimeo videos

Il y a


IL Y A is a double-sided video and 12-channel sound installation that mixes live video from its two sides so you see through its opaque wall as if it were a glass window. IL Y A transforms what you see of the other side: your gesture transmutes the other, conjures the other's body. Your movement distends what you see of the other side like smoke or other pseudo-physical material. The effect is symmetrical – any movement by the other reshapes your image as well. Over time, the behavior of the installation changes through a field of behaviors staged by the composer, according also to the activities of its visitors.

Figures from the past appear in place of visitors who leave the opposite side, and their movements transmute your image as yours transmutes theirs, via real-time calligraphic video and sound effects. Moving bodies from the past act on your image just as you act on theirs. Since the effect is symmetrical, the living self and its historical or present others can play with the forms of each other’s bodies with equal power. When no one at all is in the room, the membrane mixes documentary footage of the populated site, in testimony to that place’s historical past.

[wpsgallery]

Portable and re-usable with video footage referencing the local site's history, IL Y A is designed to be installed in museums and galleries as well as community spaces or former industrial sites, localized with images from the site's historical archive. IL Y A acts as a lens into the past as well as the present of the given site, and explores how the past can entangle the living present, and how living bodies entangle each other.

[vimeo]http://vimeo.com/45333708[/vimeo]

TECHNIQUE [SOFTWARE]

This double-sided video screen, with 12 audio channels and 2 cameras, is designed for museum and gallery exhibition. The physical installation, designed by Scott Minneman, is a rigid, opaque board mounted in a rigid aluminum frame, with 5’ x 7’ footprint, and 8’6” height. Each side has one video projector beaming an image onto that side. The projector clears the head of any visitor.

Its weight (excluding computer gear in flight cases) is 200 lbs, and it can be shipped in a wheeled, wooden crate: 88″ x 39″ x 74″ tall. (Total shipping weight: about 400 lbs.)

PEOPLE

Sha Xin Wei, artistic direction, programming
Harry Smoak, technical direction & installation support
Jean-Sebastien Rousseau, visual programming (2010)
Tyr Umbach, visual programming (2011)
Michael Fortin, computer graphics, physics (2009-2012)
Navid Navab, realtime sound
Julian Stein, realtime sound

Thanks also to Freida Abtan, Erik Conrad, Delphine Nain, Yoichiro Serita

SUPPORT

FQRSC Fonds de recherche sur la société et la culture.
Hexagram

WYSIWYG


WYSIWYG was an investigation of sonified soft materials that encourage playful interaction. The group was a diverse mix of artists, scientists, and musicians from McGill University's Input Devices and Music Interaction Lab and Concordia University's Topological Media Lab. In the first phase of the project, a large, stretchy, light-sensitive square "blanket" was developed, which was shown at a public exhibition on October 31, 2006. At the show, visitors interacted with the interface by standing under it and lifting it up. The tension of the fabric was such that shapes and waves could be made, producing rich, multichannel sound. A detailed discussion of this installation can be found in the publication "Mapping and dimensionality of a cloth-based sound instrument."

In the second phase, a tapestry was designed and woven with conductive thread which was used to generate an electric field. At its public exhibition on July 18th, 2007, visitors could touch various parts of the tapestry to generate sound. The interplay of narrative image on the tapestry and the abstract sound associated with it encouraged discovery and experimentation.

 

The following overview is from the Topological Media Lab’s WYSIWYG page:

As an extension of the research work conducted with the Topological Media Lab (TML), Sha Xin Wei and his team are creating textile objects such as wall hangings, blankets, scarves, and jewelry that create sound as they are approached or manipulated. These sonic blankets can be used for improvised play. A phonetic pun on the old acronym for What You See is What You Get from the era of the Graphical User Interface, WYSIWYG (for wearable, sonic instrument, with gesture) draws on music technology, dance, children’s group games, textile arts, and fashion. Created first and foremost to sustain social play for people of all ages, WYSIWYG allows players to express themselves whether enjoying time in a park, dancing at a club, passing the time during a long car trip, or just playing at home.

The custom-designed digital instruments embedded in the cloth sample movement to transform ambient body movement and freehand gestures into new sounds or “voices” associated with a player or transmitted to other players in the vicinity.

When the project was launched in November 2006, the WYSIWYG team experimented with a prototype "blanket" able to sense how it is handled. During the presentation, eight people manipulated this photo-sensitive blanket to produce a spatial sonic landscape. In July 2007, dancers performed a semi-choreographed movement improvisation around a 20' suspended "tapestry" and a 6-foot "tablecloth" woven with conductive thread on a Jacquard loom by Joey Berzowska's XS Labs.

Dancer Marie Laurier with 20’ sounding cloth woven by Marguerite Bromley during Ouija workshop. © 2007 Topological Media Lab.

Custom electronics by Elliot Sinyor, McGill University. © 2007 Topological Media Lab.

David Gauthier with capacitive proximity sensor in the form of a bird woven from conductive fiber. © 2007 Topological Media Lab.

Principal investigators: Sha Xin Wei, Marcelo Wanderley
Physical materials advisor: Rodolphe Koehly
Mechatronics, feature extraction: David Gauthier
Mapping, feature extraction: Doug van Nort
Sound instruments: Freida Abtan, David Birnbaum, Elliot Sinyor
Assistant project technical coordinator: Harry Smoak

Ubicomp


Projecting live video modified by physically-modeled video texture synthesis, nuanced by the activity of passersby. The membrane was steel mesh, allowing people to see each other through the projected image.

[wpsgallery]

Touch


Palpation — the laying of a hand on the body to read its state of health — is perhaps the oldest of medical practices. When a physician lays her hand on her patient, however, she is not only reading or diagnosing the patient, she is saying to the patient: “You are my responsibility. I take you into my care.” This touch ethically entangles the physician and the patient.

Speech too is an ethical medium — words spoken can warm three winters or chill three summers, the Chinese say. Under western law, some words can be fighting words, and those who wield language with malice can be charged as if they had hit the victim with their hand.

[wpsgallery]

So ethics comes back to touch.

The choreography in Act 1 is inspired by thinking of two dancers in a chamber as being transformed from one hermaphrodite body into two. The chamber, viewed from above presents an alchemical vessel within which the hermaphrodite, compound body twitches and coils in a fluid medium until it splits into two independent bodies. The energy and momentum of their movement swirls the visual media between the bodies: negative space is itself pregnant with ethical charge, visualized as textures and particles in the gaps between the bodies, rippling in the wake of the dancers’ gestures.

This epochal fission is also the birth of desire, of sexual love, as Aristophanes famously described in Plato’s Symposium, and marks the transition between Act 1, an intimate epoch, and Act 2, our epoch, in which we find ourselves as isolate bodies in a void, seeking one another via the much sparser tissues of language and sign.

Act 2 is shot outdoors. The dancer who emerges shows traces of energetic, now erotic, entanglement with her distant partner. She discovers a (male) dancer already in an open field. The textures and particles trailing behind her lead back to an implied third being, the dancer from Act 1 who remains hidden as the first dancer evolves through her sequence of more and more passionate, elaborated movement with the discovered dancer. We use the word passion in its ancient sense of a primordial force below the level of emotions. The first dancer is multiplied by temporal copies of herself, and plays contrapuntally with her own delayed selves as well as with the other dancers.

This second act closes with the fusion of the dancer with her multiples and the emergence of the hidden dancer as an authentic other.

TECHNIQUE / SOFTWARE

PEOPLE: Sha Xin Wei, Soo-yeon Cho, Desh Fernando
+ Topological Media Lab
TOUCH 2, a performance: Soo-yeon Cho, Kiani de Valle
Set: Topological Media Lab
Remedios Terrarium

TGarden


TGarden is an investigation of how people make sense of and navigate in rich and dynamically evolving media spaces. Given the rise of ubiquitous computing and realtime media synthesis, we’re anticipating the need for coherent yet supple ways for designers to create such complex interactive media spaces and for people to inhabit them.

In a TGarden space, visitors wearing instrumented clothing create and modulate video and sound based on their gestures and movement. In effect, visitors write video and sound through their movement.

[wpsgallery]

For 2001-2002, we concentrated on using wireless sensors on the body to track gesture. We built a state evolution system that responds continuously to sensor statistics and synthesizes and marshals media in realtime.

In TGarden spaces, we use a combination of costumes outfitted with sensors, video tracking, realtime sound and video processing, and gestural pattern tracking.

Research concerns include the design of continuously varying narrative spaces, how people improvise meaningful gesture, and factors of tangibility and coherence such as latency and temporal (musical) texture and rhythm. Our goal is to come up with principles of design that should be useful for creating and inhabiting responsive media spaces. This research thread parallels a series of international productions in Europe and the United States.

Links:

SIGGRAPH2000 – New Orleans

http://sponge.org

http://www.f0.am/tgarden

TGarden [1]

TGarden is a responsive environment, inspired by calligraphy and scrying. In TGarden, players' gestures are transformed into generative computer graphics and digital soundscapes, leaving marks and traces in much the same way as a calligrapher would with brushes and ink. When visitors approach the TGarden, they choose from a range of costumes designed to encourage particular kinds of movement: light and voluminous for space-filling, fast movements; tight and restrictive for small, fine gestures; heavy and transparent for slow, meditative actions. In intimate dressing chambers, in addition to the costumes, the players are equipped with accelerometers (sensors able to detect changes in the speed and tilt of movement), an optical device for tracking the players' position and direction in the space, as well as a small wearable transmitter that communicates with the software systems "back-stage."

Once players enter the space, they are left alone to explore the connections between their bodies and the environment. A swiping motion could send an organic-looking digital shadow smearing across the floor; walking across the room could sound like swimming with a swarm of invisible but musical creatures. The sonic and visual media are layered in textures and meanings, allowing for various styles and interpretations. Even though simple interactions are easily learned, it takes time to get acquainted with the environment's own nature. As an apprentice calligrapher must learn to find a balance between the flow of ink, the pressure of the brush, and the speed of his gesture, a player in TGarden slowly learns to write, scratch, and dig through the media space, to be able to play it as an instrument…
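As a purely illustrative sketch of the calligraphic mapping (not the TGarden software; the formulas are invented), accelerometer tilt and gesture speed can modulate the width and darkness of the trace left at a player's tracked position, the way brush pressure and speed shape an ink stroke.

[code language="python"]
import math

def brush_params(accel):
    """accel: (ax, ay, az) in g. Returns (stroke_width, ink_density)."""
    ax, ay, az = accel
    tilt = math.atan2(math.hypot(ax, ay), az)       # lean of the body
    speed = math.hypot(ax, ay)                      # rough gesture energy
    width = 2.0 + 20.0 * tilt / (math.pi / 2)       # more lean, wider mark
    density = max(0.1, 1.0 - 0.5 * speed)           # fast strokes run dry
    return width, density

print(brush_params((0.3, 0.1, 0.95)))   # gentle lean, slow: thin, dark mark
[/code]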

Together with Sponge, we designed and developed several installations over a two-year period between 2000 and 2001, testing them with audiences across Europe and North America.

[1] Information taken from: http://fo.am/tgarden/