Meteor Shower

Meteor Shower was initially built as a simple responsive environment; in its next incarnation it will incorporate state-aware behaviour, and further explore ideas of nature/artifice by building narrative structures involving “lunar characters.”

As a deployable installation, Meteor Shower holds potential as an environment for architectural installations, play spaces, and performance events – it is being designed with such flexibility in mind.


TECHNIQUE [SOFTWARE]

PEOPLE

Sha Xin Wei – concept & meta-physics
Jean-Sébastien Rousseau – video and particle programming
Timothy Sutton – sound design and programming
Emmanuel Thivierge – state evolution and video feature extraction
Louis-Andre Fortin – visual design and programming
Freida Abtan – sound and systems design advisor

Memory, Place, Identity

The development of the Memory, Place and Identity experiments involved two axes of exploration: a substantive one, concerned with place, memory and identity, especially in relation to the body, movement and things; and a methodological one, concerned with how to go about doing phenomenological experiments. Two things can be noted about the proposed phenomenological experiments. First, they would focus on enabling precise descriptions of experience from a first-person point of view, tracking the dynamics of individual experience rather than quantifying over populations according to variables already specified by the experimenter. Second, they would aim to arrive at the conceptual framework proper to the experience generated in the experiment, rather than constructing an experiment to fit an already given conceptual framework – or at least they would keep this arrival open.


To prepare for these explorations, Sha and Morris held seminars in the fall of 2009. Participants read designated texts, taking notes or writing up small observations, and posted them to a blog. Most of the texts focused on the substantive axis of exploration.

TECHNIQUE [SOFTWARE]

The first experiments used Max/MSP/Jitter. For the actual experiments, TML researchers built prosthetic sensory organs that converted light into pressure. Zohar Kfir and Patricia Duquette constructed a glove with a photocell mounted inside a straw aligned with the index finger. Incident light above a tunable threshold was mapped, via an Arduino board, to a vibration motor.
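
The glove’s light-to-touch mapping can be reduced to a small pure function. This is an illustrative sketch, not the TML code; the 10-bit reading range and linear scaling above the threshold are assumptions:

```python
def light_to_vibration(reading, threshold=512, max_reading=1023):
    """Map a 10-bit photocell reading to a motor duty cycle in [0, 1].

    Readings at or below the tunable threshold produce no vibration;
    above it, intensity scales linearly with how far the incident
    light exceeds the threshold (a hypothetical mapping choice).
    """
    if reading <= threshold:
        return 0.0
    return (reading - threshold) / (max_reading - threshold)

if __name__ == "__main__":
    # dim light -> no vibration; full saturation -> full duty cycle
    for r in (100, 512, 768, 1023):
        print(r, round(light_to_vibration(r), 2))
```

On an Arduino, the same function would sit between `analogRead` on the photocell pin and `analogWrite` on the motor pin.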

Hubbub


Hubbub is one application of TML research treating speech as a computational substance for architectural construction, complementary to its role as a medium of communication.

Success will be measured by the extent to which strangers who revisit a Hubbub space begin to interact with one another socially in ways they otherwise would not. Hubbub is a part of a larger cycle called URBAN EARS, which explores how cities conduct conversations via the architecture of physical and computational matter.

Hubbub installations may be built into a bench, a bus stop, a bar, a cafe, a school courtyard, a plaza, or a park. As you walk by a Hubbub installation, the words you speak will dance in projection across the surfaces according to the energy and prosody of your voice: loud speech, for example, produces bold text, whispers faint text. We’ll capitalize on recognition errors to give a playful character to the space.

HYBRID ARCHITECTURE AND HABITATION OF URBAN SPACE

In this street-scale research thread, we investigate how people build, destroy, modify and inhabit city environments using embedded computational systems. The first part of this study is social and historical, employing methods of field observations as well as insights from phenomenological and anthropological studies. We intend to combine this work with insights of colleagues from the domains of urban design and architecture to design computer-mediated, responsive environmental systems for urban space.

The HUBBUB research series presents a foray into this domain of urban responsive architecture. As you walk through a Hubbub space, your speech is picked up by microphones, partially recognized and converted to text. The associated text is projected onto the walls, furniture and other surfaces around you as animated glyphs whose dancing motion reflects the energy and prosody of your speech. Hubbub is an investigation of how accidental and non-accidental conversations can take place in public spaces, by means of speech that appears as glyphs projected on public surfaces. The installation takes its meaning from the social space in which it is embedded, so its “function” depends on the site we select.

Some of the technical issues concern realtime, speaker-independent, training-free speech recognition; realtime extraction of features from speech data; and multi-variate continuous deformation and animation of glyphs in open-air public display systems, such as projection or LED displays. We will investigate how embedding such responsive media – the Hubbub speech-painting technology as well as TGarden technologies – into the urban environment can modestly support rich, playful forms of sociality.

TECHNIQUE [SOFTWARE]

2003 architecture. We use a custom speech recognizer that can recognize continuous speech, independent of speaker. This speech recognition application uses the Windows SAPI engine, which allows us to word-spot for a restricted vocabulary and avoid training; this way anyone in a language group can speak freely without first preparing the software system. We have developed a new portable animation system called Commotion, which supports kinetic text animation in generic OpenGL and Objective-C/GNUstep open-source environments. In parallel we use Max/MSP to perform feature extraction on the speech and use the features to send animation hints to Commotion.
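
The last step of that pipeline – word-spotted text plus speech features turned into animation hints – can be sketched as follows. This is a hypothetical reduction, not the actual Commotion or SAPI interface: the vocabulary, the use of RMS energy as a stand-in for loudness, and the `glyph_hint` dictionary shape are all assumptions:

```python
import math

# Restricted word-spotting vocabulary (illustrative, not the real grammar).
VOCABULARY = {"hello", "city", "rain"}

def rms_energy(samples):
    """Root-mean-square energy of one audio frame, a crude loudness proxy."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def glyph_hint(word, frame):
    """Return an animation hint for a spotted word, or None if not spotted.

    Loud speech maps to bold text and larger motion, echoing the
    energy/prosody mapping the installation describes.
    """
    if word.lower() not in VOCABULARY:
        return None  # words outside the restricted vocabulary are dropped
    energy = rms_energy(frame)
    return {
        "text": word,
        "weight": "bold" if energy > 0.5 else "light",
        "jitter": min(1.0, energy),  # louder speech dances more
    }
```

In the real system the equivalent of `glyph_hint` would be computed in Max/MSP and sent to Commotion as a message rather than a Python dictionary.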

PEOPLE

Vincent Fiano, Commotion animation system.
Stephen Ingram, Word-spotting, grammar-driven speech recognition system.
Graham Coleman, MSP speech feature extraction and Max animation choreography.

Frankenstein’s Ghosts

Frankenstein’s Ghosts is a SSHRC-funded research-creation project (2007-2010): a deconstruction, analysis and exploration of Mary Shelley’s Frankenstein pursuing substantive themes emerging from the novel, such as: what are the boundaries of the human? To what extent do we create ourselves? What is our responsibility towards what we create? What is our responsibility towards the “Other”? What ethical challenges do our present technological advances present? What is monstrous? And what does it mean to be human? The project amalgamated the eminent Blue Rider Canadian chamber music ensemble, director/choreographer Michael Montanaro, media researchers and artists from Dr. Sha Xin Wei’s Topological Media Lab, dancers, and scholars from religious studies and literary studies into a new sort of ensemble that experimented with new modes of performance practice. Over four years, the media artists, musicians and dancers developed fresh modes of movement and performance that fused what had before been largely independent practices.

We are using 19th century lighting techniques and tricks to create shadow images. Real-time video and sound portray shifting realities, memory and other possible truths. Musically, structured improvisation gives shape to concepts. Movement expresses the need for relationship. Words draw us back and forth from conscious to subconscious.


COLLABORATORS

Michael Montanaro, choreographer/director
Sha Xin Wei, topological media
Ann Sowcroft, writer
Jerome DelaPierre, real-time video
Navid Navab, Timothy Sutton, real-time sound
the Blue Rider Music Ensemble
Leal Stellick & Milan Gervais, Emmanuele Calve, Ashley, dancers

REFERENCES

http://www.frankensteinsghosts.com/

eSea Shanghai 2008

ESEA is an irregular, reef-like wall, 12m long and 2.5m high, made of more than 2000 individually cut sheets of cardboard. It multiplexes the rhythms of the sun with the ephemeral rhythms of pedestrians, and manifests the result as a pattern of varying LED lights at night. ESEA was presented in Shanghai’s Century Plaza at the E-Arts Festival, October 17-22, 2008.

The wall’s paper substrate holds the inflatable cells in a variety of holes. The substrate is CNC-cut corrugated cardboard, which allows for different hole patterns. Not all holes have cells in them; those that do not can be used for viewing, peeping, communication and so on. The overall effect of the varied holes resembles the structure of coral.


TECHNIQUE / SOFTWARE

The architectural design was done in Montreal, Hong Kong and Australia.

The custom electronics were designed by Sha Xin Wei (TML) and Vincent Leclerc and built by ESKI. The CNC (computer-numerically-controlled) manufacturing was done in Shanghai. The media choreography logic was designed and written by Sha, Tim Sutton and JS Rousseau. Supported by the Topological Media Lab.

Photocells mounted inside long tubes atop the wall measured the sun’s slow transit across the sky, and ultrasound sensors embedded in the wall measured nearby people’s movement. The logic was coded in Max and mapped to custom LED control electronics by ESKI.
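
The “multiplexing” of the two rhythms – the slow solar signal from the photocells and the fast pedestrian signal from the ultrasound sensors – can be sketched as a single blending function per LED cell. The weighting below is an illustrative assumption, not the installation’s actual Max logic:

```python
def led_brightness(sun_level, presence, solar_weight=0.6):
    """Blend the day's slow solar arc with momentary pedestrian activity.

    sun_level and presence are both normalized to [0, 1]. The result is
    an LED brightness in [0, 1]: dark at noon (the LEDs matter at night),
    and at night a base glow boosted by nearby movement. The 60/40 split
    between the two rhythms is a hypothetical choice.
    """
    night = 1.0 - sun_level                 # brighter as the sun sets
    activity = (1 - solar_weight) * presence
    return min(1.0, night * (solar_weight + activity))
```

At midday (`sun_level=1.0`) the cell stays dark regardless of pedestrians; at night an empty plaza gives a steady base glow that peaks when someone passes.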

PEOPLE

Peter Haskell, Patrick Harrop, Joshua Bolchover, Architectural design
Sha Xin Wei, Interaction design, Max programming
Vincent Leclerc (ESKI), LED electronics
Tim Sutton, JS Rousseau (TML), Max programming
Dedale studio, and Shanghai eSea, fabrication and assembly

Cosmicomics

Based on previous work with Meteor Shower, Cosmicomics presents a fantastical sky animated by a fusion of lunar dreams inspired by Italo Calvino’s eponymous book, and by the quantum inflationary cosmology proposed by Andrei Linde. A large ceiling-mounted display (three plasma displays or a projected screen) opens a window into a fable of a cosmos, filled with liquid light and sound that dance to movement, epoch, and the alchemical condition of the Moon.

Cosmicomics was presented at Elektra 9-13 May 2007, Montreal.


TECHNIQUE / SOFTWARE

1 tracking camera
3 tiled plasma screens
Video processing computer
Sound processing computer

Camera-based tracking, motion-feature extraction, media choreography state engine, realtime sound processing, realtime video processing.
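
The media choreography state engine in that pipeline can be sketched as a small state machine driven by the motion features extracted from the tracking camera. The epoch names, thresholds, and transition table below are illustrative assumptions, not the Cosmicomics implementation:

```python
class ChoreographyEngine:
    """Toy media-choreography state engine: per-frame motion energy
    drives transitions between named 'epochs' of the animated sky."""

    # (current state, event) -> next state
    TRANSITIONS = {
        ("calm", "active"): "stirring",
        ("stirring", "active"): "storm",
        ("stirring", "still"): "calm",
        ("storm", "still"): "stirring",
    }

    def __init__(self):
        self.state = "calm"

    def update(self, motion_energy, lo=0.2, hi=0.7):
        """Advance the engine on one frame's normalized motion energy."""
        if motion_energy > hi:
            event = "active"
        elif motion_energy < lo:
            event = "still"
        else:
            return self.state  # mid-range energy holds the current epoch
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

The hysteresis (two thresholds plus a neutral band) keeps the environment from flickering between epochs on noisy camera features; in the installation the current epoch would condition the realtime sound and video processing.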

PEOPLE

Sha Xin Wei – Director, Art Concept
Harry Smoak – Director of production, creative advisor
Jean-Sébastien Rousseau – Video design and Max/Jitter OpenGL programming, Models and special effects video
Timothy Sutton – Sound design and Max/MSP programming

Emmanuel Thivierge – State engine programming, Camera feature extraction
Josée-Anne Drolet – Project Coordinator, Models and special effects video

Olfa Driss – Research, Models and special effects video
Michael Fortin – Graphics programming, OpenGL and optimization

Nikos Chandolias

M.Sc in Electrical & Computer Engineering, currently enrolled in M.A., Special Individualized Program (INDI).

During his studies in Electrical and Computer Engineering, he developed strong skills and knowledge in programming and designing software systems. His experience as a volunteer in various European student organizations made him aware of cultural diversity and the wealth of different perspectives in research and learning. His participation in several collaborative projects, as well as in many student and artistic groups, cultivated a truly collaborative perspective and an understanding of common interest. His former research experience is in the fields of natural language processing and semantics. He is currently looking to expand his knowledge in the fields of interactive media art and to develop installations at an international, multicultural level.

Doug Van Nort

Banting Post-doctoral Fellow, Topological Media Lab 2013-2015

Doug Van Nort is an experimental musician and researcher whose work is dedicated to the creation of immersive and visceral sonic experiences, and to personal and collective creative expression through composition, free improvisation and generally electro-acoustic means of production. His instruments are custom-built systems that draw on concepts ranging from spectral analysis/synthesis to artificial life and machine listening algorithms, and his source materials include any and all sounds discovered through attentive listening to the world.

Dr. Van Nort’s 2010 Ph.D. from McGill University concerned modular and adaptive control of sound processing. He worked in Prof. Marcelo Wanderley’s Input Devices and Music Interaction Laboratory (IDMIL), a lab affiliated with McGill University’s Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT).

Van Nort’s work, presented internationally, has recently spanned telematic music, laptop ensemble compositions driven by evolutionary “human algorithms”, improvisations in various acoustic/electronic settings, multi-channel electroacoustic pieces, sonic installations and various idiosyncratic algorithms related to machine improvisation and interactive sound sculpting. Van Nort often performs with his custom GREIS software, designed for on-the-fly spectral and textural sound transformations. He is a member of the trio Triple Point with Pauline Oliveros and Jonas Braasch, where his focus lies in improvised transformation of the sounds arising from his acoustic partners. This group also collaborates through research and teaching, and in this context Van Nort has been actively designing and creating an intelligent system (named FILTER) for improvisation, currently as research associate in music at Rensselaer Polytechnic Institute. A discussion of Triple Point, GREIS and this intelligent-systems work, of which he was primary author, received the best paper award at the 2010 International Computer Music Conference.

Recordings of Van Nort’s music can be found on Deep Listening, Pogus and Zeromoon, among other experimental music labels, and his writing has recently appeared in Organised Sound and the Leonardo Music Journal. He has performed at venues ranging from the [SAT] and Casa del Popolo in Montreal, Casa da Musica in Porto, Betong in Oslo, The Red Room in Baltimore, The Guelph Jazz Festival, Roulette, Harvestworks, the Miller Theatre, Issue Project Room and the Stone in NYC, at Town Hall (NYC) on intonarumori as part of the Performa futurist biennial, and at EMPAC in Troy, NY. His compositional work has been featured in contexts as disparate as the International Conference on Auditory Display (ICAD) and New Interfaces for Musical Expression (NIME), the Flea theatre’s “music with a view” series, and the NYC electroacoustic music festival at Elebash Hall.

Collaboration has been an important thread of recent work, including with Oliveros, Braasch, Francisco López, Al Margolis (aka If, Bwana), Stuart Dempster, Chris Chafe, Kathy Kennedy, Ben Miller, Anne Bourne, Judy Dunaway, the Composers Inside Electronics and many others. Van Nort holds a Ph.D. in Music Technology from McGill University, an M.F.A. in Electronic Arts from Rensselaer Polytechnic Institute, and an M.A. and B.A. in Pure Mathematics from the State University of New York (Potsdam), including studies in Electronic Composition at the Crane School of Music.

Affiliation
Research Associate
Electronic Arts and Architectural Acoustics
Rensselaer Polytechnic Institute


Julian Stein

Julian Stein is a composer and sound artist currently residing in Montréal, QC. His work often explores musical applications of the everyday, placing a large focus on intuition and present experience. Exploring both composed and real-time environments, his work has ranged from multichannel composition and theatre sound design to collaborative performance and kinetic sound installation. In particular, his work engages with methods of audio-visual synchronization, phonetics, animal communication, and the urban environment.

Julian is a co-creator of the Montreal Sound Map (http://www.montrealsoundmap.com), and has recently completed a BFA in Music (Electroacoustic Studies) at Concordia University. He is currently a researcher at matralab and the Topological Media Lab, both of which are part of the Hexagram Institute for Research-Creation at Concordia University.

Affiliation: gesture bending, ILYA

www.julianstein.net , www.montrealsoundmap.com

Harry Smoak

Harry Smoak is a Montréal-based American artist, a PhD candidate in the Fine Arts Special Individualized Program (supervisors: Chris Salter, Sha Xin Wei, and Erin Manning) at Concordia University, and a senior research associate at the Hexagram Institute for Research-Creation in Media Arts and Technologies. His current interests revolve around the development of experimental sensor-based interactive media environments. Harry is a graduate of the Georgia Institute of Technology, having received his Master’s in Human-Computer Interaction in 2004. He was the founding Research Coordinator for the Topological Media Lab at Hexagram-Concordia.

www.harrysmoak.com