IL Y A

camera mount

Another solution could be to find the tiniest camera with the highest quality optics and run its wire down the middle of the screen.  At this point I’d rather minimize the physical labor and design time, and focus on the content: good video of “past people” and transformations with powerful affect.

screen construction

On 2013-06-26, at 7:24 PM, Evan Montpellier <evan.montpellier@gmail.com> wrote:

Hi Adam,

Hope all’s well with you. I’m beginning to do some work at the Topological Media Lab on Il Y A, the video membrane installation. I’m mostly focusing on the coding and media sections of the project, but Il Y A also has a strong physical component (the mediating structure/membrane that both captures live video and displays the video feeds), and that part needs someone to look after it as well.

We were talking about the future of Il Y A at the TML yesterday, and you came up in the conversation as a person who’s good at building material objects. Would you be interested in being involved in Il Y A in this capacity?

As I understand it, there are two possible areas of work here:

1. The current Il Y A structure needs repair – the glue holding the panels together has broken down, and so the surfaces of the panels are being held on with clamps.

2. A new, lighter structure needs to be devised. Whereas the current model uses short-throw projectors, Xin Wei has mentioned the possibility (finances permitting) of basing the new design around large flat panel LED monitors. Other options would certainly be worth discussing.

I’m CCing Xin Wei on this email – Xin Wei, please feel free to correct or supplement any of the info I’ve provided here. Adam, please let us know what you think!

il y a states in 1-simplices ok!

Until we recompile a new state engine according to the Sha-Fortin computation of energy on all nodes — a straightforward but non-trivial bit of real programming — for June it’s ok to linearize, i.e. to traverse only the edges of the simplicial complex:

Reflect -> (glass ->) Discovery
Discovery -> Dry -> Storm -> Dry -> Storm
Storm -> Desert
Desert (-> glass) -> Reflect
So we can use the very nice Navid-Morgan-Tyr function-based state-clocks for this.
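To make the linearization concrete, here is a minimal Python sketch of traversing that edge sequence with a function-based clock gating each hop. It is only an illustration of the idea: the state names come from the list above, but the clock function, dwell time, and callback are hypothetical stand-ins for the actual Max-side state-clocks.

# Minimal sketch (not the Max patch): traverse the linearized il y a
# state sequence, advancing along an edge only when a function-based
# state-clock fires.  wall_clock is a hypothetical stand-in; an
# activity-clock would integrate sensed activity instead of wall time.
import time

SEQUENCE = ["Reflect", "Discovery", "Dry", "Storm", "Dry", "Storm", "Desert"]

def wall_clock(duration_s):
    # Returns a clock function that fires once duration_s seconds elapse.
    start = time.time()
    return lambda: time.time() - start >= duration_s

def run_linearized(dwell_s=5.0, steps=20, on_enter=print):
    index, clock = 0, wall_clock(dwell_s)
    on_enter(SEQUENCE[index])
    while steps > 0:
        if clock():                                  # state-clock says: take the next edge
            index = (index + 1) % len(SEQUENCE)      # Desert wraps back to Reflect
            clock = wall_clock(dwell_s)
            on_enter(SEQUENCE[index])
            steps -= 1
        time.sleep(0.05)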

recap of composition for il y a as states. maybe Sunday?

Dear Tyr, Navid, and Julian in particular :) I’ll be back Saturday 8 pm-ish. Will have to fly out early Monday June 18.

A. Want to meet up for drinks Saturday night @ SATO food lab? (I don’t know when — I can look for you there, maybe with J-A.)
B. Can we try IL Y A instruments together, say Sunday afternoon, some 4 hours — sometime between 2 and 9 PM?

Now that we’ve spiraled back again (to 2010, but better I hope! :), here’s a recap of my composition for il y a in state language:

(0) Reflect

initial mirror becomes glass soon after a person approaches

(1) Discovery

glass: soon you discover that your action creates smoke (Navier) out of the image from the opposite side
(apply optical flow from ego video to other video)

 

Cycle in convex combination (blend) of 3 states: Discovery + Dry + Storm

 

(2) Dry

make effects, e.g. particles, out of only the intersection of ego and other.
Could be intersection of presence or intersection of motion — your choice depends on what reads most clearly

We could try visual particles confined to intersection of presence
whereas sound is driven by motion (velocity not position!) of particles

 

(3) Storm
Effects (e.g. particles) spread throughout the union of bodies (presence)

 

Transition to finale Desert state (Use activity-clock)
=> particles no longer confined to the union of bodies, but spread throughout the entire field of view

 

(4) Desert
Desiccation: all sound connotes great heat; visuals: burning to ashes. Then (using the activity-clock) turn on gravity so the ashes fall to the bottom and clear the screen.

 

=> return to (0) Glass state

 

Hoping this time finally, we’ll make Viconian progress!

 

on this Sunday ?

Take care,
Xin Wei

Pythonized Eerm => significant scientific work

Dear Michael and Tyr,

 

MF’s Pythonizing Eerm is electrifying because he’s opened the door to the second scientific breakthrough of the lab in 10 years: the full exploration of my topological dynamics approach to potential event :)

Now, my recommendations:
The best next move, before diving further into Yon’s code, would be for you two — MF and Tyr — to read a fragment of some standard text on algebraic topology to understand the mechanics of simplicial complexes, i.e. to learn concretely the elementary representation of a topological manifold in terms of simplicial complexes, or delta-complexes.

See the definitions of a related construction  on pp 102-107 of Hatcher’s Algebraic Topology.
For one (or another) to believe that you understand this, it is necessary to do the exercises, e.g. pp 131-132.

 

That way you can debug both the code, and its conceptual design.

 

A perhaps non-trivial challenge: How can we extend CNMAT’s Max (RBF?) graphic UI to provide some UI to manipulate the state representation?  We may wish to NOT go down that rabbit hole — it has already cost 1.5 TMLabber-years since Atlanta with no scientifically significant product.

 

Speaking of scientifically significant product –
The goal is to make different state engines work with SOUND or VIDEO TEXTURES that evolve PALPABLY in response to HUMAN ACTIVITY in a common PLACE.  Please let’s not dive too deep into the seductive embraces of implementation languages or of algebraic topology.  Let’s keep it grounded in the FELT EXPERIENCE of our system in the TML.  Thus, it is essential that Tyr get the instruments working, then hook those instrument sets DIRECTLY to the state engine –

 

Tyr, your visual instruments should simply read the 5-vector from the state engine, and simply ramp up and down according to their independently fundamental (“pure”) state behaviors associated with the 5 fundamental metaphorical states:   Glass —  Discovery — Storm — Dry — (Finale = Glass)  (See meta-structure http://membranes.posterous.com/ilya-states-compositional-structure  )
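As a rough illustration of that contract, here is a small Python sketch in which an instrument reads the 5-vector of state weights and ramps its intensity toward its own component. The class name, ramp constant, and update loop are assumptions for the sketch, not anything in the patches.

# Sketch of an instrument reading the 5-vector from the state engine and
# ramping its level toward its own component.  Names and the smoothing
# constant are illustrative only.
STATES = ["Glass", "Discovery", "Storm", "Dry", "Finale"]

class Instrument:
    def __init__(self, state_name, ramp=0.1):
        self.index = STATES.index(state_name)
        self.ramp = ramp      # fraction of the gap closed per update
        self.level = 0.0      # current intensity of this instrument

    def update(self, weights):
        # weights: 5-vector of state weights, each component in [0, 1]
        target = weights[self.index]
        self.level += self.ramp * (target - self.level)
        return self.level

# Example: the Storm instrument fades in as the state engine drifts toward Storm.
storm = Instrument("Storm")
for w in ([1, 0, 0, 0, 0], [0.5, 0, 0.5, 0, 0], [0, 0, 1, 0, 0]):
    storm.update(w)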

 

Xin Wei

 

On 2011-09-15, at 1:37 PM, Michael Fortin wrote:

Hi Xin Wei,

I’m going to re-read the Ozone paper to compare with what I see within the code.  Sorry if my understanding has been broken as I navigate the system — it’s getting better!

 

First, yesterday after our discussion I reviewed the code.  Rather than keep an edge-list and a triangle-list, and so on for other dimensions, Eerm keeps track of multiple nodes and multiple simplices (which may share nodes).  Each simplex consists of one or more nodes (which determines the dimension of the space formed by the simplex).  Simplicial complexes (combinations of simplices) arise from shared nodes.  Yesterday I tracked down the code that allows a node to jump between simplices, which leads to the following questions (if the questions make sense it means I finally have a better grasp of the problem):

 

Let S1 be a simplex with nodes N1, N2, and N3.
Let S2 be a simplex with nodes N2, N3, and N4 (should be right order).

 

The nodes N1, N2, N3, and N4 form an orthonormal basis within state-space.  The matrix with N1, N2, N3, and N4 as rows is the identity matrix.

 

Let Token T be at (0.3, 0.3, 0.4, 0.0), or near the middle of S1.

 

The initial problem is moving T to (0, 0.3, 0.4, 0.3), or near the middle of S2.

 

My current understanding says that another issue will crop up.  Let’s say we slingshot T to the edge of N2 and N3 such that T is at (0, 0.5, 0.5, 0).  The velocity of T may be (-0.1, 0, 0, 0) such that it is being pushed away from N1 (indefinitely far – my mistake yesterday was to think in terms of fixed points and springs, where a spring implemented using Hooke’s law will oscillate around a central point like a rubber band – I still need to check the arithmetic to see if this makes sense).

 

Once T hits the N2-N3 edge, should the velocity become (0.0, 0.0, 0.0, 0.1), so that the current acceleration continues regardless of the edge (and the token changes dimension)?

 

Furthermore, if we wish to pull the token T towards N4, since N4 is on a nearby simplex should its attractive force such as (0.0, 0.0, 0.0, 0.1) be reflected as a force that repels it from N1 (-0.1, 0.0, 0.0, 0.0)?  Could this be generalized to arbitrary-dimensional space?
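For what it is worth, here is a tiny numpy sketch of one possible reading of that jump rule, in the exact configuration above (S1 on N1, N2, N3; S2 on N2, N3, N4). It is not Eerm’s actual code, just a way to check the arithmetic of transferring the outgoing component onto N4.

# Toy sketch, NOT Eerm's rule: the token lives in barycentric coordinates
# over the global node basis; when its N1 component reaches 0 on the face
# shared with S2, the push away from N1 is re-expressed as a push toward N4.
import numpy as np

NODES = ["N1", "N2", "N3", "N4"]           # global orthonormal basis
S1 = [0, 1, 2]                             # simplex on N1, N2, N3
S2 = [1, 2, 3]                             # simplex on N2, N3, N4

def jump_across_shared_face(pos, vel, leaving=0, entering=3):
    # If the token sits on the shared face (pos[leaving] == 0) and its
    # velocity still pushes it out of S1, transfer that component onto the
    # entering node of S2, keeping the shared-face components unchanged.
    pos, vel = pos.copy(), vel.copy()
    if np.isclose(pos[leaving], 0.0) and vel[leaving] < 0.0:
        vel[entering] += -vel[leaving]     # push away from N1 becomes push toward N4
        vel[leaving] = 0.0
    return pos, vel

# Michael's example: token slingshot onto the N2-N3 edge, still pushed away from N1.
pos = np.array([0.0, 0.5, 0.5, 0.0])
vel = np.array([-0.1, 0.0, 0.0, 0.0])
print(jump_across_shared_face(pos, vel))   # velocity becomes (0, 0, 0, 0.1)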

 

Hopefully the questions make sense.  If any don’t, then tell me and I’ll jump back and re-adjust my understanding.

 

Hopefully, I don’t distract you too much from preparing your presentation tomorrow.

 

Cheers,
~Michael();

2011/9/15 Sha Xin Wei <shaxinwei@gmail.com>

Dear MF, Morgan, Tyr, Navid,

 

At TML yesterday, MF, M, Tyr and I agreed that

 

Monday Sep 19, 12 noon – 2

 

 

 

is Ozone Phoenix reset day in the lab.

Nav, does this work for you?

 

MF’s Python binding exposes Eerm’s guts — so MF, Tyr, M, and I will aggressively push the state engine in Il y a as well as some other apps, e.g. lighting and sound compositions, over the coming 4 weeks.

 

Cheers,
Xin Wei

 

__________________________________________________________________________________
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  skype: shaxinwei • +1-

__________________________________________________________________________________

 

 

Begin forwarded message:
From: Michael Fortin <michael.fortin@gmail.com>
Date: September 13, 2011 3:25:25 PM EDT
To: “Prof. Sha Xin Wei” <shaxinwei@gmail.com>, Morgan Sutherland <morgan@msutherl.net>, tyr <electioneering@gmail.com>
Subject: Eerm Update
Hi,

 

Here’s a first version of Eerm where the default SpringForce/Sensor force accumulation can be replaced by Python code.

 

Known bugs:  If Python code with a syntax error is loaded, reloading Python code will not work as expected.

 

What the code looks like:

 

# Import Python math functionality
import math;

 

# Spring force function, as it appears within C.
def SpringForce(coupling, distance):
    return -1.0 * coupling * distance

 

# SensorForce is called to compute the sensor force for each component of the force vector.  Index is the current component of the force vector.
# e is a value between 0 and 1 determining how much of an effect the input from the sensors should have
# p is the position vector
# v is the velocity vector
# Compared to C, some damping constants are missing, I’ll be exposing them soon.
def SensorForce(token, index):
    e = token.sensorCoupling(index)
    p = token.position()
    v = token.velocity()
    return SpringForce(math.exp(-1.0 * e), p[index] - 1.0) - v[index]

 

# Not used, but works:
# Returns a Python (it is Python internally) list of all the nodes within the simplex.
token.simplex.nodes

 

# Unfortunately, nodes have no properties.  I might remove access to the internal
# Simplex object and just give access to the nodes through Token.

 

Cheers,
~Michael();

IL Y A Mascot

IL Y A, the cat version.

On 2011-01-03, at 2:26 PM, laura emelianoff wrote:
meow meow<IMG_2366.jpg>

Il y a: Rick Prelinger Lives of San Francisco

> “You are the soundtrack,” Prelinger told the capacity audience at the Herbst Theater, and they responded to his mostly silent archival films by calling out locations, questions, comments, and jokes.
>
> They saw footage of a 1941 Market Street parade of allies—floats representing Malta, Russia, France, Britain—and Kezar Stadium hosting a ferocious mock battle/demonstration of Army cannon, troops, and tanks in 1942, and huge naval ships parked at the waterfront piers in 1945.
>
> Sailors cruised the Barbary Coast in 1914 and amateurs piloted gliders from the vast beach dunes of the Sunset district in 1918 (looking just like the hang-gliders of 90 years later). There was a sky tram at the Cliff House and four sets of streetcar tracks busy on Market Street. Impromptu hula dancers drew a crowd on Market in one decade, and flower stands adorned it in another. Artists filled the Montgomery Building.
>
> All of Treasure Island could be seen burning, and no one present could remember when it was or what caused it or what happened afterward.
>
> “Fictional narratives push out actual narratives,” Prelinger said. We remember stories, and what isn’t in them, we forget. It takes large archives like his, diligently collected and made public, to free us from selective memory. Constantly reunderstanding the past goes best when grounded in the true strangeness of what used to go on.
>
> –Stewart Brand

Il y a: an idea for Dance UC Berkeley, April 15-23

For possible IL Y A during the Berkeley Dance Productions student festival, April 15-23 2011: Lisa and Peggy @ Dance Department UC Berkeley may work with us to locate archival footage of Isadora Duncan or Martha Graham in the very spaces in which Il y a may be exhibited — the old UC dance studio in a converted church, a large airy space lit with amber stained glass windows.

WEDNESDAY! Brown Bag Lunch Presentation HARRY SMOAK

For those of you who will be around on Wednesday and are interested, I’d be delighted to see you there.  My presentation will be very different from the talk I gave recently at the Ephemeral City event.  Especially relevant to the memory+place+architecture+psychology group, I hope.

Cheers,

 

Harry

 

 

 

Begin forwarded message:
From: “Momoko Allard” <hexinfo@alcor.concordia.ca>
Date: November 1, 2010 6:16:37 PM EDT
Subject: Hexagram Research-Creation REMINDER: THIS WEDNESDAY! Brown Bag Lunch Presentation HARRY SMOAK

Research-Creation Brown Bag Lunch Presentation by Harry Smoak

Wednesday, November 3, 12:30-2:00pm, EV 11-705

Join us at Hexagram for a lunchtime presentation by Harry Smoak of his ongoing research-creation work in the PhD Special Individualized Program at Concordia.

Harry Smoak’s presentation will visit the radical empirical work conducted in the first half of the 20th century by Adelbert Ames, Jr. during the course of his research into visual and social perception. Following a brief historical detour, Smoak will introduce some of Ames’s ideas as well as his unique physical demonstrations by talking through some of the questions and issues that have arisen from Smoak’s own research and creative practice involving an exploration of lighting and color. In particular, Smoak will discuss his ongoing work “Your Participation Not Required”, a series of interactive multimedia installations exploring the senses of architectural space through the computer-controlled modulation of light and sound.

Everyone is welcome to attend. Light refreshments will be provided, and you are welcome to bring your own lunch.

Please spread the word, and feel free to contact me for more information.

Best wishes,

Momoko

Momoko Allard

Administrative Coordinator

Hexagram-Concordia

EV 11-455

Tel: 514 848-2424, ext.5939

Fax: 514 848-4965

ILYA sw/hw dev mtg Monday 1 hour during 4:00 – 6:00 PM ET ?

Hi Xin Wei,

My schedule is flexible all week. It would be best for me to chat tomorrow++ as I would like to solve the aforementioned state jumping issues.

 

Morgan
On Sun, Sep 5, 2010 at 5:13 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

great !!

i just got back into town…

 

Monday Scott and I will drive down to Onomy 11 AM – 2 PM to

 

(1) choose projection surface material;
(2) plan local (SF) budget needs and calendar;

 

in addition, I hope to

 

(3) check out state engine with Morgan
(3.2) Plug it into JS’s code (with JS and Morgan?)

 

(4) check out the Mini with Navid remote.

 

I need to limit this to a single window — can we do this software/gear check-in say 4:00 – 5:00 PM ET so I can deal with a ton of stuff back in SF?  By then Scott and I should be able to show you the test screen material.

 

Also then we can plan  the Berkeley “studio” with Lisa Wymore in Berkeley Sep 14,  and Stanford CCRMA Concert Sep 16.

 

Cheers,
Xin Wei
On 2010-09-04, at 6:00 PM, Morgan Sutherland wrote:

Made significant progress on the state jumping. Dealing with hysteresis-like rapid jumping symptoms currently. Once that’s done, we’ll be ready to tune.

2010/9/2 Morgan Sutherland <skiptracer@gmail.com>

Ok – working on implementing state jumping and settings for the entropic clocks. Aiming to be done Sunday.

(I’m coding where I can in the cracks between activities while on vacation. Tonight I’m leaving the island to beat the storm and I’ll be in Cape Cod, then Boston by Friday evening.)

 

 

On Thu, Sep 2, 2010 at 10:20 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Hi Morgan, JS, Navid, Harry, Scott, and local friends of IL Y A,

To respond systematically to Morgan’s query, let me update re. IL Y A on the eve of going away for the next 3.5 days.  I will not have (much) internet access.

 

DEVELOPMENT

 

The racks are powered up in Onomy.   The video gear is working.

 

JS and I are both traveling today.  JS will be back in Montreal Friday.  XW back in SF Sunday.  XW will be back in Montreal Sep 7 (9 pm) through dawn Sep 10.

 

Can we programmers work together in TML Wed and / or Thursday evening ?

 

Code distribution:

 

Mini
Nav’s MSP code
Mini was pried off its mount, and its case damaged while in Customs / Air Canada custody.   We do not know its condition. JS took photos.   
The audio gear’s lights are on.    I haven’t connected the speakers bc I decided it was better use of my  time to pay attention to * instead of using up the speaker wire for a temporary set up in Onomy.  (I read several times Nav’s excellent instructions and looked over the audio rack.)

 

 

Xin Wei’s old laptop
ilya.v.analysis
Morgan’s state code goes here.

 

PowerMac tower
video acquisition — camera input
remaining ilya.v.*

 

State of State Engine:
JS showed Morgan ilya.v.analysis.  State should get data from ilya.v.analysis using the mxr multicast of   ilya.state    param group.  (But will likely run in the same instance of Max )
Morgan’s written out several parallel clocks, turned on InverseSquare, tweaked jumping  across simplex boundaries (very important for getting ILYA to move globally across all the states).  We now have a high-level interface, the state engine, to nudge the entire installation’s behavior across all the states.

 

XW + Morgan have talked thru the state behavior from GLASS to GLASS.

 

State of Visuals:

 

JS has fixed the GLASS & DISCOVERY states (I’d like the system to start with GLASS, never be in a mirror condition — the spectator should never see herself except maybe as a silhouette).  With Nav’s sound composition, these work fine for me now.

 

To Fix for STORM – DESERT: v.main / instruments / PFlow (particles )  to achieve fire+ash+gravity for transition to Desert.

 

 

To implement STORM: use intersect motion.ab to get an alpha/mask for rich pre-fab video – check with Jhave.  We should not use synthetic graphics because I wish to have fx be very different in quality from the other states.  So I want to use video from sampled textures & movement.  I just need to have the utility to swap in various prefab videos until we see something interesting. :)
So this needs a 5th video stream.  We’ll see if the tower + Jitter can handle that.  In any case the programming for STORM should be merely a matter of creating a mask that grows nicely from the motion.ab intersect to the whole-bodies silhouette to the entire video field.
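For the mask growth, a throwaway numpy sketch of the idea (the real thing would live in the Jitter patches; the input names and the single growth parameter are assumptions):

# Sketch of a STORM alpha mask that grows from the intersection of the two
# motion fields, to the union of the body silhouettes, to the whole video
# field, under one "growth" parameter in [0, 1].  Names are illustrative.
import numpy as np

def storm_mask(motion_a, motion_b, body_a, body_b, growth):
    # All inputs are float arrays in [0, 1] of the same shape.
    intersect = np.minimum(motion_a, motion_b)        # motion.a intersect motion.b
    union     = np.maximum(body_a, body_b)            # silhouette of A union B
    full      = np.ones_like(body_a)                  # entire field of view
    if growth <= 0.5:                                 # intersection -> union
        t = growth / 0.5
        return (1.0 - t) * intersect + t * union
    t = (growth - 0.5) / 0.5                          # union -> whole frame
    return (1.0 - t) * union + t * full

# The resulting mask would be used as the alpha channel over the pre-fab video.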

 

 

Revised IL Y A Schedule

 

 

SEP 2-10, IL Y A in Onomy

 

Scott and XW will choose some gray surface (in the range 30 – 70% gray).  I’d like a surface dark enough that the image’s luminosity resembles the background of the ambient room.  So more toward 50% than pure white, likely.  (Scott, I’ll let you know if I get the time to go to Flax for the card stock samples today before I have to leave for Gabriele.)

 

Dale and Scott said they may be able to expose our computers (assigned IP’s) so that we can get to them via ARD.   This is tres important.

 

Sep 10-14
Move to Berkeley ?

 

Sep 11
Navid arrives Air Canada AC5259, 1:07 PM SFO (then take BART to 24th & Mission)

 

 

BERKELEY “OPEN STUDIO”?:  Sep 14, 4:00-6:00 PM
Lisa Wymore and I plan a VIP (potential faculty allies and artist practitioners) EVENT Sep 14 @ Zellerbach.   Workshop means people milling around trying out the ILYA in studio format, guts hanging out … talking about how this is built with members of IL Y A team.   I’ve inquired re Skype access for Montreal members.

 

This is strategically very important, but  can we do this ?  It would mean a day of moving and setting up.  Berkeley is 1.5 hours via truck from Stanford / Onomy.

 

 

STANFORD CCRMA Set-up Sep 15, Concert Sep 16
24 hour access
This may require extra help (Maria, Vangelis, Gabriele, Josee-Anne, if you are available some time these days Sep 14-17 would be a nice bonding moment :)

 

See letter of invitation from Bruno Ruviaro in other email.

 

 

* = excavating JS jitter code, arranging venues, starting the grants, …

 

________________________________________________________________________________
Sha Xin Wei, Ph.D.
Visiting Scholar • French and Italian Department • Stanford University
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  http://flavors.me/shaxinwei
+1-650-815-9962 •

1-514-817-3505 (m)  • skype: shaxinwei • calendar

________________________________________________________________________________

 

 

ILYA Monday Onomy

my  cellphone: 514-4326633

 

On Sat, Sep 4, 2010 at 1:23 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Thanks for the work and updates, everyone.  I should be able to come down to Onomy for an hour or so Monday after errands at Stanford.  Maybe then I can check out the Mini.  (Navid, do you have a phone at which I can reach you then?)  I can also check that you guys can get into our Macs via ARD – thx to Dale and Scott.

 

Xin Wei
____________________________________________________________
+1-650-815-9962 (m)  • skype: shaxinwei
____________________________________________________________

 

On 2010-09-02, at 3:11 PM, Navid Navab <navid.nav@gmail.com> wrote:

 

Can we programmers work together in TML Wed and / or Thursday evening ?

 

I’m available except from 2-4:30pm on Wed

Mini was pried off its mount, and its case damaged while in Customs / Air Canada custody.   We do not know its condition

 

That’s too bad :(
Could we connect the Mac Mini to a screen and see if it works? This is important. If it is internally damaged then we need a plan B.
BERKELEY “OPEN STUDIO”?:  Sep 14, 4:00-6:00 PM
Lisa Wymore and I plan a VIP (potential faculty allies and artist practitioners) EVENT Sep 14 @ Zellerbach.
This is strategically very important, but  can we do this ?  It would mean a day of moving and setting up.  

 

Let’s do it.

 

 -Nav

ILYA Newsletter: Development; revised work on IL Y A schedule (TML, Sep 8 and 9 evening?); Berkeley open studio; Stanford CCRMA site

By the way, just to keep US Customs from seeming even more evil than they already were, there was no evidence that the Mac Mini was pried off of anything…

 

The bottom side of a Mac Mini has that black plastic disk cover, which is only attached to the fairly heavy aluminum chassis by three little keyhole mounts, which in turn are held to the plastic disk with 6 little plastic welds.  That black plastic piece was used to secure the Mini in the travel case.  I suspect that fairly typical travel shock forces broke those welds, and then the Mini was free to roam around inside the travel case.  The Mini has been repaired, but JS and I re-mounted it in an inverted orientation, so there was lots of surface area for VHB and/or industrial-strength Velcro (the sticky version of this is *very* good for this sort of thing).

 

The Mini was also too heavy for the component it was on top of in the rack, and the faceplate of that component was twisting/bending.  We moved it somewhere more solid.

 

Cheers,

 

Scott

 

 

 

From: Navid Navab [mailto:navid.nav@gmail.com]
Sent: Thursday, September 02, 2010 3:12 PM
To: Sha Xin Wei; Morgan Sutherland; Scott L. Minneman; Jean-Sébastien Rousseau; David Jhave Johnston;post@membranes.posterous.com; Harry Smoak gmail; Lisa Wymore; Maria Cordell; Lina Dib; Vangelis L; Josée-Anne Drolet
Subject: Re: ILYA Newsletter: Development; revised work on IL Y A schedule (TML, Sep 8 and 9 evening?); Berkeley open studio; Stanford CCRMA site

 

 

Can we programmers work together in TML Wed and / or Thursday evening ?

 

I’m available except from 2-4:30pm on Wed

 

Mini was pried off its mount, and its case damaged while in Customs / Air Canada custody.   We do not know its condition

 

That’s too bad :(

Could we connect the Mac Mini to a screen and see if it works? This is important. If it is internally damaged then we need a plan B.

 

BERKELEY “OPEN STUDIO”?:  Sep 14, 4:00-6:00 PM

Lisa Wymore and I plan a VIP (potential faculty allies and artist practitioners) EVENT Sep 14 @ Zellerbach.

This is strategically very important, but  can we do this ?  It would mean a day of moving and setting up.  

 

Let’s do it.

 

 -Nav

State Clocks Diagram

a membrane should draw attention through itself, not to itself

Yeah, as Scott said (and Harry), that’s been done time and again,

for example, in 2009 they did that at the Montreal TV show when I went backstage — the singers came out through a water screen with an umbrella even :)

 

And I saw the Canadian rep to the Venice Biennale a few years back do a video projection onto a waterdrop wall.   It was faint and the grains were annoying.   At least it sort of made sense bc the scene included a woman in a river (lake, sea) with a bucket, but the clanky clunky machinery weakened the impact.

 

And in CS circles, Blast Theory’s Desert Rain.
In those cases there was an intrinsic reason driven by the event or content.
Otherwise I’d like to get away from the obsession over projections onto planar 2D surfaces as much as possible, as far as the lab’s strategy is concerned.

 

For IL Y A one experiential goal is to avoid fixing attention on the surface but instead to pull the attention of both visitors through the membrane to the other.

 

One way would be perhaps to correlate some of the sound as if it comes from the other body, but I don’t know if we’ll have that kind of spatial control given our cheap speaker array.  (Navid?)

 

Hey Montrealers, When shall we do our media review?   

 

Wish us luck!
Xin Wei

 

 

PS. Here’s some nice eye candy (reminds me of Paul Kaiser’s Pedestrian project, earlier)

 

 

On 2010-07-27, at 1:31 PM, Scott L. Minneman wrote:
Fog screens are/were pretty common in the tradeshow world a few years back.

 

They were all the rage for a while.  The floor gets wet.  The novelty wears off pretty darn quick.

 

slm

 

From: Michael Fortin [mailto:michael.fortin@gmail.com]
Sent: Tuesday, July 27, 2010 1:23 PM
To: Sha Xin Wei
Cc: Jean-Sébastien Rousseau; Morgan Sutherland; Harry Smoak; Scott L. Minneman; Lina Dib
Subject: Re: donation needed: Touch Projection Glass

 

Time to bring back to life one of my crazier ideas: projected wall of paper bits / waterfall of water.  What if we built a screen made of falling particles?  People could walk through it; the falling material would simply need to be able to hold an image.

 

Interesting materials – hole-punch discards, water, chalk, etc.  Even water with impurities to make it whiter (milk-screen?) upon which we could project.  Still, walking through it is an issue.

 

Then there are transparent plastic strips embedded with impurities to capture an image.  Strips are dangling, and people could walk through them – but the image would not distort as expected – unless it was very precisely scripted how a person could go through the screen – which defeats the purpose.

 

Ideally the screen would be millions of floating dust particles that have a tendency to stay in position, even after someone swipes at them.  I’ll throw out wind and magnetism as potential ways.  The other issue is the ability for the particles to hold an image.

 

A thick gas?  Maybe replicate the aurora borealis?

 

What about a gas that is about the same weight as air but appears to be solid?

 

The more I think about this – the more dangerous/toxic the screen becomes…  (as well as physically unrealistic)

 

Cheers
~Michael();

 

On Tue, Jul 27, 2010 at 16:04, Sha Xin Wei <shaxinwei@gmail.com> wrote:
Thanks very much, JS. Yes, last time we costed that stuff, it would’ve cost too much in the dimensions we wanted.
(But $10,000 for “100 inch” Hologlas seems not bad compared to our present costs.)
Scott said the Hologlas is quite faint.    I convinced myself that for this version of IL Y A we (well, JS and Michael) want to control the visual fx and not have to worry about confusing the spectator with “mis-registration” and the visual clutter of (untreated) background.

However, I would indeed like to get our hands on that stuff for the lab, for further development.  I imagine re-thinking synthesizing fx that work only as overlays.  In order to read, it’s quite likely that the graphics would have to be pretty crisp – so we’d be restricted to ugly synthetic graphics.  But under ideal lighting conditions, we could work with textures, too, perhaps.  Anyway, IF we get a donation, there’s lots we could do with it, I’m sure.

But if one of you can make friends with a rep from the company, I’d be happy to follow up and see if we can get an in-kind donation.

For example, we could ask for sponsorship for a Chinese-designed “fashion show” for SIGGRAPH 2011 Vancouver that I’m beginning to plan.  Jeremy, our potential partner, and I have been talking about it for about a month.  I’ll have more info after the SIGGRAPH pre-meeting Aug 3.

One technical problem for the SIGGRAPH application (among many) is that the performer may need to (appear to) walk through the image.

Cheers,
Xin Wei

On 2010-07-27, at 12:45 PM, Jean-Sébastien Rousseau <jsrousseau@gmail.com> wrote:

> Probably the same as the holographic projection surfaces from VIP… Here are some files …
> <HoloPro Intro.pdf>
> <HoloPro 2008 USD$ List Pricing.pdf>
> <ViP Interactive foil to window.pdf>
> <ViP 2008 USD$ List Pricing.pdf>
>
>
>
> Le 2010-07-27 à 15:39, Sha Xin Wei a écrit :
>
>> I want it for the lab.
>> How much does it cost for 3m x 2m ?
>>
>> Xin Wei
>>
>> On 2010-07-27, at 12:33 PM, Morgan Sutherland <skiptracer@gmail.com> wrote:
>>
>>> I’m sure y’all did your research, but I came across this photo on the NUI forum:
>>>
>>> http://nuigroup.com/?ACT=28&fid=74&aid=3777_KLVHIes1xEaD3GMkXMaB
>>>
>>> using the touch foil that Jerome and JS and I have used…

 

________________________________________________________________________________
Sha Xin Wei, Ph.D.
Visiting Scholar • French and Italian Department • Stanford University
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  topologicalmedialab.net/xinwei
+1-650-815-9962 •

1-514-817-3505 (m)

• skype: shaxinwei • calendar
________________________________________________________________________________

 

Lina – jhave, Re: ILYA STORM state: algae bloom

nice-

i like the screen. it may look enveloping and ghostly layered with the old footage.
although the idea of filming the pseudo-archival sequence on site seems a viable option for less visual clutter, i’ve still put together a small sequence of archival footage to try with js’s effects – hopefully we can get together tomorrow and record the layers of video (archive – effects – screen) and send a copy to see how it looks.
take care everyone
l

Quoting david jhave johnston <jhave2@gmail.com>:

hey folks,

hope you are all entropically (in the thermal and temporal tropics) well,

my temporal madness continues

i can’t tell if i am standing still or if the world is blurred

too many tasks, not enuff linearity

tht aside, i made a few tests (download them,

they are just a bit above web-resolution mp4s,

hopefully in range of being feasible for jitter to handle):

candle <http://glia.ca/conu/ilya/mp4/candle.mp4>

hands <http://glia.ca/conu/ilya/mp4/hands.mp4>

wet rock <http://glia.ca/conu/ilya/mp4/wetRock.mp4>

screen <http://glia.ca/conu/ilya/mp4/screen.mp4>

nothing here (except for candle and tiny segments of hands) too satisfying

but was playing around

trying to create black grnd around figures tht might mask easily

or join together mirrored

let me know if it seeds any thought

and i’ll continue to let il-y-a hover in the multitask bin

jhav

On Tue, Jun 29, 2010 at 12:02 PM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

(thx for the Frankenstein link. i can’t see your striking moving image

poetry on my ipad which is the only working machine i have at the moment!

apple still blockades flash on iphone & ipad !)

re. ILYA line — for example, morphing from  sparks igniting from “rubbing”

A and B –> algae blooms?  they could desiccate

- xw

Option B re ghost footage for Yerba Buena: on-site footage

Talking things over with Lina last week, I convinced myself that the best thing to do for visual quality in the Yerba Buena might be to take Toni Dove’s suggestion seriously and try to shoot “ghost” footage in the actual space.

 

The most elegant way to do this would be to use the very same camera as mounted on the actual frame, and shoot on site as installed.   This means adding a recording feature.  Should be simple, since this would be based on simply saving some files.   No need to separate shoot and edit on-site.

 

In addition to this “on-site” footage,  we’ll still look for historical footage as localization material to be blended in, perhaps in GLASS and transition from GLASS to DISCOVERY.
When people are farther away from the installation, they should be able to see ghost – ghost activity.  This is where historical street scenes (Martinez and Market St) etc. would be great to establish the remote past temper.  As a visitor walks up to the screen, present video from her side will replace the canned historical footage for that side.

 

We’ll need to define some sort of graceful transition (partly based on the ACTIVITY-CLOCK) from historical footage to on-site footage, which should show the other side of the membrane, but with ghosts.  Ideally these ghosts would be dressed in period costume.  It’s okay by me in this SF / Yerba Buena version to have anachronistic conflict between background and figure because (1) it’s a modern gallery, not a historical space, and (2) there will be opportunities for other historical footage and sites once we have this “lens” built to tour.

 

Comments?

 

- Xin Wei

ILYA shooting stuff

Hi Jhave,

In case JS has not yet replied,

BG color should ultimately be black because we should project light objects on a black background.
But at this stage, it can be any complementary color not in the foreground object, of course.
JS should supersede this as necessary.

 

Again the goal is to generate a lot of different sorts of vitalist imagery — whether modeled on fire or plants or whatever you can find.  The idea would be an object that is not a part of an animal, but that can be given animate behavior that is parameterized by movement features extracted from live video.

 

Thanks a lot,
Xin Wei
On 2010-06-23, at 7:00 AM, david jhave johnston wrote:

Hi All,

I’ll b around FG in bb most of Thurs. (arriving around 11:30)
After talking with XW, wondered about shooting upward through a lit piece of glass,
film of water on glass, drop liquid onto it so it spreads outward. So the contact points would erupt outward.

JS: is green screen optimum or blak backgrnd?

Jhave

On Wed, Jun 23, 2010 at 5:52 AM, Sha Xin Wei <shaxinwei@gmail.com> wrote:

Hi ,

 

Jhave said (kindly) that he may be able to shoot some stuff moving along paths (either compact objects along a one-dimensional sort of straight path — could be a falling object), or a serpentine thing extending along a one-dimensional trajectory…  One thing to do may be to parasite the set-up in the FG bb: drop (or lower??) brightly side-lit, light-colored objects against a black wall.  Any ideas on what?  Or drag something along black marley?  We may need fishing line — or better, matte black thread?

 

Lina, if you’re up for it, maybe we can meet Thursday and shoot with Jhave for an hour-ish (if I remember Jhave may be around Th too)

 

Ideally we need something that has the quality of life — whether induced by an animating  hand or bc the material itself is living tissue — especially plant vines, shot somehow.

 

Cheers!
Xin Wei

shooting video material for STORM

Great, I plan to be mostly in the BB from 10 – 6, except when JS calls me up to TML during the afternoon.

 

Jhave’s intuition is good:  the final video material does not have to be a linear track.

 

We just have to have some way to controllably “grow” vitalist patterns out from the intersections of the moving parts of bodies A and B, initially re-forming vaguely along the silhouettes of A and B, but then filling all space.   The pattern can be like sparks of fire that turn into a riot of tendrils and leaves.

 

Each spark is a particle in a particle system which is created at the barycenter of a connected region whose density exceeds a threshold.

 

 Its initial velocity is set by the optical flow from the more active side A, or B (or it could be a sum of the optical flows from bodies A and B.)

 

The riot of tendrils and leaves can at first grow out along the contours of the union of the entire silhouettes of bodies A and B (whole body – not just moving parts of the body, i.e. from background subtraction*).   Then they’ll keep on growing till they fill the entire frame of the video.

 

It does not have to be vegetal pattern.  It should suggest living matter, and be visibly very contrasting with whatever visuals can be generated algorithmically (synthetically) in Jitter or GL.

 

We could use particle systems to animate pieces of texture-mapped video.
The number of particles and the envelopes on their velocities and lifetimes could be exposed parameters, or maybe some aggregate param like “density” and “aggressivity” or “stickiness” (how close they stick to the target bodies).    And also 1 or 2 parameters controlling the visual rendering.

 

The dynamics should not be driven by randomness, but by optical flow, PLUS perhaps an added wind velocity whose speed and direction are exposed parameters.  (The wind field itself may be slightly randomized for richness.)
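A small numpy sketch of that update rule, just to pin down the intent; the array shapes, the wind parameter, and the jitter constant are assumptions, not anything already in the patches:

# Particle velocities driven by optical flow sampled at each particle,
# plus an exposed wind vector that is only slightly randomized.
import numpy as np

def step_particles(positions, flow, wind=(0.0, 0.0), wind_jitter=0.05, dt=1.0):
    # positions: (N, 2) pixel coords; flow: (H, W, 2) optical-flow field.
    h, w = flow.shape[:2]
    # Sample the flow at each particle's (clamped) pixel position.
    xs = np.clip(positions[:, 0].astype(int), 0, w - 1)
    ys = np.clip(positions[:, 1].astype(int), 0, h - 1)
    flow_at_p = flow[ys, xs]                              # (N, 2)
    # Wind: exposed mean velocity plus a small random perturbation for richness.
    wind_field = np.asarray(wind) + wind_jitter * np.random.randn(*flow_at_p.shape)
    return positions + dt * (flow_at_p + wind_field)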

 

Today is just to collect material so Michael F and JS have some material to work with that’s as rich as possible.  I expect that there’ll be a need (opportunity) to generate more in the coming weeks, so we’ll be as efficient in time as possible.

 

Thank you :)
Xin Wei

 

*  I assume that we can simultaneously access movement-based as well as background subtraction-based “bodies” from cv.a, cv.b.

 

On 2010-06-23, at 7:12 PM, lina dib wrote:
hi y’all
i can come by around 2 if that works. i can stick around until the evening and we can test some footage. i will see if i can find the falling rocks i shot. i’ll also try to bring a rough sequence of archival sf footage. if we have time we could see how it looks with the layers of the different states and effects. im a bit worried it might look busy ?? a voir..
xw- we can meet to chat about ed any time during the day. i will bring my notes.
a demain,
lina

 


particle-based chemistry for STORM

Hi Michael,

Yes, good - how about if we try as a goal to build an infrastructure for “chemical” reactions in IL Y A?

Agreeing with Michael that for IL Y A particles would be better than repositioning (a la jit.repos), how about the following:

 

CHEMISTRY
Three species of particles would be enough to start with.  I’m imagining one for each person A, B.  And perhaps a third which is “born” as a function of the mass density, momentum density, and energy density.  (In other words, even if the particles are modelled efficiently as a particle-matrix, I would like to have access to them as jit.matrices in some sub-sampled resolution: an operator that maps any set of particles into 2D jit.matrices that represent space-averaged mass density, momentum density, and kinetic energy density.)  Each particle should have its own mass, of course.
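Here is a rough numpy sketch of such an operator, scattering particles onto sub-sampled 2D fields of mass density, momentum density, and kinetic-energy density; the grid size, extent, and names are illustrative only (the real version would write into jit.matrices):

# Project particles (each with its own mass) onto sub-sampled 2D fields.
import numpy as np

def particles_to_fields(positions, velocities, masses, grid=(32, 32), extent=(1.0, 1.0)):
    # positions, velocities: (N, 2) arrays; masses: (N,) array.
    h, w = grid
    ix = np.clip((positions[:, 0] / extent[0] * w).astype(int), 0, w - 1)
    iy = np.clip((positions[:, 1] / extent[1] * h).astype(int), 0, h - 1)
    mass     = np.zeros(grid)                 # space-averaged mass density
    momentum = np.zeros(grid + (2,))          # momentum density (2 planes)
    kinetic  = np.zeros(grid)                 # kinetic-energy density
    np.add.at(mass, (iy, ix), masses)
    np.add.at(momentum, (iy, ix), masses[:, None] * velocities)
    np.add.at(kinetic, (iy, ix), 0.5 * masses * (velocities ** 2).sum(axis=1))
    return mass, momentum, kinetic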

 

This would provide an engine on top of which we can animate GL geometry and apply textures from Jhave’s video.

 

I think JS has already built some of this.

 

 

GENERATING PARTICLES
The particles could be generated in at least two ways:

 

(1) from a set of pre-fab geometries, eg a set of control points for a set of (disconnected) pre-fab NURBs
(Anything but planes!!!!  See bubble external that Jerome found for VVV)

 

(2) Birth each particle in a region of sufficient density — I really want this so there is some dynamical method to create particles based on  highlights in the video input!!   Assign velocity according to average optical flow in that region.

 

Assume 1-plane matrix (mass-density) values between 0 and 1.  Pick a threshold L, 0 < L < 1.
Find regions inside isobars >= L, i.e. whose cell mass-density exceeds L.  Create a particle at the weighted center of the region, with mass equal to the integral of the mass-density. (* See definition below.)  This particle should be assigned a velocity which is the average optical flow for that region.  (Use an integral analogous to the weighted center definition below.)

 

I do not know if your method of making each pixel into a particle would work — my algorithm is independent of the resolution of the video matrix.  The parameters would include a threshold on a cell norm, and some scalings on velocity.

 

 

RENDERING
We can render the particles all sorts of ways.

 

These three species of particles can be rendered as anything —
for example,
as we make the transition from DISCOVERY to STORM,
we can begin to ramp the threshold DOWN from 1 toward a lower value, so particles begin to form out of the general field of each of A’s and B’s motion (cv.a, cv.b MOTION).

 

Then we can turn on the CHEMICAL reaction:
where the product of the cv.a and cv.b motions exceeds a different threshold M, we create a third species of particles C, which
are first rendered as FIRE SPARKS and then, according to a second parameter, begin to be replaced by little blankets of Jhave’s videos texture-mapped onto them (or onto sets of these particles C).
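As a sketch (hypothetical names, not project code), the birth step of that reaction is just a thresholded product of the two motion fields:

// Wherever motionA * motionB exceeds M, a species-C particle is born; its
// "energy" could later drive the blend from fire sparks toward video patches.
#include <vector>

struct SparkC { int x, y; float energy; };

std::vector<SparkC> react(const std::vector<float>& motionA,
                          const std::vector<float>& motionB,
                          int w, int h, float M)
{
    std::vector<SparkC> born;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float p = motionA[y * w + x] * motionB[y * w + x]; // cv.a * cv.b
            if (p > M) born.push_back({x, y, p});              // birth of species C
        }
    return born;
}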

 

DEFINITION: WEIGHTED CENTER
Weighted center = (1/M) * integral( mass-density[X] * X dx )
where X is the position vector, M = integral( mass-density[X] dx ), and dx is the unit cell area element (in the discrete case dx = 1*1 = 1 :)

 

Compliments!
Xin Wei

 

 

June 23, cameras, speakers, physical design

      Hi Scott,

Good.  5′ at that distance seems adequate to me, especially because the visitor will approach in a cone, so let’s go.  I’m cc’ing Jane so that as soon as Harry says go she can order it on my credit card.

 

thx
Xin Wei
On 2010-06-23, at 10:38 AM, Scott L. Minneman wrote:
Xin Wei (and Harry (and All)),

 

I think we should try using the Philips SPZ6500 webcam.  It is wide-angle, and should capture over 5’ of image width at 4’ from the lens (if I’m interpreting the manufacturer’s data correctly – nobody is good about specifying the optics for these puppies).  Getting a couple on order for California would be good, plus whatever you need for Montreal.  I’m not sure what they’re going to offer me in terms of ways to mount them securely, but I’ll figure something out.  I have a couple of plans for how they’ll work into the screen structure, neither of which I’m really thrilled with (but that’s not surprising… it’s just a hard thing to do in an attractive way).

 

I’ll be around on Friday, and then I’ll be away until next Friday, so we may overlap for a few hours if we work at it.  Let’s talk.  I have speakers now, and I think they’ll work nicely.  No sign of projectors yet.

 

I’m trying to get my shop working on the arches and screen frame while I’m away, but it’s hard to do with total confidence without projectors.  I may get him working on the arches and worry about the other parts when I return…the CNC tubing bend is the thing that might hold us up the most.  I think I’m meeting with him today to discuss schedule and options.

 

More soon, but I think those are the most pressing bits.

 

Cheers,

 

Scott

 

Scott Minneman, PhD
CEO/CTO – Onomy Labs, Inc.
415 505-7234 – cell

 

From: Sha Xin Wei [mailto:shaxinwei@gmail.com]
Sent: Wednesday, June 23, 2010 2:33 AM
To: Scott L. Minneman
Cc: Harry Smoak gmail; membranes@posterous.com
Subject: cameras, speakers, physical design

 

Hi Scott,

 

How’s it going out west?

 

Just a note to let you know where we’re at.  Harry talked us through the gear, and the physical design as we last got it from you.  Have you settled on the camera question yet?  Inquiring minds — namely the guys doing the visuals — want to know the exact lens and camera specs so they can mock up the angles, see what it does to actual people in position, and design their processing.  It’d be useful to see what a grown-up may actually look like from the height (27″?) and angle that you’re considering, from a camera & lens like the one we’ve been talking about (Logitech?).

 

I do want to capture enough width at the distance we’re talking about — ≥ 4 feet — to permit people to walk laterally across the camera.  Maybe we can accept a cone where at 4′ they fill some percentage of the view, say 30–40% of the width, and then at 5′–7′ distance the camera sees down to their knees(?) and they can walk laterally to give enough optical flow to provide interesting dynamics, yet their limbs will cover enough pixels (6–10% of the width??) to provide interesting areas from which to grow effects.
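A quick back-of-the-envelope check of that cone, with an assumed horizontal field of view (not the manufacturer’s number — an assumed ~65° roughly reproduces Scott’s “over 5′ at 4′” figure if it is right):

// Horizontal coverage = 2 * distance * tan(horizontal FOV / 2).
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.141592653589793;
    const double fovDeg = 65.0;                       // assumed horizontal FOV
    const double dists[] = {4.0, 5.0, 7.0};           // distances in feet
    for (double d : dists) {
        double width = 2.0 * d * std::tan(fovDeg * pi / 360.0);
        std::printf("at %.0f ft: ~%.1f ft of width\n", d, width);
    }
}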

 

I’ll be back on Friday — and head directly down to CCRMA.  But maybe we can chat on the weekend, or get together at Onomy asap next week (Monday?) to do a mock-up with whatever code I can bring back from JS and Navid.

 

Speaking of sound art, Harry will be in touch about maybe having you ship to TML a spare pair of the BA speakers you’re talking about, for testing here.    It may be cheaper, and much faster.  Sorry I don’t have the details — so I’ll defer to Harry, Navid.

 

When I get back I’ll also get in touch with Rick about getting some hi-res transfers.  We should know more about what we might want at the end of next week.

 

See you soon!
Xin Wei

speakers for IL Y A

Scott, in follow-up to our conversation of yesterday, Navid’s current design consists of 10 of these KEF HTS3001 speakers, plus 1 KEF Kube-1 subwoofer.

Product information:
http://www.kef.com/us/surroundsound/satellite/hts3001se
http://www.kef.com/gb/subwoofer/kube/kube-1

Manual with mounting options and templates for HTS3001:

 

Please let me know if you need any more information.

Harry

gestural sound (ilya granulator)

The ilya gestural granulator has been (re)born after hours of re-programming.

 

The underlying audio buffer is pre-composed, allowing for fast and rich spectral morphologies. The specific mappings to gesture and audio content are state-driven and interpolatable.
The following video shows me while testing one of the states (parameter-spaces).

 

password: ilya

 

-Nav

ILYA sound, film footage

This is the Market Street footage I was mentioning… pre quake/fire, no less. http://www.archive.org/details/TripDown1905

I have a note in to Rick for a chat about additional footage he might be
able to point us at (and how to get this at even better quality).

Cheers,

slm

> Hi Scott,
>
> Thanks for the offer of intro. I’d like to go with you asap, with Lina in
> the email loop since she’s the archive / site researcher en route to
> Montreal :)
> I sent an email to Rick but we’ve missed. We met only the first time I
> visited the Library months ago. So it’d be great to really talk with
> Rick about archival footage (and site) with you or your intro.
>
> Thx,
> Xin Wei
>
> On 2010-05-11, at 12:07 PM, minneman@onomy.com wrote:
>
>> That Martinez reel is very nice.
>>
>> Just to keep this to-do item on our plate(s)…
>>
>> Rick Prelinger does a recurring show of San Francisco found footage.
>> This
>> past year, there was a really sweet sequence shot from a streetcar going
>> down Market Street, with lots of nearby pedestrians and cars and such
>> moving through the frame. There was also some Playland footage, and
>> some
>> great home movies from a family collection. I think it’d be brilliant
>> stuff to use, and the Market Street shots would be an appropriate local
>> scene for both Yerba Buena and Grey Area exhibit installation
>> possibilities.
>>
>> Who should take the lead in organizing a visit to the Prelinger Archive?
>> I know Rick, and could arrange it, if desired.
>>
>> Cheers,
>>
>> Scott
>>
>>> Hi Navid,
>>>
>>> The sound material is evocative. I’m auditioning it in this large hall
>>> in
>>> Green library — interesting superposition, but it will NOT be the
>>> installation space ;)
>>>
>>>
>>>
>>> Here’s some interesting footage that Lina found:
>>> http://www.archive.org/details/Martinez1927
>>> We’re looking for closer shots of bodies, with good lateral movement.
>>> Outdoors is fine for now, since that gives us good light, more diverse
>>> choice of content, and a further bit of magic displacement.
>>>
>>> Cheers,
>>> Xin Wei
>>>
>>>
>>>
>>>
>>>
>>
>>
>
> ________________________________________________________________________________
> Sha Xin Wei, Ph.D.
> Visiting Scholar • French and Italian Department • Stanford University
> Canada Research Chair • Associate Professor • Design and Computation Arts
> • Concordia University
> Director, Topological Media Lab • topologicalmedialab.net/ •
> topologicalmedialab.net/xinwei
> +1-650-815-9962 • 1-514-817-3505 (m) • skype: shaxinwei • calendar
> ________________________________________________________________________________
>
>
>
>
>
>
>
>

ILYA media session Wednesday late afternoon? specs for design

I was looking around for projectors for my last project and had come across this one: an ultra short throw, just released by Epson.

http://www.epson.com/cgi-bin/Store/jsp/Product.do?BV_UseBVCookie=yes&sku=V11H…

And yes, indeed, the film footage will look best adjusted to the projector’s native resolution.

Quoting minneman@onomy.com:

> Xin Wei (and everybody else),
>
> I’ve been poking around, and I’m not seeing any 1080p projectors that are
> really appropriate for the ILYA physical configuration. There are
> starting to be affordable 1080p projectors that are bright enough, but
> their lenses are typically fairly traditional, and would require that the
> projector be 8 or so feet from the projection surface to get the sort of
> image size we’re talking about. Some of them have some lens shift we
> could take advantage of, but not as much as we’d want/need. We could also
> come in at an angle from above (or, possibly, the side), but if we use
> electronic keystone correction, the image is typically quite degraded from
> this processing. There’s not really any way to do any of this without
> unacceptable shadowing from the user. Wider-angle lenses and adapters are
> sometimes available, but they blow the budget.
>
> I think we need to be looking at the various short-throw projectors that
> have been introduced in recent years. These have the ability to throw an
> acceptably-large image from just 3-4 feet away. Shadowing will be
> minimized from people standing in an optimal viewing location (although
> there are still some issues, like a projector very close overhead, if we
> project from the “ceiling” configuration (we may need to angle in a little
> and apply minor keystone correction (yes, it’ll have some quality impacts
> here, too))). These projectors max out at 1280 x 800 for widescreen and
> 1024 x 768 for traditional aspect ratio. They’re bright, and fairly
> affordable. We’ll need to see about whether the lenses are suitable, or
> if we need to choose something else so we have C-mount lens options to
> work with/from.
>
> I’ll be reviewing these projector options with Xin Wei tomorrow, and will
> continue to poke around for other possibilities. Onomy owns one of the
> possible projectors, so we can take a look at the image and the
> configuration this class of image source would require. We had another
> project recently where we were up against similar issues, so I don’t think
> I’m missing any possible avenues (but pipe up if you think so).
>
> If we go this direction, it probably impacts other choices in the image
> pipeline. In particular, if we’re limited to 720p on the projection side,
> we might not want/need to capture and process images at 1080p. There are
> several good 720p webcams available, which are compact and the data comes
> in through USB ports, which might make the data acquisition and exchange
> (the data must be sent over to the other computer, right?) more tractable.
>
> In these viewing conditions and configuration, we’ll want to select a LCOS
> projector, to avoid the screen-door look of a DLP display (which does
> nothing more than underscore any resolution compromises we may have had to
> make along the way to a workable system).
>
> Ok…enough for now, but please muse about this and comment.
>
> Cheers,
>
> Scott
>
> —————————–
>
>> The next window for an ILYA work session will be Wednesday afternoon.
>> I’ll be working approximately 3:00 – 6:30 pm Montreal.
>> Can we have a media work session toward the end of this window? We can
>> look at the TML & Onomy workspaces in Skype video.
>>
>> Meanwhile, we’ll need final recommendations on machines to inform the
>> specs for design. I propose this gear set up:
>> 1 Mac Mini for sound (if Navid says sound demands a laptop, then it may
>> help to discuss with how a laptop would fit with Harry and then Scott)
>> 1 G5 from lab for acquiring video
>> 1 new Mac Pro tower good GPU/CPU, 4GB RAM, minimal disk
>>
>> JS, Harry may have recommended specs on cameras and related gear: A2D,
>> lenses, cables ?
>> Can hardware system diagrams be sketched to pin a concrete design for the
>> system architecture.?
>>
>> Scott’s researching components (including projectors as well) for Wed, and
>> will make some initial proposals so hope Harry and the guys with info re
>> gear requirements can be present Wed.
>>
>> Cheers,
>> Xin Wei
>>
>>
>>
>>
>
>
>
>

ILYA media session Wednesday late afternoon? specs for design

I was just reading this over, and noticed that I’d added a sentence about
webcam lenses to a projector paragraph, where it made no sense. My
apologies to anyone who tried to digest that particular point in my
earlier message. It’s fixed below:> Xin Wei (and everybody else),
>
> I’ve been poking around, and I’m not seeing any 1080p projectors that are
> really appropriate for the ILYA physical configuration. There are
> starting to be affordable 1080p projectors that are bright enough, but
> their lenses are typically fairly traditional, and would require that the
> projector be 8 or so feet from the projection surface to get the sort of
> image size we’re talking about. Some of them have some lens shift we
> could take advantage of, but not as much as we’d want/need. We could also
> come in at an angle from above (or, possibly, the side), but if we use
> electronic keystone correction, the image is typically quite degraded from
> this processing. There’s not really any way to do any of this without
> unacceptable shadowing from the user. Wider-angle lenses and adapters are
> sometimes available, but they blow the budget.
>
> I think we need to be looking at the various short-throw projectors that
> have been introduced in recent years. These have the ability to throw an
> acceptably-large image from just 3-4 feet away. Shadowing will be
> minimized from people standing in an optimal viewing location (although
> there are still some issues, like a projector very close overhead, if we
> project from the “ceiling” configuration (we may need to angle in a little
> and apply minor keystone correction (yes, it’ll have some quality impacts
> here, too))). These projectors max out at 1280 x 800 for widescreen and
> 1024 x 768 for traditional aspect ratio. They’re bright, and fairly
> affordable.

[rogue sentence was at the end of this previous paragraph ]

> I’ll be reviewing these projector options with Xin Wei tomorrow, and will
> continue to poke around for other possibilities. Onomy owns one of the
> possible projectors, so we can take a look at the image and the
> configuration this class of image source would require. We had another
> project recently where we were up against similar issues, so I don’t think
> I’m missing any possible avenues (but pipe up if you think so).
>
> If we go this direction, it probably impacts other choices in the image
> pipeline. In particular, if we’re limited to 720p on the projection side,
> we might not want/need to capture and process images at 1080p. There are
> several good 720p webcams available, which are compact and the data comes
> in through USB ports, which might make the data acquisition and exchange
> (the data must be sent over to the other computer, right?) more tractable.

We’ll need to see about whether the webcam lenses are suitable, or
if we need to choose something else so we have C-mount lens options to
work with/from. [that was the rogue sentence]

> In these viewing conditions and configuration, we’ll want to select a LCOS
> projector, to avoid the screen-door look of a DLP display (which does
> nothing more than underscore any resolution compromises we may have had to
> make along the way to a workable system).
>
> Ok…enough for now, but please muse about this and comment.
>
> Cheers,
>
> Scott
>
> —————————–
>
>> The next window for an ILYA work session will be Wednesday afternoon.
>> I’ll be working approximately 3:00 – 6:30 pm Montreal.
>> Can we have a media work session toward the end of this window? We can
>> look at the TML & Onomy workspaces in Skype video.
>>
>> Meanwhile, we’ll need final recommendations on machines to inform the
>> specs for design. I propose this gear set up:
>> 1 Mac Mini for sound (if Navid says sound demands a laptop, then it may
>> help to discuss with how a laptop would fit with Harry and then Scott)
>> 1 G5 from lab for acquiring video
>> 1 new Mac Pro tower good GPU/CPU, 4GB RAM, minimal disk
>>
>> JS, Harry may have recommended specs on cameras and related gear: A2D,
>> lenses, cables ?
>> Can hardware system diagrams be sketched to pin a concrete design for
>> the
>> system architecture.?
>>
>> Scott’s researching components (including projectors as well) for Wed,
>> and
>> will make some initial proposals so hope Harry and the guys with info
>> re
>> gear requirements can be present Wed.
>>
>> Cheers,
>> Xin Wei
>>
>>
>>
>>
>
>
>

Scott Minneman: 1080p projectors infeasible?

On 2010-05-11, at 1:34 PM, minneman@onomy.com wrote:

Xin Wei (and everybody else),

I’ve been poking around, and I’m not seeing any 1080p projectors that are
really appropriate for the ILYA physical configuration.  There are
starting to be affordable 1080p projectors that are bright enough, but
their lenses are typically fairly traditional, and would require that the
projector be 8 or so feet from the projection surface to get the sort of
image size we’re talking about.  Some of them have some lens shift we
could take advantage of, but not as much as we’d want/need.  We could also
come in at an angle from above (or, possibly, the side), but if we use
electronic keystone correction, the image is typically quite degraded from
this processing.  There’s not really any way to do any of this without
unacceptable shadowing from the user.  Wider-angle lenses and adapters are
sometimes available, but they blow the budget.

I think we need to be looking at the various short-throw projectors that
have been introduced in recent years.  These have the ability to throw an
acceptably-large image from just 3-4 feet away.  Shadowing will be
minimized from people standing in an optimal viewing location (although
there are still some issues, like a projector very close overhead, if we
project from the “ceiling” configuration (we may need to angle in a little
and apply minor keystone correction (yes, it’ll have some quality impacts
here, too))).  These projectors max out at 1280 x 800 for widescreen and
1024 x 768 for traditional aspect ratio.  They’re bright, and fairly
affordable.  We’ll need to see about whether the lenses are suitable, or
if we need to choose something else so we have C-mount lens options to
work with/from.

I’ll be reviewing these projector options with Xin Wei tomorrow, and will
continue to poke around for other possibilities.  Onomy owns one of the
possible projectors, so we can take a look at the image and the
configuration this class of image source would require.  We had another
project recently where we were up against similar issues, so I don’t think
I’m missing any possible avenues (but pipe up if you think so).

If we go this direction, it probably impacts other choices in the image
pipeline.  In particular, if we’re limited to 720p on the projection side,
we might not want/need to capture and process images at 1080p.  There are
several good 720p webcams available, which are compact and the data comes
in through USB ports, which might make the data acquisition and exchange
(the data must be sent over to the other computer, right?) more tractable.

In these viewing conditions and configuration, we’ll want to select a LCOS
projector, to avoid the screen-door look of a DLP display (which does
nothing more than underscore any resolution compromises we may have had to
make along the way to a workable system).

Ok…enough for now, but please muse about this and comment.

ILYA sound, film footage

Hi Scott,

Thanks for the offer of intro.  I’d like to go with you asap, with Lina in the email loop since she’s the archive / site researcher en route to Montreal :)
I sent an email to Rick but we’ve missed.   We met only the first time I visited the Library months ago.   So it’d be great to really talk with Rick about archival footage (and site) with you or your intro.

 

Thx,
Xin Wei
On 2010-05-11, at 12:07 PM, minneman@onomy.com wrote:

That Martinez reel is very nice.

Just to keep this to-do item on our plate(s)…

Rick Prelinger does a recurring show of San Francisco found footage.  This
past year, there was a really sweet sequence shot from a streetcar going
down Market Street, with lots of nearby pedestrians and cars and such
moving through the frame.  There was also some Playland footage, and some
great home movies from a family collection.  I think it’d be brilliant
stuff to use, and the Market Street shots would be an appropriate local
scene for both Yerba Buena and Grey Area exhibit installation
possibilities.

Who should take the lead in organizing a visit to the Prelinger Archive?
I know Rick, and could arrange it, if desired.

Cheers,

Scott

Hi Navid,

The sound material is evocative.  I’m auditioning it in this large hall in

Green library — interesting superposition, but it will NOT be the

installation space ;)

Here’s some interesting footage that Lina found:

http://www.archive.org/details/Martinez1927

We’re looking for closer shots of bodies, with good lateral movement.

Outdoors is fine for now, since that gives us good light, more diverse

choice of content, and a further bit of magic displacement.

Cheers,

Xin Wei

 

________________________________________________________________________________
Sha Xin Wei, Ph.D.
Visiting Scholar • French and Italian Department • Stanford University
Canada Research Chair • Associate Professor • Design and Computation Arts • Concordia University
Director, Topological Media Lab • topologicalmedialab.net/  •  topologicalmedialab.net/xinwei
+1-650-815-9962 •

1-514-817-3505 (m)

• skype: shaxinwei • calendar
________________________________________________________________________________

 

 

ilya.v in svn

The patch is growing. It’s all in SVN. (I will clean it up a bit more Tuesday night, but this week I am already loaded…) If Michael feels like digging into it at any time, please do so. I left a bunch of hints about the things to do in the patch, but more importantly:
1. Derive meaningful overlay data from the optical flows: overlays/collisions/cancellation, etc.
2. Color palette creation from automatic sampling of static or moving imagery (to add colors to greyscale imagery, to remove color from color imagery…)
3. Presets interpolation (non-linear, all states at once… see the sketch below)

You’ll notice that the framerate drops when presets are automatically changing. That’s normal, it’s caused by the GUI updates, and it will not happen in the final/optimized/future versions …
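For item 3 above, a linear starting point might look like the sketch below (hypothetical names, not the patch itself); a non-linear version could warp the weights before blending:

// Blend all presets at once, weighted by per-state coefficients, so a parameter
// is never driven by a single preset but by the whole state vector.
#include <map>
#include <string>
#include <vector>
#include <cstddef>

using Preset = std::map<std::string, float>;   // parameter name -> value

// Assumes presets.size() == weights.size(); weights need not sum to 1.
Preset blendPresets(const std::vector<Preset>& presets,
                    const std::vector<float>& weights)
{
    Preset out;
    float total = 0;
    for (float w : weights) total += w;
    if (total <= 0) return out;
    for (std::size_t i = 0; i < presets.size(); ++i)
        for (const auto& [name, value] : presets[i])
            out[name] += (weights[i] / total) * value;   // normalized weighted sum
    return out;
}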
Here is a screenshot of the frontend so far :

 

ILYA media session Wednesday late afternoon? specs for design

The next window for an ILYA work session will be Wednesday afternoon.  I’ll be working approximately 3:00 – 6:30 pm Montreal.
Can we have a media work session toward the end of this window?   We can look at the TML & Onomy workspaces in Skype video.

 

Meanwhile, we’ll need final recommendations on machines to inform the specs for design.  I propose this gear set up:
1 Mac Mini for sound  (if Navid says sound demands a laptop, then it may help to discuss with how a laptop would fit with Harry and then Scott)
1 G5 from lab for acquiring video
1 new Mac Pro tower good GPU/CPU, 4GB RAM, minimal disk

 

JS, Harry may have recommended specs on cameras and related gear: A2D, lenses, cables ?
Can hardware system diagrams be sketched to pin down a concrete design for the system architecture?

 

Scott’s researching components (including projectors as well) for Wed, and will make some initial proposals  so hope Harry and the guys with info re gear requirements can be present Wed.

 

Cheers,
Xin Wei

ILYA sound, film footage

Hi Navid,

The sound material is evocative.  I’m auditioning it in this large hall in Green library — interesting superposition, but it will NOT be the installation space ;)

 

 

Here’s some interesting footage that Lina found:   http://www.archive.org/details/Martinez1927
We’re looking for closer shots of bodies, with good lateral movement.  Outdoors is fine for now, since that gives us good light, more diverse choice of content, and a further bit of magic displacement.

 

Cheers,
Xin Wei

 

ilya: snd: soundscape(Glass)

Hello all,

you can listen to a demo sound-sample here: http://soundcloud.com/navid/intro-may4
This is a sketch of what the installation would sound like when it is in its glass-state. A few factors like ghost-activity or the values of the glass state (0–1) will affect the semi-composed soundscape: slow it down, bring it to life, intensify it, fade it away, etc.
p.s. it’s best to listen to this through headphones just before going to bed.

ILYA a provisional dummy state engine: ilya.state.fake (replaces content of controls.state in ilya.visuals)

I’ve put up a provisional dummy state engine:
TML/pro/ilya/state engine/ilya.state.fake.maxpat

 

You can run a copy of this in your local machine’s Max for testing.  This is not stable yet, because the state vector’s dimension (6 or 8) is not settled.  It should replace the [p controls.state] subpatch of ilya.visuals.  The safest bet is to use named state coefficients.  For example, to parameterize your instrument for OVERLAP, look at state.overlap, along with some function of the other state coefficients: state.glass, state.dry, and state.A, state.B…

 

Four Epoch States: Glass, Affect, Overlap, Dry
Fake state coefficients, each from 0 to 1.0.
Code your instruments for a given state to ramp up as that state coefficient increases to 1 (a small sketch follows below).
Neighboring state coefficients modulate how to ramp or fade.
For what counts as a neighbor, look at the state diagram (e.g. glass – affect – dry are neighbors, meaning any combination of affect + dry can go to glass, for example, when the total activity A+B goes to zero).

 

Activity States:
I have not yet decided whether there are two or four Activity States — this may change!
A: A active, B inactive
B: B active, A inactive
AB: both sides active
O: neither side active

 


 

I may supply both a single state vector of dimension 4+4 or 2+4, as well as individual named state values, like state.overlap.
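A minimal sketch of how an instrument might read these fake coefficients (placeholder names for what would arrive as state.overlap etc.; the choice of neighbors here is only illustrative):

#include <algorithm>

struct StateVector {
    // Epoch states, each 0..1
    float glass = 0, affect = 0, overlap = 0, dry = 0;
    // Activity states, each 0..1
    float A = 0, B = 0;
};

// Gain for an instrument tied to OVERLAP: it ramps up with state.overlap and is
// pulled down a little by neighboring epoch coefficients, so transitions
// cross-fade instead of switching abruptly.
float overlapInstrumentGain(const StateVector& s)
{
    float neighbourFade = 0.5f * std::max(s.glass, s.dry);   // illustrative neighbors
    return std::clamp(s.overlap - neighbourFade, 0.0f, 1.0f);
}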

 

- Xin Wei

dev and test

Thanks JS,

At this point the demo video patch that I need doesn’t need to do everything. It should just be able to send me a few numbers. So if it is incomplete but does “something”, then it’s useful for me.

 

-Nav

On Tue, Apr 27, 2010 at 8:11 AM, Jean-Sébastien Rousseau <jsrousseau@gmail.com> wrote:

still a bit behind schedule.

I’ll clean the patch up and post it tonight.
Js
Le 2010-04-27 à 06:34, Sha Xin Wei a écrit :

 

Harry and I talked briefly about testing the compositional structure.

Harry will work with JS, Navid, and Michael on the development — making sure the dev goes as necessary.
organizing the people tests as soon as you can get some fx running,
first with manually set fixed states,
but importantly at full-size scales.

 

And thus test the compositional ideas, recommend timings and ordering, staging.

 

Navid needs a test patch from JS demonstrating the video output from 2 cameras to test with.

 

It is absolutely crucial that Navid gets a stub program asap so he can develop without bottleneck and on his own schedule.     JS already has one, and said he’d check it into svn yesterday — so try it out!

 

It is essential to supply connected up scratch patches that you will throw away and replace by more real ones, so Harry and I can test the entire video-sound-installation at scale, from the get go.

 

I’ll trust Harry since he’s local, with the tests and moving things along.

 

Xin Wei

dev and test

Harry and I talked briefly about testing the compositional structure.

Harry will work with JS, Navid, and Michael on the development — making sure the dev goes as necessary.
organizing the people tests as soon as you can get some fx running,
first with manually set fixed states,
but importantly at full-size scales.

 

And thus test the compositional ideas, recommend timings and ordering, staging.

 

Navid needs a test patch from JS demonstrating the video output from 2 cameras to test with.

 

It is absolutely crucial that Navid gets a stub program asap so he can develop without bottleneck and on his own schedule.     JS already has one, and said he’d check it into svn yesterday — so try it out!

 

It is essential to supply connected up scratch patches that you will throw away and replace by more real ones, so Harry and I can test the entire video-sound-installation at scale, from the get go.

 

I’ll trust Harry since he’s local, with the tests and moving things along.

 

Xin Wei

ILYA machine

Thanks for the notes, Michael and JS.

So what is the hardware recommendation, then, bottom line?
Can the sound run on a strong Mini?

 

Let me summarize the overall perceptual desiderata, based on conversations with JS & Michael.  Then maybe we can determine the technical constraints.

 

Visual:
Analog lenses, because we want small but hi-quality cameras (like Elmos).
2 streams of hi-def video input (1080), but in color.
(I would like to distinguish between present — in color — and ghost — colorized or monochrome.  Also I would like to do Navier-Stokes where the input color is accentuated and turned into heat.  See Yoichiro’s example where he rubbed the red of the mouse laser into red smoke that drifted away, a bit of magic that could never have existed with real light.)

 

2 streams output @ 1080.

 

Motion:
Do motion calc in lowest useful resolution = 320 ??

 

Physics:
Do physics perhaps in even lower res than motion analysis.

 

Visual Effects:
I will want more variety than Navier-Stokes :)
For IL Y A, we want a LOT more qualitative variety than our raft of PDE-based effects.  I’ll have to think about it.

 

I don’t know yet all the fx we should use — that depends on JS’ aesthetic offerings.  But please refer to my “state” descriptions in http://membranes.posterous.com .  I emphasize that for IL Y A, we can pull back from the deepest applications of the physics, and “cheat” as necessary for the best artistic / experiential / compositional effect.  For example, if we want to make it appear as if vines grow out of the loamy earth (= intersection of motion in the Middle State), then we may get away with clever jit.repos + pre-prepared hi-res video of a timelapse of growing vines, together with video of embers flying up from a dying fire.

 

I would mock up in Jitter with canned footage if necessary, and then make  a parametrizable realtime synthesis of it.

 

What Jitter-implemented instruments do we have available to use?
(1) Yannick’s very nicely done bg-subtraction-based mortal-immortal: in which motionlessness => you become part of the background, but the moving body is invisible, etc.

 

(2)  All sorts of particle dynamics, rendered a million ways.  But tracking the whole video field, not the mouse — say, as a velocity field acting as wind.  This is CRUCIAL for the meaning of the work.  Make A’s motion the attractor grid; then increasing the gravity constant very high should cause particles to whirl tightly into the shape of A.

 

(2.1) Under gravity
(2.2) Attracted to lattice of attractors
(2.3) Under general velocity field (wind from video, e.g. optical flow)

 

(3)  Navier-Stokes

 

(4)  Timespace

 

(5)  Diffusion (pixelweight)

 

(6) waves: see the Jitter demo of the kernel for the wave equation

 

(Maybe Navid, you could advise me what inexpensive but serviceable fader box to get and how to plug it into some Max patch params so I can play with multidimensional morphing in JS & Michael’s visuals.)

 

On 2010-04-26, at 3:34 PM, Michael Fortin wrote:
<rant>

The problem with a Mac Pro is that it’s about half the speed for double the cost of the Dell machine.  It’s as though the machines are a year old.

 

If this significantly simplifies the project’s system architecture, simplifies JS’ programming, and is fast enough to compute every effect we want, then so be it.  Remember I want the most visually effective effects that JS can create, which may use relatively trivial computation, not necessarily (just) Navier-Stokes.  We’ll just replace the Mac Pro with an up-to-date machine next year.

 

On the other hand, we could have one video synth laptop for each of the two projectors. Except I’m worried about robustness.

 

 

For clarity’s sake – a quad Nehalem Xeon = i7 (so we can call a Mac Pro an unofficial i7).  It’s closer to an i7 than a Core2Duo…

 

Memory-wise.  Fluid simulation will take about 200MB of RAM to operate at 1080P…  It’s peanuts compared to the n-gigabytes loaded into the machines.

Price-wise – if you want a Mac – iMac (27″ with i7) and Macbook Pros (15inch with i7) are the best that you can get for your money at the moment.  (JS might want to correct me).  Only go for the Mac Pro if you want the 8-core monstrosity.

 

 

In order of priority:
(1) Sufficient computation power to make desired effects.
(2) Making JS and Navid’s programming architecturally easy.
(3) Fit ok into physical construction.

 

Cost of computers is not a significant constraint.
Though of course we do want optimal bang for buck.  And I would prefer to ship smaller rather than bigger machines.

 

 

For a Mac Pro – if we want to go that route – I’d need to take the fluid simulation, set it to run at 1080P, and see how well it handles all 8 cores.  If I can monopolize about 7 cores – then the GPU and 1 core could be free for JS.  :P  Off the top of my head, in such a set up – the data from the fluid simulation would have to be used to drive both displays (unless if it isn’t running at 1080P – even then it’s not guaranteed that the hardware can do 1080P).

 

Multiple GPUs means that each video-out’s GPU gets to work independently on the same machine.  (it won’t speed up full-screen work unless you’re clever about it).  How to use multiple GPUs on OS X; http://developer.apple.com/mac/library/technotes/tn2008/tn2229.html  (it isn’t prettier on any other platform…)

 

If the video-input is just 640×480 – we might as well have 2 machines (something like a laptop or iMac) – one per screen with one doing state+sound (Mac Mini).  The only issue is if these machines need to be serviced…  (why the 100% Mac Mini route is so elegant – (but the 9400M is most likely too slow upon rereading the specifications) – and so easy to hide in the casing of the installation.).

 

End of rant – except with one note which is a rant – go for the fastest GPU!  Please!

 

 

So, would the laptops or Minis have fast enough GPUs?  I assume the fastest GPUs are in the Mac Pro, not the laptop or Mini of the same year.

What takes me a few days to optimize and get running quickly on the CPU can sometimes be done at optimal speeds on the GPU in a day or less.  (of course – the underlying stream processors determine what optimizations to take.  For nVidia – they’re organized into warps, and warps have to access memory in a coalesced (if memory serves) way…  Then there are the 4 different types of memory, etc. etc. etc.  Once a piece of hardware is chosen, I’ll have to research that specific piece of hardware to see how it likes the presented algorithms to be served (and how smart the compilers are – which plays as big a role in performance as the hardware).

 

I want  to invest Michael’s time in  richness and diversity of effects, rather than optimization.  For example, we may need help on custom particle behavior, especially in state 2 STORM.

 

So tell me, what does this all imply?

 

2 cameras  - can A2D converters ONLY supply max 640 from analog lenses ?  Can we get 1080 ?
2 laptops
2 minis
1 Mac Pro + 1 or 2 GPU cards?

 

1 laptop for sound ?  (Can it be a Mini ? — preferred)

ILYA States, compositional structure

Definition:

A = ego

B = other

Call the two sides of the membrane East and West (arbitrary names — they are symmetric).

My State Engine patch will supply a state vector of dimension 6 (probably): EAST, WEST, GLASS, DISCOVERY, STORM, DRY.
Depending on where the state of the event is, here are the possible constraints (a small sketch follows the list):
EAST + WEST = 1
GLASS + DISCOVERY + STORM = 1
GLASS + STORM + DRY = 1
GLASS + DRY+ DISCOVERY = 1
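A small sketch (hypothetical names, not the patch) of keeping such a state vector on those constraints, assuming — per the MIXTURES note further down — that at most two of DISCOVERY / STORM / DRY are non-zero at any moment:

struct IlyaState {
    float east = 0.5f, west = 0.5f;                      // activity balance
    float glass = 1, discovery = 0, storm = 0, dry = 0;  // seasonal coefficients
};

void normalize(IlyaState& s)
{
    float ab = s.east + s.west;
    if (ab > 0) { s.east /= ab; s.west /= ab; }          // EAST + WEST = 1

    // If only two of DISCOVERY / STORM / DRY are non-zero (see MIXTURES below),
    // normalizing all four is the same as normalizing the active GLASS + X + Y
    // triple, so that triple sums to 1.
    float season = s.glass + s.discovery + s.storm + s.dry;
    if (season > 0) {
        s.glass /= season; s.discovery /= season;
        s.storm /= season; s.dry /= season;
    }
}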
RELATIVE ACTIVITY STATES

 

One set of states can be dependent on relative activity between East/West and the total amount of activity East + West.

If the activity on one side goes below threshold then ghost appears — video from that side’s camera is replaced by pre-recorded footage.
The more active side takes control:  becomes the side that drives the effects, somehow.
 
 
 
SEASONAL: GLASS –  DISCOVERY - STORM – DRY  STATES

On top of that EAST/WEST-dependent state rocking the agency back and forth between the two sides, there is superposed this tetrahedral state, of which the top facet (DISCOVERY – STORM – DRY) is not filled.

 

 

 

 
GLASS State
The membrane appears to be like transparent glass.   A sees B only, perhaps with the faintest reflection of A himself or herself.

DISCOVERY State: A discovers her movement affects B’s image

Body image should be filled in silhouette bounded by outline of motion. Motionless => you vanish.
Where the two bodies interfere (defined by multiplying their motion density, not shown)
make smoke appear (with wind from one side)
If you do not move => you “see through” the membrane the other person, undistorted

Maybe smoke as well as reverse-smoke:  Where A moves, that part of B comes into being from a general fog.

Or vice-versa: where A moves A’s motion provides velocity to B — and B smokes away:  but make B’s matter have LOTs of momentum (low viscosity?) so it carries the impulse beyond the end of A’s gesture.

A’s gesture modulates impulse to some physical models (voices or aspects of the total sound)  that should sound quite distinct from the physical models carrying Navid’s  pre-composed gestures.

 

STORM State:  Luxuriant growth from interference.
The bodies’ motion densities are shown as two very different-looking viscous gels.
Where the two bodies interfere, their multiplied motion density is used to make a third substance.
Requires high contrast for the effect to work — think of glitter of sun off of water at shallow angle.

Or much more living matter:

For example, maybe the intersection of A and B can burst into flame (see Michael’s flame vimeo with Adrian’s trick),
or catches fire, and gives off sparks like those magnesium flare sticks.
Then it burns to red cinders in ash.
But then the wriggling cinders keep on wriggling to suggest worms that burrow into the black earth.  Then green leafy sprouts appear where the worms were, and spread vines across both A and B, and trail into the counterspace as well.
For example, this is where I would use a video of a living vine grown in a straight line.  Then use video retargeting code to make it twist and turn in parameterized directions.   Video retargeting is now in commercial industrial apps (even AE?), and therefore also in the open source world perhaps. I’d like to ask Michael for help with this.

 

DRY State:
(Theme of ashes.)  Increase video memory (like cv.jit.mean), but use the past video density as a parameter that renders as ashes or flakes of burnt material (pre-edited texture).
But this ashes should fall (under very low gravity field) when one of the two A or B leave the membrane,
leaving behind a transparent window again.

In the Late State, I imagine a very desiccated, crackly sound — pointillistic, so spatialisation could be effective.

MIXTURES — how the states can morph into one another:
DISCOVERY - STORM – DRY evolve in a circular sequence.  Only two of them can be mixed, not all three. DISCOVERY can morph into STORM, or DRY, but not both STORM and DRY.

 

[gview file="https://phaven-prod.s3.amazonaws.com/files/document_part/asset/623404/kmxxpDHhT9x2eFe-bp4iGeYzeEg/PastedGraphic-2.pdf"]

 

 

 

 

 

same for states 2 – 3,  and 3 – 1

ILYA machine

Very strongly agreeing with    “b. do not try to push the hardware.  Software optimization is a time-consuming task.  It can eat days away which could be used for creative rather than engineering purposes.”

 

Since this is an actual production, to the extent allowed by the budget, we will invert the pyramid of labor investment by throwing money at lower-level problems like speed in order to focus our energy and maximize human time for creative rehearsal.  Spending a few extra thousand dollars to buy a tower is perfectly fine with me if that means moving JS’ (and Michael’s) time and attention from speeding up a few machine cycles to giving me + Harry and you 2 precious days of creative rehearsal of the final palpable effects, and you and Navid precious days to creatively zipper together visuals with sound.

 

 

My rule of thumb is that anything short of 10 x CPU or GPU  speed-up is insignificant with respect to human time of making the creative play and design of the palpable effect.

 

So let’s buy a 4 GB Mac Pro if that’s what it takes.  Let’s discuss that option and see what it solves, whether we can get 1080 resolution with 2 inputs (or at least with 2 outputs) at say 20 fps.   Certainly it will save time with the plumbing — hooking apps together.

 

(Our budget saved $ by going to conventional projection on opaque surface, freeing $ for Michael and for computers, I think.  I will confirm this with Harry and Jane.)

 

Then after physical design discussion with Scott, maybe you guys can take a PO  to Apple Store online or on St Catherine, whichever is quicker, unless Concordia can fill our spec’d system.  (Jane has the account number with a signed blanks)

 

Cheers,

Xin Wei
On 2010-04-26, at 12:09 PM, Michael Fortin wrote:

This bounced as it was sent to your sympatico address – forwarding below:

~Michael();

———- Forwarded message ———-
From: Michael Fortin <michael.fortin@gmail.com>
Date: Mon, Apr 26, 2010 at 12:07
Subject: Re: Machine
Cc: Jean-Sébastien Rousseau <jsrousseau@gmail.com>, Sha Xin Wei <xinwei@sympatico.ca>, Harry Smoak <harrycs@gmail.com>, Navid Navab <navid.nav@gmail.com>
Here’s my rant on everything so far (bunch of subjects mixed in together… but it all fits!):

1. For the machine:  This was discussed – for the record – I don’t care what you chose as long as a piece of hardware is chosen sooner rather than later so hardware/software specifics can be sorted out.  C++ also does not mean portable – code written in C++ on an Apple machine might be stuck to run on Apple machines if you aren’t careful…  I know my current C code-base is bound to GCC (and maybe even stuck on UNIX variants when it comes to quick ports).  The OpenCL code is (to an extent) Apple specific.  OpenGL headers are located in different locations on different OS’s.  Don’t assume C-based languages to be portable – not even across different processors (I’ve been writing different C code for PowerPC and Intel just since they are a bit quirky…).  I have to agree though – until there’s a hardware refresh, the apple tax is quite excessive.  Only Apple’s recently updated notebooks seem to make sense price-wise (it’s year-old hardware without any price adjustments)….
2. For resolution:  I like 1080p.  It’s just hard to keep the machine running at that speed (on the CPU at least).  The video-card needs a very high fill-rate.  Keeping a modern CPU busy means using all the cores = multithreading = headache and multiple days wasted when things go wrong…
3. For high-level control languages (mentioned during the supper) – in video games, Lua is used.  All the fast internals are written in C/C++ – the mappings (which can be complex) are all written in Lua.  It’s a very flexible language – it comes with its own garbage collector – and has a minimal memory footprint.  It’s not as easy to use as a few simple patches, but definitely easier to use than a mess of patch-cords…

 

Other considerations:
a. running the fluid simulation full-screen at 1920×1080 (not lower-resolution using particles for fine detail) can easily monopolize a compute unit (GPU or CPU).  I can cut – but where the simulation is the slowest it’s hard to cut (advection phase)….  So if you only need to run the simulation at 640×480 then do plenty of effects on it – that’s easier.
b. do not try to push the hardware.  Software optimization is a time-consuming task.  It can eat days away which could be used for creative rather than engineering purposes.
~Michael();

2010/4/26 Jean-Sébastien Rousseau <jsr@eskistudio.com>

 

On the machine I spec’d there is even an option to add this (awesome) video card (for only 100$ extra) :

Le 2010-04-25 à 23:54, Jean-Sébastien Rousseau a écrit :

 

Hello,

Concerning machines, one option could be to NOT buy an apple machine for the graphics AND buy one Mac Mini for the sound + state (running Max5).
For the visuals if we do everything in C++, we could go for something really powerful and cheap , like the following (for under 2k$) :
http://configure.dell.com/dellstore/config.aspx?oc=dsx9000_f_3e&c=ca&l=en&s=dhs&cs=cadhs1&kc=desktop-studio-xps-9000
All OSC stuff could still be shared between machines.
Just an idea …

JS

Jean-Sébastien Rousseau
Programmeur / Designer

Interfaces & Systèmes

 

ESKI Inc.
1751 Richardson, Suite 4311
Montreal, QC, CANADA, H3K 1G6

 

(T)  +1 888-889-5777 ext. 448  • (C) +1 514-240-3321  •  (F)  +1 514-510-5888

 

 

mated sound+video, sex

Sound is essential: without sound mated even trivially to JS’ video, we cannot mock up the installation today.  So I really do need to hear some trivial ≥ 2-parameter mapping, if only to check latency effects and get a feel for the material.

 

I may cancel my May conference so if necessary I can fly back to work with you on the media substances.  Much more fun.

 

Speaking of mock-up, I do look forward to trying out the double monitor that JS built.
Even if we don’t use plasma or LED displays for ILYA, I think we should use this rig for other play.  This is a very nice way to break out of the usual (1) screen prison: A staring into a glowing rectangle, ignoring the rest of the world; (2) telematics, where A and B are in geographically discontiguous spaces.  Instead we can play with the configuration of two people in the same space, playing “through” a membrane.  Later, we’ll get a pair of multitouches back to back.

 

Of course, a silly sublimation of sex.

 

Looking forward today,
Xin Wei

state engine; rapid prototyping: midi fader to video?

Hi JS,

 

One way for me to work with you and Navid would be for me to do the state engine.

 

I can reuse my membrane state engine patch, which already has the A/B-based states, and add the 0–early–middle–late state cycle based on Morgan’s activity-based clock.

 

 

If you expose at least two params for each of your significant visual substances (or modes, we used to call them instruments but I would like to change the name now to get us to think in a very different way)  - then we can do 3 things:

 

(1)
For prototyping, map a MIDI fader to your exposed params so you and I can play with the multidimensional params  now in MTL, and later in SF.

 

(2)
When we figure out what is interesting from manual twiddling, I’ll write functional mappings myself in my engine, and emit them to you.  I can also send info to Navid as well, but I’d rather leave that level of parameterization to Navid, at least until I get a handle on the overall behavior, and work with JS’ and Michael’s visual matter.   See Michael’s videos:  http://www.youtube.com/mifortins

 

 

(3)  (Much) more vitalist or alchemical state 2.  The current bag of physics-based fx is sufficient for 0 -> 1, 1, and possibly 3.  But for state 2, I’m looking for something QUITE different in quality.  For example, maybe the intersection of A and B can burst into flame (see Michael’s flame vimeo with Adrian’s trick), but then it burns to red cinders in ash.  But then the wriggling cinders keep on wriggling to suggest worms that burrow into the black earth.  Then green leafy sprouts appear where the worms were, and spread vines across both A and B, and trail into the counterspace as well.

 

For example, this is where I would use a video of a living vine grown in a straight line.  Then use video retargeting code to make it twist and turn in parameterized directions.   Video retargeting is now in commercial industrial apps (even AE?), and therefore also in the open source world perhaps. I’d like to ask Michael for help with this.

 

I leave to you, Michael, and Navid the mating of sound and video.  Right now it’s just camera -> video -> sound.  Here’s an idea: the sequence is important, because by having the sound respond to the condition of the synthesized visuals, we avoid saddling the sound synth with compensating for the latency of the first arrow.  Sound only needs to sync with what JS produces.  But also having other sound processes (other voices) respond directly to the mic or to the camera will help add palpability by connecting directly to the visitor’s movement.  I may work on providing more movement features using cv.jit, if I have time, though cv algorithms are heavyweight; we may not need much to yield a lot more expressivity than optical density or flow alone.  I’m thinking, for example, of circularity to detect outstretchedness of limbs.  (Or it may simply not be necessary, which would be good!)

 

Compliments,
Xin Wei

working in parallel

Hi Michael, I’m quite happy that you are interested in how we can do lovely stuff with visual matter. I very much wish to hear some sound partially nuanced by your matter. So perhaps if you can show Navid your Max patch that emits params, we can get off the ground.

This work is essential to get access to some fx that may be more powerful and slightly less familiar than the usual physicky fx that we now see everywhere. The more important aspect for me is to have ready to hand several substances, each of which can be parameterized in MORE than 2 dimensions, with QUALITATIVELY radically different quasi-physical feels — for example viscosity, or chunking. One technical issue — the fluid fx, because they satisfy the equation of continuity, always feel the “same.” Can we build some instabilities into the fluid so that it may blow up like nitroglycerin under some conditions? (I’m thinking that A crossing B’s shadow could cause A (or B) to scatter as well as condense.)

So contrary to what I said last night, this would be definitely worth you coming in today, if you so incline :)

Cheers,
Xin Wei

force-multiplier on attractor density formed from a target (image)

by the way,

for visuals, I’d like to see if we can implement what I suggested to Delphine Nain (2004) as an application of Navier Stokes.

 

These Laplacian effects diffuse, i.e. an initially sharp pattern gets fuzzier.  I’d like to reverse this in certain states: B’s fuzzy distribution takes on a more definite shape as A waves (in certain ways) — so A’s motion causes B to take more definite form.

 

How?  Here’s an example:
 I’ll phrase it for gravity:
Assign a lattice of force multipliers to the lattice of attractors.
Assign a target bitmap to be lattice of attractors.  (This target can be for example, the silhouette of a ghost or of A or B.)
Map the density distribution of the attractors to that lattice of force multipliers.
Increase force multipliers to max will cause the particles to bind tight to the target.
Decrease force multipliers to 0 will allow particles to fly free.

 

In fact, you can also impose another force field (like magnetic field due to A’s hand, or to particle-particle forces) so the particles can do their own thing too, but that behaviour is added to the target-based force.
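An illustrative sketch (not project code) of that scheme — brute force over the whole attractor lattice; a real version would subsample the lattice or bin particles:

// Each lattice cell is an attractor whose strength is taken from the target
// bitmap's density; 'gain' is the global force multiplier (raise it to bind
// particles tightly to the target, drop it toward 0 to let them fly free).
// An extra external field (wind, magnetic hand, etc.) is simply added on.
#include <vector>
#include <cmath>
#include <algorithm>

struct P { float x, y, vx, vy; };

void step(std::vector<P>& particles,
          const std::vector<float>& target, int w, int h,   // w*h densities in [0,1]
          float gain,
          const std::vector<float>& extFx, const std::vector<float>& extFy,
          float dt)
{
    for (auto& p : particles) {
        float fx = 0, fy = 0;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float m = target[y * w + x];
                if (m <= 0) continue;
                float dx = x - p.x, dy = y - p.y;
                float d2 = dx * dx + dy * dy + 1.0f;         // soften near zero
                float f = gain * m / d2;                     // density-weighted pull
                float inv = 1.0f / std::sqrt(d2);
                fx += f * dx * inv;
                fy += f * dy * inv;
            }
        int cx = std::min(std::max(int(p.x), 0), w - 1);
        int cy = std::min(std::max(int(p.y), 0), h - 1);
        fx += extFx[cy * w + cx];                            // superposed force field
        fy += extFy[cy * w + cx];
        p.vx += fx * dt;  p.vy += fy * dt;
        p.x  += p.vx * dt; p.y += p.vy * dt;
    }
}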

 

Xin Wei

ILYA test – almost there

 

 

Hello friends. Here is a preliminary test app for ILYA. Don’t get too excited, it’s far from done.

It finally does what I wanted, in one app, at a decent frame rate (and on my laptop):
- 2 live inputs (320×240) from USB
- 2 live videos
- motion tracking on the two inputs
- optical flow on the two inputs
- fluids simulation
- particle system (50k particles)
- needless to say, all of this at once is impossible with Jitter…

It is still messy (a lot), and not everything is linked or functional. Also, there’s not much meaning poured into the visuals yet, nor any relation between the visuals altogether — just a performance test. Particles are linked to the mouse position, but that’s it for now. Some shaders are activated, some are not. No OSC yet, but it’s coming.

How to :
Just unpack the app, start it, and hit the spacebar to see the menu. Then click the top-left button, which will allow you to flip through the control pages.
You might need Snow Leopard, but I’m not sure. You definitely need an Intel machine.

JS

update re. design with Scott Minneman

I met with Scott Minneman April 14, and discussed ILYA at my place, 366 San Carlos.

 

(1)
We discussed several sites, and options for entry:
• JD and Scott know galleries, eg Southern Exposure, Meridian, and will connect when I get back to N America next week.
• Zero-One, since I know the directors.
(other leads that I would like to follow up from conversations with others:
SF Art Institute, acquaintances at Yerba Buena to be contacted ca. April 29)

 

 

(2)
Although of course designing to a specific site would offer the most opportunities for the most refined physical construction, we can make a perfectly fine portable sculpture out of pre-fab parts like 80-20 pipe, opaque screen, and the best short throw hi angle projectors we can afford.  Also this allows tighter production schedule; decouples a bit from site selection — in particular we can design to a frame before finalizing the site, as long as we fix general conditions, as in my previous post; and concentrates creative energy (and $) on the dynamical behaviour and qualia rather than the mechanics of display technology.

 

(3)
I showed Scott the timetable, and overall budget envelope, and it seems we can work something out within our constraints.  A possible set of milestones for Scott:
In May Scott will mock up in SF some frame based on 80-20, to hold 2 projectors, speaker, 2 cameras, screen, (computer(s)), floor-standing (no suspension from the ceiling).  Time to rough out media on the mock-up in SF.
Order parts shipped to Montreal.
In June he can come to Montreal to assemble the final rig @ TML with Harry, JS, Navid.

 

After consulting with Harry, JS, Navid, Jane in a few days,  if  this works all around, we should probably open the communications loop between Scott and the team in Montreal.

 

(4)
Not directly related to ILYA, here are some prior works Scott has done with JD Beltran

 

Downtown Mirror Airplanes, San Jose, CA
JD Beltran, Scott Minneman

 

Downtown Mirror,  San Jose, CA
JD Beltran, Scott Minneman

 

Unexpected Reflections, Material Language, Meridian Gallery SF
JD Beltran, Scott Minneman, Rebecca Hind

 

 

ILYA graphics soon…

I am making huge gains in performance by switching to openFrameworks (C++) for developing the visuals…

I accomplished in a couple of weeks what we have been talking about for some time (particles and fluids, color mixes, energy, heat, etc…) (with the help of open-source code… (that helps!) )

 

I won’t release any demo this week, but next week for sure… So around April 20th, I will send you one application that will :

 

1. Render graphics really fast (think 5X more efficient than what we’re used to in Max). Will run on Intel only, Snow Leopard.
2. Will send OSC data to one single client (giving you access to camera info/tracking/optical flow, etc., mostly for sound processing; sketched below)
3. Accept one OSC connection in, for sequencing and controlling the states of the installation.
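A hypothetical sketch of the plumbing in items 2 and 3, assuming the oscpack library (the address pattern and port are placeholders, not the app’s actual names); item 3 would be the mirror image, with an osc::OscPacketListener subclass on a UdpListeningReceiveSocket:

// Item 2: push camera/tracking/optical-flow features to one sound client.
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

void sendTrackingToSoundClient(float flowA, float flowB)
{
    UdpTransmitSocket socket(IpEndpointName("127.0.0.1", 12000)); // placeholder host/port
    char buffer[1024];
    osc::OutboundPacketStream p(buffer, sizeof(buffer));
    p << osc::BeginMessage("/ilya/flow")      // placeholder address pattern
      << flowA << flowB
      << osc::EndMessage;
    socket.Send(p.Data(), p.Size());
}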

 

The workshop period will then be mostly spent on:

 

1. Optimizing those visuals (I already talked briefly to Michael about some of it)
2. Finding the right sound setup/mappings/tweaks (I still have to get more in touch with Navid about this)
3. Deal with the ‘state engine’
4. More, more, more …

 

Then will come more debugging, and more optimizing …

 

I still don’t have any projection setup in the TML, but I have my small test bench made of two really cheap LCDs. It works, but it’s not really impressive…

 

The more I think about it, the more I think we should get an Intel Mac Pro tower (but we should wait for the i7 models to come out, in a month or two, I guess).
So just that computer is about $4-5K…

 

JS

 

tease (let’s call this autumn):

 

update

Hi Everyone

 

No site yet, but as Harry reminds me, we need to know some parameters now in order to design the mock-ups.
I’ve been asking long-term local people like Minh-ha and Jean-Paul; Prelinger; Scott M; etc.

 

But I think we can and should proceed with narrowing in on the site conditions:
• Indoors.
• Controlled lighting; dark enough for the projection method to work.
• 2 m radius × 2.5 m height minimum cylindrical installation volume, with at least extra 4 A floor area for approach and withdrawal – at minimum enough room for 2 people within 1 m on either side of the screen, plus another 4 visitors nearby. I would like to find a rather large warehouse space so there are long sight lines.
• Quiet; but it may unfortunately be reverberant if it’s a typical storage space or gallery.

 

I think it’s enough to give the illusion of a glass window using video.  In fact, I like very much the conceit of a video trompe l’oeil.  And there are strong practical advantages.  I think an opaque screen mounted in a well-designed frame could work better than an actual transparent film, because with a transparent film you would see the person on the opposite side mis-registered with the projected image (plus the projector beam).

 

Let’s assume that
• the space is quiet, and aside from diegetic objects, has only this one installation vying for attention.
• we can control the lighting (including darkening the space as necessary), or we’ll simply show it at night in this test showing.

 

I managed to talk with Scott Minneman @ Onomy last Saturday.  He and JD (his partner) are going to think about sites, too.  He said he’ll sketch some ideas based on my description after he gets back from Vegas to SF today, so we can talk more.  He’s seen the display technologies we’ve discussed, including the Holoscreen (it has a bit of fogging / corona-lization).  We can decide whether and how it makes sense to proceed with Scott or Onomy or Sebastien in Montreal based on further conversation at the end of this week.

 

- Xin Wei

Levinas il y a

re. Life / Time, by Stefano Franchi
Levinas’s goal is to produce an account of the relationship between human beings and the world, and of the relationships among human beings. He wants to provide an account of all these terms in isolation, as separated from each other, and then build, on this basis, a fuller discussion of how they interrelate. In line with the traditional phenomenological accounts, both in its Husserlian as well as Hegelian incarnations, he starts with a characterization of being (être, Sein) in general as the absolutely undifferentiated structure that precedes any individual existing being (étant, Seiende). This he calls the pure “there is” (il y a). Levinas’s effort, in other words, amounts to capturing the pure structure of beings’ being, their bare, naked existing, before each being assumes an individual existence and turns into an “existent.” Levinas proposes a phenomenological experience to the reader, a global epoché: “let us imagine—he asks—all things, beings and persons, returning into nothingness. Are we going to meet nothingness itself? [On the contrary] what remains after this imaginary destruction of every thing is not something, but the fact that there is (le fait qu’il y a).” (TA, 25/46). What is left after this imaginary destruction is an anonymous field of forces, the sheer fact of existing, a chaotic, irresistible plenum that envelops everything from within. Nothing can be pulled or, a fortiori, pull itself apart from the il y a. As Levinas notes, there are no names for the there is, because every substantive term involves a separation of the named thing from the background, and what is characteristic of existing is precisely that no such separation is possible. As the pure field of ontologically primeval forces, the il y a is a verb, a sheer pointer toward the impassable force of being.

 

 

p. 187-188, Section 3 Life, Chapter 2 Time, from The Passion of Life, an unpublished manuscript by Stefano Franchi, Helga Wild, and Niklas Damiris 2005.

Holographic films

I have been specing projection materials for ESKI in the past weeks. Lots of companies make holographic film which could probably fit our needs for IL Y A. Here is the link for one of them (VIP): http://www.visualplanet.biz/products/holographic/. And here are the info page and the price list that I got from a rep:

[gview file="https://phaven-prod.s3.amazonaws.com/files/document_part/asset/623336/SdUbC4D91mrH6xZwxATgBL2CFiI/HoloPro_2008_USD_List_Pricing.pdf"]

 

[gview file="https://phaven-prod.s3.amazonaws.com/files/document_part/asset/623337/f8QFyQhWtiMwwWkR_rOORnAL0HA/HoloPro_Intro.pdf"]

 

IL Y A: Membranes Video Calligraphiques

Membrane is a genre of live interactive video projections in the form of thin translucent screens suspended in mid-air in a public space.  A membrane is not a mirror but a lens.  These membranes act as active lenses that transform people’s views of each other and of their surroundings according to their own movement.  Using computer processing of live camera input, which is then projected back onto the translucent screens, these membranes dynamically treat live video like physical matter stirred by the viewers’ own movements.  (See accompanying video.)  Standing on one side of a membrane, your movement changes the image of the other, and symmetrically, people moving on the other side of the membrane can stretch or transform your body image, or even make it entirely dissolve and re-form out of a field of densities.  Like a lens, a membrane is an anti-object and ideally should draw attention not to itself but through itself.  Unlike a video monitor installation or a mirror, a membrane comes into existence only when two or more people encounter or engage with each other through the membrane.  In this way these membranes, acting as lenses, entangle people with each other.

 

 

IL Y A is a particular series of membranes inserted into a physical place to explore the emergence of social density in a common space shared by two or more people.  Your movement distends what you see of the other side like smoke or some quasi-physical material.  The effect is symmetrical: any movement by the other reshapes your image as well.  When one side of the membrane has no present body, IL Y A substitutes historical footage of people from the past.  These historical ghosts’ movements affect the video just as movement in the live video does.  IL Y A is also symmetrical between the past and the present: as you relax, the figures of the dead will reappear and re-inhabit the present.  In fact, their movements and gestures will drag and perturb your image as well.  The affect of this work is not morbid or nostalgic, but elegiac, with a composed event structure accented by accidents.

 

Moving bodies from the past can act on your image just as you act on their bodies or the bodies of present others.  Since the effect is symmetrical, the dead and the living intertwine and can play with the forms of each other’s bodies with dynamically shifting but equal agency.  When no one at all is in the room, the membrane bears only historical documentary footage of the populated site, from before living memory of that place.  Using historical footage of activity local to the site, IL Y A will act as a lens into the past as well as the present of the given site.

 

IL Y A will be installed in galleries as well as community sites, localized with images from the sites’ historical past.  Assuming an archeological and ethical disposition, IL Y A connects to my larger research project concerning the material and architectural substrates to sociality.

 

I explore how the dead can play with the living body, and how living bodies play with each other, making the dead’s activity act more symmetrically via real-time software calligraphic effects on the video streams of the living.  As in the Van Nelle Fabriek Membrane study at the Dutch Electronic Art Festival in Rotterdam 2004, the state evolution software rocks the agency of the effect back and forth between the two sides of the membrane according to the relative activity on either side. We can think of the IL Y A installation as a historical-time lens that allows us to gesture our way into deeper and deeper layers of historical time.   I say historical time, because these IL Y A installations will let the visitor fall from deep past into the present, and back again.  In this way, IL Y A will intertwine corporeal activities that occurred in that space before living memory.

 

 

- from the FQRSC proposal.

states for visuals

Hi JS,

Some preliminary ideas.

In the DEAF2004 membrane that Harry and I did with Sponge, the state rocked back and forth between persons A and B. There were 4 states: activity(A) ~ 0; activity(B) ~ 0; activity(A) > activity(B); activity(B) > activity(A).
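As a rough illustration only (the threshold for "near zero" and the state names are my assumptions, not the DEAF2004 values), that four-way choice could be written as:

// Choose among the four DEAF2004-style states from two scalar activity measures.
// eps ("near zero") is an arbitrary placeholder threshold.
enum MembraneState { QUIET_A, QUIET_B, A_DOMINATES, B_DOMINATES };

MembraneState chooseState(float activityA, float activityB, float eps = 0.01f) {
    if (activityA < eps) return QUIET_A;        // activity(A) ~ 0
    if (activityB < eps) return QUIET_B;        // activity(B) ~ 0
    if (activityA > activityB) return A_DOMINATES;
    return B_DOMINATES;                         // activity(B) >= activity(A)
}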

But here I would like to superpose another layer, which would be a sequencing among three states
Early, Middle, Late (names may change!)

I haven’t decided what they should be, but remember JS we talked once about the four seasons for Remedios?

I can imagine for example that in Early state, the membrane does something like:
Early State:
Where the two bodies interfere (defined by multiplying their motion densities, not shown),
make smoke appear (with wind from one side).
If you do not move => you “see through” the membrane to the other person, undistorted.
Middle State:
The bodies’ motion densities are shown as two very different-looking viscous gels.
Where the two bodies interfere, their multiplied motion density is used to make a third substance
that catches fire and gives off sparks, like those magnesium flare sticks.
Requires high contrast for the effect to work: think of the glitter of sun off water at a shallow angle.

Late State:
(theme of ashes) Increase video memory (like cv.jit.mean), but use past video density as a parameter, rendered as ashes or flakes of burnt material (pre-edited texture).
These ashes should fall (under a very low gravity field) when one of the two, A or B, leaves the membrane,
leaving behind a transparent window again.

In the Late State, I imagine a very desiccated, crackly sound: pointillistic, so spatialisation could be effective.
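A minimal sketch of the two per-pixel fields these states could draw on, assuming densityA/densityB are motion densities in [0,1]; the blend factor for the running "video memory" is an arbitrary placeholder standing in for cv.jit.mean-style accumulation:

#include <cstddef>
#include <vector>

// Per-pixel fields for the Early/Middle/Late composition (sketch only).
struct MembraneFields {
    std::vector<float> interference;  // densityA * densityB: drives smoke (Early) and fire/sparks (Middle)
    std::vector<float> memory;        // running mean of past density: rendered as ashes/flakes (Late)

    void update(const std::vector<float>& densityA,
                const std::vector<float>& densityB,
                float blend = 0.02f) {          // placeholder accumulation rate
        if (interference.size() != densityA.size()) {
            interference.assign(densityA.size(), 0.0f);
            memory.assign(densityA.size(), 0.0f);
        }
        for (std::size_t i = 0; i < densityA.size(); ++i) {
            // where the two bodies interfere: multiply their motion densities
            interference[i] = densityA[i] * densityB[i];
            // "increase video memory": slowly accumulate past density (cv.jit.mean-like)
            memory[i] = (1.0f - blend) * memory[i] + blend * 0.5f * (densityA[i] + densityB[i]);
        }
    }
};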

Cheers,
Xin Wei

thin display

Sebastien’s examples are too big and thick.  Can we find other displays that are much less present?

 

The physical installation must feel like a membrane in the scale of the site.  And in fact its presence as a physical object should be phenomenologically zero.  (Robert Irwin’s, or Dan Graham’s refraction sculptures, but even less :)

 

It’s essential that people do not ever think that they are looking at an installation-as-object.
They should not be aware of seeing the membrane-as-an-installation-object but instead be aware of the other person on the other side of the membrane.

 

Even approaching the piece, they should not see the box as an object, that is, as a positive (sculptural) presence.  This is the ideal, of course.  It is not just a matter of physical size or thickness.  Much could be achieved by lighting, but that depends on the site, of course.  The lighting is not a static problem because people will be moving around the piece, and moreover the piece must come into being as two spectators appear.

 

If only one spectator approaches,  a different work appears: a window through which the present-person “plays” with the dead-person.

 

- Xin Wei

San Francisco

In brief, the goal is to exhibit a “membrane” installation in a site in San Francisco Bay Area in August 2010.   The apparatus we build should be portable and re-usable in other cities, with video footage that references the local site’s history.

 

Basically it seems that, given the tight schedule, we’ll keep the creative discussion moving and pretty focussed among us over the next 4-6 weeks.  We can use the IL Y A blog that JS established a long time ago (thanks).  Then it’ll loosen up as we break out into our areas.  Does that sound right?  We’ll see from Harry’s outline (thanks).

Just to record what we discussed in terms of roles and credits, how does the following sound?

Concept:  Xin Wei

Creative Direction: Xin Wei, Harry, JS

Production Roles:
Technical Direction / Project Management:    Harry
Video (realtime and edited diegetic):    JS
Sound (diegetic and environmental):    Navid
Environmental Lighting (and video?):        Harry
Fabrication:    TBD
Installation (SF):    TBD
Archival & Site Research:    Lina
Documentation / Admin / … :    TBD

Designer: Sebastien Dallaire

Today I ran into Sebastien at ESKI. He is the industrial designer I was telling you about yesterday. I roughly explained to him what we’ll be doing in the next weeks/months, saying that we might need someone like him, depending on which avenue we take. Here are two installations he worked on (design/production/assembly):

 

@Flickr – Multiscreens
@FaceBook – Vertical LED display – realtime video

The blog is up. Fill it up.

First meeting was today at TML. Please post your preliminary notes. I believe we will be sharing the real outline / timetable / gearlist on GoogleDocs as it will probably be changing over time. Note that this blog is private, so if you want to add more contributors just let me know. – JS