Here’s a quick and dirty video from the quick and dirty show.
This is just a clip of Aaron and me playing in the two different setups, so it gives a sense of what the project looks like right now.
But what I don’t have on video, and what I wish I had, were a number of moments throughout the night when people had really emotional responses to using it.
For the show I had two installations set up, each running a Kinectron server. A website running on the local network listened for the two servers and combined the data from the two users on a single p5 canvas. Different interactions would happen when the users' feet or hands were touching in the virtual space.
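The touch-triggered interactions could be sketched roughly like this. This is a hypothetical illustration, not the project's actual code: it assumes each installation delivers skeleton data through Kinectron's client callback (e.g. `startTrackedBodies`), that each joint carries `cameraX`/`cameraY`/`cameraZ` coordinates in meters as in the Kinect v2 data Kinectron relays, and that 0.15 m is a made-up "touching" threshold.

```javascript
// Hypothetical sketch of the touch check between two users' joints.
// Joints are assumed to look like { cameraX, cameraY, cameraZ } in meters.

// Euclidean distance between two joints in camera space.
function jointDistance(a, b) {
  const dx = a.cameraX - b.cameraX;
  const dy = a.cameraY - b.cameraY;
  const dz = a.cameraZ - b.cameraZ;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Treat two joints as "touching" when they come within a chosen
// threshold (0.15 m here is an arbitrary guess, tuned by feel).
function jointsTouching(a, b, threshold = 0.15) {
  return jointDistance(a, b) < threshold;
}
```

In the p5 draw loop, a check like this could run over each user's hand and foot joints every frame, switching the visuals whenever a pair crosses the threshold.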
I ran the install by the main lobby, and volunteers ran the other install, which was pretty far from most of the projects. When people came by, I would start by introducing the project and showing them how it worked—my volunteer and I would do some moving around and interacting. Then several people decided to try it themselves. Over time the interactions in the virtual space got more and more interesting. I think this was a result of my volunteers (first Sejo, then Pierre) getting more comfortable and more experimental with the interactions. There were some really nice moments where people hugged each other in the space and danced together, and a lot of fun moments of head butting and “kung fu” fighting.
A number of people were excited about the project. One woman immediately thought it could be a great tool for theatre. Another thought it would be a lovely way to “visit” friends and family from out of the country.
I got really excited about the moments when bodies were overlapping. In the current version, the bodies reduce to just their joints when they overlap, so you can still see both people, but the images are gone. Interestingly, people would sort of pause and move around in this shared space. This is what led to the hugging in that space as well. Seeing the bodies intertwined felt so intimate.
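The overlap behavior could be sketched with a simple bounding-box test. Again, this is a hypothetical illustration rather than the project's actual code: it assumes each body's joints have already been projected to 2D canvas positions as `{ x, y }`, and the draw loop switches from image rendering to joints-only rendering whenever the boxes intersect.

```javascript
// Hypothetical sketch of the overlap check that flips rendering
// from full images to joints-only. Joints are assumed to be
// { x, y } positions already projected onto the p5 canvas.

// Axis-aligned 2D bounding box around one body's joints.
function boundingBox(joints) {
  const xs = joints.map(j => j.x);
  const ys = joints.map(j => j.y);
  return {
    minX: Math.min(...xs), maxX: Math.max(...xs),
    minY: Math.min(...ys), maxY: Math.max(...ys),
  };
}

// Two bodies "overlap" when their bounding boxes intersect.
function bodiesOverlap(jointsA, jointsB) {
  const a = boundingBox(jointsA);
  const b = boundingBox(jointsB);
  return a.minX <= b.maxX && b.minX <= a.maxX &&
         a.minY <= b.maxY && b.minY <= a.maxY;
}
```

A per-joint proximity test would give a tighter trigger than bounding boxes, but boxes are cheap enough to run every frame on two full skeletons.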
Now that I have depth and depth color working together in real time (see my last post omg), I want to see if I can get that from two people running in the same browser. If so, I’d like to continue playing with this idea of blending the images of two close or touching people in real time. I think this could be really beautiful.
The other exciting bit was that Kyle Greenberg asked if I wanted help bringing the project to Shanghai. Which I DO! So we are talking about that this week. I’m hopeful that it will happen and that I will be able to begin testing my ideas over greater distances.