This week I met with:
I emailed Greg Borenstein, creator of Skelestreamer, a streaming library for the Kinect V1, but have not yet heard back. It seems he is no longer maintaining it, but it does for the older Kinect what Kinectron does for the V2.
The highlights of my meetings and research were:
Matt — Had a useful conversation about letting development drive the story. Since I am developing a new technology, it’s hard to know what is possible until I pass some development benchmarks. Although we are always told not to let technology drive the story, I think in this case it makes sense. Matt also showed me the following resources:
Kat — Had a really helpful discussion about a story-based thesis versus a process-based thesis. Kat also shared with me:
Ava I-Wen Huang’s thesis from 2016 — Ava did a thesis about humanizing refugees. This is a helpful and interesting story-based thesis example.
Andrew — Andrew and I talked mostly about story. Andrew was drawn to the possibility of creating a story about borders using this technology. Everyone that I’ve talked to about the borders idea has been drawn to it, which makes me think I should be following this direction. He gave me the following references:
Astro Noise by Laura Poitras — Andrew pointed specifically to the way documents were presented at the end of this exhibition. This is an example that could be related to my idea of putting live avatars in confined spaces.
Kathy Grove — Kathy removes women from iconic photographs featuring women. The idea is to make a spectacle of the very thing you are trying to bring attention to. This relates to my idea of putting people who are barred from entering the country into small boxes: it exploits the very thing I’m speaking out against.
Things I want to watch and read:
More of the readings from Margaret
Watch Oculus videos recommended by Todd
I’ve updated my answers to the questions presented on the TJ:
What else is out there like it?
Skelestreamer is the closest that I’ve found, although it’s made for the Kinect V1 and doesn’t appear to be actively maintained.
Depth Kit and 8i are doing great work with volumetric video, but neither is streaming live as far as I know. Depth Kit is working with Kinects; I’m not sure about 8i.
PerceptiveIO / Holoportation are working on telepresence. I don’t think they are using the Kinect; rather, they appear to be using similar but higher-end cameras.
Mimesys is also working on telepresence, focused on a solution for remote work.
PerceptiveIO, Holoportation, and Mimesys are all working with HMDs right now. I haven’t seen any solution for working in the browser or with holograms.
Microsoft’s Room2Room is a research project that projects 2D avatars into 3D space. It does use the Kinect and works in real time across distance.
How is yours different?
How does it improve what exists?
It creates an accessible tool for people wanting to develop with live streams of Kinect data.
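Since the core of the tool is sending live Kinect frames to browser clients, here is a minimal sketch of what packing a skeleton frame for streaming might look like. The frame shape, joint names, and `packFrame` helper are all illustrative assumptions for this post, not Kinectron’s actual wire format.

```javascript
// A hypothetical skeleton frame as it might arrive from the Kinect SDK.
// Joint names and coordinates here are made up for illustration.
const frame = {
  timestamp: Date.now(),
  joints: [
    { name: 'head',      x: 0.01,  y: 0.62, z: 1.95, tracked: true },
    { name: 'handLeft',  x: -0.22, y: 0.10, z: 1.80, tracked: true },
    { name: 'handRight', x: 0.25,  y: 0.12, z: 1.82, tracked: false },
  ],
};

// Keep the payload small: drop untracked joints and round coordinates,
// since frames go out roughly 30 times per second.
function packFrame(frame) {
  return JSON.stringify({
    t: frame.timestamp,
    j: frame.joints
      .filter((joint) => joint.tracked)
      .map((joint) => [
        joint.name,
        +joint.x.toFixed(3),
        +joint.y.toFixed(3),
        +joint.z.toFixed(3),
      ]),
  });
}

// In a real server this string would be sent to each connected browser
// client, e.g. over a WebSocket or WebRTC data channel.
const message = packFrame(frame);
```

The point of a sketch like this is that the hard part for artists isn’t the serialization, it’s the plumbing around it, which is exactly what the tool should hide.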
What audience is it for?
It is for artists and activists. It will require some technical knowledge to use, but not too much, and it will be relatively inexpensive.
What is the world/context/market that your project lives in?
It is used by students, artists and activists working with creative technology to make live installations and performances.
In your initial research, have you found something you didn’t expect? Is it an interesting path to follow?
What do you need to know about the content/story?
I need to have a better understanding of what is possible with the technology that I built so far before I can pursue story.
What do you need from a tech standpoint?
Better knowledge of C++
Who are your influences?
Kinect to OSC, Kinect2—R. Luke DuBois, Surya Mattu, Wouter Verweirder, Shawn Van Every
People Staring at Computers—Kyle McDonald
The Elastic Self—Tricia Wang