Volumetric Video Tests

This week I worked on volumetric video. I first got Yao’s Processing code working with two Kinects. I didn’t spend time calibrating or repositioning the cameras; I spent just enough time to understand generally how things were working.
https://vimeo.com/206466673

I then set up two Kinectron servers and connected to both of them in one browser. As soon as I started running both raw depth feeds in the same browser in three.js, the browser slowed almost to a halt. I ran some tests and found that the bottleneck was converting the raw depth data from a data URL into an array of numbers.
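As a rough illustration of where that cost lives, here is a minimal sketch of the unpacking step. It assumes the server packs each 16-bit depth value into two 8-bit image channels of the decoded frame; that packing scheme is an assumption, not the documented Kinectron encoding, so check the server's actual format before relying on it.

```javascript
// Sketch: unpack a flat RGBA pixel buffer (as returned by canvas
// getImageData after drawing the data-URL image) into 16-bit depth values.
// ASSUMPTION: low byte in the red channel, high byte in the green channel.
function unpackDepth(rgba) {
  const depth = new Uint16Array(rgba.length / 4);
  for (let i = 0; i < depth.length; i++) {
    const lo = rgba[i * 4];     // red channel: low byte (assumed)
    const hi = rgba[i * 4 + 1]; // green channel: high byte (assumed)
    depth[i] = (hi << 8) | lo;
  }
  return depth;
}
```

Looping over every pixel of every frame like this, twice per animation tick for two feeds, is exactly the kind of per-frame CPU work that adds up fast on the main thread.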

Interestingly, I was able to get the two feeds running simultaneously and smoothly in side-by-side browser windows, although CPU usage was still high (above 90%). Nonetheless, this suggests I can get both to process in one window by restructuring the code so the image-processing functions run on both feeds concurrently.
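One way to sketch that restructuring (feed names here are hypothetical) is to keep only the most recent frame per feed and process both in a single loop tick, so that when decoding falls behind, stale frames get dropped instead of queueing up and stalling the browser:

```javascript
// Sketch: latest-frame buffer for two feeds. Incoming frames overwrite
// the previous one, so a slow processing loop never builds a backlog.
const latest = { kinectA: null, kinectB: null }; // hypothetical feed ids

// Called from each Kinectron feed's frame callback.
function onFrame(feedId, frameData) {
  latest[feedId] = frameData; // overwrite: stale frames are dropped
}

// Called once per animation-loop tick; processes whatever is newest.
function processAll(process) {
  const results = {};
  for (const id of Object.keys(latest)) {
    if (latest[id] !== null) {
      results[id] = process(latest[id]);
      latest[id] = null; // mark consumed until a new frame arrives
    }
  }
  return results;
}
```

The design choice here is simple back-pressure: for a live point cloud, showing the newest frame matters more than showing every frame.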

https://vimeo.com/206470938

I also talked generally with Craig about shaders and WebGL in three.js. Yao used shaders in her Processing code to do the image processing; I am using the standard three.js point cloud at the moment. Craig thought I might be able to get a more efficient result using a point cloud built on buffered geometry, which I still need to test. He also thought that I could use shaders to apply color to my image, and he gave me some resources for learning about shaders should I decide to go this route.
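To try the buffered-geometry idea, one approach is to fill a flat xyz position buffer from a depth frame and hand it to three.js as a `position` attribute. The projection below is a naive sketch: the frame width, height, and depth scale are assumptions for illustration, not the real Kinect camera intrinsics.

```javascript
// Sketch: build a flat [x0,y0,z0, x1,y1,z1, ...] position buffer from a
// depth frame. In three.js this would feed a BufferGeometry, roughly:
//   geometry.setAttribute('position', new THREE.BufferAttribute(buf, 3));
//   scene.add(new THREE.Points(geometry, material));
// ASSUMPTION: orthographic-style projection with an arbitrary depth scale.
function depthToPositions(depth, width, height, scale = 0.001) {
  const positions = new Float32Array(depth.length * 3);
  for (let i = 0; i < depth.length; i++) {
    positions[i * 3] = (i % width) - width / 2;                // x, centered
    positions[i * 3 + 1] = height / 2 - Math.floor(i / width); // y, flipped
    positions[i * 3 + 2] = depth[i] * scale;                   // z from depth
  }
  return positions;
}
```

The appeal of this route is that the typed array can be updated in place each frame and re-uploaded to the GPU, rather than rebuilding geometry objects per frame.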

In general, the technical challenges of going this route are pretty big: they involve a level of graphics-programming knowledge that I don’t have and am not sure I can realistically acquire in the coming weeks.

Update on Autorigging and Skeletons

Although my main focus was moving volumetric video forward this week, I also made some progress on avatars/rigging.

  • I had office hours with Todd. He pointed me to the Kinect for Unreal rigging structure and also suggested that I dig through the Kinect SDK for it. He also confirmed that Mixamo has a 25-bone rigging option, as I thought, but said I needed to upload an empty avatar for that option to appear.
  • I created an avatar in Fuse, then uploaded it to Mixamo and got my 25-bone avatar and a sample animation to go with it.
  • I talked with Eve and Or about their project for SXSW, which uses prerecorded animations in Open Frameworks.
  • Kyle gave me two new avatars with 25-bone skeletons and animations — Marlon dancing!

Next Steps

  • Get feedback from midterm and Refest.
  • Follow up with Adaora for a meeting to talk about story.
