Development Updates!

It’s been a while since I’ve posted, but a lot has happened. So here’s a quick update, and I’ll post documentation from the Quick and Dirty after this.

Overall Idea
Based on some additional reading, feedback and discussion, I’ve decided to reframe my thesis in terms of envisioning the “internet of experiences.”

Practically speaking, this means doing exactly what I’ve been doing and have proposed to do, but it gives me a different driving force. Rather than focusing on creating a tool to represent people in 3D, I will shift toward creating examples that envision a browser-based internet of experiences.

Based on feedback from the Q&D show, I’m completely convinced this is an exciting avenue for me to pursue. I will update my thesis statement to reflect this before the week is over.

Rigging
I turned back to looking at rigging this weekend. The code I’m working with is the 3D Elmo code from Laura Chen’s Microstories Live project with Shawn. In that project, Laura got Kinect joints working with a model exported from Blender in .js format.

I was able to rebuild her code and get it working with Kinectron.
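For documentation, the hookup looks roughly like this. It’s a simplified sketch rather than Laura’s or my exact code: the IP address and file path are placeholders, it assumes a three.js scene already exists, and the naive joint-to-bone mapping at the end is exactly the part I don’t understand yet.

    // Simplified sketch: feed Kinectron joint data into a rigged Blender export.
    // Assumes THREE, a scene, and the Kinectron client library are loaded.

    const kinectron = new Kinectron('192.168.0.10'); // placeholder IP
    kinectron.makeConnection();

    let bones = [];

    // elmo.js is the Blender export in the three.js JSON (.js) format.
    const loader = new THREE.JSONLoader();
    loader.load('models/elmo.js', function (geometry, materials) {
      const mesh = new THREE.SkinnedMesh(
        geometry,
        new THREE.MeshStandardMaterial({ skinning: true })
      );
      bones = mesh.skeleton.bones;
      scene.add(mesh);
    });

    // Each tracked body carries 25 joints with camera-space positions.
    kinectron.startTrackedBodies(function (body) {
      body.joints.forEach(function (joint, i) {
        // The real work is mapping Kinect joint i to the right bone.
        // This naive one-to-one mapping is where my other models fall apart.
        if (bones[i]) {
          bones[i].position.set(joint.cameraX, joint.cameraY, joint.cameraZ);
        }
      });
    });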

Unfortunately, I still don’t understand a lot of what she did to make this work, including:

  • I am wondering how she made the elmo.js file, and why she used a 9-bone instead of a 25-bone skeleton.
  • I am also interested to know how she got the Elmo bones to match the bone locations from the Kinect. My experiments with other files always end up not matching.
  • Finally, I am wondering if she knows of other projects or research that use the Kinect skeleton to drive an avatar in three.js. I am having trouble finding documentation about this.

I reached out to Laura by email to see if she can help me answer any of these questions.

Before getting Elmo working, I had my own little breakthrough when I found the joints in a DAE file. That was exciting because I could manipulate the DAE in real time. However, I haven’t yet matched the joints, so it looked a bit like The Triplets of Belleville (rough code sketch after the images).

Belleville

My DAE

Hmm. Maybe they aren’t that similar.
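For the record, here is roughly how I’m reaching the DAE joints. It’s a simplified sketch, not my exact code: the file name is a placeholder, and it assumes a three.js scene, camera, and renderer already exist.

    // Simplified sketch: load a Collada (DAE) file, find its skeleton,
    // and move a bone directly. Without matching these bones to the
    // Kinect joints, this is what produces the Belleville look above.

    const loader = new THREE.ColladaLoader();

    loader.load('models/figure.dae', function (collada) { // placeholder file
      const model = collada.scene;
      scene.add(model);

      // Find the skinned mesh inside the loaded scene and grab its bones.
      let bones = [];
      model.traverse(function (child) {
        if (child instanceof THREE.SkinnedMesh) {
          bones = child.skeleton.bones;
        }
      });

      // Wiggle one joint in real time just to prove I can reach it.
      function animate() {
        requestAnimationFrame(animate);
        if (bones.length > 0) {
          bones[0].rotation.z += 0.01;
        }
        renderer.render(scene, camera);
      }
      animate();
    });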

Anyway, I’m still really far away on this. But the big takeaway for me here is that I’m not that excited by it. Based on where I’m going, and based on Q&D feedback (see upcoming post), I don’t find puppeteering that exciting. For now at least, I’d like to push my energy into working on volumetric video.

Volumetric Video
I had a reeeaaaaaaaallllly exciting breakthrough last weekend on the volumetric side. I had actually sat down on Saturday to prove to myself that I couldn’t get two feeds working simultaneously in the same browser. And I was ready to leave it there and focus on rigging.

I had two things to try. The first was a suggestion from Craig to use three.js buffered geometries instead of the standard point cloud geometry in my example in order to speed things up. I did that, and it worked: it helped with performance by removing the heavy garbage collection my code was triggering. That was a good step forward, but it didn’t solve my overall problem, which was that my raw depth processing function was still choking every time I tried to run more than one raw depth feed.
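Roughly, the change looks like the sketch below. It’s simplified and not my exact code: the centering and depth-scale numbers are made up, and newer versions of three.js use setAttribute instead of addAttribute.

    // Simplified sketch: preallocate one buffer and overwrite it each frame,
    // instead of rebuilding the point cloud geometry from scratch.

    const MAX_POINTS = 512 * 424; // Kinect v2 depth frame resolution

    const geometry = new THREE.BufferGeometry();
    const positions = new Float32Array(MAX_POINTS * 3);
    geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));

    const material = new THREE.PointsMaterial({ size: 2, color: 0xffffff });
    const pointCloud = new THREE.Points(geometry, material);
    scene.add(pointCloud);

    // Called whenever a new raw depth frame arrives.
    function updatePointCloud(depthValues) {
      let i = 0;
      for (let y = 0; y < 424; y++) {
        for (let x = 0; x < 512; x++) {
          const depth = depthValues[y * 512 + x];
          positions[i++] = x - 256;     // center on x
          positions[i++] = 212 - y;     // flip and center on y
          positions[i++] = depth * 0.1; // made-up depth scale
        }
      }
      // No new objects are created here, so the garbage collector stays quiet.
      geometry.attributes.position.needsUpdate = true;
    }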

I then went on a long, arduous journey through my raw depth processing function to see if I could speed it up. It was seriously long and arduous, but at the end I stumbled into a happy accident: I used setTimeout to try to slow down the processing, and it ended up letting the function run more efficiently instead.

I honestly don’t completely understand why this is happening, and Shawn was a bit confused by it as well. What I understand at this point is that it is clearing the call stack. Whatever that means. Haha. I think I will ask Kat about this in office hours this week.
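To document the hack, the general shape is below. It’s a simplified sketch, not my exact code; onRawDepth, processNextFrame, and updatePointCloud are stand-in names. My working guess, which I want to check with Kat, is that deferring the work with setTimeout lets the current call stack empty between passes, so the browser can render and keep receiving frames.

    // Simplified sketch of the setTimeout happy accident.

    let latestDepthFrame = null;

    // The Kinectron callback only stores the newest frame instead of doing
    // the heavy processing right away on the same call stack.
    // Elsewhere: kinectron.startRawDepth(onRawDepth);
    function onRawDepth(depthFrame) {
      latestDepthFrame = depthFrame;
    }

    function processNextFrame() {
      if (latestDepthFrame) {
        updatePointCloud(latestDepthFrame); // heavy per-point work
        latestDepthFrame = null;
      }
      // The 0ms delay doesn't really slow anything down. It defers the next
      // pass to a later turn of the event loop, so the current call stack
      // can empty and the browser can render and receive frames in between.
      setTimeout(processNextFrame, 0);
    }

    processNextFrame();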

But the big takeaway here is that it has cleared the way for me to use multiple raw depth feeds in real time in one browser. And THAT is super exciting.
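The setup I’m heading toward looks roughly like the sketch below. It’s simplified: the IP addresses are placeholders, and updatePointCloud stands in for the buffered-geometry update above, adapted to take a target point cloud.

    // Simplified sketch: two raw depth feeds in one browser, each from a
    // different machine running the Kinectron server.

    const kinectronA = new Kinectron('192.168.1.10'); // placeholder IPs
    const kinectronB = new Kinectron('192.168.1.11');

    kinectronA.makeConnection();
    kinectronB.makeConnection();

    // Each feed drives its own preallocated point cloud.
    kinectronA.startRawDepth(function (depthFrame) {
      updatePointCloud(pointCloudA, depthFrame);
    });

    kinectronB.startRawDepth(function (depthFrame) {
      updatePointCloud(pointCloudB, depthFrame);
    });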

As next steps, I have started reading about registering multiple raw depth feeds, and I have reached out to Moon to see if he can explain how he registered more than one Kinect image for his thesis two years ago.