Autorigging, Kinectron, Three.js Tests

This week my goal was to do a quick and dirty dive into autorigging with live Kinect data in three.js. Although I don’t have something beautiful to show for it, I now have a good understanding of the Kinect joint data and of how avatar animations work in three.js. Here are some of the things I did:


I live-puppeteered a 2D cat in three.js using Kinectron.

To do this, I used existing code and images made by Shawn Van Every and Laura Chen for Micro Stories Live.
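For reference, the receiving side looks roughly like this. This is a minimal sketch, not Shawn and Laura’s actual code: the server address, the cat-head.png texture, and the use of the head joint are all placeholders, and it assumes the Kinectron client’s makeConnection()/startTrackedBodies() calls with joints carrying normalized depthX/depthY values.

```js
// Minimal sketch: drive a 2D "cat head" plane in three.js from live Kinectron joints.
const KINECTRON_IP = '127.0.0.1'; // hypothetical address of the Kinectron server

const scene = new THREE.Scene();
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Stand-in for the cat artwork: a textured plane that follows the head joint.
const catHead = new THREE.Mesh(
  new THREE.PlaneGeometry(0.3, 0.3),
  new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load('cat-head.png'),
    transparent: true
  })
);
scene.add(catHead);

const HEAD = 3; // Kinect v2 jointType index for the head

const kinectron = new Kinectron(KINECTRON_IP);
kinectron.makeConnection();
kinectron.startTrackedBodies(bodyTracked);

function bodyTracked(body) {
  const head = body.joints[HEAD];
  // depthX/depthY are normalized 0–1; remap them into the camera's -1..1 space.
  catHead.position.set(head.depthX * 2 - 1, -(head.depthY * 2 - 1), 0);
}

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```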

I then mapped each of my joints onto a cube in 3D space, with 3D orientation, in three.js, and tuned in simultaneously from three computers on my home network.

To do this, I spent a lot of time digging into the Kinect2 SDK to learn about the 11 placement properties on each Kinect joint. I learned that the properties I should be using for 3D work are the camera-space position and orientation values. I learned how to scale the avatar in 3D in three.js, although I still have some work to do on this front (e.g. the abnormally long arms; the legs are breaking up because my desk is blocking the Kinect from seeing the floor, so that’s not a scaling issue). I also encountered Vector4 and homogeneous coordinates for the first time, which are still confusing to me. If I continue down this route I will need to learn more about life in four dimensions.
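A stripped-down version of that cube setup looks like this. It’s a sketch under a few assumptions: joints arrive from Kinectron with cameraX/Y/Z (in meters) and orientationX/Y/Z/W properties, the SCALE constant is a made-up tuning value, and it reuses the scene and render loop from the previous sketch.

```js
// Minimal sketch: one cube per Kinect joint, positioned from camera-space
// coordinates and oriented from the joint's orientation quaternion.
const jointCubes = [];
for (let i = 0; i < 25; i++) {
  const cube = new THREE.Mesh(
    new THREE.BoxGeometry(0.05, 0.05, 0.05),
    new THREE.MeshNormalMaterial()
  );
  scene.add(cube); // scene/renderer/animate loop as in the earlier sketch
  jointCubes.push(cube);
}

const SCALE = 1; // world units per meter; tune to match the scene and avatar scale

function bodyTracked(body) {
  body.joints.forEach((joint, i) => {
    // Kinect camera space has +Z pointing away from the sensor, so flip Z to
    // place the skeleton in front of a default three.js camera looking down -Z.
    jointCubes[i].position.set(
      joint.cameraX * SCALE,
      joint.cameraY * SCALE,
      -joint.cameraZ * SCALE
    );
    // Apply the joint orientation quaternion directly to the cube.
    jointCubes[i].quaternion.set(
      joint.orientationX,
      joint.orientationY,
      joint.orientationZ,
      joint.orientationW
    );
  });
}
```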

I overwrote animation data with my own recorded Kinectron data.

Okay, so I’m a long way from getting the live Kinect data to move this avatar in any humanly possible way, but I learned a lot and now understand the challenge here. I dug into the three.js avatar and animation example and now understand how the bone and animation structures are written. The animations are ordered by bone, then by keyframe/time, which is the opposite of how the data comes from the Kinect: the Kinect groups all joints by the moment in time they share, which is obviously how it has to work in real time. Knowing all of this, I recorded a short 20-frame sequence in Kinectron and wrote a node.js script to convert it to the hierarchy structure used by the three.js animation. While the animation didn’t translate correctly, it feels like a good start.
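The conversion boils down to a reshaping loop along these lines. This is a simplified sketch rather than my exact script: it assumes the Kinectron recording is a JSON array of frames, each with a 25-entry joints array (cameraX/Y/Z, orientationX/Y/Z/W), and it targets the old three.js JSON animation layout with one keys array per bone; the file names and fps are placeholders.

```js
// Minimal node.js sketch of the frame-order -> bone-order reshaping.
const fs = require('fs');

const frames = JSON.parse(fs.readFileSync('kinectron-recording.json', 'utf8'));
const fps = 30; // assumed capture rate

const hierarchy = [];
for (let bone = 0; bone < 25; bone++) {
  // Walk every recorded frame and pull out this one bone's keyframes.
  const keys = frames.map((frame, f) => {
    const j = frame.joints[bone];
    return {
      time: f / fps,
      pos: [j.cameraX, j.cameraY, j.cameraZ],
      rot: [j.orientationX, j.orientationY, j.orientationZ, j.orientationW],
      scl: [1, 1, 1]
    };
  });
  hierarchy.push({ keys });
}

const animation = {
  name: 'kinectron-clip',
  fps,
  length: frames.length / fps,
  hierarchy
};

fs.writeFileSync('three-animation.json', JSON.stringify(animation, null, 2));
```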

Most of this work is in the Kinectron rigging branch examples on GitHub.

Here are the things that I would need to learn more about to move forward with this:

  • Create or find an avatar with a 25-bone skeleton suited for Kinect data (talk to Todd Bryant)
  • Match the avatar, world and Kinect joint data scaling
  • Change the animation pipeline in three.js to work in real time
  • Possibly alter the Kinect2 library to change the smoothing factor on the Kinect data; right now it’s very jittery (a client-side alternative is sketched after this list)
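On the smoothing point, a simple exponential filter on the client side might be enough to avoid touching the Kinect2 library at all. This is just a sketch, and ALPHA is a made-up tuning value: lower means smoother but laggier, higher means more responsive but jittery.

```js
// Minimal sketch: exponential smoothing of incoming joint positions.
const ALPHA = 0.3;        // hypothetical smoothing factor, 0..1
let smoothed = null;      // last smoothed frame (25 joints)

function smoothJoints(joints) {
  if (!smoothed) {
    // First frame: just copy the raw positions.
    smoothed = joints.map(j => ({ x: j.cameraX, y: j.cameraY, z: j.cameraZ }));
    return smoothed;
  }
  joints.forEach((j, i) => {
    // Move each smoothed position a fraction of the way toward the new reading.
    smoothed[i].x += ALPHA * (j.cameraX - smoothed[i].x);
    smoothed[i].y += ALPHA * (j.cameraY - smoothed[i].y);
    smoothed[i].z += ALPHA * (j.cameraZ - smoothed[i].z);
  });
  return smoothed;
}
```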

Next steps and story update

This upcoming week I’m going to dive deeper into the possibility of working with more than one Kinect and volumetric video. Yao worked on connecting two Kinects with Processing last year, and shared her code with me (thanks Yao!!). So, I’m going to try to get that working in Processing, then with Kinectron in three.js this week.

Also, Adaora and I are working on scheduling a time to talk about story. She’s in Europe, but I think we will get a chance to sit down next week.

Finally, I confirmed with Shawn that I will be presenting Kinectron at REFEST on March 5. This is a collaboration with Witness and Culture Hub—two organizations that would be awesome potential users/collaborators on Kinectron. I’m excited to have some real world feedback.