Software that combines neural networks and deep learning with models of human body shape and motion could be used to animate or control human-like avatar characters, according to a New York tech start-up.

Body Labs, founded in 2013, is drawing on its research and development into how people’s bodies are shaped and how they move – and the resulting statistical models – to predict and generate animation from videos and even single photographs.

The company is deploying its research in mobile, VR, AR, and gaming apps where the human body is a central feature, or where the body itself might serve as a controller. Markerless motion capture is another potential use of the software, as Cartoon Brew finds out from Body Labs CTO and co-founder Eric Rachlin.

Body Labs is a company built on research into generating a statistical model of the human body. That research was carried out at Brown University and the Max Planck Institute for Intelligent Systems, led by Professor Michael Black. Its latest incarnation is something Body Labs calls ‘SOMA: Human-Aware AI.’

Behind the name is deep learning: taking known data about humans from scans, measurements, photos, videos, and other sources, and using it to predict human shape and motion. The idea is to take everyday RGB photos or videos, extract the key aspects of body shape they contain, and use a deep learning algorithm to generate the motion that body would likely take.
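
To make that concrete, here is a minimal sketch of how a PCA-style statistical body model of this kind works, in the spirit of the published research from Black’s group (e.g., SMPL). All names, dimensions, and data are illustrative stand-ins, not Body Labs’ actual code:

```python
# Minimal sketch of a PCA-style statistical body model. All names,
# dimensions, and data here are illustrative, not Body Labs' code.
import numpy as np

NUM_VERTS = 6890        # vertex count of the template mesh (SMPL uses 6,890)
NUM_SHAPE_PARAMS = 10   # low-dimensional "shape" coefficients (betas)

# A statistical model stores a mean body plus learned shape directions.
template = np.zeros((NUM_VERTS, 3))                                  # mean body mesh
shape_dirs = np.random.randn(NUM_VERTS, 3, NUM_SHAPE_PARAMS) * 0.01  # stand-in PCA basis

def body_from_shape(betas: np.ndarray) -> np.ndarray:
    """Return a full 3D mesh for a person described by a few shape numbers.

    Because every body is generated from the same template, every output
    mesh has identical topology -- which is what lets scans, mocap, and
    photo-based predictions all be merged into one representation.
    """
    return template + shape_dirs @ betas

# A deep network's job, conceptually, is to regress betas (and pose)
# from an ordinary RGB photo or video frame.
betas = np.zeros(NUM_SHAPE_PARAMS)   # all-zero betas -> the mean body
mesh = body_from_shape(betas)
print(mesh.shape)                    # (6890, 3)
```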

“As a simple example,” explains Rachlin, “suppose you have raw motion capture data along with a 3D scan of someone. Body Labs’ tech can turn both types of data into 3D bodies, allowing us to combine the high quality shape information that is present in a 3D scan with the high quality animation that comes from motion capture.”
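
Conceptually, the combination Rachlin describes might look like the sketch below: shape parameters fitted to the scan, pose parameters fitted to the mocap, recombined into one animated body. The fitting routines are hypothetical stand-ins that return fixed values, not Body Labs functions:

```python
import numpy as np

# Hypothetical stand-ins: a real system would fit these by optimizing
# against the statistical model; here they just return fixed parameters.
def fit_shape_to_scan(scan_mesh: np.ndarray) -> np.ndarray:
    return np.zeros(10)          # shape coefficients (betas)

def fit_pose_to_frame(frame: np.ndarray) -> np.ndarray:
    return np.zeros(24 * 3)      # per-joint rotations (thetas)

def pose_body(betas: np.ndarray, thetas: np.ndarray) -> np.ndarray:
    # Placeholder: a real model would pose the shaped template mesh
    # via its skeleton; we return a dummy mesh of the right size.
    return np.zeros((6890, 3))

scan_mesh = np.zeros((50000, 3))            # raw scan: dense, noisy, unordered
mocap_frames = [np.zeros((40, 3))] * 120    # raw mocap: 40 markers, 120 frames

betas = fit_shape_to_scan(scan_mesh)                     # who the person is
poses = [fit_pose_to_frame(f) for f in mocap_frames]     # how they move
animation = [pose_body(betas, t) for t in poses]         # shape + motion combined
print(len(animation), animation[0].shape)                # 120 (6890, 3)
```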

This image illustrates how a human performer’s body is analyzed and taken over by a CG avatar.

“Since the resulting bodies always have a consistent structure,” adds Rachlin, “a library of pre-existing texture maps, or other animation assets, could then be applied on top of it. What’s more, if some of the original mocap or scan data is noisy, or missing, the statistical nature of our model allows us to intelligently de-noise the data using anatomically-aware algorithms.”
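
The de-noising idea can be illustrated with a tiny example: project a noisy body back onto the model’s learned shape space, so anything the space cannot represent – i.e., most random noise – is discarded. The basis here is a random orthonormal stand-in rather than a learned one, and the whole routine is our own illustration, since Body Labs’ actual anatomically-aware algorithms are not public:

```python
import numpy as np

# Sketch of statistical de-noising by projection onto a shape space.
NUM_VERTS, K = 6890, 10
template = np.zeros((NUM_VERTS, 3))
basis = np.linalg.qr(np.random.randn(NUM_VERTS * 3, K))[0]  # orthonormal stand-in basis

def denoise(noisy_mesh: np.ndarray) -> np.ndarray:
    offset = (noisy_mesh - template).reshape(-1)     # deviation from the mean body
    betas = basis.T @ offset                         # least-squares fit in shape space
    clean = template + (basis @ betas).reshape(NUM_VERTS, 3)
    return clean                                     # noise outside the space is dropped

noisy = template + np.random.randn(NUM_VERTS, 3) * 0.01
print(denoise(noisy).std() < noisy.std())            # True: most noise removed
```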

In practical terms, Body Labs is offering its ‘SOMA Shape API’ to interested developers. The API takes a single image, plus the subject’s height and weight, and produces a full 3D model and digital measurements of that person.
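
A call to such an API might look roughly like the following sketch. The endpoint URL, field names, and response format are hypothetical placeholders; developers should consult Body Labs’ own documentation for the real interface:

```python
import requests

# Hypothetical sketch of a shape-from-photo API call. The URL, fields, and
# response keys are illustrative placeholders, not Body Labs' documented API.
API_URL = "https://api.example.com/soma/shape"   # placeholder endpoint

with open("subject.jpg", "rb") as photo:
    response = requests.post(
        API_URL,
        files={"image": photo},                     # the single RGB photo
        data={"height_cm": 178, "weight_kg": 75},   # the two extra inputs
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )

result = response.json()
# Expected, illustratively: a full 3D mesh plus digital measurements.
print(result.get("measurements", {}).get("chest_cm"))
```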

But what about facial animation? Several facial-tracking apps already exist for taking a still image of a face and generating an avatar, or for using video of a face to drive some other kind of animated character. Rachlin says, “facial tracking is on our roadmap, but we are currently heavily focused on improving SOMA’s 3D shape and motion prediction capabilities from RGB video and photos.”

One possible use of SOMA is to turn movements of the human body into accurate game controls – rather than relying on hand-held controllers – such as extending an arm to fire an energy beam or crossing the arms to block an attack. The API is already aimed at consumer apps that let shoppers virtually try on clothes fitted to their body profiles.
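
As an illustration of the body-as-controller idea, a game could test whether a predicted arm is fully extended by measuring the shoulder-elbow-wrist angle. The joint layout and detection rule below are our own sketch, not part of SOMA:

```python
import numpy as np

# Sketch of using predicted body pose as a game controller: fire when an
# arm is nearly straight. Joints would come from per-frame body prediction.
def arm_extended(shoulder, elbow, wrist, threshold_deg=160.0):
    """True if the shoulder-elbow-wrist angle is close to 180 degrees."""
    upper = shoulder - elbow
    lower = wrist - elbow
    cos = np.dot(upper, lower) / (np.linalg.norm(upper) * np.linalg.norm(lower))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angle > threshold_deg

# Example frame: three joints of a horizontally outstretched arm.
shoulder = np.array([0.0, 1.4, 0.0])
elbow = np.array([0.3, 1.4, 0.0])
wrist = np.array([0.6, 1.4, 0.0])
if arm_extended(shoulder, elbow, wrist):
    print("fire energy beam")
```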

Another application of SOMA might be markerless motion capture, that is, using only raw video of a person – with no suit or markers attached to their body – to drive animation. “SOMA outputs a full 3D body, not just a stick figure,” notes Rachlin. “The result is a tool for creating realistic character animation that just works, as opposed to one that requires a team of specialized technicians.”

Body Labs does not, however, yet see SOMA as a substitute for high-end motion capture, though it believes the accuracy of its algorithms will improve over time. “Today,” says Rachlin, “we see apparel, gaming, visual effects, and augmented reality as places where SOMA can provide the most value.”

Developers can request access to SOMA’s APIs at www.bodylabs.com/soma.

Ian Failes

Ian Failes is a writer covering visual effects and tech for Cartoon Brew.
