Director Ang Lee’s Gemini Man, in which actor Will Smith will star as a retiring NSA agent facing off against a younger clone of himself, is set for an October 2019 release from Paramount. It’s likely to be made possible by advancements in de-aging visual effects techniques, the kind seen in films such as The Curious Case of Benjamin Button, Captain America: The First Avenger, and other recent Marvel releases.

But Gemini Man was originally slated to be made several years ago – at Disney – at a time when the visual effects technology needed to pull off such a feat was still in its infancy.

In the early 2000s, Disney had artists from Walt Disney Feature Animation and (the now-defunct vfx studio) The Secret Lab carry out tests to capture a human performance and create a corresponding – and younger – digital human avatar. The results, impressive for their time, were not enough to push the project forward. However, the people behind the technology were able to present their findings at SIGGRAPH 2002 in a short film called Human Face Project, directed by Hoyt Yeatman, and later at SIGGRAPH 2005 as part of a course on digital face cloning.

A final frame from Human Face Project – the live-action actor is on the left and his younger cg version on the right.

Cartoon Brew spoke to two members of the original team behind Human Face Project to find out more about the Gemini Man test, including, interestingly, how the aborted Jim Carrey version of The Incredible Mr. Limpet played a key part.

The test before the test

Creating a living, breathing digital human character, or a human-like creature, has been a goal for some time. Among the many projects imagined during the boom of cgi work in the late 1990s was a proposed Jim Carrey remake of The Incredible Mr. Limpet, in which Carrey would voice a cg fish character that closely resembled the actor’s very expressive face. Several vfx studios are said to have provided tests for this planned film.

Modeler Hiroki Itokazu was working at Warner Bros. when the Mr. Limpet project was in its testing phase (Itokazu had also modeled the Giant for The Iron Giant). He wasn’t assigned to the Mr. Limpet film, but he used the opportunity to research facial expressions, creating a realistic facial animation of Carrey without using motion capture (although the actor’s movements had been captured for the various tests). Itokazu built a working model of Carrey’s head and used an anatomy textbook to construct preliminary blendshapes of all the human facial muscles for it.

Artist Bob Camp posted several sketches and pieces of artwork created for the proposed Jim Carrey film on his blog.

Lance Williams, who was the technical architect on Human Face Project, described for Cartoon Brew how Itokazu approached the Jim Carrey/Mr. Limpet animation. “What Hiroki did was he selected a set of characteristic and extreme facial expressions from Carrey’s performance, and attempted to match them with his model. For each selected frame, he got as close as he could by orienting the head and posing the muscle blendshapes he’d built. The next step was fitting the frame by remodeling the control vertices of the head and blendshapes to match the image.”

“He also mapped the changes he’d sculpted back to the blendshapes according to the degree each was activated,” added Williams. “By the time the remodeled blendshapes fit expression number three, they no longer worked as well for expression number one, so Hiroki painstakingly iterated this process until his model head and blendshapes fit them all.”

The result was a portrait model that could be animated by controlling the activation of a complete set of individual facial muscles – facial muscles that were not generic, but captured the surface changes they brought about on a particular performer. “We called this technique ‘Hirokimation,’” Hiroki told Cartoon Brew. “It was a very large honor to have my name used.”
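The principle behind ‘Hirokimation’ – a neutral head deformed by weighted, per-muscle blendshape offsets – can be sketched in a few lines of code. The example below is only an illustrative toy in Python/NumPy, not the team’s actual tooling; the function and muscle names are hypothetical.

```python
import numpy as np

def evaluate_blendshape_rig(neutral, muscle_shapes, activations):
    """Deform a neutral head mesh by a weighted sum of muscle blendshapes.

    neutral       : (V, 3) array of rest-pose vertex positions
    muscle_shapes : dict of muscle name -> (V, 3) sculpted target positions
    activations   : dict of muscle name -> weight, typically in [0, 1]
    """
    deformed = neutral.copy()
    for muscle, target in muscle_shapes.items():
        weight = activations.get(muscle, 0.0)
        # Each muscle shape contributes its offset from the rest pose,
        # scaled by how strongly that muscle is activated.
        deformed += weight * (target - neutral)
    return deformed

# Hypothetical usage on a toy four-vertex "head."
neutral = np.zeros((4, 3))
shapes = {"zygomaticus_major_L": np.tile([0.0, 0.1, 0.0], (4, 1))}
smile_frame = evaluate_blendshape_rig(neutral, shapes, {"zygomaticus_major_L": 0.7})
```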

Gemini methodology
Price Pethel’s life mask scan.

Ultimately, Mr. Limpet did not go ahead, reportedly because some of the test results were unsatisfactory. But the above approach to modeling and animation was one that Walt Disney Feature Animation’s Jinko Gotoh – a producer on the Gemini Man project – saw as promising. She asked Itokazu to come on board.

What Disney wanted to see in the Gemini Man test was a human actor interacting with their younger self. The approach from The Secret Lab and Walt Disney Feature Animation was to re-create the real actor as a cg avatar and generate this younger version, also in cg, by re-targeting the original performance. That original performance came from Price Pethel, then a creative director at The Secret Lab, which had formerly been Dream Quest Images before being bought by Disney (Pethel also helped develop the compositing software Nuke at Digital Domain).

Pethel was filmed carrying out numerous expressions with markers on his face, and his face was also captured with a Cyberware laser scan. That scan was used to generate a 3D model, but texture reference was also required. For that, Hoyt Yeatman (an Oscar winner in vfx for The Abyss, and also the visual effects supervisor on films such as Mighty Joe Young and Armageddon) set up a number of synchronized 35mm film cameras around Pethel’s face. These had polarizing filters to avoid capturing highlights in the skin – an important part of obtaining neutral and clean textures. A life mask of Pethel and dental molds of his teeth, also laser scanned, rounded out the capture techniques.

Scanned expressions.

The team then had a scanned head in 3D form and this is where Hiroki’s approach to modeling and animation came in. “I created all of the actor’s muscle shapes in Maya using the cyberscan data as reference,” said Itokazu. “I created blend shapes very carefully to match to the actor’s expressions. Then we applied motion capture data to the blend shapes. That way, we didn’t have to apply motion capture data directly to the model. So the model was clean and easy to edit with blend shapes.”

Using the ‘Hirokimation’ approach, Itokazu would try to match Price’s expressions from the footage, essentially by pushing and pulling the 3D model to match facial features (this was also done using what was called an ‘optimizer’).
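The article doesn’t spell out how that ‘optimizer’ worked, but a common way to pose a blendshape rig against tracked facial markers is a least-squares solve for the activation weights. The sketch below illustrates that general idea only; it is not a reconstruction of the Disney/Secret Lab tool, and all names are hypothetical.

```python
import numpy as np

def fit_activation_weights(neutral_markers, shape_markers, observed_markers):
    """Estimate blendshape activations that move the model's tracked marker
    points as close as possible to the markers observed in the footage.

    neutral_markers  : (M, 3) marker positions on the neutral model
    shape_markers    : (S, M, 3) marker positions under each of S muscle shapes
    observed_markers : (M, 3) marker positions recovered from the plate
    """
    # Each column of A holds the offsets one muscle shape produces at the markers.
    offsets = shape_markers - neutral_markers           # (S, M, 3)
    A = offsets.reshape(len(shape_markers), -1).T       # (3M, S)
    b = (observed_markers - neutral_markers).ravel()    # (3M,)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Clamp to a plausible activation range before applying to the rig.
    return np.clip(weights, 0.0, 1.0)
```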

A comparison of the live-action reference with a marked-up face, and the frame-by-frame animation of the cg model – the Hirokimation approach.

That re-targeting took the animation to a realistic level (significant work was also done on tracking and animating the eyes), but one of the hardest parts of pulling off the Gemini Man test was rendering skin. Around that same time, sub-surface scattering algorithms were starting to be used to simulate the way light penetrates and diffuses through flesh. The Gemini Man team relied on different methods to realize skin textures, craft pores and wrinkles in the face, and replicate the lighting of the scene in which Pethel performed – a bar set.
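Subsurface scattering is usually explained as light entering the skin, diffusing below the surface, and exiting nearby, which is why a point’s shading depends on the light arriving at its neighbors. The toy sketch below illustrates that diffusion idea with a simple exponential falloff; it is not the method the Gemini Man team used (they relied on different techniques), and the falloff profile is purely illustrative.

```python
import numpy as np

def diffuse_irradiance(points, irradiance, scatter_distance):
    """Toy diffusion-style subsurface scattering over surface samples.

    Each sample's outgoing energy gathers irradiance from nearby samples,
    weighted by an exponential falloff with distance, which is the basic
    intuition behind diffusion-based skin shading.

    points           : (N, 3) sample positions on the skin surface
    irradiance       : (N,) incoming light energy at each sample
    scatter_distance : how far light is allowed to bleed through the flesh
    """
    out = np.zeros_like(irradiance, dtype=float)
    for i, p in enumerate(points):
        r = np.linalg.norm(points - p, axis=1)
        weights = np.exp(-r / scatter_distance)
        out[i] = np.sum(weights * irradiance) / np.sum(weights)
    return out
```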

Another crucial aspect of the test, of course, was to produce the younger (cg) version of Pethel. “That was the goal,” said Williams, “but people were actually a little puzzled that we spent as much time as we did taking Price and duplicating his performance, that is, playing Price through the current Price. But that was the only way we could tell if we really got the performance. When you played it back as the younger Price, you could say, ‘Yeah, well, he’s sort of doing the same thing.’ But if you did it through the cg model first, then you’ve established you’ve got a 1-to-1 mapping.”

The cg Price Pethel and the original photography.

The team demonstrated a re-mapping to the younger version of Pethel in the final short film, along with other re-mapped models of animated characters. Their method involved taking the muscle activation tracks from Pethel’s performance and applying them to a different cg character, which could then be adjusted where needed.
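In code terms, that re-mapping amounts to driving a second character’s muscle blendshapes with the activation curves recovered from the source performance. The following is a minimal, hypothetical sketch of the idea, not the team’s implementation.

```python
import numpy as np

def retarget_performance(activation_curves, target_neutral, target_shapes):
    """Apply per-muscle activation curves captured from one performer to the
    corresponding blendshapes of a different cg character.

    activation_curves : dict of muscle name -> (F,) weights over F frames
    target_neutral    : (V, 3) rest-pose vertices of the target character
    target_shapes     : dict of muscle name -> (V, 3) sculpted target shape
    """
    num_frames = len(next(iter(activation_curves.values())))
    frames = []
    for f in range(num_frames):
        mesh = target_neutral.copy()
        for muscle, curve in activation_curves.items():
            # Only muscles the target rig actually shares are transferred;
            # the rest can be adjusted or keyframed by hand afterwards.
            if muscle in target_shapes:
                mesh += curve[f] * (target_shapes[muscle] - target_neutral)
        frames.append(mesh)
    return frames
```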

A success or failure?

Clearly, since the Gemini Man test, major developments in human and human-like cg characters have been achieved (some very recent examples include the cg Grand Moff Tarkin and Princess Leia in Rogue One, and the cg Hugh Jackman in Logan). The authors of the paper accompanying the 2005 SIGGRAPH course on Human Face Project acknowledged the developments up until that time, but also set out what they felt they had achieved in the test.

“Since the time our test film was produced, significant advances in rendering human flesh have been achieved,” the authors wrote. “In particular, the transillumination of flesh has been modeled by simulated subsurface scattering. In the judgment of the authors, the project was extremely successful in tracking and animating human performance, but less successful in photorealistic rendering. Some of the techniques we developed, however, are complementary to methods of rendering cg humans today, and may be valuable in other contexts. To achieve our goals, we were obliged to animate faces with far more detail than we could track.”

The different cg characters that Pethel’s original performance was re-targeted to.

“I almost achieved the goal but it was not perfect,” added Itokazu in his recent reflection on the work to Cartoon Brew. “It was pretty good and I was happy about the result. However, there was still some room for improvement. For example, it was very difficult to capture the details such as the sticky corners of lips and inside the mouth. I still want to research more about facial expressions.”

Making cg humans is hard
This credits listing from SIGGRAPH 2002 identifies the team behind Human Face Project, which was shown in the conference’s Electronic Theater.

While it’s obviously too early to know exactly how Ang Lee’s Gemini Man will generate a younger Will Smith, the approach is likely to draw on recent developments in de-aging or ‘youthification’ methods, many pioneered by the visual effects studio Lola VFX (see, for example, Skinny Steve in the Captain America films or the young Michael Douglas in Ant-Man). These de-aged characters have relied on a combination of facial scans and facial capture, 3D modeling, and 2D compositing and smoothing techniques applied directly onto the performance of the older actor.
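Lola VFX has not published its pipeline, so the following is only a toy illustration of one widely understood 2D smoothing idea in that vein: frequency separation, where a face plate is split into a blurred base layer and a fine-detail layer, and the detail layer (wrinkles, pores) is attenuated before recombining. Function and parameter names are hypothetical.

```python
import cv2
import numpy as np

def soften_detail(frame_bgr, blur_sigma=8.0, detail_keep=0.5):
    """Toy frequency-separation smoothing of a face plate.

    Splitting the image into low frequencies (broad facial structure) and
    high frequencies (wrinkles, pores), then reattaching only part of the
    detail layer, softens age lines without changing the overall likeness.
    """
    img = frame_bgr.astype(np.float32)
    base = cv2.GaussianBlur(img, (0, 0), blur_sigma)  # low-frequency base layer
    detail = img - base                               # high-frequency detail layer
    out = base + detail_keep * detail
    return np.clip(out, 0, 255).astype(np.uint8)
```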

In any case, Williams, who continued his research into human face-tracking and animation at Nvidia, notes that replicating facial movement with digital visual effects is always going to be extremely challenging in a world where we are looking at each other’s faces all the time.

“As social beings,” Williams said, “humans have evolved with acute sensitivity to the signs and signals of others. Much of this sensitivity is unconscious, meaning that people are extremely alert to deviations from the norm, but may not be able to articulate what it is about a face, voice, or glance that alerts them.”

EDITOR’S NOTE: Cartoon Brew has learned that Lance Williams passed away on August 20, 2017, following a battle with cancer. We extend our sympathies to his family and friends.
