When it comes to real-time animation, Nvidia doesn’t just talk the talk — it’s now animating the talk, too.
The company’s new Audio2Face tool converts audio files into automated facial animation. In a series of tutorial videos, 3D models come to life when fed recorded lines, speaking them with reasonably accurate lip sync.
The characters in the videos (see below) may look a little uncanny, but their performances can be refined with various post-processing parameters. Nvidia says the tool will eventually support all languages. It also supports import and export with Unreal’s MetaHuman Creator tool for building virtual beings.