Reallusion enhances digital characters with NVIDIA AI integration
Reallusion is revolutionizing digital character creation by integrating advanced AI technologies from NVIDIA, according to NVIDIA’s tech blog. This collaboration is set to transform animation workflows for filmmakers, game developers and content creators.
AI-driven animation with Audio2Face
Reallusion uses NVIDIA’s Audio2Face technology, which automatically generates expressive facial animations and lip syncing from audio or text input. Supporting multiple languages, Audio2Face can animate characters speaking or singing, making it a versatile tool for animators. The latest standalone version also includes features for animating realistic facial expressions, with slider and keyframe controls available for detailed adjustments.
Integrated into Reallusion’s Character Creator and iClone applications, Audio2Face enables a seamless AI-assisted animation workflow. Users can prepare an asset for animation with a single click, generating real-time facial movements that match any provided voice track. The resulting animation can then be refined in iClone before being rendered for use in various production environments.
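To make the idea concrete, the toy sketch below maps the loudness of an audio track onto a single hypothetical "JawOpen" blendshape curve, one value per animation frame. It is a minimal illustration of audio-driven animation in general, not the Audio2Face API or Reallusion's pipeline; the sample rate, frame rate, and blendshape name are all assumptions.

```python
# Conceptual sketch only: NOT the Audio2Face API. It illustrates the basic idea
# behind audio-driven facial animation by converting short-time audio loudness
# into a hypothetical "JawOpen" blendshape curve, one value per animation frame.
import numpy as np

SAMPLE_RATE = 16_000   # audio samples per second (assumed)
ANIM_FPS = 30          # animation keyframes per second (assumed)

def audio_to_jaw_open(audio: np.ndarray) -> np.ndarray:
    """Return one hypothetical JawOpen weight in [0, 1] per animation frame."""
    samples_per_frame = SAMPLE_RATE // ANIM_FPS
    n_frames = len(audio) // samples_per_frame
    frames = audio[: n_frames * samples_per_frame].reshape(n_frames, -1)
    rms = np.sqrt((frames ** 2).mean(axis=1))   # loudness per animation frame
    peak = rms.max()
    return rms / peak if peak > 0 else rms      # normalize to [0, 1]

if __name__ == "__main__":
    # A synthetic one-second tone burst stands in for a real voice track.
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    audio = np.sin(2 * np.pi * 220 * t) * (t < 0.5)
    weights = audio_to_jaw_open(audio)
    print(f"{len(weights)} keyframes, peak JawOpen = {weights.max():.2f}")
```

A production system such as Audio2Face goes much further, using learned models to infer full facial and lip shapes rather than reacting to raw loudness, but the input and output are the same in spirit: an audio track in, a stream of per-frame facial animation values out.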
Simplified animation workflow
The collaboration between NVIDIA and Reallusion led to the development of the CC Character Auto Setup plug-in, which simplifies a previously cumbersome 18-step process into a single operation. Users can import a Character Creator asset and select a training template to bring 3D characters to life with realistic facial animations synchronized with any audio input. Further performance modeling can be done using Audio2Face’s motion sliders and keyframe controls before final production refinements in iClone.
iClone provides granular control over every aspect of facial animation, from expression levels to head movements and simulated eye movements, allowing animators to authentically convey a character’s personality. The software can also incorporate head movements captured with facial mocap solutions such as AccuFACE or the iPhone Live Face app.
AccuFACE: Mocap for the face with next-generation artificial intelligence
AccuFACE, powered by the NVIDIA Maxine AR SDK, delivers high-quality real-time facial capture. Using NVIDIA GPUs with Tensor Cores, the Maxine AR SDK provides AI-powered 3D facial tracking, body pose estimation, and more. AccuFACE translates the captured facial data into smooth, expressive facial animations and responsive 3D avatars in real time.
AccuFACE’s key features include precise landmark mapping, head pose and deformation detection, face mesh reconstruction, and reliable face detection and localization. These features allow animators to capture nuanced facial expressions, essential for conveying emotion and enhancing the authenticity of digital characters.
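The Maxine AR SDK itself is a C/C++ library, so the Python snippet below is only a language-neutral illustration of what landmark mapping can feed into: converting a few tracked 2D landmark positions into a simple mouth-openness coefficient. The landmark choices and the ratio are hypothetical and do not reflect the SDK's actual landmark layout or API.

```python
# Illustration only: a toy mapping from tracked 2D face landmarks to an
# expression coefficient. The landmarks and the ratio are hypothetical and do
# not correspond to the Maxine AR SDK's actual landmark layout or API.
from dataclasses import dataclass
import math

@dataclass
class Point:
    x: float
    y: float

def distance(a: Point, b: Point) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)

def mouth_open_ratio(upper_lip: Point, lower_lip: Point,
                     left_corner: Point, right_corner: Point) -> float:
    """Vertical lip gap relative to mouth width, clamped to [0, 1]."""
    width = distance(left_corner, right_corner)
    if width == 0:
        return 0.0
    return max(0.0, min(1.0, distance(upper_lip, lower_lip) / width))

if __name__ == "__main__":
    # Fabricated landmark positions for a slightly open mouth.
    ratio = mouth_open_ratio(Point(50, 60), Point(50, 72),
                             Point(35, 66), Point(65, 66))
    print(f"mouth-open coefficient: {ratio:.2f}")  # prints 0.40
```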
AccuFACE also offers tools to refine AI-generated tracking for professional-level results. Smoothing filters address tracking artifacts and jitter, while anti-interference cancellation prevents facial movements from incorrectly cross-triggering one another. Additional calibration and refinement can be applied to capture distinct expressions and deliver authentic performances.
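As a rough illustration of the kind of smoothing such settings apply, the sketch below runs an exponential moving average over incoming blendshape weights to damp frame-to-frame jitter. It is a generic filtering technique under assumed names, not AccuFACE's actual implementation.

```python
# Minimal sketch of a generic smoothing filter: an exponential moving average
# over per-frame blendshape weights to suppress jitter. Not AccuFACE's actual
# implementation; the blendshape names below are made up for the example.
class BlendshapeSmoother:
    """Exponential moving average over per-frame blendshape weights."""

    def __init__(self, alpha: float = 0.3):
        # Lower alpha = heavier smoothing (more lag); higher alpha = more responsive.
        self.alpha = alpha
        self._state: dict[str, float] = {}

    def smooth(self, weights: dict[str, float]) -> dict[str, float]:
        """Blend each incoming weight with its previously smoothed value."""
        for name, value in weights.items():
            prev = self._state.get(name, value)
            self._state[name] = prev + self.alpha * (value - prev)
        return dict(self._state)

if __name__ == "__main__":
    smoother = BlendshapeSmoother(alpha=0.3)
    noisy_frames = [{"BrowRaise": 0.10}, {"BrowRaise": 0.90}, {"BrowRaise": 0.15}]
    for frame in noisy_frames:
        print(smoother.smooth(frame))
```

The alpha parameter trades responsiveness against stability, which is roughly the trade-off a smoothing setting exposes to the user.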
Availability
NVIDIA Maxine is a suite of AI SDKs and microservices for enhancing audio and video communications. The latest production release of Maxine is included in NVIDIA AI Enterprise, providing access to production-ready features and enterprise support. For early access to new features, users can join the Maxine Early Access program.
Reallusion’s partnership with NVIDIA demonstrates the transformative potential of artificial intelligence in animation, making professional-grade facial motion capture and animation accessible to a broader audience. This advancement allows animators to achieve high-quality results without extensive skills or specialized equipment, revolutionizing digital character animation.
For more information, visit the NVIDIA Tech Blog.