Transferring a Labeled Generic Rig to Animate Face Models
Related papers
Transferring Facial Expressions to Different Face Models
2006
We introduce a facial deformation system that supports the character setup process and lets artists manipulate models as if they were using a puppet. The method uses a set of labels that define specific facial features and deforms the rig anthropometrically. We find the correspondence of the main attributes of a generic rig, transfer them to different 3D face models and automatically generate a sophisticated facial rig based on an anatomical structure. We show how the labels, combined with other deformation methods, can adapt muscles and skeletons from a generic rig to individual face models, allowing high-quality physics-based animations. We describe how it is possible to deform the generic facial rig, apply the same deformation parameters to different face models and obtain unique expressions. We show how our method can easily be integrated into an animation pipeline. We conclude with examples that demonstrate the strength of our proposal.
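The abstract above describes deforming a generic rig anthropometrically from labeled feature correspondences. A common way to realize such a label-driven warp is scattered-data interpolation with radial basis functions; the sketch below is a minimal illustration of that idea, not the paper's actual implementation, and the Gaussian kernel and `sigma` parameter are assumptions.

```python
import numpy as np

def rbf_warp(src_landmarks, dst_landmarks, points, sigma=1.0):
    """Warp `points` with a Gaussian RBF fitted to landmark pairs.

    src_landmarks, dst_landmarks: (n, 3) corresponding labels on the
    generic rig and the target face; points: (m, 3) rig attributes
    (e.g. muscle attachment sites) to deform along with the labels.
    """
    # Kernel matrix between the source landmarks.
    diff = src_landmarks[:, None, :] - src_landmarks[None, :, :]
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))  # (n, n)
    # Solve for per-landmark weights that reproduce the displacements.
    W = np.linalg.solve(K + 1e-8 * np.eye(len(K)),
                        dst_landmarks - src_landmarks)          # (n, 3)
    # Evaluate the interpolated displacement field at the rig points.
    d = points[:, None, :] - src_landmarks[None, :, :]
    Kp = np.exp(-np.sum(d ** 2, axis=-1) / (2 * sigma ** 2))    # (m, n)
    return points + Kp @ W
```

By construction the warp reproduces the labeled correspondences exactly (up to the small regularizer) and interpolates smoothly in between, which is what lets unlabeled rig attributes such as muscle attachments follow the labels.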
Transferring the Rig and Animations from a Character to Different Face Models
Computer Graphics Forum, 2008
We introduce a facial deformation system that allows artists to define and customize a facial rig and later apply the same rig to different face models. The method uses a set of landmarks that define specific facial features and deforms the rig anthropometrically. We find the correspondence of the main attributes of a source rig, transfer them to different three-dimensional (3D) face models and automatically generate a sophisticated facial rig. The method is general and can be used with any type of rig configuration. We show how the landmarks, combined with other deformation methods, can adapt different influence objects (NURBS surfaces, polygon surfaces, lattices) and skeletons from a source rig to individual face models, allowing high-quality geometric or physically-based animations. We describe how it is possible to deform the source facial rig, apply the same deformation parameters to different face models and obtain unique expressions. We enable reuse of existing animation scripts and show how shapes blend smoothly with one another on different face models. We describe how our method can easily be integrated into an animation pipeline. We end with the results of tests done with major film and game companies to show the strength of our proposal.
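A key claim in the abstract above is that the same animation parameters can drive skeletons adapted to different face models. The standard mechanism for evaluating such skeleton-driven deformation is linear blend skinning; the sketch below illustrates it under simplified assumptions (homogeneous 4x4 bone matrices, normalized per-vertex weights) and is not the paper's specific formulation.

```python
import numpy as np

def linear_blend_skin(rest_verts, weights, bone_transforms):
    """Deform a mesh by linear blend skinning.

    rest_verts: (n, 3) rest-pose vertices; weights: (n, b) per-vertex
    bone weights summing to 1; bone_transforms: (b, 4, 4) homogeneous
    bone matrices. Because the transforms are expressed per bone, the
    same animation parameters can drive any adapted skeleton.
    """
    n = len(rest_verts)
    hom = np.hstack([rest_verts, np.ones((n, 1))])  # homogeneous coords
    out = np.zeros((n, 3))
    for b in range(weights.shape[1]):
        # Each bone contributes its transformed position, weighted.
        out += weights[:, b:b + 1] * (hom @ bone_transforms[b].T)[:, :3]
    return out
```

Applying identical `bone_transforms` to two characters whose skeletons were adapted from the same source rig yields the "same deformation parameters, unique expressions" behavior the abstract describes.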
Example-Based Facial Rigging
[Figure 1: Example-based facial rigging allows transferring expressions from a generic prior to create a blendshape model of a virtual character. This blendshape model can be successively fine-tuned toward the specific geometry and motion characteristics of the character by providing more training data in the form of additional expression poses.]
We introduce a method for generating facial blendshape rigs from a set of example poses of a CG character. Our system transfers controller semantics and expression dynamics from a generic template to the target blendshape model, while solving for an optimal reproduction of the training poses. This enables a scalable design process, where the user can iteratively add more training poses to refine the blendshape expression space. However, plausible animations can be obtained even with a single training pose. We show how formulating the optimization in gradient space yields superior results as compared to a direct optimization on blendshape vertices. We provide examples for both hand-crafted characters and 3D scans of a real actor and demonstrate the performance of our system in the context of markerless art-directable facial tracking.
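The abstract above contrasts gradient-space optimization with direct optimization on blendshape vertices. The sketch below shows the basic shape of such a gradient-space least-squares solve for a single blendshape target; it uses a simplified per-edge difference operator rather than the paper's deformation-gradient formulation, and the weighting scheme is an assumption for illustration.

```python
import numpy as np

def fit_blendshape(edges, template_shape, train_idx, train_pos,
                   w_grad=1.0, w_fit=10.0):
    """Solve one blendshape target in gradient space (least squares).

    edges: (e, 2) vertex index pairs defining a difference operator G;
    template_shape: (n, 3) the generic prior's target for this shape;
    train_idx / train_pos: sparse vertex constraints from a training
    pose. Minimizes
        w_grad * ||G x - G template||^2 + w_fit * ||x[idx] - pos||^2,
    i.e. the result keeps the prior's local shape changes while fitting
    the observed training data.
    """
    n = len(template_shape)
    G = np.zeros((len(edges), n))
    for r, (i, j) in enumerate(edges):
        G[r, i], G[r, j] = -1.0, 1.0
    S = np.zeros((len(train_idx), n))               # selection matrix
    S[np.arange(len(train_idx)), train_idx] = 1.0
    A = np.vstack([w_grad * G, w_fit * S])
    b = np.vstack([w_grad * (G @ template_shape), w_fit * np.asarray(train_pos)])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With no training constraints the solve simply reproduces the template's gradients (the prior); adding constraints progressively pulls the shape toward the character, mirroring the iterative refinement the abstract describes.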
Fully Automatic Facial Deformation Transfer
Symmetry
Facial animation is a serious and ongoing challenge for the computer graphics industry. Because diverse and complex emotions must be expressed through different facial deformations and animations, copying facial deformations from an existing character to another is widely needed in both industry and academia, to reduce the time-consuming and repetitive manual modeling work of creating 3D shape sequences for every new character. However, transferring realistic facial animations between two 3D models remains limited and inconvenient for general use: most modern deformation transfer methods require correspondence mappings, which are tedious to obtain. In this paper, we present a fast and automatic approach to transferring the deformations of facial mesh models by obtaining 3D point-wise correspondences automatically. The key idea is that we can estimate the correspondences between different facial meshes using a robust facial landmark detection method by projecting the 3D model to ...
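The abstract above is truncated, but its key idea is establishing 3D correspondences by projecting the mesh and running a 2D facial landmark detector on the result. The sketch below illustrates only the back-projection step (mapping detected 2D landmarks to mesh vertices) under a simplified orthographic projection; the detector itself, and the actual camera model, are outside this snippet.

```python
import numpy as np

def landmarks_to_vertices(vertices, landmarks_2d):
    """Map 2D detected landmarks back to mesh vertex indices.

    vertices: (n, 3) mesh positions; landmarks_2d: (k, 2) image-space
    landmarks from some external detector. Projects the mesh
    orthographically onto the xy-plane (a stand-in for the real camera
    projection) and picks, for each landmark, the closest projected
    vertex. Doing this for both source and target meshes yields the
    point-wise correspondences needed for deformation transfer.
    """
    proj = vertices[:, :2]  # orthographic projection: drop z
    idx = []
    for lm in np.asarray(landmarks_2d):
        d = np.sum((proj - lm) ** 2, axis=1)
        idx.append(int(np.argmin(d)))
    return idx
```

Because the same detector is applied to renders of both meshes, the resulting vertex index pairs line up semantically (eye corner to eye corner, lip to lip) without any manual marking.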
Easy Rigging of Face by Automatic Registration and Transfer of Skinning Parameters
2010
Preparing a facial mesh to be animated requires a laborious manual rigging process. The rig specifies how the input animation data deforms the surface and allows artists to manipulate a character. We present a method that automatically rigs a facial mesh based on Radial Basis Functions (RBF) and a linear blend skinning approach. Our approach transfers the skinning parameters (feature points and their envelopes, i.e., point-vertex weights) of an already-rigged reference facial mesh (source) to a chosen facial mesh (target) by computing an automatic registration between the two meshes. There is no need to manually mark correspondences between the source and target mesh. As a result, inexperienced artists can automatically rig facial meshes and immediately start animating their 3D characters, driven for instance by motion capture data.
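The abstract above transfers skinning parameters once the source mesh has been registered to the target. A minimal sketch of that final step is shown below, assuming the registration (e.g. an RBF warp) has already placed the source vertices in the target's space; each target vertex then inherits the weights of its nearest source vertex. This nearest-neighbor rule is an illustrative assumption, not necessarily the paper's exact interpolation scheme.

```python
import numpy as np

def transfer_weights(src_verts, src_weights, tgt_verts):
    """Copy per-vertex skinning weights from a registered source mesh.

    src_verts: (n, 3) source vertices already warped into the target's
    space; src_weights: (n, b) per-vertex weights for b feature points;
    tgt_verts: (m, 3) target vertices to receive weights.
    """
    tgt_weights = np.empty((len(tgt_verts), src_weights.shape[1]))
    for i, v in enumerate(tgt_verts):
        # Nearest registered source vertex donates its weight row.
        j = np.argmin(np.sum((src_verts - v) ** 2, axis=1))
        tgt_weights[i] = src_weights[j]
    # Rows are copied whole, so weights remain normalized.
    return tgt_weights
```

With feature-point positions mapped by the registration and weights transferred this way, the target mesh can be driven immediately by the same animation data as the source.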