Position-based facial animation synthesis
Related papers
Langwidere: A new facial animation system
Computer Animation '94, …, 1994
This paper presents Langwidere, a facial animation system that serves as the basis for a flexible framework capable of imitating a wide range of characteristics and actions, such as speech or emotional expression.
SIGGRAPH Asia 2022 Conference Papers
Figure 1: Animatomy is a high-end facial animation pipeline built on a novel face parameterization using contractile muscle curves. We present the construction and fitting of the muscle curves to a set of dynamic 3D scans for an actor (a), using a passive muscle simulation (b). Muscle contractions (strains) parameterize these scans and are used to learn a manifold of plausible facial expressions (c). The strains, in turn, control skin deformation (d) and readily transfer expression from an actor to characters. In production, the strains can be animated by performance capture (e) and animator interaction (f). ©Wētā FX.
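The caption describes the pipeline only at a high level. As a minimal sketch of what a strain parameterization of contractile muscle curves could look like (a simplified reading of the caption, not Wētā FX's implementation; `curve_length` and `muscle_strain` are hypothetical helpers):

```python
import numpy as np

def curve_length(points):
    # Total arc length of a polyline muscle curve ((N, 3) array of points).
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def muscle_strain(points, rest_points):
    # Scalar contractile strain of one muscle curve: relative change in
    # arc length with respect to the rest (neutral) configuration.
    # Positive values indicate contraction, negative values stretch.
    rest_len = curve_length(rest_points)
    return (rest_len - curve_length(points)) / rest_len
```

Under this reading, a face pose is summarized by one strain value per muscle curve, giving the compact parameter vector that drives expression learning, skin deformation, and retargeting.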
An inverse dynamics approach to face animation
The Journal of the Acoustical Society of America, 2001
Muscle-based models of the human face produce high-quality animation but rely on recorded muscle activity signals or synthetic muscle signals that are often derived by trial and error. This paper presents a dynamic inversion of a muscle-based model (Lucero and Munhall, 1999) that permits the animation to be created from kinematic recordings of facial movements. Using a nonlinear optimizer (Powell's algorithm), the inversion produces a muscle activity set for seven muscles in the lower face that minimizes the root-mean-square error between kinematic data recorded with OPTOTRAK and the corresponding nodes of the modeled facial mesh. This inverted muscle activity is then used to animate the facial model. In three tests of the inversion, strong correlations were observed for kinematics produced from synthetic muscle activity, for OPTOTRAK kinematics recorded from a talker for whom the facial model is morphologically adapted, and finally for another talker with the model morphology ada...
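A minimal sketch of the inversion described above, assuming a hypothetical `model.simulate` forward solver for the muscle-based model (SciPy's Powell method stands in for the paper's implementation of Powell's algorithm; the [0, 1] activation bounds are an assumption):

```python
import numpy as np
from scipy.optimize import minimize

def rms_error(activations, model, markers):
    # Forward-simulate the muscle-based face model with the candidate
    # activations and compare its mesh nodes against recorded markers.
    nodes = model.simulate(activations)          # (M, 3) predicted node positions
    return np.sqrt(np.mean((nodes - markers) ** 2))

def invert_frame(model, markers, x0=None):
    # Recover activations for the seven lower-face muscles for one frame
    # by minimizing the RMS error with Powell's derivative-free method.
    x0 = np.zeros(7) if x0 is None else x0
    res = minimize(rms_error, x0, args=(model, markers),
                   method="Powell",
                   bounds=[(0.0, 1.0)] * 7)      # activations clamped to [0, 1]
    return res.x
```

Running `invert_frame` per frame of the OPTOTRAK recording yields the muscle activity traces that then drive the animation.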
Modeling Short-Term Dynamics and Variability for Realistic Interactive Facial Animation
IEEE Computer Graphics and Applications, 2000
Emotional facial expressions don't relate directly to speech but reflect a person's emotional state and personality. In social interactions, they give clues to a speaker's state of mind and help people interpret the speaker's tone or meaning. To create believable virtual characters, we must address the problem of simulating natural, credible emotional facial expressions. Indeed, previous studies have shown that people perceive synthetic faces displaying no emotional expressions as unrealistic and even unpleasant.
Real-time interactive facial animation
1999
In this paper we describe methods for the generation of real-time facial animation for various virtual actors using high-level actions. These high-level actions allow the user to forget the technical side of the animation and focus only on the more abstract, natural, and intuitive part of the facial animation. The mechanisms developed to generate interactive high-level real-time animation are described, and details are given on the method we use to blend several actions into a smooth facial animation. These techniques have been integrated in a C++ library whose functionality is described. Finally, an interactive real-time application performing facial animation and speech using this library is described, and graphical results are presented.
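A minimal sketch of how several concurrent high-level actions might be blended into one smooth animation, assuming hypothetical action objects with start/end times, fade durations, and an `evaluate` method returning per-vertex offsets (the paper's actual blending scheme may differ):

```python
import numpy as np

def smoothstep(t):
    # Ease-in/ease-out weight in [0, 1] for smooth transitions.
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def blend_actions(actions, t):
    # Each active action contributes a displacement field over the face
    # mesh, faded in and out over its own time interval. The normalized
    # weighted sum of the active actions gives the blended frame.
    total, weight_sum = None, 0.0
    for act in actions:
        w = smoothstep((t - act.start) / act.fade_in) * \
            smoothstep((act.end - t) / act.fade_out)
        disp = act.evaluate(t)                   # (V, 3) vertex offsets
        total = w * disp if total is None else total + w * disp
        weight_sum += w
    return total / max(weight_sum, 1e-8)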
Realistic speech animation based on observed 3D face dynamics
2005
We propose an efficient system for realistic speech animation. The system supports all steps of the animation pipeline, from the capture or design of 3D head models up to the synthesis and editing of the performance. This pipeline is fully 3D, which yields high flexibility in the use of the animated character. Real, detailed 3D face dynamics, observed at video frame rate for thousands of points on the faces of speaking actors, underpin the realism of the facial deformations. These are given a compact and intuitive representation via Independent Component Analysis (ICA). Performances amount to trajectories through this 'Viseme Space'. When asked to animate a face, the system replicates the visemes it has learned and adds the necessary coarticulation effects. Realism has been improved through comparisons with motion-captured ground truth. Faces for which no 3D dynamics could be observed can nonetheless be animated; their visemes are adapted automatically to their physiognomy by localising the face in a 'Face Space'.
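A minimal sketch of building such a low-dimensional 'Viseme Space' with ICA, using scikit-learn's FastICA on tracked 3D point trajectories (an illustration of the idea, not the authors' code; the component count is an assumption):

```python
import numpy as np
from sklearn.decomposition import FastICA

def build_viseme_space(frames, n_components=10):
    # frames: (T, 3V) matrix of tracked 3D point positions, one row per
    # video frame, flattened as x1 y1 z1 x2 y2 z2 ...
    mean = frames.mean(axis=0)
    ica = FastICA(n_components=n_components, random_state=0)
    codes = ica.fit_transform(frames - mean)     # (T, n_components) trajectory
    return ica, mean, codes

def reconstruct(ica, mean, codes):
    # Map a trajectory through the viseme space back to 3D face shapes.
    return ica.inverse_transform(codes) + mean
```

A performance is then simply a path through the code space, which makes concatenating and editing visemes a matter of manipulating low-dimensional trajectories.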
Realtime Performance-Based Facial Animation
Figure 1: Our system captures and tracks the facial expression dynamics of the users (grey renderings) in realtime and maps them to a digital character (colored renderings) on the opposite screen to enable engaging virtual encounters in cyberspace.
3D Linear Facial Animation Based on Real Data
2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images, 2010
In this paper we introduce a facial animation system using real three-dimensional models of people acquired by a 3D scanner. We consider a dataset composed of models displaying different facial expressions, and a linear interpolation technique is used to produce a smooth transition between them. One-to-one correspondences between the meshes of each facial expression are required in order to apply the interpolation process. Instead of focusing on the computation of dense correspondence, some points are selected and a triangulation is defined, which is refined by consecutive subdivisions that compute the matches of intermediate points. We are able to animate any model of the dataset, given its texture information for the neutral face and the geometry information for all the expressions along with the neutral face. This is done by computing matrices with the variations of every vertex when changing from the neutral face to the other expressions. The matrices obtained in this process make it possible to animate other models given only the texture and geometry information of the neutral face. Furthermore, the system uses 3D reconstructed models and is capable of generating a three-dimensional facial animation from a single 2D image of a person. Also, as an extension of the system, we use artificial models that contain expressions of visemes that are not part of the expressions of the dataset, and their displacements are applied to the real models. This allows these models to be given as input to a speech synthesis application in which the face is able to speak phrases typed by the user. Finally, we generate an average face and amplify the displacements between a subject from the dataset and the average face, automatically creating a caricature of the subject.
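A minimal sketch of the displacement-matrix idea described above, assuming meshes with one-to-one vertex correspondence stored as NumPy arrays (illustrative only, not the authors' implementation):

```python
import numpy as np

def expression_offsets(neutral, expressions):
    # One displacement matrix per expression: the per-vertex change
    # from the neutral face (all meshes share vertex correspondence).
    return [expr - neutral for expr in expressions]   # each (V, 3)

def animate(neutral, offsets, weights):
    # Linear interpolation/blending: the neutral face plus a weighted
    # sum of expression displacements gives an in-between frame.
    frame = neutral.copy()
    for w, d in zip(weights, offsets):
        frame += w * d
    return frame
```

Because only displacements are stored, the same offsets can be applied to a different subject given just that subject's neutral geometry, which is what enables animating models reconstructed from a single 2D image and, with amplified displacements, generating caricatures.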
Realtime Facial Animation with On-the-fly Correctives
Our adaptive tracking model conforms to the input expressions on-the-fly, producing a better fit to the user than state-of-the-art data-driven techniques, which are confined to learned motion priors and generate plausible but not accurate tracking.
Proceedings Computer Animation '96
Facial models: propagation of skin deformation can be based on a simple distance-based model in which attenuation increases with distance. This can result in direct movement (a), attenuation based on linear distance (b), or attenuation based on distance and …
Time alignment of triphone videos: overlapping triphone videos must be combined by choosing the portion of the overlap where the lip shapes are as close as possible; this is already done when computing D_s.
Data capture: the actor's face is digitized using a Cyberware scanner to obtain a base 3D mesh, and six calibrated video cameras capture the actor's expressions.
Assigning blend coefficients: blend coefficients are assigned for a grid of 1400 evenly distributed points on the face.
Motion vector transfer: a transformation is computed between the two coordinate systems; this mapping determines the deformed source model's motion vectors.
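A minimal sketch of the distance-based propagation with linear attenuation (variant (b) above); the control point, displacement, and falloff radius are assumptions for illustration:

```python
import numpy as np

def propagate_deformation(verts, center, displacement, radius):
    # Simple distance-based propagation: the control point's displacement
    # is applied to nearby vertices with linear attenuation, falling to
    # zero at `radius`.
    d = np.linalg.norm(verts - center, axis=1)   # distance of each vertex
    w = np.clip(1.0 - d / radius, 0.0, 1.0)      # 1 at the center, 0 beyond radius
    return verts + w[:, None] * displacement
```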