Position-based facial animation synthesis

Building highly realistic facial modeling and animation: a survey

The Visual Computer, 2007

This paper provides a comprehensive survey of techniques for human facial modeling and animation. The survey is carried out from two perspectives: facial modeling, which concerns how to produce 3D face models, and facial animation, which concerns how to synthesize dynamic facial expressions. To generate an individual face model, one can either individualize a generic model or combine face models from an existing face collection. With respect to facial animation, the techniques are further categorized into simulation-based, performance-driven, and shape blend-based approaches. The strengths and weaknesses of the techniques within each category are discussed, along with their applications in various domains. In addition, a brief historical review of the evolution of these techniques is provided. Limitations and future trends are discussed, and conclusions are drawn at the end of the paper.

Real-time interactive facial animation

1999

In this paper we describe methods for generating real-time facial animation for various virtual actors using high-level actions. These high-level actions allow the user to set aside the technical side of the animation and focus on its more abstract, natural, and intuitive aspects. The mechanisms developed to generate interactive high-level real-time animation are described, and details are given on the method we use to blend several actions into a smooth facial animation. These techniques have been integrated into a C++ library whose functionality is described. Finally, an interactive real-time application performing facial animation and speech using this library is described, and graphical results are presented.
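
The blending step described above lends itself to a compact illustration. Below is a minimal sketch of weighted action blending, assuming each high-level action drives a vector of facial parameters with an ease-in/ease-out envelope; the `Action` class, parameter layout, and easing curve are illustrative assumptions, not the paper's actual C++ API.

```python
# Hypothetical sketch of blending several high-level actions into one
# smooth facial parameter track. Names and shapes are illustrative.
import numpy as np

def ease(t: float) -> float:
    """Smoothstep ease-in/ease-out on [0, 1]."""
    t = np.clip(t, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

class Action:
    def __init__(self, params, start, duration):
        self.params = np.asarray(params, float)  # target facial parameters
        self.start, self.duration = start, duration

    def weight(self, time: float) -> float:
        return ease((time - self.start) / self.duration)

def blend(actions, time, n_params):
    """Normalized weighted average of all active actions' parameters."""
    total, wsum = np.zeros(n_params), 0.0
    for a in actions:
        w = a.weight(time)
        total += w * a.params
        wsum += w
    return total / wsum if wsum > 0 else total

smile = Action([0.8, 0.1, 0.0], start=0.0, duration=0.5)
blink = Action([0.0, 0.0, 1.0], start=0.2, duration=0.1)
print(blend([smile, blink], time=0.3, n_params=3))
```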

Realistic speech animation based on observed 3D face dynamics

2005

We propose an efficient system for realistic speech animation. The system supports all steps of the animation pipeline, from the capture or design of 3D head models up to the synthesis and editing of the performance. This pipeline is fully 3D, which yields high flexibility in the use of the animated character. Real, detailed 3D face dynamics, observed at video frame rate for thousands of points on the faces of speaking actors, underpin the realism of the facial deformations. These are given a compact and intuitive representation via Independent Component Analysis (ICA). Performances amount to trajectories through this 'Viseme Space'. When asked to animate a face, the system replicates the visemes it has learned and adds the necessary coarticulation effects. Realism has been improved through comparisons with motion-captured ground truth. Faces for which no 3D dynamics could be observed can be animated nonetheless; their visemes are adapted automatically to their physiognomy by localising the face in a 'Face Space'.
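
As a rough illustration of the 'Viseme Space' idea, the sketch below applies ICA to a matrix of tracked 3D face points and treats a performance as a trajectory in the resulting component space. The use of scikit-learn's FastICA, the data shapes, and the component count are assumptions made for the sketch, not details taken from the paper.

```python
# Minimal sketch of building a "viseme space" with ICA, assuming the
# captured data is an (n_frames, 3 * n_points) matrix of tracked 3D
# face points. FastICA stands in for whatever ICA variant was used.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
frames = rng.normal(size=(500, 3 * 200))  # stand-in for real capture data

mean_face = frames.mean(axis=0)
ica = FastICA(n_components=8, random_state=0)
trajectory = ica.fit_transform(frames - mean_face)  # (500, 8) path in viseme space

# A performance is a trajectory through this space; any point on it can
# be mapped back to mesh coordinates for rendering.
reconstructed = ica.inverse_transform(trajectory) + mean_face
print(trajectory.shape, reconstructed.shape)
```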

Realtime Performance-Based Facial Animation

Figure 1: Our system captures and tracks the facial expression dynamics of the users (grey renderings) in realtime and maps them to a digital character (colored renderings) on the opposite screen to enable engaging virtual encounters in cyberspace.
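
The caption describes a capture-track-retarget loop; the fragment below sketches only the retargeting step, under the common blendshape assumption that tracked expression weights can be reapplied to a character rig with semantically matching shapes. All names and shapes are illustrative; the paper's actual pipeline is more involved.

```python
# Hedged sketch of retargeting: tracked blendshape weights from the user
# are applied to a character with matching blendshapes. Illustrative only.
import numpy as np

n_vertices, n_shapes = 1000, 4
character_neutral = np.zeros((n_vertices, 3))
# One displacement field per expression (smile, frown, ...), authored once.
character_blendshapes = np.random.default_rng(1).normal(
    size=(n_shapes, n_vertices, 3))

def retarget(tracked_weights: np.ndarray) -> np.ndarray:
    """Map the user's tracked expression weights onto the character mesh."""
    offsets = np.tensordot(tracked_weights, character_blendshapes, axes=1)
    return character_neutral + offsets

frame_weights = np.array([0.7, 0.0, 0.2, 0.1])  # from the realtime tracker
posed_mesh = retarget(frame_weights)
print(posed_mesh.shape)  # (1000, 3)
```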

3D Linear Facial Animation Based on Real Data

23rd SIBGRAPI Conference on Graphics, Patterns and Images, 2010

In this paper we introduce a facial animation system that uses real three-dimensional models of people acquired by a 3D scanner. We consider a dataset composed of models displaying different facial expressions, and a linear interpolation technique is used to produce smooth transitions between them. One-to-one correspondences between the meshes of the facial expressions are required to apply the interpolation process. Instead of focusing on the computation of dense correspondence, some points are selected and a triangulation is defined, which is then refined by consecutive subdivisions that compute the matchings of intermediate points. We are able to animate any model of the dataset given its texture information for the neutral face and the geometry information for all the expressions along with the neutral face. This is done by computing matrices with the variation of every vertex when changing from the neutral face to each of the other expressions. The matrices obtained in this process make it possible to animate other models given only the texture and geometry information of the neutral face. Furthermore, the system uses 3D reconstructed models and is capable of generating a three-dimensional facial animation from a single 2D image of a person. As an extension of the system, we use artificial models that contain expressions of visemes, which are not part of the expressions in the dataset, and apply their displacements to the real models. This allows these models to be given as input to a speech synthesis application in which the face speaks phrases typed by the user. Finally, we generate an average face and amplify the displacements between a subject from the dataset and the average face, automatically creating a caricature of the subject.
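
A minimal sketch of the two operations this abstract centers on, assuming all meshes share one-to-one vertex correspondence: per-vertex displacement matrices that transfer expressions to a new neutral face, and caricature by exaggerating the offset from an average face. Array shapes and the exaggeration factor are illustrative assumptions.

```python
# Sketch of displacement-based interpolation and caricature, assuming
# shared mesh topology across all faces. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
neutral = rng.normal(size=(500, 3))          # neutral-face vertices
expressions = rng.normal(size=(6, 500, 3))   # same topology, 6 expressions

# Displacement matrices: per-vertex variation from neutral to each expression.
displacements = expressions - neutral        # (6, 500, 3)

def interpolate(new_neutral, expr_index, t):
    """Linear blend between a (possibly new) neutral face and expression i."""
    return new_neutral + t * displacements[expr_index]

# Animating another person: only their neutral geometry is needed.
other_neutral = rng.normal(size=(500, 3))
frame = interpolate(other_neutral, expr_index=2, t=0.5)

# Caricature: push a subject away from the average face.
average_face = 0.5 * neutral + 0.5 * other_neutral
caricature = average_face + 1.8 * (other_neutral - average_face)
print(frame.shape, caricature.shape)
```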

Realtime Facial Animation with On-the-fly Correctives

Our adaptive tracking model conforms to the input expressions on the fly, producing a better fit to the user than state-of-the-art data-driven techniques, which are confined to learned motion priors and generate plausible but not accurate tracking.
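
A loose reading of "on-the-fly correctives" can be sketched as follows: fit a generic blendshape basis to the observed frame, then absorb the unexplained residual into a user-specific corrective shape. This schematic least-squares version is an assumption-laden stand-in, not the paper's actual formulation.

```python
# Schematic on-the-fly corrective: a generic basis is adapted to a user
# by adding a corrective shape built from the fitting residual.
import numpy as np

rng = np.random.default_rng(0)
n, k = 300, 5                    # 100 vertices (x, y, z), 5 generic shapes
B = rng.normal(size=(n, k))      # generic blendshape basis
observed = rng.normal(size=n)    # tracked frame (flattened)

# Step 1: best fit with the prior (generic) model.
w, *_ = np.linalg.lstsq(B, observed, rcond=None)
fit = B @ w

# Step 2: the residual seeds a corrective shape for this user; over time
# such correctives adapt the model beyond its learned motion prior.
residual = observed - fit
corrective = residual / max(np.linalg.norm(residual), 1e-9)
B_adapted = np.column_stack([B, corrective])

w2, *_ = np.linalg.lstsq(B_adapted, observed, rcond=None)
print(np.linalg.norm(observed - B @ w), np.linalg.norm(observed - B_adapted @ w2))
```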

Pose-space animation and transfer of facial details

Proceedings of the 2008 …, 2008

This paper presents a novel method for real-time animation of highly detailed facial expressions based on a multi-scale decomposition of facial geometry into large-scale motion and fine-scale details, such as expression wrinkles. Our hybrid animation is tailored to the specific characteristics of large- and fine-scale facial deformations: large-scale deformations are computed with a fast linear shell model, which is intuitively and accurately controlled through a sparse set of motion-capture markers or user-defined handle points. ...
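
The pose-space part of this method can be illustrated schematically: fine-scale wrinkle displacement fields captured at example poses are blended with weights that depend on the current large-scale pose. The RBF kernel, the array shapes, and the omission of the linear shell solve are all simplifications made for the sketch.

```python
# Illustrative pose-space blending of fine-scale detail: wrinkle fields
# from example poses are interpolated via RBF weights. Assumptions only;
# the paper's large-scale linear shell solve is omitted.
import numpy as np

rng = np.random.default_rng(0)
example_poses = rng.normal(size=(4, 10))        # marker configurations
example_details = rng.normal(size=(4, 500, 3))  # wrinkle displacements per pose

def rbf_weights(pose, centers, sigma=1.0):
    d2 = ((centers - pose) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum()

def fine_scale_detail(pose):
    """Blend example wrinkle fields in pose space."""
    w = rbf_weights(pose, example_poses)
    return np.tensordot(w, example_details, axes=1)  # (500, 3)

current_pose = rng.normal(size=10)  # would come from the large-scale solve
detail = fine_scale_detail(current_pose)
# Final surface = large-scale shell solution + detail (not computed here).
print(detail.shape)
```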

Review on 3D Facial Animation Techniques

International Journal of Engineering & Technology

Generating facial animation has always been a challenge in the graphical visualization area. Numerous efforts have been carried out to achieve high realism in facial animation. This paper surveys techniques applied in facial animation, targeting realistic results. We discuss facial modeling techniques from different viewpoints: geometry-based manipulation (which can be further categorized into interpolation, parameterization, muscle-based, and pseudo-muscle-based models) and facial animation techniques involving speech-driven, image-based, and data-captured approaches. The paper summarizes and describes the related theories and the strengths and weaknesses of each technique.

Facial Modelling and Animation: An Overview of The State-of-The Art

Iraqi Journal for Electrical and Electronic Engineering, 2021

Animating the human face presents interesting challenges because of its familiarity: the face is the part of the body we use to recognize individuals. This paper reviews the approaches used in facial modeling and animation and describes their strengths and weaknesses. Realistic facial animation of computer graphics models of human faces can be hard to achieve because of the many details that must be approximated when producing realistic facial expressions. Many methods have been researched to create ever more accurate animations that can efficiently represent human faces. We describe the techniques that have been utilized to produce realistic facial animation. In this survey, we roughly categorize the facial modeling and animation approaches into the following classes: blendshape or shape interpolation, parameterization, Facial Action Coding System-based approaches, Moving Picture Experts Group-4 (MPEG-4) facial animation, physics-based muscle modeling, performance-driven facial animation, visua...

Speech-driven facial animation with realistic dynamics

IEEE Transactions on Multimedia, 2005

This article presents an integral system capable of generating animations with realistic dynamics, including individualized nuances, of 3-D human faces driven by speech acoustics. The system is capable of capturing short-lived phenomena in the orofacial dynamics of a given speaker by tracking the 3-D location of various MPEG-4 facial points through stereovision. A perceptual transformation of the speech spectral envelope and prosodic cues are combined into an acoustic feature vector to predict 3-D orofacial dynamics by means of a nearest-neighbor algorithm. The Karhunen-Loève transformation is used to identify the principal components of orofacial motion, decoupling perceptually natural components from experimental noise. We also present a highly optimized MPEG-4-compliant player capable of generating audio-synchronized animations at 60 frames per second. The player is based on a pseudo-muscle model augmented with a non-penetrable ellipsoidal structure that approximates the skull and the jaw. This structure adds a sense of volume, providing more realistic dynamics than existing simplified pseudo-muscle-based approaches, yet it is simple enough to run at the desired frame rate. Experimental results on an audio-visual database of compact TIMIT sentences illustrate the performance of the complete system.
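
The prediction pipeline this abstract outlines (a Karhunen-Loève transform plus nearest-neighbor lookup) can be sketched as below, with scikit-learn's PCA standing in for the KLT; dataset shapes, component counts, and the feature composition are illustrative assumptions, not the paper's configuration.

```python
# Schematic audio-to-motion mapping: PCA (as the Karhunen-Loeve transform)
# denoises the motion data; a nearest neighbor in acoustic feature space
# selects the predicted orofacial pose. Shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
acoustic = rng.normal(size=(2000, 20))   # spectral envelope + prosody features
motion = rng.normal(size=(2000, 3 * 30)) # tracked MPEG-4 facial points

# Keep the principal components of facial motion, discarding directions
# dominated by experimental noise.
pca = PCA(n_components=10).fit(motion)
motion_pc = pca.transform(motion)

# Nearest-neighbor prediction: new audio frame -> closest training frame.
nn = NearestNeighbors(n_neighbors=1).fit(acoustic)

def predict_motion(audio_frame):
    _, idx = nn.kneighbors(audio_frame[None, :])
    return pca.inverse_transform(motion_pc[idx[0, 0]][None, :])[0]

frame = predict_motion(rng.normal(size=20))
print(frame.shape)  # (90,) -> 30 points in 3D
```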