3D Face Modeling from Ancient Kabuki Drawings
Related papers
Animatable face models from uncalibrated input pictures
2009
In networked virtual environments, videoconferences, or chat over the Internet, users are often graphically represented by virtual characters. Modeling realistic virtual heads of users suitable for animation implies a heavy artistic effort and resource cost. This paper introduces a system that generates a 3D model of a real human head with little human intervention. The system receives five orthogonal input photographs of the human head and a generic template 3D model. It requires manual annotation of 94 feature points on each photograph. The same set of feature points must be selected on the template model in a preprocessing step that is done only once. The computing process consists of two phases: a morphing phase and a coloring phase. In the morphing phase, the template model is morphed in two steps using Radial Basis Functions (RBF) so that it takes on a shape similar to that of the real human head. In the coloring phase, the deformed model is colored using the input photographs based on a cube-map projection, which leads to a realistic appearance of the model while allowing for real-time performance. We show the use of the output model by automatically copying facial motions from the template model to the deformed model, while preserving the compliance of the motion with the MPEG-4 FBA standard.
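As a rough illustration of the morphing phase, the sketch below fits an RBF warp that carries the template's annotated landmarks onto the landmark positions recovered from the photographs, then displaces every template vertex with it. This is a minimal Python/NumPy sketch under stated assumptions: the kernel (a simple linear phi(r) = r with a small regularizer), the placeholder arrays, and the function names are illustrative, not the paper's implementation.

```python
import numpy as np

def fit_rbf_warp(src_pts, dst_pts):
    """Solve for RBF weights that map src_pts onto dst_pts.

    Uses a simple kernel phi(r) = r as a stand-in; the paper's
    exact kernel is not specified here.
    """
    n = len(src_pts)
    # Pairwise kernel matrix between source landmarks.
    K = np.linalg.norm(src_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    # One weight vector per output coordinate (x, y, z); small
    # regularization keeps the system well conditioned.
    W = np.linalg.solve(K + 1e-8 * np.eye(n), dst_pts - src_pts)
    return W

def apply_rbf_warp(vertices, src_pts, W):
    """Displace arbitrary mesh vertices with the fitted RBF."""
    K = np.linalg.norm(vertices[:, None, :] - src_pts[None, :, :], axis=-1)
    return vertices + K @ W

# Usage: 94 template landmarks -> landmarks located on the photographs
# (placeholder data; a real pipeline reads both from annotation).
template_landmarks = np.random.rand(94, 3)
photo_landmarks = template_landmarks + 0.05 * np.random.rand(94, 3)
W = fit_rbf_warp(template_landmarks, photo_landmarks)
template_vertices = np.random.rand(5000, 3)  # full template mesh
morphed = apply_rbf_warp(template_vertices, template_landmarks, W)
```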
Model Based Face Reconstruction for Animation
Multimedia Modeling, 1997
In this paper, we present a new semi-automatic method to reconstruct a 3D facial model for animation from two orthogonal pictures taken from front and side views. The method is based on extracting the hair and face outlines and detecting interior features in the regions of the mouth, eyes, etc. We show how to use structured snakes for extracting the profile boundaries.
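For context, structured snakes build on the classic active-contour energy of Kass, Witkin, and Terzopoulos; the paper's exact formulation is not reproduced here, but the standard functional being minimized is

$$ E_{\text{snake}} = \int_0^1 \Big( \alpha \,\lVert v'(s) \rVert^2 + \beta \,\lVert v''(s) \rVert^2 + E_{\text{image}}\big(v(s)\big) \Big)\, ds, $$

where v(s) is the contour, alpha and beta weight elasticity and stiffness, and the image term (typically proportional to the negative squared gradient magnitude of the image) pulls the curve toward edges such as the face and hair outlines.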
EURASIP Journal on Advances in Signal Processing, 2007
We propose an animation system for personalized human heads. Landmarks compliant with the MPEG-4 facial definition parameters (FDP) are initially labeled on both the template model and any target human head model as prior knowledge. The deformation from the template model to the target head is performed through a multilevel training process. Both general radial basis functions (RBF) and compactly supported radial basis functions (CSRBF) are applied to ensure the fidelity of the global shape and the face features. The animation information is also adapted so that the deformed model can still be considered an animatable head. Situations with defective scanned data are also discussed in this paper.
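The compact support is what distinguishes the CSRBF step. A commonly used choice (an assumption here; the paper's specific kernel is not quoted) is Wendland's C^2 function

$$ \varphi(r) = \Big(1 - \tfrac{r}{\rho}\Big)_+^{4} \Big(\tfrac{4r}{\rho} + 1\Big), $$

which vanishes for r >= rho, so each landmark only influences vertices within the support radius rho. This keeps the interpolation matrix sparse and lets local face features be adjusted without disturbing the global shape already fixed by the general RBF.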
A major unsolved problem in computer graphics is the construction and animation of realistic human facial models. Traditionally, facial models have been built painstakingly by manual digitization and animated by ad hoc parametrically controlled facial mesh deformations or kinematic approximations of muscle actions. Fortunately, animators are now able to digitize facial geometries through the use of scanning range sensors and animate them through the dynamic simulation of facial tissues and muscles. However, these techniques require considerable user input to construct facial models of individuals suitable for animation. Realistic facial animation is achieved through geometric and image manipulations. Geometric deformations usually account for the shape and for the deformations unique to the physiology and expressions of a person. Image manipulations model the reflectance properties of the facial skin and hair to achieve small-scale detail that is difficult to model by geometric manipulation alone.
3D Linear Facial Animation Based on Real Data
2010 23rd SIBGRAPI Conference on Graphics, Patterns and Images, 2010
In this paper we introduce a facial animation system using real three-dimensional models of people acquired by a 3D scanner. We consider a dataset composed of models displaying different facial expressions, and a linear interpolation technique is used to produce a smooth transition between them. One-to-one correspondences between the meshes of each facial expression are required in order to apply the interpolation process. Instead of focusing on the computation of dense correspondences, some points are selected and a triangulation is defined over them, then refined by consecutive subdivisions that compute the matchings of intermediate points. We are able to animate any model of the dataset, given its texture information for the neutral face and the geometry information for all the expressions along with the neutral face. This is done by computing matrices with the variations of every vertex when changing from the neutral face to the other expressions. The knowledge of the matrices obtained in this process makes it possible to animate other models given only the texture and geometry information of the neutral face. Furthermore, the system uses 3D reconstructed models, and is thus capable of generating a three-dimensional facial animation from a single 2D image of a person. As an extension of the system, we also use artificial models that contain viseme expressions, which are not part of the expressions of the dataset, and apply their displacements to the real models. This allows these models to be given as input to a speech synthesis application in which the face speaks phrases typed by the user. Finally, we generate an average face and increase the displacements between a subject from the dataset and the average face, automatically creating a caricature of the subject.
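Once one-to-one vertex correspondence is in place, the interpolation and displacement-transfer machinery described above reduces to a few lines. A minimal NumPy sketch, assuming corresponded (n, 3) vertex arrays (the function names and the blending parameter t are illustrative, not the authors' API):

```python
import numpy as np

def interpolate_expression(neutral, expression, t):
    """Linearly blend corresponded meshes: t=0 -> neutral, t=1 -> expression."""
    return (1.0 - t) * neutral + t * expression

def transfer_expression(dataset_neutral, dataset_expression, new_neutral, t=1.0):
    """Apply a dataset expression's per-vertex displacements to a new neutral face.

    Assumes all meshes share one-to-one vertex correspondence, as the
    paper requires; the displacement array plays the role of the paper's
    per-expression matrices of vertex variations.
    """
    displacement = dataset_expression - dataset_neutral
    return new_neutral + t * displacement
```

Varying t from 0 to 1 over time yields the smooth transition between expressions; applying the same displacements to a different neutral face is what lets new subjects (including caricatures and viseme-driven speech) be animated without rescanning every expression.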
Facial motion cloning with radial basis functions in MPEG-4 FBA
Graphical Models, 2007
Facial Motion Cloning (FMC) is the technique employed to transfer the motion of a virtual face (the source) to a mesh representing another face (the target), generally having a different geometry and connectivity. In this paper, we describe a novel method based on the combination of Radial Basis Function (RBF) volume morphing with the encoding capabilities of the widely used MPEG-4 Face and Body Animation (FBA) international standard. First, we find the morphing function G(P) that precisely fits the shape of the source into the shape of the target face. Then, all the MPEG-4 encoded movements of the source face are transformed using the same function G(P) and mapped to the corresponding vertices of the target mesh. In this way we obtain, in a straightforward and simple manner, the whole set of MPEG-4 encoded facial movements for the target face in a short time. This animatable version of the target face is able to perform generic face animation stored in an MPEG-4 FBA data stream.
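One natural reading of the transfer step (an interpretation, not a formula quoted from the paper) is that each source displacement is pushed through the fitted morph:

$$ \Delta'_i = G\big(p_i + \Delta_i\big) - G\big(p_i\big), $$

where p_i is the rest position of source vertex i, Delta_i is its MPEG-4 encoded displacement for a given animation frame, and Delta'_i is the cloned displacement assigned to the corresponding target vertex. Because the same G(P) is reused for every encoded movement, the full set of target animations comes at the cost of a single morph fit.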
Constructing a realistic face model of an individual for expression animation
2002
This paper presents a method for creating photorealistic textured 3D face models of specific people for dynamic facial expression animation. The modeling approach reconstructs an accurate geometrical face model based on individual face measurements, containing both shape and texture information, acquired from a laser range scanner. By using a semi-automatic registration and merging technique, a 3D dense face mesh is recovered from the partial range data obtained from arbitrary multiple different views. A model editing and adaptive meshing scheme is then used to refine the surface model. Having recovered the facial geometry, we add realism by mapping the model with high-resolution texture images. The resultant synthetic face has been shown to be visually similar to the true face. Based on the geometrically accurate surface model, a physically based face model with a hierarchical structure of skin, muscles, and skull is developed from an anatomical perspective. The dynamic displacement of nodes in the skin lattice under the influence of internal muscular forces is calculated by a numerical integration method. Using our technique, we have been able to generate highly realistic face models and flexible expressions.
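The abstract does not name its integrator, so the following is only a generic sketch of how skin-lattice node dynamics are commonly stepped: semi-implicit Euler with velocity damping, in Python/NumPy. The force model, constants, and names are assumptions, not the paper's scheme.

```python
import numpy as np

def step_skin_lattice(x, v, forces, mass=1.0, damping=0.9, dt=1e-3):
    """One semi-implicit Euler step for skin-lattice node dynamics.

    x, v, forces: (n, 3) arrays of node positions, velocities, and the
    net internal force (spring tension plus muscle pull) acting on each
    node. Velocities are damped each step for numerical stability.
    """
    a = forces / mass              # Newton's second law per node
    v_next = damping * (v + dt * a)
    x_next = x + dt * v_next       # update positions with new velocities
    return x_next, v_next
```

Iterating this step while the muscle forces vary over time produces the dynamic skin displacements that drive the expression animation.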
Fully Automatic Facial Deformation Transfer
Symmetry
Facial animation is a serious and ongoing challenge for the computer graphics industry. Because diverse and complex emotions need to be expressed by different facial deformations and animations, copying facial deformations from an existing character to another is widely needed in both industry and academia, to reduce the time-consuming and repetitive manual modeling work of creating the 3D shape sequences for every new character. But the transfer of realistic facial animations between two 3D models remains limited and inconvenient for general use: modern deformation transfer methods require correspondence mappings, which in most cases are tedious to obtain. In this paper, we present a fast and automatic approach to transfer the deformations of facial mesh models by obtaining the 3D point-wise correspondences in an automatic manner. The key idea is that we can estimate the correspondences between different facial meshes using a robust facial landmark detection method by projecting the 3D model to ...
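A plausible shape for that projection-plus-detection step (an assumption; the truncated abstract does not give the details) is: project the mesh into an image, run an off-the-shelf 2D landmark detector on the rendering, and map each detected landmark back to the nearest projected vertex. In NumPy, with the detector treated as external:

```python
import numpy as np

def project_vertices(vertices, K_cam, R, t):
    """Pinhole projection of (n, 3) mesh vertices into the image plane."""
    cam = vertices @ R.T + t          # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]     # perspective divide
    return uv @ K_cam[:2, :2].T + K_cam[:2, 2]  # apply intrinsics

def landmarks_to_vertices(landmarks_2d, vertices, K_cam, R, t):
    """Map detected 2D landmarks to the nearest projected mesh vertices."""
    uv = project_vertices(vertices, K_cam, R, t)
    d = np.linalg.norm(uv[None, :, :] - landmarks_2d[:, None, :], axis=-1)
    return d.argmin(axis=1)           # one vertex index per landmark
```

Running this on both source and target meshes yields matched landmark vertices on each, which can then seed a dense point-wise correspondence without any manual mapping.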
Automatic Face Texture Generation from Irregular Texture in 3-D Character Creation Applications
2017
In this paper, we propose a novel face texture generation algorithm to enhance the compatibility and reusability of 3-D face reconstruction results in real-world 3-D character creation applications. Our approach can handle irregular types of input textures of 3-D reconstructed face models using the proposed multi-projection texture generation technique. We automatically calculate the exact pixel values of the frontal face region in the template texture map by finding correspondences between the input and template 3-D models and textures, respectively. After matching the tones of the frontal face region and the remaining parts, the final texture of a 3-D face model is generated without manual editing or post-processing of textures.
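At its core, this kind of rebaking samples the input texture through the model correspondence and writes the colors into the template UV layout. The per-vertex splat below is a deliberately simplified sketch (a full pipeline rasterizes triangles and blends across projections); all names and the nearest-pixel sampling are assumptions, not the paper's algorithm.

```python
import numpy as np

def resample_face_texture(template_uv, corr, input_uv, input_tex, out_size):
    """Bake input-texture colors into the template UV layout (per-vertex sketch).

    template_uv: (n, 2) template UV coordinates in [0, 1]
    corr:        (n,) index of the corresponding input-model vertex
    input_uv:    (m, 2) input UV coordinates in [0, 1]
    input_tex:   (H, W, 3) input texture image
    """
    H, W, _ = input_tex.shape
    out = np.zeros((out_size, out_size, 3), dtype=input_tex.dtype)
    for uv_t, j in zip(template_uv, corr):
        # Sample the input texture at the corresponding vertex's UV...
        px = input_tex[int(input_uv[j, 1] * (H - 1)),
                       int(input_uv[j, 0] * (W - 1))]
        # ...and splat it into the template texture map at this vertex's UV.
        out[int(uv_t[1] * (out_size - 1)),
            int(uv_t[0] * (out_size - 1))] = px
    return out
```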