Linear tracking of pose and facial features

A linear estimation method for 3D pose and facial animation tracking

2007

This paper presents an approach that incorporates Canonical Correlation Analysis (CCA) for monocular 3D face pose and facial animation estimation. CCA is used to find the dependency between texture residuals and the 3D face pose and facial gestures. The texture residuals are obtained from observed raw-brightness shape-free 2D image patches that we build by means of a parameterized 3D geometric face model. The method correctly estimates the pose of the face and the model's animation parameters controlling the lip, eyebrow and eye movements (encoded in 15 parameters). Extensive experiments on tracking faces in long real video sequences show the effectiveness of the proposed method and the value of using CCA in the tracking context.
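As a concrete illustration of the core idea, here is a minimal sketch of learning a map from texture residuals to parameter corrections with CCA. It uses scikit-learn's CCA on random stand-in data; the array names, dimensions and the regression-style use of CCA are assumptions, not the paper's exact implementation.

import numpy as np
from sklearn.cross_decomposition import CCA

# Training pairs: each texture residual (flattened shape-free patch
# difference) is matched with the parameter perturbation that caused it.
n_samples, n_pixels, n_params = 500, 1024, 15
residuals = np.random.randn(n_samples, n_pixels)    # stand-in residuals
corrections = np.random.randn(n_samples, n_params)  # stand-in perturbations

cca = CCA(n_components=n_params)
cca.fit(residuals, corrections)

# At tracking time: observe a residual, predict the correction to add to
# the current 15-parameter pose/animation vector.
new_residual = np.random.randn(1, n_pixels)
delta = cca.predict(new_residual)                   # shape (1, n_params)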

Structure and appearance features for robust 3D facial actions tracking

2009 IEEE International Conference on Multimedia and Expo, 2009

This paper presents a robust and accurate method for joint head pose and facial action tracking, even under challenging conditions such as varying lighting, large head movements, and fast motion. This is made possible by the combination of two types of facial features. We use locations sampled from the facial texture, whose appearance is initialized on the first frame and adapted over time, as well as illumination-invariant patches located on characteristic points of the face such as the corners of the eyes or of the mouth. The first type of feature contains rich information about the global appearance of the face and thus leads to accurate tracking, while the second guarantees robustness and stability by avoiding drift. We demonstrate our system on the Boston University Face Tracking benchmark and show that it outperforms state-of-the-art methods.
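A rough sketch of how the two feature streams can be combined in a single optimisation: adapted-texture residuals provide accuracy, while invariant keypoint-patch residuals anchor the solution against drift. The weighting and the function interfaces are assumptions for illustration only, not the paper's formulation.

import numpy as np

def combined_residual(params, appearance_res, keypoint_res, w=2.0):
    """Stack both residual vectors for a joint least-squares solve.

    appearance_res: callable returning adapted-texture residuals (accuracy)
    keypoint_res:   callable returning invariant-patch residuals (anti-drift)
    w:              weight favouring the stable, drift-free features
    """
    return np.concatenate([appearance_res(params), w * keypoint_res(params)])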

Local or Global 3D Face and Facial Feature Tracker

2007

We present in this paper a solution for 3D face and facial feature tracking using canonical correlation analysis and a 3D geometric model. The model is controlled by 17 parameters (6 for the 3D pose and 11 for facial animation) and is used to crop out reference 2D shape-free texture maps from the incoming input frames. Model parameters are updated via image registration in the texture-map space. For registration, we use CCA to learn and exploit the dependency between texture residuals and model-parameter corrections. We compare tracking results using two kinds of texture maps: a local one (image patches around selected vertices of the 3D model) and a global one (the whole image patch under the 3D model). Experiments evaluating the effectiveness of the approaches are reported.
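The registration loop shared by these CCA trackers can be summarised in one hedged set of equations (the notation below is mine, not the paper's):

\[
r_t = \mathcal{W}(I_t, p_t) - T_{\mathrm{ref}}, \qquad
\delta p = A\,r_t, \qquad
p_t \leftarrow p_t + \delta p,
\]

where \(\mathcal{W}\) warps the current frame \(I_t\) into the shape-free texture-map space under parameters \(p_t\), \(T_{\mathrm{ref}}\) is the reference texture, and \(A\) is the linear map recovered from the CCA bases; the update is iterated a few times per frame.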

Smooth adaptive fitting of 3D face model for the estimation of rigid and nonrigid facial motion in video sequences

Signal Processing: Image Communication, 2011

We propose a 3D wireframe face-model alignment for simultaneously tracking rigid head motion and nonrigid facial expressions in video sequences. The system integrates two levels: (i) at the low level, accurate facial feature locations are obtained automatically via a cascaded optimization of a 2D shape model; (ii) at the high level, we recover, by minimizing an energy function, the optimal motion parameters of the 3D model, namely the 3D rigid-motion parameters and seven nonrigid animation (Action Unit) parameters. In this latter inference, a 3D face shape model (Candide) is automatically fitted to the image sequence via a least-squares minimization of the energy, defined as the residual between the projected 3D wireframe model and the 2D shape model, while imposing temporal and spatial motion-smoothness constraints over the 3D model points. Our proposed system tackles many disadvantages of the optimization and training associated with active appearance models. Extensive fitting and tracking experiments demonstrate the feasibility, accuracy and effectiveness of the developed methods. The qualitative and quantitative performance of the proposed system on several facial sequences indicates its potential usefulness for multimedia applications as well as facial expression analysis.
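To make the high-level step concrete, here is a small sketch of the kind of regularised least-squares fit described above, using a toy weak-perspective projection and only the six rigid parameters (the seven Action Unit parameters and the exact energy are omitted; everything here is an assumption, not the paper's formulation).

import numpy as np
from scipy.optimize import least_squares

def project(params, model_pts):
    # Toy weak-perspective projection: small-angle rotation (rx, ry, rz),
    # image translation (tx, ty) and scale s.
    rx, ry, rz, tx, ty, s = params
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    pts = model_pts @ R.T
    return s * pts[:, :2] + np.array([tx, ty])

def residual(params, model_pts, shape_2d, prev_params, lam=0.1):
    reproj = (project(params, model_pts) - shape_2d).ravel()
    smooth = np.sqrt(lam) * (params - prev_params)  # temporal smoothness term
    return np.concatenate([reproj, smooth])

model_pts = np.random.randn(68, 3)                  # stand-in for Candide vertices
true_params = np.array([0.05, -0.02, 0.1, 4.0, -2.0, 90.0])
shape_2d = project(true_params, model_pts)          # synthetic 2D shape model
prev = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 85.0])    # previous-frame estimate
fit = least_squares(residual, prev, args=(model_pts, shape_2d, prev))
pose = fit.x                                        # recovered rigid parameters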

Rigid and non-rigid face motion tracking by aligning texture maps and stereo 3D models

2007

Accurate rigid and non-rigid tracking of faces is a challenging task in computer vision. Recently, appearance-based 3D face tracking methods have been proposed. These methods can successfully tackle the image-variability and drift problems. However, they may fail to recover out-of-plane face motions accurately, since they are not very sensitive to out-of-plane motion variations. In this paper, we present a framework for fast and accurate 3D face and facial action tracking. Our proposed framework retains the strengths of both appearance-based and 3D data-based trackers: we combine an adaptive appearance model with an online stereo-based 3D model. We report experiments and a performance evaluation that show the feasibility and usefulness of the proposed approach.

Real-time facial feature tracking from 2D+3D video streams

2010 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video, 2010

This paper presents a fully automated 3D facial feature tracking system using 2D+3D image sequences recorded by a real-time 3D sensor. It is based on local feature detectors constrained by a 3D shape model, using techniques that make it robust to pose variation and partial occlusion. Several experiments conducted under relatively uncontrolled conditions demonstrate the accuracy and robustness of the approach.

Performance Driven Facial Animation by Appearance Based Tracking

Lecture Notes in Computer Science, 2005

We present a method that estimates high-level animation parameters (muscle contractions, eye movements, eyelid opening, jaw motion and lip contractions) from a marker-less face image sequence. We use an efficient appearance-based tracker to stabilise images of the upper (eyes and eyebrows) and lower (mouth) face. From a set of stabilised images with known animation parameters, we learn a re-animation matrix that allows us to estimate the parameters of a new image. The system is able to re-animate a 32-DOF 3D face model in real time.
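The "learn a re-animation matrix" step reads as a linear regression from stabilised pixels to parameters; a minimal sketch under that reading (a ridge-regularised least-squares fit with made-up dimensions, not the paper's exact procedure) could look like this:

import numpy as np

n_train, n_pixels, n_dof = 400, 2048, 32
X = np.random.randn(n_train, n_pixels)   # stabilised face images (flattened)
Y = np.random.randn(n_train, n_dof)      # known animation parameters

# Solve for W in X @ W ~ Y; the ridge term keeps the solve well-conditioned.
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(n_pixels), X.T @ Y)

new_image = np.random.randn(1, n_pixels)
params = new_image @ W                   # estimated 32-DOF animation vector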

Face pose estimation and tracking using automatic 3D model construction

2008

This paper presents a method for robustly tracking and estimating the face pose of a person in both indoor and outdoor environments. The method is invariant to identity and does not require prior training. A face model is automatically initialized and constructed online when the face is frontal to the stereo camera system. To build the model, a fixed point distribution is superposed over the frontal face, and several appropriate points close to those locations are chosen for tracking. Using the stereo correspondence of the two cameras, the 3D coordinates of these points are extracted and the 3D model is created. RANSAC and POSIT are used for tracking and 3D pose calculation at each frame. The approach runs in real time and has been tested on sequences recorded in the laboratory and in a moving car.
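Here is a sketch of the per-frame pose step: RANSAC-based PnP on the online-built 3D point model and its tracked 2D projections. The paper uses POSIT; cv2.solvePnPRansac serves as a modern stand-in here, and the camera intrinsics and point data are placeholders.

import numpy as np
import cv2

model_3d = np.random.randn(30, 3).astype(np.float32)   # triangulated model points
points_2d = np.random.randn(30, 2).astype(np.float32)  # points tracked this frame
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)       # assumed intrinsics

# RANSAC rejects badly tracked points; PnP recovers rotation and translation.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    model_3d, points_2d, K, None, reprojectionError=3.0)
if ok:
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix of the head pose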

Face tracking using canonical correlation analysis

This paper presents an approach that incorporates canonical correlation analysis for monocular 3D tracking of the face as a rigid object. It also provides a comparison between the linear and the non-linear (kernel) versions of CCA. The 3D pose of the face is estimated from observed raw-brightness shape-free 2D image patches. A parameterized geometric face model is adopted to crop out and normalize the shape of patches of interest from video frames. Starting from a face model fitted to an observed human face, the relation between a set of perturbed pose parameters of the face model and the associated image patches is learned using CCA or KCCA. This knowledge is then used to estimate the correction to be added to the pose of the face from an observed patch in the current frame. Experimental results on tracking faces in long video sequences show the effectiveness of the two proposed methods.
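The "starting from a fitted model, learn from perturbed poses" step is the data-generation recipe both the linear and kernel variants share; a hedged sketch follows, in which warp_patch is a hypothetical helper and the perturbation scale is an assumption.

import numpy as np

def make_training_pairs(frame, pose, warp_patch, n=300, scale=0.05):
    """Build (residual, perturbation) pairs for CCA/KCCA learning.

    warp_patch is a hypothetical helper that crops and normalizes the
    shape-free patch under the face model at a given pose.
    """
    ref = warp_patch(frame, pose)
    residuals, deltas = [], []
    for _ in range(n):
        dp = scale * np.random.randn(*pose.shape)  # random pose offset
        residuals.append((warp_patch(frame, pose + dp) - ref).ravel())
        deltas.append(dp)
    return np.asarray(residuals), np.asarray(deltas)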

Face tracking and pose estimation with automatic three-dimensional model construction

IET Computer Vision, 2009

A method for robustly tracking and estimating the face pose of a person using stereo vision is presented. The method is invariant to identity and does not require prior training. A face model is automatically initialised and constructed online: a fixed point distribution is superposed over the face when it is frontal to the cameras, and several appropriate points close to those locations are chosen for tracking. Using the stereo correspondence of the cameras, the three-dimensional (3D) coordinates of these points are extracted and the 3D model is created. The 2D projections of the model points are tracked separately on the left and right images using SMAT. RANSAC and POSIT are used for 3D pose estimation. Head rotations of up to ±45° are correctly estimated. The approach runs in real time. The method is intended to serve as the basis of a driver-monitoring system and has been tested on sequences recorded in a moving car.