Tadas Baltrusaitis - Academia.edu

Papers by Tadas Baltrusaitis

Hand2Face: Automatic synthesis and recognition of hand over face occlusions

2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), 2017

OpenFace: An open source facial behavior analysis toolkit

2016 IEEE Winter Conference on Applications of Computer Vision (WACV), 2016

The Future Belongs to the Curious: Towards Automatic Understanding and Recognition of Curiosity in Children

Workshop on Child Computer Interaction, 2016

Curiosity plays a crucial role in learning and education of children. Given its complex nature, it is extremely challenging to automatically understand and recognize it. In this paper, we discuss the contexts under which curiosity can be elicited and provide an associated taxonomy. We present an initial empirical study of curiosity that includes the analysis of co-occurring emotions and the valence associated with it, together with gender-specific analysis. We also discuss the visual, acoustic and verbal behavior indicators of curiosity. Our discussions and analysis uncover some of the underlying complexities of curiosity and its temporal evolution, which is a step towards its automatic understanding and recognition. Finally, considering the central role of curiosity in education, we present two education-centered application areas that could greatly benefit from its automatic recognition.

The Cambridge Face Tracker: Accurate, Low Cost Measurement of Head Posture Using Computer Vision and Face Recognition Software

Translational Vision Science & Technology, 2016

We validate a video-based method of head posture measurement. The Cambridge Face Tracker uses neural networks (constrained local neural fields) to recognize facial features in video. The relative position of these facial features is used to calculate head posture. First, we assess the accuracy of this approach against videos in three research databases where each frame is tagged with a precisely measured head posture. Second, we compare our method to a commercially available mechanical device, the Cervical Range of Motion device: four subjects each adopted 43 distinct head postures that were measured using both methods. The Cambridge Face Tracker achieved confident facial recognition in 92% of the approximately 38,000 frames of video from the three databases. The respective mean error in absolute head posture was 3.34°, 3.86°, and 2.81°, with a median error of 1.97°, 2.16°, and 1.96°. The accuracy decreased with more extreme head posture. Comparing the Cambridge Face Tracker to the ...
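
For readers curious how head posture can be computed from tracked facial features, the sketch below illustrates one common approach (not necessarily the one used by the Cambridge Face Tracker): fitting a few 2D landmark detections to a generic 3D face model with OpenCV's solvePnP and reading off pitch, yaw and roll. The landmark coordinates, 3D model points and camera intrinsics are placeholder values.

```python
# Illustrative sketch only: head-pose estimation from tracked 2D facial landmarks.
# The 3D model points, 2D detections, and camera intrinsics are placeholder values,
# not data or parameters from the Cambridge Face Tracker.
import cv2
import numpy as np

# Rough 3D positions (in mm) of a few facial landmarks on a generic head model.
model_points = np.array([
    (0.0, 0.0, 0.0),          # nose tip
    (0.0, -330.0, -65.0),     # chin
    (-225.0, 170.0, -135.0),  # left eye outer corner
    (225.0, 170.0, -135.0),   # right eye outer corner
    (-150.0, -150.0, -125.0), # left mouth corner
    (150.0, -150.0, -125.0),  # right mouth corner
], dtype=np.float64)

# 2D landmark positions (in pixels) as a face tracker might report them.
image_points = np.array([
    (359, 391), (399, 561), (337, 297),
    (513, 301), (345, 465), (453, 469),
], dtype=np.float64)

# Simple pinhole camera intrinsics assumed from the image size.
width, height = 640, 480
focal_length = float(width)
camera_matrix = np.array([
    [focal_length, 0, width / 2],
    [0, focal_length, height / 2],
    [0, 0, 1],
], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))  # assume no lens distortion

# Solve for the head rotation and translation that project the 3D points onto the 2D ones.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, camera_matrix, dist_coeffs)

# Convert the rotation vector to pitch/yaw/roll angles in degrees.
R, _ = cv2.Rodrigues(rvec)
pitch = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
yaw = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2])))
roll = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(f"head posture (deg): pitch={pitch:.1f}, yaw={yaw:.1f}, roll={roll:.1f}")
```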

Rendering of Eyes for Eye-Shape Registration and Gaze Estimation

Images of the eye are key in several computer vision problems, such as shape registration and gaze estimation. Recent large-scale supervised methods for these problems require time-consuming data collection and manual annotation, which can be unreliable. We propose synthesizing perfectly labelled photo-realistic training data in a fraction of the time. We used computer graphics techniques to build a collection of dynamic eye-region models from head scan geometry. These were randomly posed to synthesize close-up eye images for a wide range of head poses, gaze directions, and illumination conditions. We used our model's controllability to verify the importance of realistic illumination and shape variations in eye-region training data. Finally, we demonstrate the benefits of our synthesized training data (SynthesEyes) by outperforming state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
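
As a toy illustration of why synthetic rendering yields "perfectly labelled" data, the sketch below samples random render parameters (head pose, gaze direction, illumination) for each image; the parameter names and ranges are invented for this example and are not the settings used to build SynthesEyes.

```python
# Illustrative sketch of the "randomly posed" idea: sample synthetic render
# parameters per training image. Names and ranges are invented placeholders.
import random

def sample_render_params():
    return {
        "head_pitch_deg": random.uniform(-20, 20),
        "head_yaw_deg": random.uniform(-40, 40),
        "gaze_pitch_deg": random.uniform(-25, 25),
        "gaze_yaw_deg": random.uniform(-35, 35),
        "env_map": random.choice(["indoor_office", "outdoor_sunny", "night_street"]),
        "env_rotation_deg": random.uniform(0, 360),
        "eye_model_id": random.randrange(10),  # which head-scan-derived eye region to render
    }

# Each sampled dictionary would drive one render; the same parameters double as
# the image's ground-truth labels (e.g. the gaze angles), so no manual annotation
# is needed.
for _ in range(3):
    print(sample_render_params())
```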

Decoupling facial expressions and head motions in complex emotions

2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015

Empirical analysis of continuous affect

2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015

Cross-dataset learning and person-specific normalisation for automatic Action Unit detection

2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), 2015

The Emotional Computer

A Facial Affect Mapping Engine (FAME)

Continuous Conditional Neural Fields for Structured Regression

Automatic Detection of Naturalistic Hand-over-Face Gesture Descriptors

Synthesizing expressions using facial feature point tracking: How emotion is conveyed

Many approaches to the analysis and synthesis of facial expressions rely on automatically tracking landmark points on human faces. However, this approach is usually chosen because of ease of tracking rather than its ability to convey affect. We have conducted an experiment that evaluated the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task.

CCNF for Continuous Emotion Tracking in Music: Comparison with CCRF and Relative Feature Representation

Whether or not emotion in music can change over time is not a question that requires discussion. As the interest in continuous emotion prediction grows, there is a greater need for tools that are suitable for dimensional emotion tracking. In this paper, we propose a novel Continuous Conditional Neural Fields model that is designed specifically for such a problem. We compare our approach with a similar Continuous Conditional Random Fields model and Support Vector Regression, showing a great improvement over the baseline. Our new model is especially well suited for hierarchical models such as model-level feature fusion, which we explore in this paper. We also investigate how well it performs with relative feature representation in addition to the standard representation.
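
For readers unfamiliar with this family of models, the sketch below shows the kind of objective a Continuous Conditional Neural Fields model combines for sequence regression: neural "vertex" potentials that tie each frame's prediction to its input features, plus "edge" potentials that favour temporally smooth trajectories. It is a simplified illustration with made-up weights and a single smoothness term, not the exact formulation or learning procedure from the paper.

```python
# Schematic sketch of the potentials a Continuous Conditional Neural Fields model
# combines for sequence regression; weights, features, and the single smoothness
# edge used here are illustrative placeholders, not the paper's exact formulation.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ccnf_score(y, X, theta, alpha, beta):
    """Unnormalised log-potential of an emotion trajectory y given frame features X.

    y     : (T,) predicted emotion values, one per frame
    X     : (T, D) per-frame acoustic/visual features
    theta : (K, D) weights of K sigmoid "vertex" neurons
    alpha : (K,) non-negative importance of each vertex feature
    beta  : non-negative weight of a temporal-smoothness edge feature
    """
    # Vertex potentials: penalise the distance between y_t and each neuron's output.
    h = sigmoid(X @ theta.T)                        # (T, K) neuron outputs
    vertex = -np.sum(alpha * (y[:, None] - h) ** 2)
    # Edge potentials: encourage neighbouring frames to have similar values.
    edge = -0.5 * beta * np.sum((y[1:] - y[:-1]) ** 2)
    return vertex + edge

# Toy usage: score a short 5-frame trajectory with random features and weights.
rng = np.random.default_rng(0)
T, D, K = 5, 10, 3
X = rng.normal(size=(T, D))
y = rng.uniform(0, 1, size=T)
theta = rng.normal(size=(K, D))
alpha = np.abs(rng.normal(size=K))
print(ccnf_score(y, X, theta, alpha, beta=2.0))
```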

Automatic Facial Expression Analysis

A facial affect mapping engine

Proceedings of the companion publication of the 19th international conference on Intelligent User Interfaces - IUI Companion '14, 2014

Modeling Latent Discriminative Dynamic of Multi-dimensional Affective Signals

How emotion is conveyed

Many approaches to the analysis and synthesis of facial expressions rely on automatically tracking landmark points on human faces. However, this approach is usually chosen because of ease of tracking rather than its ability to convey affect. We have conducted an experiment that evaluated the perceptual importance of 22 such automatically tracked feature points in a mental state recognition task.

Crowdsourcing in emotion studies across time and culture

3D Corpus of Spontaneous Complex Mental States

Hand-over-face gestures, a subset of emotional body language, are overlooked by automatic affect inference systems. We propose the use of hand-over-face gestures as a novel affect cue for automatic inference of cognitive mental states. Moreover, affect recognition ...
