Hand2Face: Automatic synthesis and recognition of hand over face occlusions
Related papers
Emotion Categorization from Faces of People with Sunglasses and Facemasks
2021
Emotional information is considered to convey much meaning in communication. Hence, artificial emotion categorization methods are being developed to meet the increasing demand to introduce intelligent systems, such as robots, into shared workspaces. Deep learning algorithms have demonstrated limited competency in categorizing images from posed datasets in which the main features of the face are visible. However, the use of sunglasses and facemasks is common in our daily lives, especially with the outbreak of communicable diseases such as the recent coronavirus. Anecdotally, partial coverings of the face reduce the effectiveness of human communication, so does this also hamper computer vision, and if so, are the different emotion categories affected equally? Here, we use a modern deep learning algorithm (i.e., VGG19) to categorize emotion from faces of people obscured with simulated sunglasses and facemasks. We found that face coverings obscure emotion categorization...
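To make the setup concrete, here is a minimal sketch of the kind of experiment this abstract describes: simulating sunglasses and facemask occlusions with opaque patches and classifying the result with a fine-tuned VGG19. The patch coordinates, the seven-class emotion head, and the dummy input are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch: simulate sunglasses/facemask occlusions, then classify emotion
# with VGG19. Patch regions and the 7-class head are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image, ImageDraw

def occlude(img: Image.Image, kind: str) -> Image.Image:
    """Draw an opaque patch over the eye or mouth region, a rough proxy
    for sunglasses / facemask on a roughly aligned face crop."""
    img = img.copy()
    w, h = img.size
    draw = ImageDraw.Draw(img)
    if kind == "sunglasses":          # cover the upper-middle band (eyes)
        draw.rectangle([0.1 * w, 0.25 * h, 0.9 * w, 0.45 * h], fill="black")
    elif kind == "facemask":          # cover the lower half (nose + mouth)
        draw.rectangle([0.1 * w, 0.55 * h, 0.9 * w, 0.95 * h], fill="white")
    return img

# VGG19 with its classifier head replaced for 7 emotion categories.
model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 7)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

face = Image.new("RGB", (224, 224), "gray")   # stand-in for a real face crop
x = preprocess(occlude(face, "facemask")).unsqueeze(0)
model.eval()
with torch.no_grad():
    pred = model(x).argmax(dim=1)             # predicted emotion index
```

In practice the head would be fine-tuned on an emotion dataset before evaluating on occluded inputs; the sketch only shows the occlusion-then-classify pattern.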
Emotion Detection Using Facial Expression Involving Occlusions and Tilt
Applied Sciences
Facial emotion recognition (FER) is an important and developing topic of research in the field of pattern recognition. Facial emotion analysis is finding effective application in surveillance footage, expression analysis, activity recognition, home automation, computer games, stress treatment, patient observation, depression, psychoanalysis, and robotics. Robot interfaces, emotion-aware smart agent systems, and efficient human–computer interaction all benefit greatly from facial expression recognition, which has garnered attention as a key prospect in recent years. However, performance still suffers in the presence of occlusions, fluctuations in lighting, and changes in physical appearance, so emotion recognition research needs further improvement. This paper proposes a new convolutional neural network (CNN) architecture for the FER system that contains five convolution layers, one fully connected layer with a rectified linear unit activation function, and a SoftMax layer...
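A minimal PyTorch sketch of a network matching this abstract's outline (five convolution layers, one ReLU fully connected layer, a softmax output) might look as follows; the channel widths, kernel sizes, 48x48 grayscale input, and seven emotion classes are assumptions for illustration, since the abstract is truncated.

```python
# Sketch of a FER CNN with five conv stages, one ReLU fully connected
# layer, and a softmax output layer. All sizes are illustrative.
import torch
import torch.nn as nn

class FerCnn(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        chans = [1, 32, 64, 128, 256, 256]        # five conv stages
        blocks = []
        for c_in, c_out in zip(chans, chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256, 128),          # FC layer with ReLU activation
            nn.ReLU(inplace=True),        # (48x48 input -> 1x1 after 5 pools)
            nn.Linear(128, num_classes),
            nn.Softmax(dim=1),            # softmax layer as described
        )

    def forward(self, x):
        return self.classifier(self.features(x))

probs = FerCnn()(torch.randn(1, 1, 48, 48))   # dummy 48x48 grayscale face
```

During training one would normally drop the explicit Softmax and use CrossEntropyLoss on the raw logits; it is kept here to mirror the layer list in the abstract.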
UIBVFED-Mask: A Dataset for Comparing Facial Expressions with and without Face Masks
Data
Since the COVID-19 pandemic, the use of face masks has become common practice in many situations. Partial occlusion of the face due to the use of masks poses new challenges for facial expression recognition because of the loss of significant facial information. Consequently, the identification and classification of facial expressions can be negatively affected, particularly when using neural networks. This paper presents a new dataset of virtual characters, with and without face masks, with identical geometric information and spatial location. This design will allow researchers to better characterize the information lost due to the occlusion of the mask.
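Because masked and unmasked renders share character identities and geometry, the dataset supports a paired comparison. A small sketch of that evaluation pattern is below; the file layout, label table, and classifier are hypothetical.

```python
# Sketch of a paired masked/unmasked evaluation. Paths, the label table,
# and the classifier `predict` are hypothetical placeholders.
from pathlib import Path
from typing import Callable

def paired_accuracy(predict: Callable[[Path], int],
                    labels: dict[str, int],
                    image_dir: Path) -> float:
    """Accuracy over one rendering condition. Character IDs (file stems)
    key into a shared label table, so the masked and unmasked runs score
    the same underlying expressions."""
    images = sorted(image_dir.glob("*.png"))
    return sum(predict(p) == labels[p.stem] for p in images) / len(images)

# Hypothetical usage with any classifier `predict` and label table `labels`:
# drop = (paired_accuracy(predict, labels, Path("uibvfed/unmasked"))
#         - paired_accuracy(predict, labels, Path("uibvfed/masked")))
```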
Automatic analysis of facial affect: A survey of registration, representation and recognition
Automatic affect analysis has attracted great interest in various contexts, including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues for interpreting facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, representation, dimensionality reduction and recognition. We discuss the role of these components and highlight the models and new trends that are followed in their design. Moreover, we provide a comprehensive analysis of facial representations by uncovering their advantages and limitations; we elaborate on the type of information they encode and discuss how they deal with the key challenges of illumination variations, registration errors, head-pose variations, occlusions and identity bias. This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems.
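The survey's pipeline decomposition can be illustrated with a minimal sketch in which each stage gets a concrete stand-in: identity registration, HOG appearance representation, PCA dimensionality reduction, and a linear SVM recognizer. These specific choices are ours for illustration; the survey reviews whole families of alternatives at each stage.

```python
# Sketch of the registration -> representation -> dimensionality
# reduction -> recognition pipeline with illustrative stage choices.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def register(face: np.ndarray) -> np.ndarray:
    """Placeholder registration: assume faces are already cropped and
    roughly aligned; a real system would warp to canonical landmarks."""
    return face

def represent(face: np.ndarray) -> np.ndarray:
    """Appearance representation: HOG over the registered face."""
    return hog(face, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

# Hypothetical training data: 100 grayscale 64x64 faces, 7 emotion labels.
faces = np.random.rand(100, 64, 64)
labels = np.random.randint(0, 7, size=100)

X = np.stack([represent(register(f)) for f in faces])
clf = make_pipeline(PCA(n_components=50), LinearSVC())  # reduce, recognize
clf.fit(X, labels)
```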
A Survey on Deep Learning Algorithms in Facial Emotion Detection and Recognition
Facial emotion recognition (FER) forms part of affective computing, where computers are trained to recognize human emotion from human expressions. FER is essential for bridging the communication gap between humans and computers, because facial expressions are a form of communication estimated to convey 55% of a person's emotional and mental state in face-to-face communication. Breakthroughs in this field also enable computer systems (e.g., robotic systems) to better serve and interact with humans. Research in this area has advanced considerably, with deep learning at its heart. This paper systematically discusses state-of-the-art deep learning architectures and algorithms for facial emotion detection and recognition. The paper also reveals the dominance of CNN architectures over other known architectures such as RNNs and SVMs, highlighting the contributions, model performance, and limitations of the reviewed state-of-the-art. It further identifies opportunities and open issues worth considering in future FER research. The paper also examines how limits on computation power and on the availability of large facial emotion datasets have constrained the pace of progress.