Facial expression recognition via a jointly-learned dual-branch network

Hybrid Deep Neural Network for Facial Expressions Recognition

Indonesian Journal of Electrical Engineering and Informatics (IJEEI)

Facial expressions are critical indicators of human emotions, and recognizing them has captured the attention of many academics; recognition of expressions in natural situations remains a challenge due to differences in head position, occlusion, and illumination. Several studies have focused on recognizing emotions from frontal images only, whereas in this paper wild images from the FER2013 dataset are used to build a more generalizable model despite the dataset's challenges; FER2013 is among the most difficult datasets, with human-level accuracy of only 65.5%. This paper proposes a model for recognizing facial expressions using pretrained deep convolutional neural networks and transfer learning. The hybrid model combines two pretrained deep convolutional neural networks and is trained in multiple configurations for greater efficiency in categorizing facial expressions into seven classes. The results show that the best accuracy of the suggested models is 74.39% for the hybrid model and 73.33% for the fine-tuned single EfficientNetB0 model, while the highest accuracy among previous methods was 73.28%. Thus, the hybrid and single models outperform other state-of-the-art classification methods without using any additional data, ranking first and second among these methods. The hybrid model even outperforms the second-highest-accuracy method, which used extra data. Incorrectly labeled images in the dataset unfairly reduce accuracy, yet our best model recognized their actual classes correctly.
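The fusion idea behind such a hybrid model can be sketched in a minimal, framework-free way: feature vectors from two backbones are concatenated and passed through a shared softmax head over the seven FER2013 classes. This is only an illustrative numpy sketch, not the paper's implementation; the feature sizes, random weights, and backbone outputs are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the feature vectors produced by two pretrained backbones
# (e.g. global-average-pooled outputs); sizes are illustrative only.
feats_a = rng.standard_normal(1280)   # e.g. an EfficientNetB0-style output
feats_b = rng.standard_normal(512)    # output of a second backbone

# Fuse by concatenation, then apply a jointly-learned softmax head
# over the 7 FER2013 expression classes.
fused = np.concatenate([feats_a, feats_b])          # shape (1792,)
W = rng.standard_normal((7, fused.size)) * 0.01     # hypothetical head weights
b = np.zeros(7)

logits = W @ fused + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                # distribution over 7 classes

pred = int(np.argmax(probs))  # index of the predicted expression class
```

In a real implementation the head (and optionally the backbones) would be trained end-to-end; the sketch only shows the data flow of the concatenation-based fusion.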

Deep Neural Networks for Automatic Facial Expression Recognition

Revue d'intelligence artificielle, 2022

Facial expression is among the most popular forms of non-linguistic communication and is capable of conveying meaning effectively to others. Facial expressions have applications in assorted arenas, including medicine and psychology, security, gaming, classroom communication, and even commercial initiatives. Owing to huge intra-class variation, it is still challenging to recognize emotions automatically from facial expressions, though this has been a vigorous area of research for decades. Conventional approaches depend on hand-crafted features such as the Scale-Invariant Feature Transform, Histogram of Oriented Gradients, and Local Binary Patterns, followed by a classifier applied to a dataset. Since deep learning has proved an outstanding feat, various architectures have been applied for improved performance. The goal of this study is to create a deep learning model for automatic facial emotion recognition (FER). The proposed model focuses on extracting the crucial features, thereby improving expression recognition accuracy, and beats the competition on the FER2013 dataset.

A Unified Framework of Deep Learning-Based Facial Expression Recognition System for Diversified Applications

Applied Sciences

This work proposes a facial expression recognition system for a diversified field of applications. The purpose of the proposed system is to predict the type of expression in a human face region. The implementation of the proposed method is fragmented into three components. In the first component, a tree-structured part model is applied to the given input image to predict landmark points and detect facial regions. The detected face region is normalized to a fixed size and then down-sampled to varying sizes so that the advantages of multi-resolution images can be introduced. Then, some convolutional neural network (CNN) architectures are proposed in the second component to analyze the texture patterns in the facial regions. To enhance the proposed CNN model’s performance, some advanced techniques, such as data augmentation, progressive image resizing, transfer learning, and fine-tuning of the parameters, were employed in t...

Database Development and Recognition of Facial Expression using Deep Learning

Facial expressions reflect human emotions and an individual's intentions. Detecting facial expressions is a very easy task for human beings, whereas it is very difficult for computers. Facial expressions play a vital part in everyday life: they are a non-verbal mode that may convey feelings, opinions, and thoughts without speaking. Deep neural networks, convolutional neural networks, neural networks, artificial intelligence, fuzzy logic, and machine learning are different technologies used to detect facial expressions, operating on static images, video, webcam data, or real-time images. This research paper focuses on developing the SMM Facial Expression dataset and proposes a convolutional neural network model to identify facial expressions. The proposed method was tested on two benchmark datasets, FER2013 and CK+, for facial expression detection. We explored the proposed model on CK+ and achieved 93.94% accuracy and 67.18% for...

Multi-feature based automatic facial expression recognition using deep convolutional neural network

Indonesian Journal of Electrical Engineering and Computer Science, 2022

Deep multi-task learning is one of the most challenging research topics widely explored in the field of facial expression recognition. Most deep learning models rely on class-label details while discarding the local information of the sample data, which deteriorates the performance of the recognition system. This paper proposes a multi-feature-based deep convolutional neural network (D-CNN) that identifies the facial expression of the human face. To enhance the accuracy of recognition systems, a multi-feature learning model is employed in this study. The input images are preprocessed and enhanced via three filtering methods, i.e., Gaussian, Wiener, and adaptive mean filtering. The preprocessed image is then segmented using a face detection algorithm. The detected face is further processed with a local binary pattern (LBP) operator that extracts the facial points of each facial expression. These are then fed into the D-CNN, which effectively recognizes the facial expression using the features of the facial points. The proposed D-CNN is implemented, and the results are compared to an existing support vector machine (SVM). The analysis of deep features helps to extract the local information from the data without incurring a higher computational effort.
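The LBP step in the pipeline above can be illustrated with a minimal numpy implementation of the basic 3x3 operator; the paper's exact LBP variant and parameters are not specified here, so this is only a generic sketch on a toy patch.

```python
import numpy as np

def lbp_8neighbors(img):
    """Basic 3x3 Local Binary Pattern: each interior pixel is encoded by
    comparing its 8 neighbors against the center value, one bit each."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]                           # interior (center) pixels
    # Neighbor offsets in clockwise order starting at top-left.
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (n >= c).astype(np.int32) << bit  # set bit if neighbor >= center
    return code

face = np.arange(25).reshape(5, 5)    # toy "face patch" with a fixed gradient
codes = lbp_8neighbors(face)          # 3x3 map of 8-bit LBP codes (0..255)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))  # typical LBP feature
```

The 256-bin histogram of these codes (often computed per image cell and concatenated) is the kind of texture descriptor that would be fed into a downstream classifier or network.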

Deep Learning based Framework for Emotion Recognition using Facial Expression

Pakistan Journal of Engineering and Technology

Humans convey their messages in different forms; expressing emotions and moods through facial expressions is one of them. In this work, to avoid the traditional process of feature extraction (geometry-based, template-based, and appearance-based methods), a CNN model is used as a feature extractor for emotion detection from facial expressions. This study also uses three pre-trained models: VGG-16, ResNet-50, and Inception-V3. The experiments are conducted on the FER-2013 facial expression dataset and the Extended Cohn-Kanade (CK+) dataset. Using the FER-2013 dataset, the accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 are 76.74%, 85.71%, 85.78%, and 97.93%, respectively. Similarly, the experimental results on the CK+ dataset showed accuracy rates for CNN, ResNet-50, VGG-16, and Inception-V3 of 84.18%, 92.91%, 91.07%, and 73.16%, respectively. The experiments showed exceptional results for Inception-V3 with 97.93% using the FER-2013 dataset and ResNet-50 with 91.92% using...
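The feature-extractor idea above (a frozen pretrained CNN providing fixed features, with only a classifier head trained on top) can be sketched framework-free. The random "features" below are hypothetical stand-ins for pooled CNN activations, and the softmax head is trained with plain gradient descent; none of this reproduces the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: fixed features from a frozen pretrained CNN
# for 100 images (64 dims each), with 7 expression classes.
X = rng.standard_normal((100, 64))
y = rng.integers(0, 7, size=100)
Y = np.eye(7)[y]                       # one-hot labels

W = np.zeros((64, 7))                  # only this head is trained
b = np.zeros(7)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def xent(P):
    # mean cross-entropy of the true-class probabilities
    return -np.mean(np.log(P[np.arange(len(y)), y] + 1e-12))

loss_before = xent(softmax(X @ W + b))
for _ in range(200):                   # plain gradient descent on the head only
    P = softmax(X @ W + b)
    G = (P - Y) / len(X)               # gradient of cross-entropy w.r.t. logits
    W -= 0.1 * X.T @ G
    b -= 0.1 * G.sum(axis=0)
loss_after = xent(softmax(X @ W + b))
```

With the head initialized to zeros, the initial loss is exactly log(7) (a uniform prediction over seven classes); training the head drives it down even though the "backbone" features stay frozen, which is the essence of using a pretrained CNN as a fixed feature extractor.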

A Review on Facial Expression Recognition System using Deep Learning

2021

Human emotions are spontaneous and conscious mental states of feeling that are accompanied by physiological changes in the facial muscles, implying facial expression. Some important emotions are happiness, sadness, anger, disgust, fear, surprise, and neutrality. In non-verbal communication, facial expressions play a very important role because a person's inner feelings are reflected on the face. A lot of studies have been carried out on the computer modeling of human emotion; however, it is still far behind the human vision system. In the area of computer vision, academic research in deep learning, specifically into convolutional neural networks, has received a lot of attention with the fast growth of computer hardware and the arrival of the Big Data era. Many studies on emotion recognition and deep learning methods have been carried out to identify emotions. This article presents a survey of Facial Expression Recognition (FER) methods, including 3 key phases as pre-processing, extraction ...

A Compact Deep Learning Model for Robust Facial Expression Recognition

International Journal of Engineering and Advanced Technology, 2019

In this paper we propose a compact CNN model for facial expression recognition. Expression recognition on low-quality images is much more challenging and interesting due to the presence of low-intensity expressions, which are difficult to distinguish at insufficient image resolution. Data collection for FER is expensive and time-consuming; research indicates that images downloaded from the Internet are very useful for modeling and training expression recognition. We use extra datasets to improve the training of facial expression recognition, each representing a specific data source. Moreover, to prevent subjective annotation, each dataset is labeled with different approaches to ensure annotation quality. Recognizing the precise and exact expression from the variety of expressions of different people is a huge problem. To solve it, we propose an Emotion Detection Model to extract emotions from a given input image. This work mainly focuses on the psychological approach of the color circle-emotion relation [1] to find the accurate emotion in the input image. Initially the whole image is preprocessed and studied pixel by pixel; combinations of the circles based on the combined data result in a new color, which is directly correlated to a particular emotion. Based on these psychological aspects, the output is of reasonable accuracy. The major application of our work is to predict a person's emotion from face images or video frames; this can even be applied to evaluating public opinion about a particular movie from video reaction posts on social media. Another of the diverse applications of our system is to understand students' learning from their emotions. Human beings show their emotional states and intentions through facial expressions.
Facial expressions are powerful and natural methods that emphasize the emotional status of humans. The approach used in this work successfully exploits temporal information and improves accuracy on public benchmark databases. The basic facial expressions are happiness, fear, anger, disgust, sadness, and surprise [2]; contempt was subsequently added as one of the basic emotions. Having sufficient well-labeled training data, with variation across populations and environments, is important for the design of a deep expression recognition system. Behaviors, poses, facial expressions, actions, and speech are considered channels that convey human emotions, and a lot of research is ongoing in this field to explore the correlation between these channels and emotions. This paper highlights the development of a system which automatically recognizes the

Facial Expression Recognition Using Combined Pre-Trained ConvNets

Automatic Facial Expression Recognition (AFER) has been an active research area for the past three decades, and research and development continue due to its wide range of potential applications in many fields. Recent research presents impressive results when using Convolutional Neural Networks (CNNs, ConvNets); in general, ConvNets have proved to be a very common and promising choice for many computer vision tasks, including AFER. Motivated by this fact, we combine modified versions of three ConvNets in parallel to build an automated facial expression recognition system. This research aims to present a robust architecture and a better learning process for a deep ConvNet. Adding four layers on top of the combined basic models assembles the net into one large ConvNet and enables sophisticated boosting of the basic models. The main contribution of this work comes from this special architecture and the use of a two-phase training process that enables better learning. The presented system is trained to detect universal facial expressions of seven/eight basic emotions when targeting the FER2013 and FER2013+ benchmarks, respectively. The presented approach improves the results of the underlying architectures by 4% on FER2013 and 2% on FER2013+. A second round of training increases the accuracy of some of the basic models by close to 3% while improving the accuracy of the whole net.
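The parallel-combination idea can be sketched as follows: the outputs of several branches are concatenated and passed through a small fusion head that stands in for the added layers. The branch outputs, feature sizes, and fusion-head shapes below are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for the outputs of three modified ConvNets run
# in parallel on the same face image (feature sizes are illustrative).
branch_outputs = [rng.standard_normal(128) for _ in range(3)]

# A small fusion head (dense -> ReLU -> dense -> softmax) stands in for
# the extra layers added on top of the combined basic models.
fused = np.concatenate(branch_outputs)          # shape (384,)
W1 = rng.standard_normal((64, 384)) * 0.05      # hypothetical fusion weights
W2 = rng.standard_normal((7, 64)) * 0.05

h = np.maximum(0.0, W1 @ fused)                 # ReLU hidden layer
logits = W2 @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()                            # distribution over 7 classes
```

Because the fusion head sits above all branches, training it jointly lets the combined net learn how to weight and mix the branches, which is the "boosting" effect a two-phase training process would then refine.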

The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition

2020

Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train. Given the dynamic conditions of FER, this characteristic hinders such models from being used for general affect recognition. In this paper, we address this problem by formalizing the FaceChannel, a lightweight neural network that has far fewer parameters than common deep neural networks. We introduce an inhibitory layer that helps to shape the learning of facial features in the last layer of the network, improving performance while reducing the number of trainable parameters. To evaluate our model, we perform a series of experiments on different benchmark datasets and demonstrate how the FaceChannel achieves comparable, if not better, performance than the current state-of-the-art in FER. Our experiments include cross-dataset analysis to estimate how our model behaves on different affective recognition cond...