Optimizing Deep Convolutional Neural Network for Facial Expression Recognition
Related papers
Optimizing Deep Convolutional Neural Network for Facial Expression Recognition
European Journal of Engineering Research and Science
Facial expression recognition (FER) systems have attracted much research interest in the area of machine learning. We designed a large, deep convolutional neural network (CNN) to classify the 40,000 images in the dataset into one of seven categories (disgust, fear, happy, angry, sad, neutral, surprise), and developed the model in Theano and Caffe for the training process. The proposed architecture achieves 61% accuracy. This work also presents results of an accelerated implementation of the CNN on graphics processing units (GPUs); optimizing the deep CNN reduces the system's training time.
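A minimal Keras sketch of the kind of seven-class expression CNN described above, assuming 48x48 grayscale inputs; the original work was implemented in Theano and Caffe, so this only illustrates the general architecture class, not the authors' exact network.

```python
# Sketch only: a small seven-class expression CNN, assuming 48x48 grayscale
# inputs; not the authors' Theano/Caffe architecture.
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_fer_cnn()
model.summary()
```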
Facial Expression Recognition Using Deep Convolution Neural Network With Tensorflow
International Journal of Advance Research and Innovative Ideas in Education, 2020
Facial expression recognition (FER) is an automatic system that processes facial data and plays a vital role in human-machine interfaces. Traditional machine learning algorithms have attracted increasing attention from researchers since the early nineties, but such approaches often require a complex feature extraction process. In this paper we apply recent advances in deep learning, in particular Convolutional Neural Networks (CNNs), a prominent technique used nowadays in applications such as robots, games, and neuromarketing, where facial expressions, eye movement, and gestures convey a person's emotional status and feelings. The proposed model focuses on detecting the facial expression of an individual from a single image. The number of parameters in the proposed network is progressively reduced, which accelerates overall performance and makes the model suitable for real-time systems.
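The abstract above emphasizes cutting parameter counts for real-time use without naming a specific mechanism. One common way to achieve this, shown purely as an assumption here, is to swap standard convolutions for depthwise-separable ones; the snippet below compares their parameter counts in Keras.

```python
# Illustration of parameter reduction (an assumption, not the paper's stated
# method): replace a standard convolution with a depthwise-separable one and
# compare weight counts.
from tensorflow.keras import layers, models

def count_params(layer, input_shape=(48, 48, 32)):
    # Wrap the layer in a tiny model so Keras builds its weights.
    return models.Sequential([layers.Input(shape=input_shape), layer]).count_params()

standard = layers.Conv2D(64, 3, padding="same")
separable = layers.SeparableConv2D(64, 3, padding="same")

print("standard 3x3 conv:  ", count_params(standard), "parameters")   # ~18.5k
print("separable 3x3 conv: ", count_params(separable), "parameters")  # ~2.4k
```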
FACIAL EMOTION RECOGNITION USING CONVOLUTION NEURAL NETWORK
Facial expression recognition is a very active research topic due to its potential applications in many fields such as human-robot interaction, human-machine interfaces, driving safety, and health care. Despite significant improvements, facial expression recognition is still a challenging problem that calls for ever more accurate algorithms. This article presents a new model capable of recognizing facial expressions using a deep Convolutional Neural Network (CNN). The CNN model is generated using Caffe in the DIGITS environment. Moreover, it is trained and tested on the NVIDIA Tegra TX1 embedded development platform, which includes a Graphics Processing Unit (GPU) with 256 CUDA cores and a quad-core ARM Cortex-A57 processor. The proposed model is applied to the facial expression problem on two publicly available expression databases, the JAFFE database and the Cohn-Kanade database.
CNN based Facial Expression Recognition System
Social Science Research Network, 2021
Training machines to think and behave like humans has fascinated many researchers around the world. Deep learning, a subgroup of machine learning, enables us to develop systems that can extract features, recognize distinct patterns, and classify them much like a human brain. Facial Expression Recognition (FER) is one of the trending technologies in the human-machine interaction field. This paper contributes a novel system that makes use of Convolutional Neural Networks with fewer data samples and high accuracy. The main objective is to develop a system that can classify the different expressions from a human face and identify them accurately. In our system, we developed a 15-layer CNN architecture which plays an important role in classifying images and training the system. The system can classify six basic facial expressions, namely happy, angry, fear, sad, disgust, and surprise, with an accuracy of 98.1%.
Facial emotion recognition using convolutional neural networks (FERC)
SN Applied Sciences
Reading facial expressions for emotion detection has always been an easy task for humans, but achieving the same task with a computer algorithm is quite challenging. With the recent advancement in computer vision and machine learning, it is possible to detect emotions from images. In this paper, we propose a novel technique called facial emotion recognition using convolutional neural networks (FERC). FERC is based on a two-part convolutional neural network (CNN): the first part removes the background from the picture, and the second part concentrates on facial feature vector extraction. In the FERC model, an expressional vector (EV) is used to find five different types of regular facial expression. Supervisory data were obtained from a stored database of 10,000 images (154 persons). It was possible to correctly highlight the emotion with 96% accuracy, using an EV of length 24 values. The two-level CNN works in series, and the last perceptron layer adjusts the weights and exponent values with each iteration. FERC differs from generally followed single-level CNN strategies, hence improving the accuracy. Furthermore, a novel background removal procedure, applied before the generation of the EV, avoids dealing with multiple problems that may occur (for example, distance from the camera). FERC was extensively tested on more than 750K images using the extended Cohn-Kanade expression, Caltech faces, CMU, and NIST datasets. We expect FERC emotion detection to be useful in many applications such as predictive learning of students, lie detectors, etc.
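A hedged sketch of FERC's two-stage idea: the first stage isolates the face (approximated here with OpenCV's Haar-cascade detector rather than the paper's CNN-based background removal), and the second stage maps the crop to a 24-value expressional vector (EV) before a five-way softmax. Layer sizes are assumptions, not the published architecture.

```python
# Hedged two-stage sketch in the spirit of FERC. Stage 1 approximates the
# paper's background removal with OpenCV's Haar-cascade face detector (a
# stand-in, not the published method); stage 2 is an assumed small CNN whose
# penultimate layer is the 24-value expressional vector (EV).
import cv2
import numpy as np
from tensorflow.keras import layers, models

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(image_bgr, size=64):
    """Stage 1: keep only the largest detected face, scaled to size x size."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    face = cv2.resize(gray[y:y + h, x:x + w], (size, size))
    return face.astype(np.float32)[..., None] / 255.0

# Stage 2: CNN producing the 24-dimensional EV, then five expression classes.
ev_model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(24, activation="relu", name="expressional_vector"),
    layers.Dense(5, activation="softmax"),
])
```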
Automatic Facial Expression Recognition Method Using Deep Convolutional Neural Network
2021
Facial expressions are part of human language and are often used to convey emotions. Since humans differ greatly in how they represent emotions through various media, the recognition of facial expressions is a challenging problem for machine learning methods. Emotion and sentiment analysis have also become new trends in social media. The Deep Convolutional Neural Network (DCNN) is one of the newest learning methods of recent years that models the human brain, and it achieves better accuracy with big data such as images. In this paper, an automatic facial expression recognition (FER) method using a deep convolutional neural network is proposed. In this work, a way is provided to overcome the overfitting problem in training the deep convolutional neural network for FER, and an effective pre-processing phase is also proposed that improves the accuracy of facial expression recognition. Here the results for recognition of seven emotional states (neutral, happiness, sadness, su...
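Since the abstract highlights overfitting control and preprocessing without giving details, the sketch below shows two generic measures often used for FER CNNs, on-the-fly augmentation and dropout; it should not be read as the paper's actual pipeline.

```python
# Generic anti-overfitting measures for a FER CNN (assumptions, not the
# paper's specific pre-processing): on-the-fly augmentation plus dropout.
from tensorflow.keras import layers, models

augment = models.Sequential([
    layers.RandomFlip("horizontal"),   # mirrored faces keep their expression label
    layers.RandomRotation(0.05),       # small in-plane rotations
    layers.RandomZoom(0.1),
])

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    augment,                           # active only during training
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),               # regularizes the dense head
    layers.Dense(7, activation="softmax"),  # seven emotional states
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```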
A CNN based facial expression recognizer
Materials Today: Proceedings, 2021
Facial expression recognition (FER) has gained traction among many researchers in the field of artificial intelligence. The existing models available for facial expression recognition were developed with the help of traditional machine learning models, but the accuracy and efficiency achieved by these models are still the subject of extensive research. The proposed research work uses Convolutional Neural Network (CNN) deep learning models with sufficient computational power to run the algorithms. This model is able to achieve good accuracy even on new datasets. Our experimental results achieved an accuracy of 57% on a five-class classification task.
International Journal of Electrical and Computer Engineering (IJECE), 2024
Facial expression recognition has gathered substantial attention in computer vision applications, with the need for robust and accurate models that can decipher human emotions from facial images. This work presents a performance analysis of a novel hybrid model that combines the strengths of the residual network (ResNet) and dense network (DenseNet) architectures, after applying preprocessing, for facial expression recognition. The proposed hybrid model capitalizes on the complementary characteristics of ResNet's residual blocks and DenseNet's densely connected blocks to enhance its capacity to extract discriminative features from facial images. The research evaluates the hybrid model's performance and conducts a comprehensive benchmark against established facial expression recognition convolutional neural network (CNN) models. The analysis covers key aspects of model performance, including classification accuracy and adaptability, on the labeled faces in the wild (LFW) dataset for facial expressions such as anger, fear, happiness, disgust, sadness, and surprise, along with neutral. The research observes that the proposed hybrid model is consistently more accurate and computationally more efficient than existing models. This performance analysis offers insight into the hybrid model's potential to further facial expression recognition research.
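One plausible way to fuse ResNet and DenseNet features for seven-class FER is sketched below in Keras; the paper's actual hybrid topology, preprocessing, and training setup may differ, and the 224x224 input size and fusion head are assumptions.

```python
# Assumed fusion scheme: run ResNet50 and DenseNet121 side by side on the same
# input, pool each feature map, concatenate, and classify into seven
# expressions. Input size, head, and weights=None are illustrative choices.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50, DenseNet121

inputs = layers.Input(shape=(224, 224, 3))
resnet = ResNet50(include_top=False, weights=None,
                  input_shape=(224, 224, 3), pooling="avg")
densenet = DenseNet121(include_top=False, weights=None,
                       input_shape=(224, 224, 3), pooling="avg")

fused = layers.Concatenate()([resnet(inputs), densenet(inputs)])  # feature fusion
fused = layers.Dense(256, activation="relu")(fused)
outputs = layers.Dense(7, activation="softmax")(fused)

hybrid = models.Model(inputs, outputs)
hybrid.compile(optimizer="adam", loss="categorical_crossentropy",
               metrics=["accuracy"])
```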
Facial Expression Recognition Based on Deep Learning Convolution Neural Network: A Review
Journal of Soft Computing and Data Mining, 2021
Facial emotion processing is one of the most important activities in affective computing, interaction between people and computers, machine vision, video game testing, and consumer research. Facial expressions are a form of nonverbal communication, as they reveal a person's inner feelings and emotions. Facial Expression Recognition (FER) has recently received extensive attention, as facial expressions are considered the fastest communication medium for any kind of information. Facial expression recognition gives a better understanding of a person's thoughts or views, and analyzing them with the currently trending deep learning methods raises the accuracy rate sharply compared to traditional state-of-the-art systems. This article provides a brief overview of the different FER fields of application and the publicly accessible databases used in FER, and surveys the latest and current reviews of FER using Convolutional Neural Network (CNN) algorithms. Finally, it is observed that everyo...
Facial emotion recognition using deep learning detector and classifier
International Journal of Electrical and Computer Engineering (IJECE), 2023
Numerous research works have been put forward over the years to advance the field of facial expression recognition, which to this day is still considered a challenging task. The selection of image color space and the use of facial alignment as preprocessing steps may collectively have a significant impact on the accuracy and computational cost of facial emotion recognition, which is crucial for optimizing the speed-accuracy trade-off. This paper proposes a deep learning-based facial emotion recognition pipeline that can be used to predict the emotion of detected face regions in video sequences. Five well-known state-of-the-art convolutional neural network architectures are used to train the emotion classifier and to identify the network architecture that gives the best speed-accuracy trade-off. Two distinct facial emotion training datasets are prepared to investigate the effect of image color space and facial alignment on the performance of facial emotion recognition. Experimental results show that training a facial expression recognition model with grayscale, aligned facial images is preferable, as it offers better recognition rates with lower detection latency. The lightweight MobileNet_v1 is identified as the best-performing model with width multiplier WM=0.75 and input resolution RM=160 as its hyperparameters, achieving an overall accuracy of 86.42% on the testing video dataset.
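The reported best configuration (grayscale, aligned faces fed to MobileNet_v1 with WM=0.75 and RM=160) can be approximated as follows; face detection and alignment are omitted, and the seven-class head is an assumption since the abstract does not list the emotion classes.

```python
# Approximation of the reported best setup: grayscale faces at 160x160 fed to
# MobileNet_v1 with width multiplier 0.75. Face detection/alignment is omitted
# here, and the seven-class head is an assumption.
import cv2
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNet

def preprocess_face(image_bgr, size=160):
    """Grayscale, resize, and replicate to 3 channels for the MobileNet stem."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (size, size)).astype(np.float32) / 255.0
    return np.repeat(gray[..., None], 3, axis=-1)          # (160, 160, 3)

backbone = MobileNet(alpha=0.75,                # WM = 0.75 (width multiplier)
                     input_shape=(160, 160, 3), # RM = 160 (input resolution)
                     include_top=False, weights=None, pooling="avg")

emotion_model = models.Sequential([
    backbone,
    layers.Dropout(0.3),
    layers.Dense(7, activation="softmax"),
])
emotion_model.compile(optimizer="adam", loss="categorical_crossentropy",
                      metrics=["accuracy"])
```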