Improving Facial Emotion Recognition Systems Using Gradient and Laplacian Images
Related papers
Improving Facial Emotion Recognition Systems with Crucial Feature Extractors
2019
In this work, we propose enhancements that improve the performance of state-of-the-art facial emotion recognition (FER) systems. We believe that changes in the positions of the fiducial points and in the pixel intensities capture the crucial information about the emotion in a face image. We therefore propose feeding the gradient and the Laplacian of the input image, together with the original, into a convolutional neural network (CNN). These additional inputs help the network learn information that, as our results show, the CNN in existing state-of-the-art models is unable to extract from the raw images alone. In addition, we employ a spatial transformer network to make the system robust against rotation and scaling. We have performed a number of experiments on two well-known datasets, namely KDEF and FERplus. Our approach enhances the already high performance of state-of-the-art FER systems by 3...
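A minimal sketch of the gradient-and-Laplacian input construction described above, assuming an OpenCV-based preprocessing step; the function name, kernel sizes, and channel ordering are illustrative rather than taken from the paper, and the spatial transformer component is not shown:

```python
# Stack the original face image with its gradient magnitude and Laplacian
# so that a CNN receives three input channels instead of one.
import cv2
import numpy as np

def gradient_laplacian_stack(gray_face):
    """gray_face: HxW uint8 grayscale face crop -> HxWx3 float32 tensor."""
    img = gray_face.astype(np.float32) / 255.0
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)   # horizontal first derivative
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)   # vertical first derivative
    grad = np.sqrt(gx ** 2 + gy ** 2)                # gradient magnitude
    lap = cv2.Laplacian(img, cv2.CV_32F, ksize=3)    # second-order derivative
    return np.stack([img, grad, lap], axis=-1)       # channels-last input for the CNN
```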
IRJET- Survey on Facial Emotion Recognition with Convolutional Neural Network
IRJET, 2021
Facial expression recognition (FER) is a rapidly developing branch of the artificial neural network domain. In this paper, we present the classification of FER from static images using CNNs, without requiring any pre-processing or feature-extraction tasks. The paper also describes techniques to further improve accuracy in this area through pre-processing, including face detection and illumination correction. Datasets such as JAFFE and FER2013 were used for performance analysis. Pre-processing strategies such as facial landmarks and HOG features were incorporated into a convolutional neural network, which achieved good accuracy compared with the existing model.
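A brief sketch of the HOG pre-processing step mentioned above, assuming scikit-image; the descriptor parameters and the 48x48 face crop are illustrative assumptions, not the paper's settings:

```python
# Compute a HOG descriptor for a face crop; the resulting vector can be fed
# to a classifier or combined with CNN features as a pre-processing step.
from skimage.feature import hog

def hog_descriptor(gray_face_48x48):
    """gray_face_48x48: 48x48 grayscale face crop -> 1-D HOG feature vector."""
    return hog(
        gray_face_48x48,
        orientations=9,            # number of gradient-orientation bins (assumed)
        pixels_per_cell=(8, 8),    # cell size (assumed)
        cells_per_block=(2, 2),    # block size for normalization (assumed)
        block_norm="L2-Hys",
    )
```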
Facial Emotion Recognition using Convolutional Neural Networks
International Journal for Research in Applied Science & Engineering Technology (IJRASET), 2022
Humans use facial expressions to communicate their emotions, which makes them a powerful tool in communication. Facial expression identification is one of the most difficult and important challenges in social communication, as facial expressions are crucial in nonverbal communication. Facial expression recognition (FER) is an important research topic in artificial intelligence, with numerous recent studies employing convolutional neural networks (CNNs). The emotions expressed in a face image have a significant impact on judgments and discussions on a variety of topics. Surprise, fear, disgust, anger, happiness, and sorrow are the six basic categories into which a person's emotional states can be classified according to psychological theory. The automated identification of these emotions from facial photos can be useful in human-computer interaction and a variety of other settings. Deep neural networks, in particular, are capable of learning complicated characteristics and classifying the derived patterns. A deep learning-based framework for human emotion recognition is offered in this system. The proposed framework extracts features with Gabor filters before classifying them with a convolutional neural network (CNN). According to the experimental results, the suggested technique improves both the speed of CNN training and the recognition accuracy.
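A minimal sketch of Gabor filtering as the feature-extraction front end described above, assuming OpenCV; the filter-bank size and kernel parameters are illustrative assumptions rather than the paper's configuration:

```python
# Filter a face crop with a small bank of Gabor kernels at different
# orientations and stack the responses as channels for a CNN classifier.
import cv2
import numpy as np

def gabor_responses(gray_face, n_orientations=4):
    """gray_face: HxW uint8 crop -> HxWxn_orientations float32 tensor."""
    img = gray_face.astype(np.float32) / 255.0
    responses = []
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations          # orientation of the kernel
        kernel = cv2.getGaborKernel(
            ksize=(9, 9), sigma=2.0, theta=theta,   # kernel size and bandwidth (assumed)
            lambd=4.0, gamma=0.5, psi=0.0,          # wavelength, aspect ratio, phase (assumed)
            ktype=cv2.CV_32F,
        )
        responses.append(cv2.filter2D(img, cv2.CV_32F, kernel))
    return np.stack(responses, axis=-1)
```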
Facial Emotion Detection: A Comprehensive Exploration of Convolutional Neural Networks
Digital Innovations & Contemporary Research in Science, Engineering & Technology, 2023
Facial emotions play a crucial role in non-verbal communication, as they reflect the internal feelings of individuals through expressions on their faces. Recognizing and interpreting these facial expressions has significant applications in various fields, especially human-computer interaction. In this work, a facial emotion detection system based on convolutional neural networks (CNNs) was developed. The primary objective was to classify facial images into different emotional categories. The CNN models were trained on grayscale images, and the training process was accelerated by leveraging GPU computation. To accommodate new subjects efficiently, only the last two layers of the CNN were retrained, reducing the overall training time. Image preprocessing steps were implemented in MATLAB, while the CNN algorithm was implemented in C using the GCC compiler. A user-friendly graphical user interface (GUI) in MATLAB seamlessly integrated all the processing steps, from image preprocessing to facial emotion detection. The performance evaluation was conducted on the FER2013 dataset, achieving an accuracy of 78.2% with an average training time of less than 10 minutes when incorporating 1 to 10 new subjects into the system. This work demonstrates the effectiveness of CNN-based approaches for accurate and efficient facial emotion detection, offering promising results in real-world applications.
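A short sketch of the "retrain only the last two layers" idea described above, written with tf.keras purely for illustration (the paper's implementation uses MATLAB and C); the optimizer, learning rate, and loss are assumptions:

```python
# Freeze all but the final two layers of a trained emotion CNN so that only
# those layers are updated when new subjects are added to the system.
import tensorflow as tf

def freeze_all_but_last_two(model: tf.keras.Model) -> tf.keras.Model:
    """Freeze every layer except the final two, then recompile the model."""
    for layer in model.layers[:-2]:
        layer.trainable = False
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # assumed optimizer
        loss="sparse_categorical_crossentropy",                  # assumed loss
        metrics=["accuracy"],
    )
    return model
```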
Effective Facial Emotion Recognition using Convolutional Neural Network Algorithm
International Journal of Recent Technology and Engineering (IJRTE), 2019
This paper presents an approach to automated live facial emotion recognition using image processing and artificial intelligence (AI) techniques. Recognizing emotions as accurately as humans do remains a challenging task for computer vision. Face detection plays a vital role in emotion recognition. Emotions are classified as happy, sad, disgust, angry, neutral, fear, and surprise. Other cues such as speech, eye contact, voice frequency, and heartbeat are also considered. Nowadays, face recognition is efficient and widely used in real-time applications, particularly for security purposes. We detect emotion either by scanning static images or from dynamic recordings. Feature extraction can target facial regions such as the eyes, nose, and mouth for face detection. The convolutional neural network (CNN) algorithm follows steps such as max-pooling (maximum feature extraction) and flattening.
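A minimal sketch of the convolution, max-pooling, and flattening steps named above, written with tf.keras for illustration; the layer sizes, 48x48 grayscale input, and seven-class output are assumptions, not the paper's architecture:

```python
# A small CNN showing the pipeline: convolution -> max-pooling -> flatten -> dense.
import tensorflow as tf

emotion_cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(48, 48, 1)),           # grayscale face crop (assumed size)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),   # feature extraction
    tf.keras.layers.MaxPooling2D(2),                     # keep the strongest activations
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Flatten(),                           # flatten feature maps into a vector
    tf.keras.layers.Dense(7, activation="softmax"),      # seven emotion classes (assumed)
])
```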
Facial emotion recognition using deep learning detector and classifier
International Journal of Electrical and Computer Engineering (IJECE), 2023
Numerous research works have been put forward over the years to advance the field of facial expression recognition, which to this day remains a challenging task. The selection of image color space and the use of facial alignment as preprocessing steps may collectively have a significant impact on the accuracy and computational cost of facial emotion recognition, which is crucial for optimizing the speed-accuracy trade-off. This paper proposes a deep learning-based facial emotion recognition pipeline that can be used to predict the emotion of detected face regions in video sequences. Five well-known state-of-the-art convolutional neural network architectures are used to train the emotion classifier in order to identify the architecture that gives the best speed-accuracy trade-off. Two distinct facial emotion training datasets are prepared to investigate the effect of image color space and facial alignment on the performance of facial emotion recognition. Experimental results show that training a facial expression recognition model with grayscale, aligned facial images is preferable, as it offers better recognition rates with lower detection latency. The lightweight MobileNet_v1 is identified as the best-performing model, with a width multiplier of WM = 0.75 and an input resolution of RM = 160 as its hyperparameters, achieving an overall accuracy of 86.42% on the test video dataset.
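A short sketch of a MobileNet_v1 emotion classifier configured with the reported hyperparameters (width multiplier 0.75, 160x160 input), using tf.keras.applications.MobileNet for illustration; the classification head, pooling choice, and seven-class output are assumptions:

```python
# Build a MobileNet_v1 backbone with alpha=0.75 and 160x160 inputs, then add a
# small emotion-classification head on top of the pooled features.
import tensorflow as tf

def build_emotion_classifier(num_emotions=7):
    base = tf.keras.applications.MobileNet(
        input_shape=(160, 160, 3),   # RM = 160; grayscale crops can be replicated to 3 channels
        alpha=0.75,                  # WM = 0.75 width multiplier
        include_top=False,
        weights=None,                # train from scratch on the emotion data (assumed)
        pooling="avg",               # global average pooling over the feature maps
    )
    outputs = tf.keras.layers.Dense(num_emotions, activation="softmax")(base.output)
    return tf.keras.Model(base.input, outputs)
```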