Recognition of facial expressions of fetuses by artificial intelligence (AI)
Related papers
Recognition of Fetal Facial Expressions Using Artificial Intelligence Deep Learning
Donald School Journal of Ultrasound in Obstetrics and Gynecology, 2021
Fetal facial expressions are useful parameters for assessing brain function and development in the latter half of pregnancy. Previous investigations have relied on subjective assessment of fetal facial expressions using four-dimensional ultrasound. Artificial intelligence (AI) can enable objective assessment of fetal facial expressions. AI recognition of fetal facial expressions may open the door to a new scientific field, an "AI science of the fetal brain", and fetal neurobehavioral science using AI is at the dawn of a new era. Our knowledge of fetal neurobehavior and neurodevelopment will be advanced through AI recognition of fetal facial expressions. AI may become an important modality in current and future research on fetal facial expressions and may assist in the evaluation of fetal brain function.
Human Emotion Recognition using Deep Learning with Special Emphasis on Infant’s Face
IJEER , 2022
This paper discusses a deep-learning-based image-processing method for recognizing human emotion from facial expressions, with special emphasis on the faces of infants between one and five years of age. The work matters because it is often necessary to infer a child's needs from facial expression and behavior. The task remains a challenge in human facial emotion recognition because some samples contain ambiguous expressions. We classify each facial expression into one of the most widely understood human moods: Angry, Disgust, Fear, Happy, Sad, Surprise, and Neutral. For this purpose, we trained a convolutional neural network image classifier on Kaggle's FER2013 dataset. On completion of the project, we achieved good accuracy on most of the prominent emotions when testing with 20 random images per emotion.
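As a rough, self-contained sketch of the building blocks such a classifier stacks (not the paper's actual network, whose architecture is not given here), the NumPy code below runs one convolution, ReLU, max-pooling, and softmax step over a random 48x48 image, the FER2013 input size; the kernel and weights are random placeholders, not trained parameters:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not divide evenly."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

rng = np.random.default_rng(0)
img = rng.random((48, 48))               # FER2013 images are 48x48 grayscale
kernel = rng.standard_normal((3, 3))     # placeholder, untrained filter
feat = max_pool(relu(conv2d(img, kernel)))        # 46x46 -> 23x23
W = rng.standard_normal((len(EMOTIONS), feat.size)) * 0.01  # placeholder head
probs = softmax(W @ feat.ravel())        # one probability per emotion class
pred = EMOTIONS[int(np.argmax(probs))]
```

A real FER2013 model stacks several such conv/pool layers and learns the kernels and the classification head by backpropagation; this sketch only shows the forward data flow.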
Using deep‐learning algorithms to classify fetal brain ultrasound images as normal or abnormal
Ultrasound in Obstetrics & Gynecology, 2020
What are the clinical implications of this work? These algorithms could help to reduce false-negative diagnoses. They are expected to help alleviate the shortage of sonologists for basic prenatal ultrasound in China and worldwide. This study lays a foundation for further multiclassification research on the diagnosis of fetal intracranial abnormalities and differential diagnosis using deep learning.
To model the dynamics of social interaction, it is necessary both to detect specific Action Units (AUs) and to measure variation in their intensity and coordination over time. An automated method that performs well when detecting occurrence may or may not perform well for intensity measurement. We compared two dimensionality reduction approaches, Principal Components Analysis with Large Margin Nearest Neighbor (PCA+LMNN) and Laplacian Eigenmap, and two classifiers, SVM and K-Nearest Neighbor. Twelve infants were video-recorded during face-to-face interactions with their mothers. AUs related to positive and negative affect were manually coded from the video by certified FACS coders. Facial features were tracked using Active Appearance Models (AAM) and registered to a canonical view before extracting Histogram of Oriented Gradients (HOG) features. All possible combinations of dimensionality reduction approaches and classifiers were tested using leave-one-subject-out cross-validation. For detecting consistency (i.e. reliability as measured by ICC), PCA+LMNN and SVM classifiers gave the best results.
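A minimal sketch of one such pipeline, PCA for dimensionality reduction followed by a K-Nearest Neighbor classifier, using toy Gaussian clusters in place of the real HOG features (the paper's LMNN metric-learning step and leave-one-subject-out protocol are omitted):

```python
import numpy as np

def pca_fit(X, n_components):
    """PCA via SVD on mean-centred data; returns (mean, principal axes)."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_transform(X, mean, components):
    """Project data onto the retained principal axes."""
    return (X - mean) @ components.T

def knn_predict(X_train, y_train, X_test, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# Toy stand-in for HOG features: two well-separated clusters
# (e.g. "AU present" vs "AU absent" frames).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(5, 1, (20, 50))])
y = np.array([0] * 20 + [1] * 20)

mean, comps = pca_fit(X, 5)              # 50-D features -> 5-D subspace
Z = pca_transform(X, mean, comps)
preds = knn_predict(Z, y, Z, k=3)        # resubstitution, for illustration only
acc = (preds == y).mean()
```

In the actual study the classifier would be evaluated on held-out subjects rather than on its own training data, and LMNN would reshape the PCA space before the nearest-neighbor step.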
Computer-aided diagnosis for fetal brain ultrasound images using deep convolutional neural networks
International Journal of Computer Assisted Radiology and Surgery, 2020
Purpose Fetal brain abnormalities are some of the most common congenital malformations; they may be associated with syndromic and chromosomal malformations and can lead to neurodevelopmental delay and mental retardation. Early prenatal detection of brain abnormalities is essential for informing clinical management pathways and counselling parents. The purpose of this research is to develop computer-aided diagnosis algorithms for five common fetal brain abnormalities, which may assist doctors in detecting brain abnormalities during antenatal neurosonographic assessment. Methods We applied a classifier to label images of fetal brain standard planes (transventricular and transcerebellar) as normal or abnormal. The classifier was trained on image-level labeled images. In the first step, craniocerebral regions were segmented from the ultrasound images. These segmentations were then classified into four categories. Last, the lesions in the abnormal images were localized by class activation mapping. Results We evaluated our algorithms on real-world clinical datasets of fetal brain ultrasound images. The proposed method achieved a Dice score of 0.942 on craniocerebral region segmentation, an average F1-score of 0.96 on classification, and an average mean IoU of 0.497 on lesion localization. Conclusion We present computer-aided diagnosis algorithms for fetal brain ultrasound images based on deep convolutional neural networks. Our algorithms could potentially be applied in diagnosis assistance and are expected to help junior doctors make clinical decisions and reduce false negatives for fetal brain abnormalities.
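The reported segmentation and localization metrics are straightforward to compute from binary masks; a small NumPy sketch with illustrative 10x10 masks (not the paper's data):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou_score(pred, target, eps=1e-7):
    """Intersection over union: |A∩B| / |A∪B| for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Two overlapping 6x6 squares: 36 px each, 4x4 = 16 px overlap.
pred = np.zeros((10, 10), dtype=int); pred[2:8, 2:8] = 1
gt = np.zeros((10, 10), dtype=int); gt[4:10, 4:10] = 1

d = dice_score(pred, gt)   # 2*16 / (36 + 36) = 4/9 ≈ 0.444
j = iou_score(pred, gt)    # 16 / (36 + 36 - 16) = 2/7 ≈ 0.286
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why a 0.942 Dice on segmentation and a 0.497 mean IoU on localization describe tasks of very different difficulty.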
Donald School Journal of Ultrasound in Obstetrics & Gynecology, 2014
This paper reviews findings in fetal development research using two-dimensional and four-dimensional ultrasound imaging and how these techniques have been applied to increase understanding of the fetus. The limitations created by differences in the language and methods used to code and score images across research groups are also explored, reaching the conclusion that a reliable coding scheme for fetal facial movements is essential. Furthermore, applications of the new technology to studies of bonding between parent and fetus, to cross-cultural research on fetal facial development, and to medical practice are discussed.
In this paper, we describe the architecture of a neural-network-based facial emotion recognition system able to recognize what an infant is feeling, or its spontaneous reaction, at any particular moment. From birth, both males and females develop their facial expressions as well as their ability to process the facial cues of those they communicate with. The more an infant is exposed to different faces and expressions, the better able it is to recognize these emotions and then mimic them. Infants are exposed to an array of emotional expressions, and evidence indicates that they initiate some facial expressions and gestures as early as the first few days of life. Our goal is to categorize the facial expression in a given image into one of six basic emotion states: happy, sad, anger, fear, disgust, and surprise.
Machine Learning for Developmental Screening based on Facial Expressions and Health Parameters
International Journal of Computing and Digital Systems
The screening tools used by developmental therapists to diagnose sensory processing disorders are primarily manual and based on a static questionnaire. These screening sessions are prohibitively expensive for most parents, and there are no clear scoring criteria; trained and experienced doctors are needed to examine and treat these children. Early detection and treatment are critically needed, yet researchers have struggled to model a child's sensory processing pattern. Modern technology makes it feasible to automatically record characteristics related to sensory processing, behavioural factors, and reactions to specific stimuli. A novel dataset is created from smartwatch sensor data, facial expressions recorded in response to stimuli, and a simplified questionnaire; real-time stress-related health metrics are gathered in response to stimuli. Using this dataset, multiple machine learning models are trained, tested, and validated for the diagnosis of visual sensory processing disorders. With these classification models, behavioural therapists will be able to detect visual sensory processing abnormalities and monitor the efficacy of treatment with less time, reduced effort, and fewer screening sessions. The performance of the machine learning methods is evaluated using F1-score and standard deviation, and the experiments on the live dataset yielded promising results: the proposed system outperforms the manual method of sensory processing disorder diagnosis, obtaining a maximum F1-score of 1 and a minimum standard deviation of 0 for the decision tree and random forest classifiers.
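For reference, the F1-score used for evaluation is the harmonic mean of precision and recall; a plain-Python sketch with hypothetical confusion counts (the specific numbers are illustrative, not from the paper):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# A perfect classifier (as reported for the tree-based models): no FP, no FN.
perfect = f1_score(tp=30, fp=0, fn=0)            # 1.0
# An imperfect one for contrast: precision = recall = 25/30.
imperfect = round(f1_score(tp=25, fp=5, fn=5), 3)  # 0.833
```

An F1-score of exactly 1 therefore means zero false positives and zero false negatives on the test set, and a standard deviation of 0 means that result was identical across evaluation runs.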
A Review on Deep-Learning Algorithms for Fetal Ultrasound-Image Analysis
2022
Deep-learning (DL) algorithms are becoming the standard for processing ultrasound (US) fetal images. Despite the large number of survey papers already present in this field, most of them focus on the broader area of medical-image analysis or do not cover all fetal US DL applications. This paper surveys the most recent work in the field, with a total of 145 research papers published after 2017. Each paper is analyzed and commented on from both the methodology and the application perspective. We categorized the papers into (i) fetal standard-plane detection, (ii) anatomical-structure analysis, and (iii) biometry parameter estimation. For each category, the main limitations and open issues are presented. Summary tables are included to facilitate comparison among the different approaches. Publicly available datasets and the performance metrics commonly used to assess algorithm performance are also summarized. The paper ends with a critical summary of the current state of the art on DL algorithms.