Hybrid model for speech emotion recognition of normal and autistic children (SERNAC)

AutisMitr: Emotion Recognition Assistive Tool for Autistic Children

Open Computer Science

Assistive technology has proven to be one of the most significant aids for improving the quality of life of people with autism. In this study, a real-time emotion recognition system for autistic children has been developed. Emotion recognition is implemented in three stages: face identification, facial feature extraction, and feature classification. The objective is to build a system that covers all three stages and executes expeditiously in real time; the Affectiva SDK is therefore used in the application. The proposed system detects up to seven facial emotions: anger, disgust, fear, joy, sadness, contempt, and surprise. The purpose of this study is to teach emotions to individuals with autism, as they often lack the ability to respond appropriately to others' emotions. The proposed application was tested with a group of typical children aged 6–14 years, and positive outcomes were achieved.
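
A minimal sketch of the three-stage pipeline described above (face identification, facial feature extraction, feature classification). The paper uses the Affectiva SDK; here OpenCV's Haar cascade face detector and a generic scikit-learn-style classifier stand in as illustrative placeholders, not the authors' implementation.

```python
# Sketch of the three-stage emotion recognition pipeline.
# Assumptions: OpenCV face detection and a generic classifier trained on
# integer labels 0-6 replace the Affectiva SDK used in the paper.
import cv2
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "contempt", "surprise"]

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def extract_features(face_img, size=(48, 48)):
    """Stage 2: a crude stand-in for facial feature extraction
    (resized, normalised grey-level pixels)."""
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY)
    return cv2.resize(gray, size).astype(np.float32).ravel() / 255.0

def recognise(frame, classifier):
    """Stage 1 + 3: detect faces in a frame, then classify each one."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    results = []
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        feats = extract_features(frame[y:y + h, x:x + w])
        label = int(classifier.predict(feats.reshape(1, -1))[0])
        results.append(((x, y, w, h), EMOTIONS[label]))
    return results
```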

Assessment on Speech Emotion Recognition for Autism Spectrum Disorder children using Support Vector Machine

World Applied Sciences Journal, 2016

Making human–machine interaction feel natural is a formidable mission: the machine should be able to identify and respond to human non-verbal communication, such as emotions, through signal processing. This work compares emotion recognition for children with Autism Spectrum Disorder against speech in other languages, namely Tamil, Telugu, and German. It extends earlier work on recognising and classifying emotion from the Tamil speech of children with Autism Spectrum Disorder by adding a Telugu database. Discrete Wavelet Transform and Mel-Frequency Cepstral Coefficients are used as the feature extraction methods. A Support Vector Machine, a common classifier in the speech emotion recognition discipline, is used to classify several databases: the Berlin database, a speaker-dependent Tamil emotion database of children with Autism Spectrum Disorder, and Tamil and Telugu databases built from movie data. The experimentation shows that the results of Berlin-DB, Telugu-DB, Tamil...
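
A minimal sketch of the feature pipeline named above: MFCC and DWT features extracted from a speech file and classified with an SVM, using librosa, PyWavelets, and scikit-learn. The wavelet choice, number of MFCCs, and SVM kernel are illustrative assumptions, not parameters reported by the paper.

```python
# Sketch of MFCC + DWT feature extraction with SVM classification.
# Assumptions: 'db4' wavelet, 13 MFCCs, RBF kernel.
import numpy as np
import librosa
import pywt
from sklearn.svm import SVC

def mfcc_dwt_features(path, n_mfcc=13, wavelet="db4", level=4):
    y, sr = librosa.load(path, sr=16000)
    # MFCCs: mean over time gives a fixed-length descriptor.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    # DWT: summarise each sub-band by its average energy.
    coeffs = pywt.wavedec(y, wavelet, level=level)
    dwt_energy = np.array([np.sum(c ** 2) / len(c) for c in coeffs])
    return np.concatenate([mfcc, dwt_energy])

# Usage sketch: X_train holds such feature vectors, y_train emotion labels.
# clf = SVC(kernel="rbf").fit(X_train, y_train)
# prediction = clf.predict([mfcc_dwt_features("utterance.wav")])
```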

Emotion recognition system for autism disordered people

Journal of Ambient Intelligence and Humanized Computing, 2019

People with autism spectrum disorders have difficulty communicating and interacting socially through facial expressions, even with their parents. The proposed approach combines person identification and emotion recognition. The objective of this work is to monitor and identify people with autism spectrum disorder based on sensors and a machine learning algorithm. The proposed system uses a neurological sensor to collect patients' EEG data and a Q sensor to measure stress level, and integrates facial recognition to identify emotions. The experimental results of the performance evaluation are discussed per autism patient and per emotion label, and show that the proposed model detects emotion with good accuracy compared to other classifiers: it achieves 6% higher accuracy than a Support Vector Machine and 8% higher accuracy than a backpropagation algorithm.
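
A minimal sketch of the comparison described above: fused sensor features (simple EEG summary statistics plus a stress reading) evaluated with the two baselines the abstract names, an SVM and a backpropagation (MLP) network, via scikit-learn. The feature definitions, window handling, and model settings are illustrative assumptions.

```python
# Sketch of sensor feature fusion and baseline classifier comparison.
# Assumptions: EEG summarised by four statistics per window; models and
# hyperparameters are placeholders, not the paper's configuration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

def fuse_features(eeg_window, stress_level):
    """Concatenate simple EEG statistics with the Q-sensor stress value."""
    eeg_window = np.asarray(eeg_window, dtype=float)
    stats = [eeg_window.mean(), eeg_window.std(), eeg_window.min(), eeg_window.max()]
    return np.array(stats + [float(stress_level)])

def compare_baselines(X, y):
    """Cross-validated accuracy for the SVM and backpropagation baselines."""
    models = {
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        "Backprop MLP": make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```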

Emotion in the Speech of Children with Autism Spectrum Conditions: Prosody and Everything Else

3rd Workshop on Child, Computer and Interaction (WOCCI 2012), Satellite Event of INTERSPEECH, 2012

Children with Autism Spectrum Conditions (ASC) may experience significant difficulties recognising and expressing emotions. The ASC-Inclusion project is setting up an internet-based digital gaming experience that will assist children with ASC to improve their socio-emotional communication skills, combining voice, face, and body gesture analysis, and giving corrective feedback regarding the appropriateness of the child's expressions. The present contribution focuses on the recognition of emotion in speech and on feature analysis. For this purpose, a database of prompted phrases was collected in Hebrew, inducing nine emotions embedded in short stories. It contains speech of children with ASC and typically developing children under the same conditions. We evaluate the emotion task over the nine categories as well as binary valence/arousal discrimination, and further investigate the discrimination of each emotion against neutral. The results show performances of up to 86.5% for arousal and valence, and up to 42% unweighted average recall for the nine emotions including neutral. Moreover, we compare and analyse manually selected prosodic features with automatically selected features with respect to their relevance for discriminating each of the eight emotion classes.
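
A minimal sketch of the unweighted average recall (UAR) metric reported above: the mean of per-class recalls, which scikit-learn exposes as macro-averaged recall. The labels in the example are made up for illustration.

```python
# Sketch of the UAR metric used in the abstract's evaluation.
from sklearn.metrics import recall_score

def uar(y_true, y_pred):
    """Unweighted average recall = mean of per-class recalls."""
    return recall_score(y_true, y_pred, average="macro")

# Example with made-up labels for a 3-class subset:
y_true = ["joy", "anger", "neutral", "joy", "anger", "neutral"]
y_pred = ["joy", "neutral", "neutral", "joy", "anger", "anger"]
print(f"UAR = {uar(y_true, y_pred):.3f}")  # mean of the three per-class recalls
```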

Ensemble of machine learning and acoustic segment model techniques for speech emotion and autism spectrum disorders recognition

Interspeech 2013, 2013

This study investigates the classification performance for emotion and autism spectrum disorders from speech utterances using ensemble classification techniques. We first explore the performance of three well-known machine learning techniques, namely support vector machines (SVM), deep neural networks (DNN), and k-nearest neighbours (KNN), with acoustic features extracted by the openSMILE feature extractor. In addition, we propose an acoustic segment model (ASM) technique, which incorporates the temporal information of speech signals to perform classification. A set of ASMs is automatically learned for each category of emotion and autism spectrum disorders; the ASM sets then decode an input utterance into a series of acoustic patterns, with which the system determines the category of that utterance. Our ensemble system is a combination of the machine learning and ASM techniques. The evaluations are conducted using the data sets provided by the organizer of the INTERSPEECH 2013 Computational Paralinguistics Challenge.
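
A minimal sketch of the ensemble idea described above: SVM, a small neural network, and KNN trained on utterance-level acoustic features, with their outputs fused. The acoustic segment model (ASM) component is omitted, and the soft-voting fusion rule is an illustrative assumption rather than the paper's combination scheme.

```python
# Sketch of an SVM / DNN / KNN ensemble over acoustic feature vectors.
# Assumptions: openSMILE-style functional features per utterance; soft
# voting as the fusion rule; the ASM branch is not reproduced here.
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_ensemble():
    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    dnn = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=2000))
    knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
    # Soft voting averages the classifiers' class probabilities.
    return VotingClassifier([("svm", svm), ("dnn", dnn), ("knn", knn)], voting="soft")

# Usage sketch: X holds per-utterance feature vectors, y the emotion
# (or autism spectrum disorder) labels.
# ensemble = build_ensemble().fit(X_train, y_train)
# predictions = ensemble.predict(X_test)
```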

Automatic Emotion Recognition in Children with Autism: A Systematic Literature Review

Sensors, 2022

The automatic emotion recognition domain brings new methods and technologies that might be used to enhance therapy for children with autism. The paper explores the methods and tools used to recognize emotions in children. It presents a literature review performed using a systematic approach and the PRISMA methodology for reporting quantitative and qualitative results. Diverse observation channels and modalities are used in the analyzed studies, including facial expressions, prosody of speech, and physiological signals. Regarding representation models, the basic emotions are the most frequently recognized, especially happiness, fear, and sadness. Both single-channel and multichannel approaches are applied, with a preference for the former. For multimodal recognition, early fusion was applied most frequently. SVMs and neural networks were the most popular choices for building classifiers. Qualitative analysis revealed important clues on participant group construc...
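
Since the review finds early fusion to be the most common multimodal strategy, a minimal sketch contrasting early (feature-level) fusion with a late (decision-level) alternative may help; the modality names and classifier choice are illustrative assumptions.

```python
# Sketch contrasting early (feature-level) and late (decision-level)
# multimodal fusion. Modalities shown (face, speech, physiological) are
# examples drawn from the review's channels, not a specific study's setup.
import numpy as np
from sklearn.svm import SVC

def early_fusion(face_feats, speech_feats, physio_feats):
    """Concatenate per-sample features from each modality into one vector."""
    return np.hstack([face_feats, speech_feats, physio_feats])

def late_fusion(probas_per_modality):
    """Average the class-probability outputs of per-modality classifiers."""
    return np.mean(np.stack(probas_per_modality, axis=0), axis=0)

# Early fusion: one classifier on the concatenated features.
# clf = SVC(probability=True).fit(early_fusion(Xf, Xs, Xp), y)
# Late fusion: average the predict_proba() outputs of modality-specific models.
```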

Automatic Classification of Autistic Child Vocalisations: A Novel Database and Results

Humanoid robots have in recent years shown great promise for supporting the educational needs of children on the autism spectrum. To further improve the efficacy of such interactions, user-adaptation strategies based on the individual needs of a child are required. In this regard, the proposed study assesses the suitability of a range of speech-based classification approaches for automatic detection of autism severity according to the commonly used Social Responsiveness Scale™ second edition (SRS-2). Autism is characterised by socialisation limitations, including limitations in a child's language and communication ability; when compared to neurotypical children of the same age, these can be a strong indication of severity. This study introduces a novel dataset of 803 utterances recorded from 14 autistic children aged between 4 and 10 years during Wizard-of-Oz interactions with a humanoid robot. Our results demonstrate the suitability of support vector machines (SVMs) which use acoustic feature sets from multiple Interspeech COMPARE challenges. We also evaluate deep spectrum features, extracted via an image classification convolutional neural network (CNN) from the spectrograms of autistic speech instances. At best, by using SVMs on the acoustic feature sets, we achieved a UAR of 73.7% for the proposed 3-class task.
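
A minimal sketch of the deep spectrum idea mentioned above: a log-mel spectrogram rendered as an image, passed through a pretrained image CNN, with the pooled activations used as a feature vector for an SVM. The backbone (ResNet-18), spectrogram settings, and omission of ImageNet normalisation are assumptions, not the paper's exact configuration.

```python
# Sketch of deep spectrum feature extraction: spectrogram -> image CNN
# -> pooled activations -> SVM features. Assumes librosa, torch, and a
# torchvision version that accepts a weights string for resnet18.
import librosa
import numpy as np
import torch
import torchvision

# Pretrained backbone with the classification head removed.
backbone = torchvision.models.resnet18(weights="IMAGENET1K_V1")
backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def deep_spectrum_features(path):
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel)
    # Scale to [0, 1] and replicate to 3 channels for the image CNN.
    img = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)
    x = torch.tensor(img, dtype=torch.float32)[None, None]
    x = torch.nn.functional.interpolate(x, size=(224, 224), mode="bilinear")
    x = x.repeat(1, 3, 1, 1)
    with torch.no_grad():
        feats = backbone(x).flatten(1)
    return feats.squeeze(0).numpy()

# Usage sketch: feed these vectors to an SVM, as in the paper's comparison.
# from sklearn.svm import SVC
# clf = SVC(kernel="linear").fit(X_train, y_train)
```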

Voice Emotion Games: Language and Emotion in the Voice of Children with Autism Spectrum Condition

Children with Autism Spectrum Conditions (ASC) may experience significant difficulties recognising and expressing emotions. The ASC-Inclusion project set up an internet-based digital gaming experience that assists children with ASC to improve their socio-emotional communication skills, combining voice, face, and body gesture analysis, and giving corrective feedback regarding the appropriateness of the child's expressions. The present contribution focuses on the recognition of emotion in speech. For this purpose, a database of prompted phrases was collected in English, Swedish, and Hebrew, inducing nine emotions embedded in short stories. It contains speech of children with ASC and typically developing children under the same conditions. We evaluate the emotion task over the nine categories by investigating the discrimination of each emotion against the remaining ones. The results show performances of up to 83.8% unweighted average recall.
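
A minimal sketch of the evaluation set-up described above: each emotion is discriminated against the remaining ones with a binary classifier and scored by unweighted average recall. The emotion names and the classifier are illustrative assumptions, not the project's actual label set or model.

```python
# Sketch of one-vs-rest emotion discrimination scored with UAR.
# Assumptions: placeholder emotion names; RBF-kernel SVM with balanced
# class weights; 5-fold cross-validation.
import numpy as np
from sklearn.metrics import recall_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

EMOTIONS = ["happy", "sad", "angry", "afraid", "surprised",
            "ashamed", "calm", "proud", "bored"]

def one_vs_rest_uar(X, y):
    """Per-emotion UAR for 'this emotion' vs. 'all other emotions'."""
    scores = {}
    for emotion in EMOTIONS:
        y_bin = (np.asarray(y) == emotion).astype(int)
        y_hat = cross_val_predict(SVC(kernel="rbf", class_weight="balanced"),
                                  X, y_bin, cv=5)
        scores[emotion] = recall_score(y_bin, y_hat, average="macro")
    return scores
```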

The Performance of Emotion Classifiers for Children With Parent-Reported Autism: Quantitative Feasibility Study

JMIR Mental Health, 2020

Background: Autism spectrum disorder (ASD) is a developmental disorder characterized by deficits in social communication and interaction, and by restricted and repetitive behaviors and interests. The incidence of ASD has increased in recent years; it is now estimated that approximately 1 in 40 children in the United States is affected. Due in part to increasing prevalence, access to treatment has become constrained. Hope lies in mobile solutions that provide therapy through artificial intelligence (AI) approaches, including the facial and emotion detection AI models developed by mainstream cloud providers and available directly to consumers. However, these solutions may not be sufficiently trained for use in pediatric populations. Objective: To determine whether emotion classifiers available off the shelf to the general public through Microsoft, Amazon, Google, and Sighthound are well suited to the pediatric population and could be used for developing mobile therapies targeting aspects of social communication and...
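
A minimal sketch of querying one of the off-the-shelf emotion classifiers named above, using Amazon Rekognition via boto3 as the example; AWS credentials are required, and Microsoft, Google, and Sighthound expose analogous face/emotion APIs that are not shown here.

```python
# Sketch of calling a consumer cloud emotion classifier (Amazon
# Rekognition) on a single image. Assumes configured AWS credentials.
import boto3

def detect_emotions(image_path):
    """Return the most confident emotion label per detected face."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_faces(Image={"Bytes": f.read()},
                                       Attributes=["ALL"])
    results = []
    for face in response["FaceDetails"]:
        # Keep the emotion with the highest reported confidence.
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        results.append((top["Type"], top["Confidence"]))
    return results

# Usage sketch:
# print(detect_emotions("child_face.jpg"))  # e.g. [("HAPPY", 97.3)]
```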