Emotional recognition of dynamic facial expressions before and after cochlear implantation in adults with progressive deafness

The recognition of facial expressions of emotion in deaf and hearing individuals

Heliyon, 2021

During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory input, it is unclear how facial expressions are recognized and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage for deaf observers exists. In line with the enhancement hypothesis, we hypothesized that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would result in better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (>70 dB hearing loss) and hearing controls to estimate their ability to recognize the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). Contrary to our hypothesis, we found no recognition advantage for dynamic over static facial expressions in deaf observers. This shows that decoding of the emotional signals in dynamic facial expressions is not superior even in the expert visual system of deaf observers, suggesting that static facial expressions of emotion at the apex already carry optimal signals. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
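The shuffled condition lends itself to a concrete illustration. The sketch below, which assumes grayscale clips stored as NumPy arrays and is not the authors' actual stimulus code, shows how permuting video frames preserves a clip's frame content (and hence its low-level visual properties) while destroying its temporal structure.

```python
import numpy as np

def make_shuffled_stimulus(frames: np.ndarray, seed: int = 0) -> np.ndarray:
    """Randomly permute the frames of an expression video.

    frames: array of shape (n_frames, height, width), e.g. a grayscale
    clip of a face unfolding from neutral to the expression apex.
    Returns a clip with identical frame content (hence identical
    low-level visual properties) but destroyed temporal structure.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(frames.shape[0])
    return frames[order]

# Example: a dummy 30-frame, 128x128 clip.
clip = np.random.rand(30, 128, 128)
shuffled = make_shuffled_stimulus(clip)
assert shuffled.shape == clip.shape  # same frames, new order
```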

Differences between hearing and deaf subjects in decoding foreign emotional faces

This work investigates the ability of deaf subjects to correctly label foreign emotional faces expressing happiness, sadness, surprise, anger, fear, and disgust, in comparison with typically hearing subjects. The experiment involved 14 deaf (signing) and 14 hearing subjects matched by age and gender. The emotional faces were selected from the Radboud Database. The results show a significant difference between the two groups, with the deaf group performing significantly worse on decoding accuracy and intensity ratings for disgust, surprise, and anger. Considerations are offered on the effects of the social and cultural context, which bear on the presumed universality of emotional facial expressions.

Multisensory emotion perception in congenitally, early, and late deaf CI users

PLoS ONE, 2017

Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI users differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD), but not early deaf (ED) and late deaf (LD), CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed...
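One common way to quantify the multisensory interactions and visual dominance mentioned above is to compare audiovisual accuracy against the better unisensory channel. The sketch below illustrates such indices; the function names and specific formulas are illustrative assumptions, not the analysis reported in the paper.

```python
import numpy as np

def multisensory_gain(acc_a, acc_v, acc_av):
    """Multisensory benefit relative to the best unisensory channel.

    One conventional index: gain = AV - max(A, V). Positive values
    indicate that combining vocal and facial cues improves emotion
    recognition beyond the better single modality.
    """
    return acc_av - np.maximum(acc_a, acc_v)

def visual_dominance(acc_a, acc_v):
    """Simple visual-dominance index: positive when vision outperforms audition."""
    return acc_v - acc_a

# Per-participant accuracies (proportions correct), illustrative values only.
a  = np.array([0.55, 0.60, 0.48])
v  = np.array([0.70, 0.66, 0.72])
av = np.array([0.78, 0.71, 0.74])
print(multisensory_gain(a, v, av))  # [0.08 0.05 0.02]
print(visual_dominance(a, v))       # [0.15 0.06 0.24]
```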

Hearing Impairment and Emotion Identification from Auditory and Visual Stimuli

International Journal of Listening, 2016

In the present pilot study, the researchers investigated how people with impaired hearing identify emotions from auditory and visual stimuli, with people with normal hearing acting as their controls. Two separate experiments were conducted, framed around the communicative and social function of emotion perception. Professional actors of both genders produced emotional nonsense samples without linguistic content, samples in the Finnish language, and prolonged vowel samples. In Experiment 1, nine cochlear implant users and nine controls participated in the listening test. In Experiment 2, nine users of a variety of hearing aids and nine controls participated in the perception test. The results of both experiments showed a statistically significant difference between the two groups, people with hearing impairment and people with normal hearing, in emotion identification and valence perception from both auditory and visual stimuli. The results suggest that hearing aids and cochlear implants do not adequately transmit the emotional nuances conveyed by the voice. The results also suggest difficulties in visual perception among people with hearing impairment, warranting further studies with larger samples. Emotion identification is important in social interaction: those who cannot accurately identify the emotions being expressed by others in conversation may be less effective in building and maintaining relationships.

Deaf signers outperform hearing non-signers in recognizing happy facial expressions

Psychological Research, 2019

The use of signs as a major means of communication affects other functions, such as spatial processing. Intriguingly, this is true even for functions that are less obviously linked to language processing. Speakers using signs outperform non-signers in face recognition tasks, potentially as a result of a lifelong focus on the mouth region for speechreading. Against this background, we hypothesized that the processing of emotional faces is altered in persons using mostly signs for communication (henceforth named deaf signers). While the mouth region is more crucial for recognizing happiness, the eye region matters more for recognizing anger. Using morphed faces, we created facial composites in which either the upper or lower half of an emotional face was kept neutral while the other half varied in intensity of the expressed emotion, being either happy or angry. As expected, deaf signers were more accurate than non-signers at recognizing happy faces. The reverse effect was found for angry faces. These group differences were most pronounced for facial expressions of low intensity. We conclude that the lifelong focus on the mouth region in deaf signers leads to more sensitive processing of happy faces, especially when expressions are relatively subtle.
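The composite-face manipulation can be pictured as a pixel-level blend. The following is a minimal sketch assuming aligned grayscale face images stored as NumPy arrays; the study itself used dedicated morphing software, so this is an illustration of the idea rather than the authors' pipeline.

```python
import numpy as np

def make_composite(neutral: np.ndarray, emotional: np.ndarray,
                   intensity: float, emotional_half: str = "lower") -> np.ndarray:
    """Keep one half of the face neutral; morph the other half toward
    the emotional image with the given intensity (0 = neutral, 1 = full).

    Assumes `neutral` and `emotional` are aligned grayscale images of
    equal shape (height, width).
    """
    morphed = (1.0 - intensity) * neutral + intensity * emotional
    out = neutral.astype(float).copy()
    mid = neutral.shape[0] // 2
    if emotional_half == "lower":      # e.g. the happy mouth region
        out[mid:] = morphed[mid:]
    else:                              # e.g. the angry eye region
        out[:mid] = morphed[:mid]
    return out

# Dummy 256x256 images; a low-intensity happy-mouth composite.
neutral = np.random.rand(256, 256)
happy = np.random.rand(256, 256)
composite = make_composite(neutral, happy, intensity=0.3, emotional_half="lower")
```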

Emotion recognition in children with profound and severe deafness: Do they have a deficit in perceptual processing?

Findings from several studies have suggested that deaf children have difficulties with emotion identification and that these may impact upon social skills. The authors of these studies have typically attributed such problems to delayed language acquisition and/or reduced opportunity to converse about personal experiences with other people. The current study aimed to investigate emotion identification in children with varying levels of deafness by specifically testing their ability to recognize perceptual aspects of emotions depicted in upright or inverted human and cartoon faces. The findings showed that, in comparison with both chronological- and mental-age-matched controls, the deaf children were significantly worse at identifying emotions. However, like controls, their performance decreased when emotions were presented on inverted faces, indexing a typical configural processing style. No differences were found across individuals with different levels of deafness or between those with and without signing family members. The results support the finding of poor emotion identification in hearing-impaired children and are discussed in relation to delays in language acquisition and intergroup differences in perceptual processing.
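The upright-inverted comparison used here to index configural processing reduces to a simple difference score. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def inversion_effect(acc_upright: float, acc_inverted: float) -> float:
    """Inversion decrement: how much accuracy drops when the face is
    turned upside-down. A sizeable positive value indexes configural
    (whole-face) processing; near-zero suggests feature-based processing.
    """
    return acc_upright - acc_inverted

# Illustrative values: both groups show a decrement of similar size,
# even though the deaf group's overall accuracy is lower.
print(inversion_effect(0.70, 0.58))  # deaf group:    0.12
print(inversion_effect(0.82, 0.69))  # control group: 0.13
```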

Visual field bias in hearing and deaf adults during judgments of facial expression and identity

Frontiers in Psychology, 2013

The dominance of the right hemisphere during face perception is associated with more accurate judgments of faces presented in the left visual field (LVF) rather than the right visual field (RVF). Previous research suggests that the LVF bias typically observed during face perception tasks is reduced in deaf adults who use sign language, for whom facial expressions convey important linguistic information. The current study examined whether visual field biases were altered in deaf adults whenever they viewed expressive faces, or only when attention was explicitly directed to expression. Twelve hearing adults and 12 deaf signers were trained to recognize a set of novel faces posing various emotional expressions. They then judged the familiarity or emotion of faces presented in the LVF, the RVF, or both visual fields simultaneously. The same familiar and unfamiliar faces posing neutral and happy expressions were presented in the two tasks. Both groups were most accurate when faces were presented in both visual fields. Across tasks, the hearing group demonstrated a bias toward the LVF. In contrast, the deaf group showed a bias toward the LVF during identity judgments that shifted marginally toward the RVF during emotion judgments. Two secondary conditions tested whether these effects generalized to angry faces and famous faces, and similar effects were observed. These results suggest that attention to facial expression, not merely the presence of emotional expression, reduces the typical LVF bias for face processing in deaf signers.
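Visual field biases of this kind are often summarized with a laterality index. The sketch below shows one conventional formulation; it is an assumption for illustration, not necessarily the statistic the authors report.

```python
def laterality_index(acc_lvf: float, acc_rvf: float) -> float:
    """Laterality index in [-1, 1]: positive = left-visual-field advantage
    (consistent with right-hemisphere dominance), negative = RVF advantage.
    """
    return (acc_lvf - acc_rvf) / (acc_lvf + acc_rvf)

# Illustrative values: an LVF bias for identity judgments and a slight
# shift toward the RVF for emotion judgments.
print(laterality_index(0.82, 0.74))  # identity judgments: ~ +0.05
print(laterality_index(0.75, 0.79))  # emotion judgments:  ~ -0.03
```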

Influences on Facial Emotion Recognition in Deaf Children

The Journal of Deaf Studies and Deaf Education, 2016

This exploratory research studies facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary, and cognitive ability. The children's teachers or speech therapists also responded to two questionnaires, one on the children's linguistic-communicative skills and the other providing personal information. Results show a delay in deaf children's capacity to recognize some emotions (scared, surprised, and disgusted) but not others (happy, sad, and angry). Notably, they recognized emotions in a similar order to hearing children. Moreover, linguistic skills were found to be related to emotion recognition skills, even when controlling for age. We discuss the importance for facial emotion recognition of language, conversation, certain characteristics of deafness, and parents' educational level. Deaf children born to hearing parents who have not been exposed to a natural language since early infancy may have difficulty in various areas of development, such as language, verbal intelligence, academic achievement, or social understanding (Dyck & Denver, 2003). In this regard, the main objectives of this research are to study whether there are differences between deaf and hearing children's capacity for facial emotion recognition (specifically, labeling prototypical expressions; see Castro, Cheng, Halberstadt, & Grühn, 2016), and to study to what extent language and the characteristics of deafness may explain these possible differences. Such research is relevant because of its importance for interpersonal communication and social competence (Nelson, Welsh, Trup, & Greenberg, 2011).
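Relating linguistic skills to emotion recognition "even when controlling for age" amounts to a partial correlation. A minimal sketch of that computation on simulated data (all variable names and values are illustrative, not the study's):

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covar):
    """Partial correlation between x and y controlling for covar:
    correlate the residuals of each variable after regressing out
    the covariate.
    """
    rx = x - np.polyval(np.polyfit(covar, x, 1), covar)
    ry = y - np.polyval(np.polyfit(covar, y, 1), covar)
    return stats.pearsonr(rx, ry)

# Simulated data: emotion-recognition score, linguistic skill, age (months).
rng = np.random.default_rng(1)
age = rng.uniform(36, 96, 100)
language = 0.5 * age + rng.normal(0, 10, 100)
emotion = 0.3 * age + 0.4 * language + rng.normal(0, 10, 100)
r, p = partial_corr(emotion, language, age)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```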

PERVALE-S: a new cognitive task to assess deaf people’s ability to perceive basic and social emotions

Frontiers in Psychology, 2015

A poorly understood aspect of deaf people (DP) is how they process emotional information. Verbal ability is key to improving emotional knowledge, yet DP are unable to distinguish the intonation, intensity, and rhythm of language due to lack of hearing. Some DP have acquired both lip-reading abilities and sign language, but others have developed only sign language. PERVALE-S was developed to assess the ability of DP to perceive both social and basic emotions. PERVALE-S presents different sets of visual images of a real deaf person expressing both basic and social emotions, according to the normative standard of emotional expressions in Spanish Sign Language. Emotional expression stimuli were presented at two levels of intensity (1: low; 2: high) because DP do not distinguish an object in the same way as hearing people (HP) do. Participants then had to click on the most suitable emotional expression. PERVALE-S contains video instructions (given by a sign language interpreter) to improve DP's understanding of how to use the software; DP had to watch the videos before answering the items. To test PERVALE-S, a sample of 56 individuals was recruited (18 signers and 30 HP). Participants also performed a personality test (an adaptation of the High School Personality Questionnaire) and a fluid intelligence (Gf) measure (RAPM). Moreover, all deaf participants were rated by four teachers of the deaf. Results: there were no significant differences between deaf and hearing participants in PERVALE-S performance. Confusion matrices revealed that embarrassment, envy, and jealousy were the most poorly perceived emotions. Age was related only to social-emotion tasks (but not to basic-emotion tasks). Emotional perception ability was related mainly to warmth and conscientiousness, but negatively related to tension. Meanwhile, Gf was related only to social-emotion tasks. There were no gender differences.
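The confusion matrices mentioned in the results can be illustrated with a short sketch; the label set and trial data below are assumptions for demonstration, not the study's materials.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "anger", "fear", "surprise", "disgust",
            "embarrassment", "envy", "jealousy"]  # assumed label set

def confusion_matrix(shown, answered, labels=EMOTIONS):
    """Row-normalized confusion matrix: rows index the displayed emotion,
    columns the participant's response; each non-empty row sums to 1."""
    idx = {lab: i for i, lab in enumerate(labels)}
    m = np.zeros((len(labels), len(labels)))
    for s, a in zip(shown, answered):
        m[idx[s], idx[a]] += 1
    row_sums = m.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1  # avoid dividing empty rows by zero
    return m / row_sums

# Illustrative trials: envy and jealousy confused with each other.
shown    = ["happy", "envy", "envy", "jealousy"]
answered = ["happy", "jealousy", "envy", "envy"]
print(confusion_matrix(shown, answered).round(2))
```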