Gender Differences in the Recognition of Vocal Emotions
Related papers
Gender differences in identifying emotions from auditory and visual stimuli
Logopedics Phoniatrics Vocology, 2016
The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples, and prolonged vowels were investigated. The study also examined whether auditory stimuli alone can convey the emotional content of speech without visual stimuli, and whether visual stimuli alone can convey it without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification for both genders. The emotional content of the samples was conveyed best by nonsense sentences, better than by prolonged vowels or by samples in the shared native language of the speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. Emotions were recognized better from visual than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may draw on the human ability to interpret the kinetic movements of speech production more readily than the characteristics of the acoustic cues.
Vocal cues in emotion encoding and decoding
Motivation and Emotion, 1991
This research examines the correspondence between theoretical predictions on vocal expression patterns in naturally occurring emotions (based on the component process theory of emotion; Scherer, 1986) and empirical data on the acoustic characteristics of actors' portrayals. Two male and two female professional radio actors portrayed anger, sadness, joy, fear, and disgust based on realistic scenarios of emotion-eliciting events. A series of judgment studies was conducted to assess the degree to which judges are able to recognize the intended emotion expressions. Disgust was relatively poorly recognized; average recognition accuracy for the other emotions reached 62.8% across studies. A set of portrayals reaching a satisfactory level of recognition accuracy underwent digital acoustic analysis. The results for the acoustic parameters extracted from the speech signal show a number of significant differences between emotions, generally confirming the theoretical predictions. Research on the vocal expression of emotion lags significantly behind the study of facial affect expression. The reasons for this relative neglect are manifold (see Scherer, 1982, 1986, for a detailed discussion of this problem); one of the most important factors is the difficulty of obtaining …
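The judgment-study logic described above (listeners choose which emotion they think a portrayal conveys, and responses are scored against the intended emotion) recurs in most of the papers listed here. Below is a minimal sketch of how such decoding accuracy and a confusion matrix might be tabulated; the column names and the tiny response table are hypothetical illustrations, not data from the paper.

```python
import pandas as pd

# Hypothetical listener responses: each row is one judgment trial.
trials = pd.DataFrame({
    "intended": ["anger", "anger", "sadness", "joy", "fear", "disgust", "joy", "sadness"],
    "chosen":   ["anger", "fear",  "sadness", "joy", "fear", "sadness", "joy", "sadness"],
})

# Overall recognition accuracy: proportion of trials where the chosen
# label matches the intended emotion.
accuracy = (trials["intended"] == trials["chosen"]).mean()
print(f"Overall accuracy: {accuracy:.1%}")

# Confusion matrix (rows = intended, columns = chosen), normalized per
# intended emotion; it shows which emotions are mistaken for which.
confusion = pd.crosstab(trials["intended"], trials["chosen"], normalize="index")
print(confusion.round(2))
```

The diagonal of such a matrix gives per-emotion recognition rates (e.g., the relatively low rate for disgust reported above), while off-diagonal cells reveal systematic confusions.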
Acoustic markers impacting discrimination accuracy of emotions in voice
2022
The quality of communication depends on how accurately the listener perceives the intended message. In addition to understanding the words, listeners are expected to interpret the speaker's accompanying emotional tone. However, it is not always clear why a neutral voice can be perceived as affective, or vice versa. The present study aimed to investigate the differences between the acoustic profiles of angry, happy, and neutral emotions and to identify the acoustic markers that can lead to misperception of emotions conveyed through the voice. The study employed an encoding-decoding approach. Ten professional actors recorded the Latvian word /laba:/ in neutral, happy, and angry intonations, and thirty-two age-matched respondents were asked to identify the emotion conveyed in the heard voice sample. A complete acoustic analysis was conducted for each voice sample using PRAAT, including fundamental frequency (F0), intensity level (IL), spectral (HNR) and cepstral (CPPS) parameters, and duration of the produced word (DPW). The vocal expressions of emotions were analyzed from both encoding and decoding perspectives. The results showed statistically significant differences in the acoustic parameters that distinguish vocally expressed happy and angry emotions from a neutral voice, as well as in the acoustic parameters that differ between happy and angry emotions.
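The acoustic profile described above (F0, intensity, HNR, CPPS, and word duration measured in PRAAT) can be approximated programmatically. The following is a minimal sketch using the parselmouth package, which exposes Praat commands from Python; using parselmouth is an assumption of this sketch (the study itself used PRAAT directly), and the analysis settings shown are commonly used defaults rather than the authors' exact values.

```python
import parselmouth
from parselmouth.praat import call

def acoustic_profile(wav_path):
    """Extract the parameters named in the abstract from one voice sample."""
    snd = parselmouth.Sound(wav_path)

    # Duration of the produced word (DPW), in seconds.
    dpw = snd.get_total_duration()

    # Mean fundamental frequency (F0), in Hz.
    pitch = snd.to_pitch()
    f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")

    # Mean intensity level (IL), in dB.
    intensity = snd.to_intensity()
    il_mean = call(intensity, "Get mean", 0, 0, "energy")

    # Harmonics-to-noise ratio (HNR), in dB.
    harmonicity = call(snd, "To Harmonicity (cc)", 0.01, 75, 0.1, 1.0)
    hnr = call(harmonicity, "Get mean", 0, 0)

    # Smoothed cepstral peak prominence (CPPS); the argument values below
    # follow widely used Praat settings and may need adjustment.
    cepstrogram = call(snd, "To PowerCepstrogram", 60, 0.002, 5000, 50)
    cpps = call(cepstrogram, "Get CPPS", False, 0.01, 0.001, 60, 330,
                0.05, "Parabolic", 0.001, 0.05, "Straight", "Robust")

    return {"DPW": dpw, "F0": f0_mean, "IL": il_mean, "HNR": hnr, "CPPS": cpps}

# Hypothetical usage on one recorded stimulus (file name is illustrative):
# print(acoustic_profile("laba_angry_speaker01.wav"))
```

Comparing these per-sample values across the intended emotion categories is the encoding side of the encoding-decoding approach; relating them to listeners' identification responses is the decoding side.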
Speech perception and vocal expression of emotion
Cognition & Emotion, 2001
Two experiments using identical stimuli were run to determine whether the vocal expression of emotion affects the speed with which listeners can identify emotion words. Sentences were spoken in an emotional tone of voice (Happy, Disgusted, or Petrified), or in a Neutral tone ...
Vocal expression of emotion is associated with spectral properties of speech
The Journal of the Acoustical Society of America, 1996
The similarities of voice between members of the same family were found to be high when measured both perceptually and acoustically. The recordings of nine subjects (two mothers, two daughters, two sisters, two brothers, and an unrelated male) were made using sentences of different stress and reiterant syllable combinations. The recordings were paired into related and unrelated sets. Unfamiliar listeners were instructed to listen to …
Encoding Conditions Affect Recognition of Vocally Expressed Emotions Across Cultures
Frontiers in Psychology, 2013
Play-acted emotional expressions are a frequent aspect of our lives, ranging from deception to theater, film, and radio drama, to emotion research. To date, however, it has remained unclear whether play-acted emotions correspond to spontaneous emotion expressions. To test whether acting influences the vocal expression of emotion, we compared radio sequences of naturally occurring emotions to actors' portrayals. It was hypothesized that play-acted expressions are performed in a more stereotyped and aroused fashion. Our results demonstrate that speech segments extracted from play-acted and authentic expressions differ in their voice quality. Additionally, the play-acted speech tokens showed a more variable F0 contour. Despite these differences, the results did not support the hypothesis that the variation was due to changes in arousal. The analysis revealed that the previously reported differences in the perception of play-acted and authentic emotional stimuli cannot simply be attributed to differences in arousal, but rather to subtle and implicitly perceptible differences in encoding. Citation: Jürgens R, Hammerschmidt K and Fischer J (2011) Authentic and play-acted vocal emotion expressions reveal acoustic differences. Front. Psychology 2:180.
Vocal expression of emotion is associated with formant characteristics
Journal of the Acoustical Society of America, 1995
Perception of levels of emotion in prosody
2015
Prosody conveys information about the emotional state of the speaker. In this study we test whether listeners are able to detect different levels of the speaker's emotional state based on prosodic features such as intonation, speech rate, and intensity. We ran a perception experiment in which we asked Swiss German and Chinese listeners to recognize the intended emotions produced by a professional speaker. The results indicate that both Chinese and Swiss German listeners could identify the intended emotions. However, Swiss German listeners could detect different levels of happiness and sadness better than the Chinese listeners. This finding suggests that emotional prosody does not function purely categorically, distinguishing only between emotions, but also signals different degrees of the expressed emotion.
Emotional expressions are an essential element of human interactions. Recent work has increasingly recognized that emotional vocalizations can color and shape interactions between individuals. Here we present data on the psychometric properties of a recently developed database of authentic nonlinguistic emotional vocalizations from human adults and infants (the Oxford Vocal 'OxVoc' Sounds Database; Parsons, Young, Craske, Stein, & Kringelbach, 2014). In a large sample (N = 562), we demonstrate that adults can reliably categorize these sounds (as 'positive,' 'negative,' or 'sounds with no emotion') and rate valence in these sounds consistently over time. In an extended sample (N = 945, including the initial N = 562), we also investigated a number of individual difference factors in relation to valence ratings of these vocalizations. Results demonstrated small but significant effects of (a) symptoms of depression and anxiety, associated with more negative ratings of adult neutral vocalizations (η² = .011 and η² = .008, respectively), and (b) gender differences in perceived valence, such that female listeners rated adult neutral vocalizations more positively and infant cry vocalizations more negatively than male listeners (η² = .021 and η² = .010, respectively). Of note, we did not find evidence of negativity bias among other affective vocalizations or of gender differences in the perceived valence of adult laughter, adult cries, infant laughter, or infant neutral vocalizations. Together, these findings largely converge with factors previously shown to impact processing of emotional facial expressions, suggesting a modality-independent impact of depression, anxiety, and listener gender, particularly among vocalizations with more ambiguous valence.
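The gender effects above are reported as small eta-squared values from comparisons of male and female listeners' valence ratings. As a minimal sketch of how a one-way eta squared can be computed (listener gender as the only factor), using hypothetical rating values rather than the OxVoc results:

```python
import numpy as np

def eta_squared(*groups):
    """One-way eta squared: between-group sum of squares / total sum of squares."""
    all_values = np.concatenate(groups)
    grand_mean = all_values.mean()
    ss_total = ((all_values - grand_mean) ** 2).sum()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    return ss_between / ss_total

# Hypothetical valence ratings of the same vocalizations by female and
# male listeners (illustrative numbers only).
female_ratings = np.array([5.2, 5.8, 6.1, 5.5, 6.0, 5.7])
male_ratings   = np.array([5.0, 5.4, 5.6, 5.1, 5.9, 5.3])

print(f"eta squared = {eta_squared(female_ratings, male_ratings):.3f}")
```

Values on the order of .01-.02, as reported above, correspond to small effects by conventional benchmarks, which is consistent with the authors' description of the gender differences as small but significant.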