Acoustic markers impacting discrimination accuracy of emotions in voice

The quality of communication depends on how accurately the listener perceives the intended message. Beyond understanding the words, listeners are expected to interpret the speaker's accompanying emotional tone. However, it is not always clear why a neutral voice can be perceived as affective, or vice versa. The present study aimed to investigate the differences between the acoustic profiles of angry, happy, and neutral vocal expressions and to identify the acoustic markers that can lead to misperception of emotions conveyed through the voice. The study employed an encoding-decoding approach: ten professional actors recorded the Latvian word /laba:/ with neutral, happy, and angry intonation, and thirty-two age-matched respondents were asked to identify the emotion conveyed in each voice sample they heard. A complete acoustic analysis of each voice sample was conducted in PRAAT, covering fundamental frequency (F0), intensity level (IL), a spectral parameter (harmonics-to-noise ratio, HNR), a cepstral parameter (smoothed cepstral peak prominence, CPPS), and duration of the produced word (DPW). The vocal expressions of emotions were analyzed from both the encoding and decoding perspectives. The results showed statistically significant differences both in the acoustic parameters that distinguish vocally expressed happy and angry emotions from neutral voices and in the parameters that differentiate happy from angry emotions.
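To make two of the abstract's acoustic parameters concrete, the sketch below estimates fundamental frequency (F0, via the autocorrelation principle that also underlies Praat's pitch tracker) and intensity level (IL, as RMS level in dB) from a raw waveform. This is an illustrative NumPy approximation on a synthetic tone, not the study's actual PRAAT analysis; the sampling rate, signal, and pitch-search range are assumptions.

```python
import numpy as np

# Synthetic test signal (assumption): 1 s of a 220 Hz tone at 16 kHz.
SR = 16000
t = np.arange(SR) / SR
wave = 0.3 * np.sin(2 * np.pi * 220 * t)

def estimate_f0(x, sr, fmin=75, fmax=500):
    """Autocorrelation-based F0 estimate; fmin/fmax bound the lag search."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])       # lag of the strongest periodicity
    return sr / lag

def intensity_db(x, ref=2e-5):
    """RMS level in dB re the standard auditory reference pressure (20 uPa)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) / ref)

print(f"F0 ~ {estimate_f0(wave, SR):.1f} Hz, IL ~ {intensity_db(wave):.1f} dB")
```

HNR, CPPS, and word duration would require spectral/cepstral machinery beyond this sketch; in practice all five parameters are obtained directly from PRAAT (or its Python wrapper, parselmouth) on the recorded samples.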
