Social and acoustic determinants of perceived laughter intensity
Related papers
Not only decibels: Exploring human judgments of laughter intensity
2020
Paper presented at the 5th Laughter Workshop, Paris, 27-28 September 2018. While laughter intensity is an important characteristic that listeners perceive immediately, empirical investigations of this construct remain scarce. Here, we explore the relationship between human judgments of laughter intensity and laughter acoustics. Our results show that intensity is predicted by multiple dimensions, including duration, loudness, pitch variables, and spectral center of gravity. Controlling for loudness confirmed the robustness of these effects and revealed significant relationships between intensity and other features, such as harmonicity and voicing. Together, the findings demonstrate that laughter intensity is not reducible to loudness. They also highlight the need for further research on this complex dimension.
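To make the feature set concrete, here is a minimal sketch (plain Python, not the authors' pipeline) of three of the measures named above: duration, RMS amplitude as a loudness proxy, and spectral center of gravity, computed on a synthetic stand-in for one voiced laugh syllable. The sample rate, frame length, and the naive DFT are illustrative assumptions.

```python
import math

SR = 8000   # sample rate in Hz (assumption for this sketch)
F0 = 250.0  # fundamental of the synthetic syllable, chosen to sit on a DFT bin

def synth_syllable(f0=F0, dur=0.1, sr=SR):
    """A crude stand-in for one voiced laugh syllable: a pure sine at f0."""
    n = int(dur * sr)
    return [math.sin(2 * math.pi * f0 * i / sr) for i in range(n)]

def duration_s(x, sr=SR):
    return len(x) / sr

def rms(x):
    """Root-mean-square amplitude, a rough proxy for loudness."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def spectral_centroid(x, sr=SR):
    """Spectral center of gravity via a naive DFT (fine for short frames)."""
    n = len(x)
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        num += (k * sr / n) * mag
        den += mag
    return num / den

syl = synth_syllable()
print(duration_s(syl), rms(syl), spectral_centroid(syl))
# ≈ 0.1 s, ≈ 0.707, ≈ 250 Hz
```

On real recordings one would of course window the signal and use an FFT; the point is only that each predictor in the abstract corresponds to a simple, computable quantity.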
Not All Laughs are Alike: Voiced but Not Unvoiced Laughter Readily Elicits Positive Affect
Psychological Science, 2001
We tested whether listeners are differentially responsive to the presence or absence of voicing, a salient, distinguishing acoustic feature, in laughter. Each of 128 participants rated 50 voiced and 20 unvoiced laughs twice according to one of five different rating strategies. Results were highly consistent regardless of whether participants rated their own emotional responses, likely responses of other people, or one of three perceived attributes concerning the laughers, thus indicating that participants were experiencing similarly differentiated affective responses in all these cases. Specifically, voiced, songlike laughs were significantly more likely to elicit positive responses than were variants such as unvoiced grunts, pants, and snortlike sounds. Participants were also highly consistent in their relative dislike of these other sounds, especially those produced by females. Based on these results, we argue that laughers use the acoustic features of their vocalizations to shape...
Laughter in Conversation: Features of Occurrence and Acoustic Structure
Journal of Nonverbal Behavior, 2000
Although human laughter mainly occurs in social contexts, most studies have dealt with laughter evoked by media. In our study, we investigated conversational laughter. Our results show that laughter is much more frequent than has been described previously by self-report studies. Contrary to the common view that laughter is elicited by external stimuli, participants frequently laughed after their own verbal utterances. We thus suggest that laughter in conversation may primarily serve to regulate the flow of interaction and to mitigate the meaning of the preceding utterance. Conversational laughter bouts consisted of a smaller number of laughter elements and had longer interval durations than laughter bouts elicited by media. These parameters also varied with conversational context. The high intraindividual variability in the acoustic parameters of laughter, which greatly exceeded the parameter variability between subjects, may thus be a result of the laughter context.
Acoustic correlates of emotional dimensions in laughter: Arousal, dominance, and valence
Cognition & Emotion, 2011
Although laughter plays an essential part in emotional vocal communication, little is known about the acoustical correlates that encode different emotional dimensions. In this study we examined the acoustical structure of laughter sounds differing along four emotional dimensions: arousal, dominance, sender's valence, and receiver-directed valence. Correlation of 43 acoustic parameters with individual emotional dimensions revealed that each emotional dimension was associated with a number of vocal cues. Common patterns of cues were found with emotional expression in speech, supporting the hypothesis of a common underlying mechanism for the vocal expression of emotions.
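As a sketch of how such a parameter-by-dimension screening can work, the following correlates a few acoustic parameter columns with arousal ratings using Pearson's r. The feature names and values here are invented for illustration and are not data from the study.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between one acoustic parameter and one rating scale."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical screening table: one row per laugh, one rating per laugh.
features = {
    "f0_mean_hz": [220, 260, 300, 340, 380],
    "rms_db":     [-20, -18, -15, -13, -10],
    "duration_s": [0.8, 0.7, 0.9, 0.6, 0.8],
}
arousal = [1.0, 2.0, 3.0, 4.0, 5.0]

for name, col in features.items():
    print(name, round(pearson_r(col, arousal), 2))
```

In the study itself each of the 43 parameters would be screened like this against each emotional dimension, with appropriate correction for multiple comparisons.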
Variation of Sound Parameters Affects the Evaluation of Human Laughter
Behaviour, 2001
Sounds of human laughter compose quite effectual stimuli that usually facilitate positive responses. We have studied the mechanisms of such effects and investigated how changes in particular acoustical signal parameters affect the evaluation of laughter. Effects were assessed by evaluating self-report data of human subjects who had been exposed to playbacks of experimentally modified laughter material and, for control, also to samples of natural laughter. The modified laughter phrases were generated by first analysing samples of natural laughter, and then using these data to synthesise new laughter material. Analyses of subjects' responses revealed that not only samples that resembled the rhythm of natural laughter (repetition interval of about 0.2 s) were evaluated positively. Instead we found that series with a wide range of repetition intervals were perceived as laughter. The mode of parameter changes within the model series had an additional clear effect on the rating of a given playback sample. Thus, an intra-serial variation of rhythm or pitch received ratings that were closer to ratings of natural laughter (control) than did a stereotyped patterning of stimuli. Especially stimuli with decreases in pitch were well suited to elicit positive reactions. In conclusion, our results showed that features of parameter variations can make human laughter particularly effectual.
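The roughly 0.2 s repetition interval of natural laughter can be estimated from an amplitude envelope by autocorrelation. The sketch below (illustrative only, not the authors' synthesis procedure) builds a pulse-train envelope with syllables every 0.2 s and recovers the interval as the lag of the autocorrelation peak; the envelope sampling rate and pulse shape are assumptions.

```python
def pulse_envelope(interval=0.2, n_pulses=5, pulse_dur=0.08, sr=100):
    """Amplitude envelope of a synthetic bout: one syllable every `interval` s."""
    env = [0.0] * int((n_pulses * interval + 0.2) * sr)
    for p in range(n_pulses):
        start = int(p * interval * sr)
        for i in range(int(pulse_dur * sr)):
            env[start + i] = 1.0
    return env

def repetition_interval(env, sr=100, min_lag_s=0.05):
    """Estimate bout rhythm as the lag of the envelope autocorrelation peak."""
    n = len(env)
    mean = sum(env) / n
    x = [v - mean for v in env]
    best_lag, best_r = 0, float("-inf")
    for lag in range(int(min_lag_s * sr), n // 2):
        r = sum(x[i] * x[i + lag] for i in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return best_lag / sr

print(repetition_interval(pulse_envelope(interval=0.2)))  # 0.2
```

Varying `interval` here mirrors the study's manipulation of repetition intervals across the model series.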
Classifying Laughter: An Exploration Of The Identification and Acoustic Features of Laughter Types
2021
This thesis seeks to improve the classification of laughter by uncovering its purpose in communication, identifiability, and acoustic features. Reviewing the existing literature, this paper identifies three main types of laughter: affiliative, de-escalative, and power. Consulting with research assistants, this paper then classifies 113 instances of laughter from 62 Congressional Committee meetings published on C-SPAN. The interrater classification agreement suggests individuals can identify and categorize the different types of laughter with context. Additionally, 14 participants were recruited to complete exercises designed to elicit archetypes of the three laughter categories. These study recordings, which included 124 laughter bouts, were analyzed for acoustic features (pitch (Hz), energy (dB), duration, and proportion of voiced laughter vs. silence). The audio analysis indicates that the acoustic features of laughter do not differ significantly overall among the three categories, suggesting that social context, including proximal language and visual cues, predominantly explains the identifiability of the laughter types.
On the Correlation between Perceptual and Contextual Aspects of Laughter in Meetings
We have analyzed over 13000 bouts of laughter, in over 65 hours of unscripted, naturally occurring multiparty meetings, to identify discriminative contexts of voiced and unvoiced laughter. Our results show that, in meetings, laughter is quite frequent, accounting for almost 10% of all vocal activity effort by time. Approximately a third of all laughter is unvoiced, but meeting participants vary extensively in how often they employ voicing during laughter. In spite of this variability, laughter appears to exhibit robust temporal characteristics. Voiced laughs are on average longer than unvoiced laughs, and appear to correlate with temporally adjacent voiced laughter from other participants, as well as with speech from the laugher. Unvoiced laughter appears to occur independently of vocal activity from other participants.
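A common way to make the voiced/unvoiced distinction operational is a periodicity test: a frame counts as voiced when its normalized autocorrelation has a strong peak at a pitch-plausible lag. The sketch below illustrates the idea on a synthetic tone versus a noise burst; the sample rate, lag range, and 0.5 threshold are assumptions for illustration, not the method used in the paper.

```python
import math, random

SR = 8000  # sample rate in Hz (assumption)

def voicing_strength(frame, sr=SR):
    """Peak normalized autocorrelation over pitch-plausible lags (80-400 Hz)."""
    n = len(frame)
    mean = sum(frame) / n
    x = [v - mean for v in frame]
    energy = sum(v * v for v in x) or 1.0
    best = 0.0
    for lag in range(sr // 400, sr // 80 + 1):  # lags of 20..100 samples
        r = sum(x[i] * x[i + lag] for i in range(n - lag)) / energy
        best = max(best, r)
    return best

def is_voiced(frame, threshold=0.5):
    return voicing_strength(frame) >= threshold

random.seed(1)
voiced = [math.sin(2 * math.pi * 200 * i / SR) for i in range(400)]  # 200 Hz tone
unvoiced = [random.uniform(-1, 1) for _ in range(400)]               # noise burst
print(is_voiced(voiced), is_voiced(unvoiced))  # True False
```

A periodic frame correlates strongly with itself one pitch period later, while noise-like frames (pants, grunts) do not, which is what makes voicing a cheap and robust frame-level feature at meeting scale.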
Acoustic profiles of distinct emotional expressions in laughter
Journal of The Acoustical Society of America, 2009
Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.