Classifying Laughter: An Exploration Of The Identification and Acoustic Features of Laughter Types

Laughter in Conversation: Features of Occurrence and Acoustic Structure

Journal of Nonverbal Behavior, 2000

Although human laughter mainly occurs in social contexts, most studies have dealt with laughter evoked by media. In our study, we investigated conversational laughter. Our results show that laughter is much more frequent than previously reported in self-report studies. Contrary to the common view that laughter is elicited by external stimuli, participants frequently laughed after their own verbal utterances. We thus suggest that laughter in conversation may primarily serve to regulate the flow of interaction and to mitigate the meaning of the preceding utterance. Conversational laughter bouts consisted of fewer laughter elements and had longer interval durations than laughter bouts elicited by media. These parameters also varied with conversational context. The high intraindividual variability in the acoustic parameters of laughter, which greatly exceeded the variability between subjects, may thus be a result of the laughter context.

Social and acoustic determinants of perceived laughter intensity

2020

Existing research links subjective judgments of perceived laughter intensity with features such as duration, amplitude, fundamental frequency, and voicing. We examine these associations in a new database of social laughs produced in situations inducing amusement, embarrassment, and schadenfreude. We also test the extent to which listeners’ judgments of laughter intensity vary as a function of the social situation in which laughs were produced.

Analysis of the occurrence of laughter in meetings

Annual Conference of the International Speech Communication Association, 2007

Automatic speech understanding in natural multiparty conversation settings stands to gain from parsing not only verbal but also non-verbal vocal communicative behaviors. In this work, we study the most frequently annotated non-verbal behavior, laughter, whose detection has clear implications for speech understanding tasks, and for the automatic recognition of affect in particular. To complement existing acoustic descriptions of the phenomenon, we explore the temporal patterning of laughter over the course of conversation, with a view towards its automatic segmentation and detection. We demonstrate that participants vary extensively in their use of laughter, and that laughter differs from speech in its duration and in the regularity of its occurrence. We also show that laughter and speech are quite dissimilar in terms of the degree of simultaneous vocalization by multiple participants, and in terms of the probability of transitioning into and out of vocalization overlap states.

Not All Laughs are Alike: Voiced but Not Unvoiced Laughter Readily Elicits Positive Affect

Psychological Science, 2001

We tested whether listeners are differentially responsive to the presence or absence of voicing, a salient, distinguishing acoustic feature, in laughter. Each of 128 participants rated 50 voiced and 20 unvoiced laughs twice according to one of five different rating strategies. Results were highly consistent regardless of whether participants rated their own emotional responses, likely responses of other people, or one of three perceived attributes concerning the laughers, thus indicating that participants were experiencing similarly differentiated affective responses in all these cases. Specifically, voiced, songlike laughs were significantly more likely to elicit positive responses than were variants such as unvoiced grunts, pants, and snortlike sounds. Participants were also highly consistent in their relative dislike of these other sounds, especially those produced by females. Based on these results, we argue that laughers use the acoustic features of their vocalizations to shape...

On the Correlation between Perceptual and Contextual Aspects of Laughter in Meetings

We have analyzed over 13,000 bouts of laughter in over 65 hours of unscripted, naturally occurring multiparty meetings to identify discriminative contexts of voiced and unvoiced laughter. Our results show that, in meetings, laughter is quite frequent, accounting for almost 10% of all vocal activity by time. Approximately a third of all laughter is unvoiced, but meeting participants vary extensively in how often they employ voicing during laughter. In spite of this variability, laughter appears to exhibit robust temporal characteristics. Voiced laughs are on average longer than unvoiced laughs, and correlate both with temporally adjacent voiced laughter from other participants and with speech from the laugher. Unvoiced laughter appears to occur independently of vocal activity from other participants.

Acoustic profiles of distinct emotional expressions in laughter

Journal of The Acoustical Society of America, 2009

Although listeners are able to decode the underlying emotions embedded in acoustical laughter sounds, little is known about the acoustical cues that differentiate between the emotions. This study investigated the acoustical correlates of laughter expressing four different emotions: joy, tickling, taunting, and schadenfreude. Analysis of 43 acoustic parameters showed that the four emotions could be accurately discriminated on the basis of a small parameter set. Vowel quality contributed only minimally to emotional differentiation whereas prosodic parameters were more effective. Emotions are expressed by similar prosodic parameters in both laughter and speech.

A comparative cross-domain study of the occurrence of laughter in meeting and seminar corpora

2008

Laughter is an intrinsic component of human-human interaction, and current automatic speech understanding paradigms stand to gain significantly from its detection and modeling. In the current work, we produce a manual segmentation of laughter in a large corpus of interactive multi-party seminars, which promises to be a valuable resource for acoustic modeling purposes. More importantly, we quantify the occurrence of laughter in this new domain, and contrast our observations with findings for laughter in multi-party meetings. Our analyses show that, with respect to the majority of measures we explore, the occurrence of laughter in both domains is quite similar.

Third Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech

2012

This study investigates the facial features of schadenfreude laughter in historic illustrations by applying the Facial Action Coding System, and assesses how naïve subjects decode them. Results show that while the encoding of schadenfreude laughter is heterogeneous, schadenfreude is decoded when the facial expression combines markers of joy (the Duchenne Display, involving the orbicularis oculi pars orbitalis and zygomaticus major muscles) with markers of negative emotions (e.g., brow lowering), or, in one case, when an illustration initially categorized as schadenfreude contained markers distorting the expression of joy (e.g., frowning and lowered lip corners). These findings support the hypothesis that schadenfreude may be expressed either by a morphologically distinct blend of a positive and a negative emotion, or by joyful laughter whose expression is modulated for social desirability.