Tracking Conversational Gestures of Extraverts and Introverts in Multimodal Interaction
Related papers
This paper presents a new corpus, the Personality Dyads Corpus, consisting of multimodal data for three conversations by each of three personality-matched, two-person dyads (nine separate dialogues in total). Participants were selected from a larger sample to be 0.8 of a standard deviation above or below the mean on the Big Five extraversion scale, producing an Extravert-Extravert dyad, an Introvert-Introvert dyad, and an Extravert-Introvert dyad. Each pair carried out conversations for three different tasks. The conversations were recorded using optical motion capture for the body and data gloves for the hands. The dyads' speech was transcribed, and their gestural and postural behavior was annotated with ANVIL. The released corpus includes personality profiles, ANVIL files containing the speech transcriptions and gestural annotations, and BVH files containing body and hand motion in 3D. The corpus should be a useful resource for researchers working on generating human-like and adaptive multimodal behavior in intelligent virtual agents.
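As a rough, non-authoritative illustration of the selection criterion described above (scores at least 0.8 standard deviations above or below the sample mean), the following Python sketch screens a hypothetical participant list and pairs the resulting extraverts and introverts into the three dyad types. The data layout, field names, and pairing logic are assumptions made for illustration, not the authors' actual procedure.

```python
# Illustrative sketch (not from the paper): screening participants by the
# +/- 0.8 standard-deviation extraversion criterion and pairing them into
# the three dyad types used in the corpus. Field names are hypothetical.
from statistics import mean, stdev

def split_by_extraversion(participants, threshold=0.8):
    """Return (extraverts, introverts) whose extraversion score lies at least
    `threshold` standard deviations above or below the sample mean."""
    scores = [p["extraversion"] for p in participants]
    mu, sigma = mean(scores), stdev(scores)
    extraverts = [p for p in participants if p["extraversion"] >= mu + threshold * sigma]
    introverts = [p for p in participants if p["extraversion"] <= mu - threshold * sigma]
    return extraverts, introverts

def form_dyads(extraverts, introverts):
    """Pair candidates into E-E, I-I, and E-I dyads (assumes at least three
    candidates in each group; the pairing itself is arbitrary here)."""
    return {
        "E-E": (extraverts[0]["id"], extraverts[1]["id"]),
        "I-I": (introverts[0]["id"], introverts[1]["id"]),
        "E-I": (extraverts[2]["id"], introverts[2]["id"]),
    }

# Example with made-up scores:
sample = [{"id": i, "extraversion": s}
          for i, s in enumerate([12, 35, 18, 40, 22, 38, 15, 30, 25, 41])]
ext, intr = split_by_extraversion(sample)
print(form_dyads(ext, intr))
```

The corpus itself contains only the six selected participants; the sketch merely shows how the 0.8 SD cut-off partitions a larger sample into extravert and introvert pools before dyads are formed.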
Evaluating the Effect of Gesture and Language on Personality Perception in Conversational Agents
Lecture Notes in Computer Science, 2010
A significant goal in multimodal virtual agent research is to determine how to vary expressive qualities of a character so that it is perceived in a desired way. The "Big Five" model of personality offers a potential framework for organizing these expressive variations. In this work, we focus on one parameter in this model, extraversion, and demonstrate how both verbal and non-verbal factors impact its perception. Relevant findings from the psychology literature are summarized. Based on these, an experiment was conducted with a virtual agent that demonstrates how language generation, gesture rate and a set of movement performance parameters can be varied to increase or decrease the perceived extraversion. Each of these factors was shown to be significant. These results offer guidance to agent designers on how best to create specific characters.
Ninth Artificial Intelligence and Interactive Digital Entertainment Conference, 2013
Virtual agents used in storytelling applications should display consistent and natural multimodal expressions of emotions. In this paper, we describe the method that we defined to endow virtual narrators with individual gesture profiles. We explain how we collected a corpus of gestural behaviors displayed by different actors telling the same story. Videos were annotated both manually and automatically. Preliminary analyses are presented.
Distinctiveness in multimodal behaviors
2008
While talking, people may move their arms around vigorously, remain expressionless, or display only subtle facial movements. These differences may arise from personality, cultural, and social factors, among others. In the present work, we are interested in defining a schema that characterizes distinctiveness in behaviors. Distinctiveness encompasses differences in behavior regarding (i) shape (which signals are performed) and (ii) quality (expressivity of movement, i.e., the way in which movements are performed). Thus, we aim to define embodied conversational agents (ECAs) that, given their communicative intention and a definition of their behavior tendencies, display distinctive behaviors.
2007
In order to design affective interactive systems, experimental grounding is required for studying expressions of emotion during interaction. In this paper, we present the EmoTaboo protocol for the collection of multimodal emotional behaviours occurring during human-human interactions in a game context. Initial annotations revealed that the collected data contains various multimodal expressions of emotions and other mental states. In order to reduce the influence of language via a predetermined set of labels and to take into account differences between coders in their capacity to verbalize their perceptions, we introduce a new annotation methodology based on 1) a hierarchical taxonomy of emotion-related words, and 2) the design of the annotation interface. Future directions include the implementation of such an annotation tool and its evaluation for the annotation of multimodal interactive and emotional behaviours. We will also extend our first annotation scheme to several other characteristics that are interdependent with emotions.
Virtual characters are increasingly being used in user interfaces to improve human-machine communication. For this reason, it is necessary to make interaction with these characters resemble communication between human beings. This paper presents a general model for characterizing and selecting gestures influenced by personality and emotional state for virtual characters, so that they can communicate intentions, feelings and ideas.
Effects of Language Variety on Personality Perception in Embodied Conversational Agents
Lecture Notes in Computer Science, 2014
In this paper, we investigate the effects of language variety in combination with bodily behaviour on the perceived personality of a virtual agent. In particular, we explore changes on the extroversion-introversion dimension of personality. An online perception study was conducted featuring a virtual character with different levels of expressive body behaviour and different synthetic voices representing German and Austrian language varieties. Clear evidence was found that synthesized language variety and gestural expressivity influence the human perception of an agent's extroversion, with the Viennese and Austrian standard varieties being perceived as more extroverted than the German standard.
Evaluation of Multimodal Behaviour of Embodied Agents
Human-Computer Interaction Series, 2004
Individuality of Embodied Conversational Agents (ECAs) may depend on both the look of the agent and the way it combines different modalities such as speech and gesture. In this chapter, we describe a study in which male and female users had to listen to three short technical presentations made by ECAs. Three multimodal strategies that ECAs can use for combining arm gestures with speech were compared: redundancy, complementarity, and speech-specialization. These strategies were randomly attributed to different-looking 2D ECAs in order to independently test the effects of multimodal strategy and of the ECA's appearance. The variables we examined were subjective impressions and recall performance. Multimodal strategies proved to influence subjective ratings of quality of explanation, in particular for male users. Appearance, on the other hand, affected likeability, but also recall performance. These results stress the importance of both multimodal strategy and appearance in ensuring the pleasantness and effectiveness of presentation ECAs.
Annotating Multimodal Behaviors Occurring During Non Basic Emotions
Lecture Notes in Computer Science, 2005
The design of affective interfaces, such as credible expressive characters in storytelling applications, requires understanding and modeling the relations between realistic emotions and behaviors in different modalities such as facial expressions, speech, hand gestures and body movements. Yet, research on emotional multimodal behaviors has focused on individual modalities during acted basic emotions. In this paper we describe the coding scheme that we have designed for annotating multimodal behaviors observed during mixed and non-acted emotions. We explain how we used it for the annotation of videos from a corpus of emotionally rich TV interviews. We illustrate how the annotations can be used to compute expressive profiles of videos and relations between non-basic emotions and multimodal behaviors.