EVALUATION OF MULTIMODAL BEHAVIOUR OF EMBODIED AGENTS: Cooperation between Speech and Gestures
Related papers
Evaluation of Multimodal Behaviour of Embodied Agents
Human-Computer Interaction Series, 2004
Individuality of Embodied Conversational Agents (ECAs) may depend on both the look of the agent and the way it combines different modalities such as speech and gesture. In this chapter, we describe a study in which male and female users listened to three short technical presentations made by ECAs. Three multimodal strategies of ECAs for using arm gestures with speech were compared: redundancy, complementarity, and speech-specialization. These strategies were randomly attributed to different-looking 2D ECAs, in order to test the effects of multimodal strategy and ECA appearance independently. The variables we examined were subjective impressions and recall performance. Multimodal strategies proved to influence subjective ratings of the quality of explanation, in particular for male users. Appearance, on the other hand, affected likeability but also recall performance. These results stress the importance of both multimodal strategy and appearance in ensuring the pleasantness and effectiveness of presentation ECAs.
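To make this kind of design concrete, here is a minimal Python sketch of crossing multimodal strategy with agent appearance through independent per-participant randomization; the strategy and agent labels are placeholders, not the study's actual materials.

```python
# Illustrative randomized crossing of multimodal strategy and agent
# appearance; labels are placeholders, not the study's actual stimuli.
import random

STRATEGIES = ["redundancy", "complementarity", "speech-specialization"]
AGENTS = ["ecaA", "ecaB", "ecaC"]  # stand-ins for the 2D ECA looks

def assign_conditions(participant_id: int, seed: int = 42):
    """Pair each strategy with an independently shuffled agent appearance."""
    rng = random.Random(seed + participant_id)
    strategies, agents = STRATEGIES[:], AGENTS[:]
    rng.shuffle(strategies)  # presentation order varies per participant
    rng.shuffle(agents)      # appearance assignment varies independently
    return list(zip(strategies, agents))

for pid in range(2):
    print(pid, assign_conditions(pid))
```

Because the two shuffles are independent, any effect of strategy can be separated from any effect of a particular agent's look.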
The effects of speech–gesture cooperation in animated agents’ behavior in multimedia presentations
Interacting with Computers, 2007
Until now, research on the arrangement of verbal and non-verbal information in multimedia presentations has not considered the multimodal behavior of animated agents. In this paper, we present an experiment exploring the effects of different types of speech-gesture cooperation in agents' behavior: redundancy (gestures duplicate pieces of information conveyed by speech), complementarity (distribution of information across speech and gestures), and a control condition in which gesture does not convey semantic information. Using a Latin-square design, these strategies were attributed to agents of different appearances to present different objects. Fifty-four male and fifty-four female users attended three short presentations performed by the agents, recalled the content of the presentations, and evaluated both the presentations and the agents. Although speech-gesture cooperation was not consciously perceived, it proved to influence users' recall performance and subjective evaluations: redundancy increased verbal information recall, ratings of the quality of explanation, and the expressiveness of agents. Redundancy also resulted in higher likeability scores for the agents and a more positive perception of their personality. Users' gender had no influence on this set of results. © 2007 Published by Elsevier B.V.
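For readers unfamiliar with Latin-square counterbalancing, the sketch below shows the general idea under assumed labels (not the authors' actual materials): each condition appears exactly once per participant group and once per presentation slot, so strategy is not confounded with presentation order.

```python
# A minimal 3x3 cyclic Latin square for counterbalancing; condition
# labels are assumptions for illustration, not the study's materials.

STRATEGIES = ["redundancy", "complementarity", "control"]

def latin_square(items):
    """Cyclic Latin square: row i is the item list rotated by i positions."""
    n = len(items)
    return [[items[(row + col) % n] for col in range(n)] for row in range(n)]

for group, row in enumerate(latin_square(STRATEGIES), start=1):
    print(f"group {group}: " + " | ".join(row))
```

Each row is one participant group's presentation order; reading down any column shows that every slot receives every strategy exactly once.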
User attitude towards an embodied conversational agent: Effects of the interaction mode
Journal of Pragmatics, 2010
In the majority of existing applications, users interact with embodied conversational agents (ECAs) via keyboard and mouse. However, using a keyboard to communicate with an agent that talks and simulates human-like expressions is quite unnatural; speech-based user input is a more natural way to interact with a human-like interlocutor, and spoken interaction is likely to become more common in the near future. Humans have been shown to align themselves in conversation by matching their nonverbal behavior and word use: they instinctively converge in the number of words used per turn and in the selection of terms belonging to 'social/affect' or 'cognitive' categories (Niederhoffer and Pennebaker, 2002). ECAs should be able to emulate this ability; understanding whether and how the interaction mode influences user behavior, and defining a method to recognize relevant aspects of this behavior, is therefore important in establishing how the ECA should adapt to the situation.
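As a concrete illustration of one measurable facet of alignment, the Python sketch below compares the words-per-turn gap between two interlocutors early versus late in a dialogue; the toy turn data and the simple early-versus-late comparison are assumptions for illustration, not the analysis of Niederhoffer and Pennebaker (2002).

```python
# Toy measure of lexical alignment: do two speakers' turn lengths converge?

def words_per_turn(turns):
    """Word count for each utterance in a list of turns."""
    return [len(t.split()) for t in turns]

def convergence(user_turns, agent_turns):
    """Mean absolute word-count gap per turn, for the early and late halves
    of the dialogue; a smaller late gap suggests convergence."""
    pairs = list(zip(words_per_turn(user_turns), words_per_turn(agent_turns)))
    half = len(pairs) // 2

    def mean_gap(chunk):
        return sum(abs(u - a) for u, a in chunk) / len(chunk)

    return mean_gap(pairs[:half]), mean_gap(pairs[half:])

early, late = convergence(
    ["hello there", "yes I think so", "ok"],
    ["hello, how can I help you today", "good, I agree with that", "ok then"],
)
print(f"early gap: {early:.1f} words/turn, late gap: {late:.1f} words/turn")
```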
04121 Abstracts Collection -- Evaluating Embodied Conversational Agents
Dagstuhl Seminars, 2004
From 14.03.04 to 19.03.04, the Dagstuhl Seminar 04121 "Evaluating Embodied Conversational Agents" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.
Evaluating embodied conversational agents in multimodal interfaces
Computational Cognitive Science, 2015
Based on cross-disciplinary approaches to Embodied Conversational Agents, evaluation methods for such human-computer interfaces are structured and presented. An introductory systematisation of evaluation topics from a conversational perspective is followed by an explanation of social-psychological phenomena studied in interaction with Embodied Conversational Agents, and how these can be used for evaluation purposes. Major evaluation concepts and appropriate assessment instruments, established and new ones alike, are presented, including questionnaires, annotations, and log files. An exemplary evaluation and guidelines provide hands-on information on planning and preparing such endeavours.
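As a hands-on illustration of the log-file instrument, the sketch below computes two simple interaction measures (turn counts and mean turn length) from a dialogue log; the tab-separated line format is an assumption for illustration, as real ECA logging formats will differ.

```python
# Summarize a dialogue log; assumed format: "timestamp<TAB>speaker<TAB>utterance".
from collections import Counter

def summarize_log(lines):
    """Count turns per speaker and mean utterance length from a dialogue log."""
    turns, words = Counter(), Counter()
    for line in lines:
        _ts, speaker, utterance = line.rstrip("\n").split("\t", 2)
        turns[speaker] += 1
        words[speaker] += len(utterance.split())
    return {s: (turns[s], words[s] / turns[s]) for s in turns}

log = [
    "0.0\tuser\thello",
    "1.2\tagent\thello, how can I help you?",
    "4.7\tuser\twhat is an embodied conversational agent",
]
for speaker, (n, mean_len) in summarize_log(log).items():
    print(f"{speaker}: {n} turns, {mean_len:.1f} words/turn")
```

The same pattern extends to other log-derived measures such as response latency, which could be computed from the timestamp column.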
A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction
RO-MAN (IEEE), 2011
Gesture is an important feature of social interaction, frequently used by human speakers to illustrate what speech alone cannot provide, e.g. to convey referential, spatial or iconic information. Accordingly, humanoid robots that are intended to engage in natural human-robot interaction should produce speech-accompanying gestures for comprehensible and believable behavior. But how does a robot's non-verbal behavior influence human evaluation of communication quality and of the robot itself? To address this research question, we conducted two experimental studies. Using the Honda humanoid robot, we investigated how humans perceive various gestural patterns performed by the robot as they interact with it in a situational context. Our findings suggest that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech. These effects were stronger when participants were explicitly requested to direct their attention towards the robot during the interaction.
Embodied conversational characters: representation formats for multimodal communicative behaviours
This contribution deals with the requirements on representation languages employed in planning and displaying the communicative multimodal behaviour of Embodied Conversational Agents (ECAs). We focus on the role of behaviour representation frameworks as part of the processing chain from intent planning to the planning and generation of multimodal communicative behaviours. On the one hand, the field is fragmented, with almost everybody working on ECAs developing their own tailor-made representations, which is reflected, among other things, in the extensive reference list. On the other hand, there are general aspects that need to be modelled in order to generate multimodal behaviour. Throughout the chapter we take different perspectives on existing representation languages and outline the foundations of a common framework.
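To ground the discussion, here is a minimal sketch of one such general aspect that any behaviour representation must model: a speech utterance with time-aligned, expressivity-scaled gestures. The data structures are illustrative assumptions, not BML or any existing ECA representation language.

```python
# A toy behaviour representation: speech plus time-aligned gestures.
from dataclasses import dataclass, field

@dataclass
class Gesture:
    kind: str                  # e.g. "deictic", "iconic", "beat"
    start: float               # seconds from utterance onset
    end: float
    expressivity: float = 1.0  # amplitude/energy scaling factor

@dataclass
class Utterance:
    text: str
    gestures: list = field(default_factory=list)

u = Utterance(
    text="Press the red button on the left.",
    gestures=[Gesture(kind="deictic", start=0.6, end=1.4)],
)
print(u)
```

A common framework would standardize exactly this kind of information (behaviour type, timing, and expressivity) so that intent planners and animation engines from different groups can interoperate.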
Embodied Conversational Agents and Influences
ECAI, 2004
Abstract. With a view to creating Embodied Conversational Agents (ECAs) able to display different behaviors depending on factors such as environmental context, personality, and culture, we propose a taxonomy and a computational model of the influences these factors may induce. Influences act not only on the type of the signals an agent conveys but also on the expressivity of the signals. Thus, to individualize ECAs, we consider not only the influences acting on the agent but also the notion of expressivity. Expressivities arise at various ...
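The following speculative sketch shows the general shape such a computational model of influences could take, with factor names and weights invented purely for illustration; it is not the model proposed in the paper.

```python
# Speculative influence model: context and personality factors jointly
# scale a signal's expressivity. Factor names and weights are invented.

INFLUENCE_WEIGHTS = {
    "extravert": 1.3,
    "introvert": 0.7,
    "formal_context": 0.8,
    "informal_context": 1.1,
}

def expressivity(base: float, factors: list) -> float:
    """Scale a signal's base expressivity by all active influence factors."""
    for f in factors:
        base *= INFLUENCE_WEIGHTS.get(f, 1.0)
    return base

print(expressivity(1.0, ["extravert", "formal_context"]))  # -> 1.04
```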