Evaluation of Multimodal Behaviour of Embodied Agents: Cooperation between Speech and Gestures

2000

The individuality of Embodied Conversational Agents (ECAs) may depend both on the look of the agent and on the way it combines different modalities such as speech and gesture. In this chapter, we describe a study in which male and female users listened to three short technical presentations made by ECAs. Three multimodal strategies for combining arm gestures with speech were compared: redundancy, complementarity, and speech-specialization. These strategies were randomly attributed to different-looking 2D ECAs in order to test the effects of multimodal strategy and of the ECA's appearance independently. The variables we examined were subjective impressions and recall performance. Multimodal strategy proved to influence subjective ratings of the quality of explanation, in particular for male users. Appearance, on the other hand, affected likeability, but also recall performance. These results stress the importance of both multimodal strategy and appearance in ensuring the pleasantness and effectiveness of presentation ECAs.

User attitude towards an embodied conversational agent: Effects of the interaction mode

Journal of Pragmatics, 2010

In the majority of existing applications, users interact with embodied conversational agents (ECAs) via keyboard and mouse. However, using a keyboard to communicate with an agent that talks and simulates human-like expressions is quite unnatural; speech-based input is a more natural way to interact with a human-like interlocutor, and spoken interaction is likely to become more common in the near future. Humans have been shown to align with each other in conversation by matching their nonverbal behavior and word use: they instinctively converge in the number of words used per turn and in the selection of terms belonging to 'social/affect' or 'cognitive' categories (Niederhoffer and Pennebaker, 2002). ECAs should be able to emulate this ability; understanding whether and how the interaction mode influences user behavior, and defining a method to recognize relevant aspects of this behavior, is therefore important in establishing how the ECA should adapt to the situation.
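The alignment measures the abstract mentions (words per turn, 'social/affect' vs. 'cognitive' word categories) can be quantified in a few lines. The sketch below is illustrative only: the category word lists and the convergence measure are placeholders, not the instruments used in the cited study.

```python
# Minimal sketch of measuring lexical alignment between a user and an ECA.
# Category word lists and transcripts are illustrative placeholders, not
# the categories used by Niederhoffer and Pennebaker (2002).

SOCIAL_AFFECT = {"thanks", "sorry", "nice", "happy", "friend"}
COGNITIVE = {"think", "know", "because", "reason", "understand"}

def turn_features(turn: str) -> dict:
    """Count words per turn and words in each illustrative category."""
    words = turn.lower().split()
    return {
        "length": len(words),
        "social_affect": sum(w in SOCIAL_AFFECT for w in words),
        "cognitive": sum(w in COGNITIVE for w in words),
    }

def convergence(user_turns: list[str], agent_turns: list[str], key: str) -> float:
    """Mean absolute difference on a feature across paired turns;
    lower values indicate stronger alignment on that feature."""
    pairs = zip(user_turns, agent_turns)
    diffs = [abs(turn_features(u)[key] - turn_features(a)[key]) for u, a in pairs]
    return sum(diffs) / len(diffs)

user = ["I think I understand the reason now", "thanks that was nice"]
agent = ["I know you understand because you tried", "happy to help my friend"]
print(convergence(user, agent, "length"))  # 0.5
```

A real study would of course use validated category dictionaries and statistical tests rather than a raw difference score; the point here is only the shape of the computation.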

04121 Abstracts Collection -- Evaluating Embodied Conversational Agents

Dagstuhl Seminars, 2004

From 14.03.04 to 19.03.04, the Dagstuhl Seminar 04121 "Evaluating Embodied Conversational Agents" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

The effects of speech–gesture cooperation in animated agents’ behavior in multimedia presentations

Interacting with Computers, 2007

Until now, research on arrangement of verbal and non-verbal information in multimedia presentations has not considered multimodal behavior of animated agents. In this paper, we will present an experiment exploring the effects of different types of speech-gesture cooperation in agents' behavior: redundancy (gestures duplicate pieces of information conveyed by speech), complementarity (distribution of information across speech and gestures) and a control condition in which gesture does not convey semantic information. Using a Latin-square design, these strategies were attributed to agents of different appearances to present different objects. Fifty-four male and 54 female users attended three short presentations performed by the agents, recalled the content of presentations and evaluated both the presentations and the agents. Although speech-gesture cooperation was not consciously perceived, it proved to influence users' recall performance and subjective evaluations: redundancy increased verbal information recall, ratings of the quality of explanation, and expressiveness of agents. Redundancy also resulted in higher likeability scores for the agents and a more positive perception of their personality. Users' gender had no influence on this set of results.
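The Latin-square counterbalancing mentioned above can be illustrated with a minimal sketch: each participant group sees every strategy, and across groups each strategy appears in every agent/object position exactly once. The strategy, agent, and object labels below are placeholders, not the actual experimental materials.

```python
# Sketch of a 3x3 Latin-square assignment of multimodal strategies to
# agents and presented objects. Labels are illustrative placeholders.

strategies = ["redundant", "complementary", "control"]
agents = ["agent_A", "agent_B", "agent_C"]
objects = ["object_1", "object_2", "object_3"]

def latin_square(items):
    """Cyclic Latin square: row g is the item list rotated by g, so each
    item occurs exactly once per row and once per column."""
    n = len(items)
    return [[items[(g + i) % n] for i in range(n)] for g in range(n)]

for group, row in enumerate(latin_square(strategies)):
    # Each group watches three presentations; the strategy rotates so that
    # strategy is not confounded with agent appearance or presented object.
    plan = list(zip(agents, objects, row))
    print(f"group {group}: {plan}")
```

This cyclic construction is the simplest balanced design; a full experiment would typically also counterbalance presentation order across participants.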

Evaluating embodied conversational agents in multimodal interfaces

Computational Cognitive Science, 2015

Based on cross-disciplinary approaches to Embodied Conversational Agents, evaluation methods for such human-computer interfaces are structured and presented. An introductory systematisation of evaluation topics from a conversational perspective is followed by an explanation of social-psychological phenomena studied in interaction with Embodied Conversational Agents, and how these can be used for evaluation purposes. Major evaluation concepts and appropriate assessment instruments, both established and new, are presented, including questionnaires, annotations and log files. An exemplary evaluation and guidelines provide hands-on information on planning and preparing such endeavours.

Embodied Conversational Agents and Influences

ECAI, 2004

Abstract. In view of creating an Embodied Conversational Agent (ECA) able to display different behaviors depending on factors such as context, environment, personality and culture, we propose a taxonomy and a computational model of the influences these factors may induce. Influences act not only on the type of the signals an agent conveys but also on the expressivity of those signals. Thus, to individualize ECAs, we consider not only the influences acting on the agent but also the notion of expressivity. Expressivities arise at various ...

Influences on embodied conversational agent's expressivity: Towards an individualization of the ECAs

… of the Artificial Intelligence and the …, 2004

We aim at creating not a generic Embodied Conversational Agent (ECA) but an agent with a specific individuality. Our approach is based on different expressivities: the agent's expressivity and the communicative, or behavioral, expressivity. Contextual factors as well as factors such as culture and personality shape the expressivity of an agent. We call such factors "influences". Expressivity is described in terms of signals (e.g. smile, hand gesture, look at) and their temporal course. In this paper, we are interested in modelling the effects that influences may have on the determination of signals. We propose a computational model of these influences and of the agent's expressivity. We have developed a taxonomy of signals according to their modality (i.e. face, posture, gesture, or gaze), their related meaning, and their correspondence to expressivity domains (the range of expressivity that they may express). This model also takes into account the dynamic instantiation of signals, i.e. the modification of signals to alter their expressivity without modifying the corresponding meaning.
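The kind of structure this abstract describes, a taxonomy of signals indexed by modality and meaning, plus influences that rescale expressivity without changing meaning, can be sketched as follows. All class names, fields, and values are illustrative assumptions, not the authors' actual model.

```python
from dataclasses import dataclass

# Illustrative sketch of a signal taxonomy with an influence mechanism:
# an "influence" (e.g. personality or culture) scales a signal's
# expressivity parameter while leaving its meaning untouched.
# Every name and value here is a placeholder, not the cited model.

@dataclass
class Signal:
    name: str
    modality: str        # face, posture, gesture, or gaze
    meaning: str         # communicative meaning the signal conveys
    expressivity: float  # 0.0 (subdued) .. 1.0 (highly expressive)

def apply_influence(signal: Signal, factor: float) -> Signal:
    """Dynamic instantiation: rescale expressivity, clamped to [0, 1],
    without modifying the signal's meaning."""
    scaled = max(0.0, min(1.0, signal.expressivity * factor))
    return Signal(signal.name, signal.modality, signal.meaning, scaled)

smile = Signal("smile", modality="face", meaning="joy", expressivity=0.5)
extrovert_smile = apply_influence(smile, factor=1.6)
print(extrovert_smile.expressivity)  # 0.8
```

The key design point mirrored here is the separation the abstract draws between what a signal means and how intensely it is realized: influences touch only the latter.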

Embodied conversational characters: representation formats for multimodal communicative behaviours

This contribution deals with the requirements on representation languages employed in planning and displaying the communicative multimodal behaviour of Embodied Conversational Agents (ECAs). We focus on the role of behaviour representation frameworks as part of the processing chain from intent planning to the planning and generation of multimodal communicative behaviours. On the one hand, the field is fragmented, with almost everybody working on ECAs developing their own tailor-made representations, which is reflected, among other things, in the extensive reference list. On the other hand, there are general aspects that need to be modelled in order to generate multimodal behaviour. Throughout the chapter we take different perspectives on existing representation languages and outline the foundation of a common framework.