No Peanuts! Affective Cues for the Virtual Bartender
Related papers
No Peanuts! Affective Cues for the Virtual Bartender
The Florida AI Research Society, 2011
The aim of this paper is threefold: it explores methods for detecting affective states in text, it presents the use of such affective cues in a conversational system, and it evaluates their effectiveness in a virtual reality setting. Valence and arousal values, used for generating facial expressions of users' avatars, are also incorporated into the dialog, helping to bridge the gap between textual and visual modalities. The system is evaluated in terms of its ability to: i) generate a realistic dialog, ii) create an enjoyable chatting experience, and iii) establish an emotional connection with participants. Results show that user ratings for the conversational agent match those obtained in a Wizard of Oz setting.
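To make the affect-detection step concrete, here is a minimal sketch of the kind of lexicon-based valence/arousal scoring the abstract alludes to, with the result mapped to a coarse avatar expression. The tiny lexicon, the -1..1 scales, and the thresholds are illustrative assumptions, not the authors' actual resources.

```python
# Word -> (valence, arousal), both on a -1..1 scale (hypothetical values).
AFFECT_LEXICON = {
    "great": (0.8, 0.5),
    "love": (0.9, 0.6),
    "awful": (-0.8, 0.6),
    "tired": (-0.4, -0.6),
    "calm": (0.3, -0.7),
}

def score_message(text):
    """Average the valence/arousal of known words; neutral if none match."""
    hits = [AFFECT_LEXICON[w] for w in text.lower().split() if w in AFFECT_LEXICON]
    if not hits:
        return 0.0, 0.0
    valence = sum(v for v, _ in hits) / len(hits)
    arousal = sum(a for _, a in hits) / len(hits)
    return valence, arousal

def facial_expression(valence, arousal):
    """Map the 2D affect point to a coarse expression label for the avatar."""
    if valence > 0.2:
        return "excited_smile" if arousal > 0.0 else "relaxed_smile"
    if valence < -0.2:
        return "distressed" if arousal > 0.0 else "sad"
    return "neutral"

if __name__ == "__main__":
    v, a = score_message("I love this bar but I am so tired")
    print(v, a, facial_expression(v, a))
```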
Affect Bartender: Affective Cues and Their Application in a Conversational Agent
2011
This paper presents methods for the detection of textual expressions of users' affective states and explores an application of these affective cues in a conversational system, Affect Bartender. We also describe the architecture of the system, its core components and a range of developed communication interfaces. The application of the described methods is illustrated with examples of dialogs conducted with experiment participants in a Virtual Reality setting.
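A hedged sketch of how such affective cues might condition the bartender agent's replies: the detected valence selects a reply class. The reply templates and the valence threshold below are invented for illustration; the actual Affect Bartender response mechanism is not specified in this abstract.

```python
import random

# Hypothetical reply templates keyed by the user's estimated mood.
REPLIES = {
    "positive": ["Glad to hear it! The usual, then?",
                 "That's the spirit. What can I get you?"],
    "negative": ["Rough day? This one's on the house.",
                 "Sorry to hear that. Want to talk about it?"],
    "neutral":  ["What can I get you?",
                 "Anything to drink?"],
}

def select_reply(valence):
    """Pick a reply class from the user's estimated valence (-1..1)."""
    if valence > 0.2:
        mood = "positive"
    elif valence < -0.2:
        mood = "negative"
    else:
        mood = "neutral"
    return random.choice(REPLIES[mood])

print(select_reply(-0.6))  # e.g. "Rough day? This one's on the house."
```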
2005
This short paper contains a preliminary description of a novel type of chat system that aims at realizing natural and social communication between distant communication partners. The system is based on an Emotion Estimation module that assesses the affective content of textual chat messages and avatars associated with chat partners that act out the assessed emotions of messages through multiple modalities, including synthetic speech and associated affective gestures.
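The message flow described above can be sketched as follows: a chat message passes through an emotion-estimation step, and the result drives the partner's avatar on several channels at once. The class, function, and channel names here are hypothetical; a real Emotion Estimation module would use a trained model rather than the placeholder below.

```python
from dataclasses import dataclass

@dataclass
class EmotionEstimate:
    label: str        # e.g. "joy", "sadness"
    intensity: float  # 0..1

def estimate_emotion(message: str) -> EmotionEstimate:
    """Placeholder estimator; stands in for the Emotion Estimation module."""
    return EmotionEstimate("joy" if "!" in message else "neutral", 0.7)

def act_out(avatar: str, message: str) -> None:
    emotion = estimate_emotion(message)
    # Each modality receives the same estimate, as in the described system.
    print(f"[{avatar}/speech]  say {message!r} with {emotion.label} prosody")
    print(f"[{avatar}/gesture] play {emotion.label} gesture at {emotion.intensity:.1f}")

act_out("partner_avatar", "We won the game!")
```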
Effective affective communication in virtual environments
Proceedings of the Second Workshop on Intelligent Virtual Agents, 1999
Studies of communication between entities in virtual environments have tended to focus on the relevant technical issues and their social impact. An important component of human communication is the conveying of affective information via voice, facial expression, gestures and other body language. Virtual environments may be populated by representations of human participants or virtual agents. Communication may be person-person, agent-agent or person-agent. This paper explores the possible use of the ...
A Chat System Based on Emotion Estimation from Text and Embodied Conversational Messengers
Lecture Notes in Computer Science, 2005
A Chat System Based on Emotion Estimation from Text and Embodied Conversational Messengers (Preliminary Report), by Chunling Ma, Graduate School of ... Fragment: among the avatars, "norogo" refers to the user as one chat client, while the other two (named "halaia" and "koko") are displayed by their ...
Conveying Emotions through Facially Animated Avatars in Networked Virtual Environments
Motion in Games, 2008
In this paper, our objective is to facilitate the way in which emotion is conveyed through avatars in virtual environments. The established approach requires the end-user to manually select his/her emotional state through a text-based interface (using emoticons and/or keywords), after which these pre-defined emotional states are applied to the avatar. In contrast to this rather trivial solution, we envisage a system that enables automatic extraction of emotion-related metadata from a video stream, most often originating from a webcam. Contrary to the seemingly trivial solution of sending entire video streams, which is optimal but often prohibitive in terms of bandwidth usage, this metadata extraction process enables the system to be deployed in large-scale environments, as the bandwidth required for the communication channel is severely limited.
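A back-of-the-envelope comparison makes the bandwidth argument tangible: a compact per-frame emotion descriptor is orders of magnitude smaller than even a modest video stream. The descriptor layout (three floats), the frame rate, and the webcam bitrate are assumptions for illustration, not figures from the paper.

```python
import json
import struct

# Hypothetical per-frame descriptor: valence, arousal, confidence (3 floats).
def pack_emotion(valence, arousal, confidence):
    return struct.pack("<fff", valence, arousal, confidence)  # 12 bytes

FPS = 15
metadata_bps = len(pack_emotion(0.4, 0.1, 0.9)) * 8 * FPS  # ~1.4 kbit/s
webcam_bps = 500_000                                       # ~500 kbit/s, modest stream

print(json.dumps({
    "metadata_kbps": metadata_bps / 1000,
    "webcam_kbps": webcam_bps / 1000,
    "savings_factor": round(webcam_bps / metadata_bps),
}))
```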
An NVC Emotional Model for Conversational Virtual Humans in a 3D Chatting Environment
Articulated Motion and Deformable Objects: Proceedings of the 7th International Conference (AMDO 2012), 2012
This paper proposes a new emotional model for Virtual Humans (VHs) in a conversational environment. As part of a multi-user emotional 3D-chatting system, the paper focuses on how to formulate and visualize the flow of emotional state defined by Valence-Arousal-Dominance (VAD) parameters. From this flow of emotion over time, we visualize the change of VHs' emotional state through the proposed emoFaces and emoMotions. The notion of Non-Verbal Communication (NVC) is exploited to drive plausible emotional expressions during conversation. With the help of a proposed interface, where a user can parameterize emotional state and flow, we succeeded in varying the emotional expressions and reactions of VHs in a 3D conversation scene.
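A minimal sketch of what a VAD state that "flows" over time could look like: each incoming cue pulls the three-dimensional state toward a target point, and the state relaxes toward neutral otherwise. The linear update rule and the pull/decay constants are assumptions for illustration, not the paper's actual dynamics.

```python
def step_vad(state, target=None, pull=0.3, decay=0.05):
    """One time step: blend toward target if a cue arrived, else relax to 0."""
    goal = target if target is not None else (0.0, 0.0, 0.0)
    rate = pull if target is not None else decay
    return tuple(s + rate * (t - s) for s, t in zip(state, goal))

vad = (0.0, 0.0, 0.0)                        # (valence, arousal, dominance)
vad = step_vad(vad, target=(0.8, 0.6, 0.2))  # a joyful message arrives
for _ in range(3):                           # then the emotion slowly fades
    vad = step_vad(vad)
print(tuple(round(x, 2) for x in vad))
```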
2009
In this paper we propose a computational model that automatically integrates a knowledge base with an affective model. The knowledge base, represented as a semantic model, is used for an accurate definition of the emotional interaction between a virtual character and its environment. The affective model generates emotional states from the emotional output of the knowledge base. Emotional states are visualized through facial expressions created automatically using the MPEG-4 standard. To test the model, we designed a story that provides the events, preferences, goals and agent interactions used as input for the model. The emotional states obtained as output were fully coherent with the model's input. The facial expressions representing these states were then evaluated by a group of people from different academic backgrounds, showing that the emotional states can be recognized in the face of the virtual character.
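The last stage of that pipeline, turning a discrete emotional state into facial animation parameters in the style of MPEG-4 FAPs, might be sketched as a lookup-and-scale step. The parameter names and magnitudes below are illustrative placeholders, not the FAP values the paper actually uses.

```python
EXPRESSION_TABLE = {
    # emotion -> FAP-like displacements (hypothetical units)
    "joy":     {"stretch_cornerlip": 60, "raise_cheek": 40},
    "sadness": {"lower_cornerlip": 50, "raise_inner_eyebrow": 30},
    "anger":   {"squeeze_eyebrow": 70, "close_jaw": 20},
}

def to_face_params(emotion: str, intensity: float) -> dict:
    """Scale the base expression by the affective model's output intensity."""
    base = EXPRESSION_TABLE.get(emotion, {})
    return {fap: round(v * intensity) for fap, v in base.items()}

print(to_face_params("joy", 0.75))  # {'stretch_cornerlip': 45, 'raise_cheek': 30}
```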