A model for personality and emotion simulation
Related papers
Generic personality and emotion simulation for conversational agents
Computer Animation and Virtual Worlds, 2004
This paper describes a generic model for personality, mood and emotion simulation for conversational virtual humans. We present a generic model for updating the parameters related to emotional behaviour, as well as a linear implementation of the generic update mechanisms. We explore how existing theories for appraisal can be integrated into the framework. Then we describe a prototype system that uses the described models in combination with a dialogue system and a talking head with synchronised speech and facial expressions.
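The abstract above mentions a linear implementation of the update mechanisms, in which a fixed personality biases a slowly changing mood, which in turn colours fast-changing emotions. A minimal sketch of such a linear update rule, assuming illustrative names, dimensions, and weights (none of them taken from the paper itself), might look like:

```python
# Hypothetical linear emotion-update step: personality is a fixed vector,
# mood decays slowly, and each appraisal event nudges emotion intensities.
# All names, weights and dimensions are illustrative assumptions.

import numpy as np

PERSONALITY = np.array([0.7, 0.3])     # e.g. extraversion, neuroticism (fixed)
MOOD_DECAY, EMOTION_DECAY = 0.99, 0.8  # mood changes slowly, emotions quickly

def update(mood, emotions, appraisal, w_personality=0.2, w_mood=0.1):
    """One linear step: new state = decayed state + weighted inputs."""
    emotions = (EMOTION_DECAY * emotions
                + appraisal
                + w_personality * PERSONALITY.mean())
    mood = MOOD_DECAY * mood + w_mood * emotions.mean()
    return mood, np.clip(emotions, 0.0, 1.0)

mood, emotions = 0.0, np.zeros(2)      # two emotion channels, e.g. joy, anger
mood, emotions = update(mood, emotions, appraisal=np.array([0.5, 0.0]))
```

The appeal of a purely linear rule is that each influence (appraisal, personality bias, decay) stays a separately tunable term, which matches the paper's framing of a generic update mechanism with interchangeable parts.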
An embodied dialogue system with personality and emotions
Proceedings Workshop on Companionable Dialogue Systems, ACL 2010, 2010
An enduring challenge in human-computer interaction (HCI) research is the creation of natural and intuitive interfaces. Besides the obvious requirement that such interfaces communicate over modalities such as natural language (especially spoken) and gesturing that are more natural for humans, exhibiting affect and adaptivity have also been identified as important factors in the interface's acceptance by the user. In the work presented here, we propose a novel architecture for affective and multimodal dialogue systems that allows explicit control over the personality traits that we want the system to exhibit. More specifically, we approach personality as a means of synthesising different, and possibly conflicting, adaptivity models into an overall model to be used to drive the interaction components of the system. Furthermore, this synthesis is performed in the presence of domain knowledge, so that domain structure and relations influence the results of the calculation.
Emotional communication with virtual humans
2003
In this paper, we present our approach to modelling perceptive 3D virtual characters with emotion and personality. The characters are powered by a dialogue system that consists of a large set of basic interactions between the user and the computer. These interactions are encoded in finite state machines. The system is integrated with an expression recognition system that tracks a user's face in real-time and obtains expression data. Also, the system includes a personality and emotion simulator, so that the character responds naturally to both the speech and the facial expressions of the user. The virtual character is represented by a 3D face that performs the speech and facial animation in real-time, together with the appropriate facial expressions.
Personality models to and from virtual characters
2017
In order to be believable, virtual agents must possess both a behavioral model simulating emotions and personality, and convincing aesthetics [4]. A lot of research already exists on models of emotions, and some seminal work now investigates the role of personality [1, 2]. While emotions are dynamic and variable in time, personality is a static feature of humans, changing only very slowly over the course of a life. The emotional state drives the style of the behavior of a character (how it accomplishes actions); the personality drives the intention of an autonomous agent (what to do next). However, there is not much work investigating the relationships between the personality of a virtual agent, its behavior, and its physical appearance. The work that we are conducting in the SLSI group is based on the observation that people very quickly build up their ideas about the personality of others in zero-acquaintance encounters [11]. The judgment of the personality can be modeled, ...
Simulating emotional personality in human computer interfaces
International Conference on Fuzzy Systems, 2010
Currently, there are quite a number of computational systems that attend to humans automatically, e.g., using natural language. However, the interaction with these machines is still too artificial. Usually, these machines are insensitive to the emotional content being expressed by the communication partner, as well as incapable of expressing emotional content themselves. This paper presents the architecture of a computational system to simulate emotional states. The simulator is split into three modules that offer the designer many possibilities for modelling different aspects of a simulated emotional personality.
Varying Personality in Spoken Dialogue with a Virtual Human
Lecture Notes in Computer Science, 2009
We extend a virtual human architecture that has been used to build tactical questioning characters with a parameterizable personality model, allowing characters to be designed with different personalities and enabling a richer set of possible user interactions in a training environment. Two experiments were carried out to evaluate the framework. In the first, it was determined that personality models do have an impact on user perception of several aspects of the personality of the character. In the second, a model of assertiveness was evaluated and found to have a small but significant impact on the users who interacted with the full virtual human, and a larger effect on the judgements of annotators who examined only the verbal transcripts of the interaction.
An NVC Emotional Model for Conversational Virtual Humans in a 3D Chatting Environment
Articulated Motion and Deformable Objects. Proceedings 7th International Conference, AMDO 2012, 2012
This paper proposes a new emotional model for Virtual Humans (VHs) in a conversational environment. As a part of a multi-user emotional 3D-chatting system, this paper focuses on how to formulate and visualize the flow of emotional state defined by the Valence-Arousal-Dominance (VAD) parameters. From this flow of emotion over time, we successfully visualized the change of VHs' emotional state through the proposed emoFaces and emoMotions. The notion of Non-Verbal Communication (NVC) was exploited for driving plausible emotional expressions during conversation. With the help of a proposed interface, where a user can parameterize emotional state and flow, we were able to vary the emotional expressions and reactions of VHs in a 3D conversation scene.
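The abstract above describes an emotional state flowing through Valence-Arousal-Dominance (VAD) space over the course of a conversation. One common way to realise such a flow (a sketch under assumed constants, not the paper's actual implementation) is to let the VAD vector relax toward a neutral baseline while conversational events push it around:

```python
# Illustrative VAD-space emotional flow: the 3-vector (valence, arousal,
# dominance) decays toward neutral each step and is displaced by events.
# The relaxation rate and event values are assumptions for demonstration.

import numpy as np

RELAX = 0.9  # fraction of the displacement from neutral kept per time step

def step(vad, event=(0.0, 0.0, 0.0)):
    """Relax toward the neutral origin, apply the event, clamp to [-1, 1]."""
    vad = RELAX * vad + np.asarray(event)
    return np.clip(vad, -1.0, 1.0)

state = np.zeros(3)
state = step(state, event=(0.6, 0.4, 0.1))  # e.g. receiving a compliment
state = step(state)                          # no event: state relaxes
```

The exponential relaxation gives the "flow" the abstract refers to: expressions triggered from the VAD trajectory fade smoothly rather than switching off, which is what makes the visualized emotional state look continuous.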