Using Facial Expressiveness of a Virtual Agent to Induce Empathy in Users
Related papers
Simulating empathy in virtual humans
2013
To my family for their love and support: my daddy, my mommy, my brother, and my husband.

This doctoral thesis was realized during my employment in the Artificial Intelligence Group headed by Prof. Ipke Wachsmuth at the Faculty of Technology at Bielefeld University. The research done in this thesis was kindly supported by the Deutsche Forschungsgemeinschaft (DFG) in the Collaborative Research Center 673 (CRC 673), Alignment in Communication.

First of all, I thank my professor Ipke Wachsmuth for giving me this opportunity and for believing in my ability to succeed in this hard work. His wise advice and very helpful assistance were key contributors to the realization of the present thesis. I also thank Prof. Ana Maria Paiva and Prof. Dirk Reichardt for agreeing to review my thesis. I further thank Prof. Karl Grammer, Andrea Hofstätter, Prof. Pia Knoeferle, and Maria Nella Carminati for their collaboration and support. I thank the former and current members of the Artificial Intelligence Group for the very nice and supportive working atmosphere in our 'M4 Flur'. In particular, I thank Christian Becker-Asano, Stefan Kopp, and Thies Pfeiffer for the fruitful discussions during crucial moments of this thesis. Special thanks go to my friends and working colleagues, Nhung (my binom), Alexa, Maha, and Lars, for their moral support and for keeping together. I also thank my Tunisian and German friends for allowing me to step back from intense work and enjoy other moments in life. Last but not least, I owe my dear family, Abderrahman, Dorra, Heithem, and Tarak, deepest thanks for everything, especially my daddy and mommy, who were fully present despite the geographical distance.

Brooks claims that intelligent behavior emerges from an agent's interaction with its environment.
Therefore, in order for an agent to exploit the actual situation and to interact with its environment, it needs a physical body composed of sensors and actuators that couple the agent with the environment and thus influence the agent's internal processing. This is referred to as embodiment. Accordingly, an artificial agent's embodiment is crucial to perceiving and understanding an interaction partner's emotional states, and to expressing and communicating empathy through, for example, facial expressions and verbal utterances. Research on artificial agents exhibiting empathic behavior, such as virtual humans and robots, provides valuable results for the integration of empathy in human-machine interaction and substantiates the role of empathy in enhancing artificial agents' social behavior. Virtual humans are 3D animated characters with human-like appearance. Prendinger & Ishizuka [95] found that a virtual human that provides empathic feedback through textual expressions can reduce the stress levels of candidates during job interview tasks. Brave et al. [19] found that in a game scenario of casino-style blackjack, an artificial agent that empathizes with the player's game situation is perceived as more likable, trustworthy, and caring. Leite et al. [69] found that a robot's empathic behavior in a chess game scenario is perceived by children as more engaging and helpful. Further, not only does artificial agents' ability to empathize have a positive impact on human-machine interaction, but so does their ability to evoke empathy in humans. In this regard, Paiva et al. show that empathic virtual humans can evoke empathy in children and thus can teach them to deal with bullying situations. A virtual human's empathic behavior also contributes to its ability to build and sustain long-term socio-emotional relationships with human partners, as demonstrated by Bickmore & Picard [9]. However, in the context of a competitive card game scenario, Becker et al. [7] found that a virtual human's positive empathic emotions are significantly arousing and stress-inducing, and thus inadequate. Therefore, in line with humans' ability to empathize with each other to different degrees, we believe that modulating a virtual human's empathic behavior through factors such as its mood, personality, and relationship to its interaction partner will allow for more adequate empathic behavior in the agent across different interaction scenarios. Although much effort has gone into providing artificial agents with features such as mood, personality, and social relationships, little attention has been devoted to the role of such features in modulating their empathic behavior. Altogether, endowing artificial agents with appropriate empathic behavior
Autonomous Agents and Multi-Agent Systems
Designers of virtual agents have a combinatorially large space of choices for the look and behavior of their characters. We conducted two between-subjects studies to explore the systematic manipulation of animation quality, speech quality, rendering style, and simulated empathy, and its impact on perceptions of virtual agents in terms of naturalness, engagement, trust, credibility, and persuasion within a health counseling domain. In the first study, animation was varied between manually created, procedural, or no animations; voice quality was varied between recorded audio and synthetic speech; and rendering style was varied between realistic and toon-shaded. In the second study, simulated empathy of the agent was varied between no empathy, verbal-only empathic responses, and full empathy involving verbal, facial, and immediacy feedback. Results show that natural animations and a recorded voice improve the agent's general acceptance, trust, credibility, and perceived appropriateness for the task. However, for a brief health counseling task, animation might actually distract from the persuasive message, with the highest levels of persuasion found when the amount of agent animation is minimized. Further, consistent and high levels of empathy improve agent perception but may interfere with forming a trusting bond with the agent.
Empathy and Its Modulation in a Virtual Human
Lecture Notes in Computer Science, 2013
Endowing artificial agents with the ability to empathize is believed to enhance their social behavior and to make them more likable, trustworthy, and caring. Neuropsychological findings substantiate that empathy occurs to different degrees depending on several factors including, among others, a person's mood, personality, and social relationships with others. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, little attention has been devoted to the role of such factors in influencing their empathic behavior. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. The presented model is based on psychological models of empathy and is applied and evaluated in the context of a conversational agent scenario. Research on empathic artificial agents corroborates the role of empathy in improving artificial agents' social behavior. For instance, it has been shown that empathic virtual humans can reduce stress levels during job interview tasks and that empathic agents are perceived as more likable, trustworthy, and caring. Furthermore, it has been found that empathic virtual humans can evoke empathy in children and can thus teach them to deal with bullying situations, and that a virtual human's empathic behavior also contributes to its ability to build and sustain long-term socio-emotional relationships with human partners [3]. However, it has been shown that in a competitive card game scenario, empathic emotions can increase arousal and induce stress in an interaction partner.
In line with neuropsychological findings [8] that humans empathize with each other to different degrees depending on their mood, personality, and social relationships with others, the modulation of a virtual human's empathic behavior through such factors would allow for more adequate empathic behavior in the agent across different interaction scenarios. Although there is increasing interest in endowing artificial agents with affect, personality, and the ability to build social relationships, the role of such factors in influencing their empathic behavior has received little attention. In this paper, we present a computational model of empathy which allows a virtual human to exhibit different degrees of empathy. Our model is shaped by psychological models of empathy and is based on three processing steps that are central to empathy [4]: First, the Empathy Mechanism, by which an empathic emotion is produced. Second, the Empathy Modulation, by which the empathic emotion is modulated. Third, the Expression of Empathy, by which the modulated empathic emotion is expressed.
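The three-step architecture outlined above (produce an empathic emotion, modulate it, express it) lends itself to a compact sketch. The following Python is purely illustrative: the valence/arousal representation, the factor names (mood, liking), and the expression thresholds are assumptions for demonstration, not the paper's actual formalization.

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    valence: float  # -1 (negative) .. +1 (positive)
    arousal: float  # 0 (calm) .. 1 (excited)

def empathy_mechanism(perceived: Emotion) -> Emotion:
    """Step 1: produce a candidate empathic emotion by mirroring the partner's perceived state."""
    return Emotion(perceived.valence, perceived.arousal)

def empathy_modulation(emotion: Emotion, mood: float, liking: float) -> Emotion:
    """Step 2: scale the empathic emotion by modulation factors.
    'mood' and 'liking' in [0, 1] stand in for the mood/personality/relationship
    factors named in the abstract (hypothetical parameterization)."""
    degree = 0.5 * (mood + liking)  # overall degree of empathy
    return Emotion(emotion.valence * degree, emotion.arousal * degree)

def express_empathy(emotion: Emotion) -> str:
    """Step 3: map the modulated emotion onto an expressive channel (here, a facial label)."""
    if emotion.valence > 0.25:
        return "smile"
    if emotion.valence < -0.25:
        return "concerned frown"
    return "neutral"

# A sad partner; the agent is in a good mood and likes the partner:
perceived = Emotion(valence=-0.8, arousal=0.6)
felt = empathy_modulation(empathy_mechanism(perceived), mood=0.9, liking=0.9)
print(express_empathy(felt))  # -> concerned frown
```

Note how the same perceived emotion yields a weaker (or no) expression when the modulation factors are low, which is exactly the "different degrees of empathy" the model is meant to capture.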
Emotionally Expressive Avatars for Collaborative Virtual Environments (PhD Thesis)
When humans communicate with each other face-to-face, they frequently use their bodies to complement, contradict, substitute, or regulate what is being said. These non-verbal signals are important for understanding each other, particularly with respect to expressing changing moods and emotional states. In modern communication technologies such as telephone, email, or instant messaging, these indicators are typically lost and communication is limited to the exchange of verbal messages, with little scope for expressing emotions. This thesis explores Collaborative Virtual Environments (CVEs) as an alternative communication technology potentially allowing interlocutors to express themselves emotionally in an efficient and effective way. CVE users are represented by three-dimensional, animated embodiments, referred to as "avatars", capable of showing emotional expressions. The avatar acts as an interaction device, providing information that would otherwise be difficult to mediate. Potential applications for such CVE systems are all areas where people cannot come together physically but wish to discuss or collaborate on certain matters, for example in distance learning, home working, or simply to chat with friends and colleagues. Further, CVEs could be used in the therapeutic intervention of phobias and help address social impairments such as autism. To investigate how emotions can efficiently and effectively be visualised in a CVE, an animated virtual head was designed to express, in a readily recognisable manner, the six universal emotions: happiness, sadness, anger, fear, surprise, and disgust. A controlled experiment was then conducted to investigate the virtual head model. Effectiveness was demonstrated through good recognition rates for most emotions, and efficiency was established since a reduced animation feature set was found to be sufficient to build core distinctive facial expressions.
A set of exemplar facial expressions and guidelines for their use was developed. A second controlled experiment was then conducted to investigate the effect such an emotionally expressive, animated avatar has on users of a prototype CVE, the Virtual Messenger (VM). The hypothesis was that introducing emotions into CVE interaction can be beneficial on many levels, namely the users’ subjective experience, their involvement, and how they perceive and interact with each other. The design considerations for VM are outlined, and a newly developed methodological framework for evaluation is presented. The findings suggest that emotional expressiveness in avatars increases involvement in the interaction between CVE users, as well as their sense of being together, or copresence. This has a positive effect on their subjective experience. Further, empathy was identified as a key component for creating a more enjoyable experience and greater harmony between CVE users. The caveat is that emotionally expressive avatars may not be useful in all contexts or all types of CVEs as they may distract users from the task they are aiming to complete. Finally, a set of tentative design guidelines for emotionally expressive avatars in CVEs is derived from the work undertaken, covering the appearance and expressive abilities of avatars. These are aimed at CVE researchers and avatar designers.
Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, 2019
Improving the expressiveness of virtual humans is essential for qualitative interactions and the development of an emotional bond. It is particularly relevant for applications that engage the user's cognitive processes, such as those dedicated to training or health. Our study aims to contribute to the design of an expressive virtual human by identifying and adapting visual factors that promote the conveyance of emotions. In this paper, we investigate the effect of expressive wrinkles and variation of pupil size. We compare the recognition of basic emotions on a real human and on an expressive virtual human. The virtual human was subjected to two different factors: expressive wrinkles and/or pupil size. Our results indicate that emotion recognition rates on the virtual agent are high. Moreover, expressive wrinkles affect emotion recognition. The effect of pupil size is less significant. However, both are recommended for designing an expressive virtual human.
Virtual Characters as Emotional Interaction Element in the User Interfaces
Lecture Notes in Computer Science, 2006
Virtual assistants, also called avatars, are virtual characters that make communication between the user and the machine more natural and interactive. In this research we have given avatars the capacity to have and express emotions by means of a computational emotional model based on the cognitive perspective. Once the system knows the emotional expressiveness that the virtual character will show to the user, we express it through facial animation techniques.
Assessing empathy and managing emotions through interactions with an affective avatar
Health Informatics Journal, 2016
Assistive technologies can improve the quality of life of people diagnosed with different forms of social communication disorders. We report on the design and evaluation of an affective avatar aimed at engaging the user in a social interaction with the purpose of assisting in communication therapies. A human–avatar taxonomy is proposed to assist the design of affective avatars aimed at addressing social communication disorder. The avatar was evaluated with 30 subjects to assess how effectively it conveys the desired emotion and elicits empathy from the user. Results provide evidence that users become used to the avatar after a number of interactions, and they perceive the defined behavior as being logical. The users' interactions with the avatar entail affective reactions, including the mimicked emotions that users felt, and establish a preliminary ground truth about prototypic empathic interactions with avatars that is being used to train learning algorithms to support social communication.
How empathic traits affect interactions with virtual agents
2021
Nowadays, virtual technology with embedded virtual agents is increasingly present in everyday life. Therefore, understanding the characteristics of psychological experience in social interaction with virtual agents can be useful for theoretical and application purposes. Here, we aim to understand whether individual differences in empathy can influence social interaction with virtual agents. To this end, we designed a correlational study comparing individual propensity towards empathic traits and the ability to take the perspective of a virtual agent (VA), to understand whether and how they are associated. In an Immersive Virtual Reality (IVR) scenario, participants had to locate a glass according to the perspective of a virtual agent. They were seated behind a circular virtual table around which, in various positions closer to and further away from them, VAs with a glass placed in front of them could appear. Participants had to decide whether the glass was to the right or left of the VA's body midline. The results showed an association between some components of empathy and localization time: the higher the tendency to identify with a fictional character, the faster the participants were at locating the glass in all positions of the virtual agents around the table. Likewise, the higher the tendency to experience feelings of empathy, the faster they were at locating the glass, but only when the VA was close to the observer. These preliminary results suggest that individual differences in empathy and the location of virtual agents help define how people experience virtual social interactions.
Comparing Interpersonal Interactions with a Virtual Human to Those with a Real Human
IEEE Transactions on Visualization and Computer Graphics, 2007
This paper provides key insights into the construction and evaluation of interpersonal simulators: systems that enable interpersonal interaction with virtual humans. Using an interpersonal simulator, two studies were conducted that compare interactions with a virtual human to interactions with a similar real human. The specific interpersonal scenario employed was that of a medical interview. Medical students interacted with either a virtual human simulating appendicitis or a real human pretending to have the same symptoms. In Study I (n = 24), medical students elicited the same information from the virtual and real human, indicating that the content of the virtual and real interactions was similar. However, participants appeared less engaged and less sincere with the virtual human. These behavioral differences likely stemmed from the virtual human's limited expressive behavior. Study II (n = 58) explored participant behavior using new measures. Nonverbal behavior appeared to communicate lower interest and a poorer attitude toward the virtual human. Some subjective measures of participant behavior yielded contradictory results, highlighting the need for objective, physically-based measures in future studies. Figure: (a) a real interpersonal interaction and (b) an equivalent virtual interpersonal interaction. In the real interaction, a medical student interviews a real standardized patient; in the virtual interaction, the medical student interviews a virtual human.