Behavioral Cues of Humanness in Complex Environments: How People Engage With Human and Artificially Intelligent Agents in a Multiplayer Videogame
Related papers
Human-AI Collaboration in a Cooperative Game Setting: Measuring Social Perception and Outcomes
Human-AI interaction is pervasive across many areas of our day-to-day lives. In this paper, we investigate human-AI collaboration in the context of a collaborative AI-driven word association game with partially observable information. In our experiments, we test several dimensions of participants' subjective social perceptions of their partners (rapport, intelligence, creativity, and likeability) when participants believe they are playing with an AI or with a human. We also test these perceptions when participants are presented with a variety of partner confidence levels. We ran a large-scale study of this collaborative game on Mechanical Turk (n=164). Our results show that when participants believed their partners were human, they found their partners more likeable, intelligent, and creative, reported greater rapport, and used more positive words to describe their partners' attributes than when they believed they were interacting with an AI partner. We found no differences in game outcomes, including win rate and turns to completion. Drawing on both quantitative and qualitative findings, we discuss AI agent transparency, present design implications for tools incorporating or supporting human-AI collaboration, and lay out directions for future research. Our findings carry implications for other forms of human-AI interaction and communication.
Behavioral Indicators of Interactions Between Humans, Virtual Agent Characters and Virtual Avatars
2020
Simulations and games allow us to experience events as if they were really happening in a way that is safer and less expensive. Despite improvements in realism in these types of environments, one area that still presents a challenge is interpersonal interactions. The subtleties of what makes an interaction rich are difficult to define. As such, there is value in building on existing research into how individuals react to virtual characters to inform future investments.
The Perception of Artificial Intelligence as “Human” by Computer Users
Springer eBooks, 2007
This paper deals with the topic of 'humanness' in intelligent agents. Chatbot agents (e.g., Eliza, Encarta) have been criticized for their limited ability to hold human-like conversations. In this study, a CIT approach was used to analyze the human and non-human parts of Eliza's conversation. The results showed that Eliza could act like a human insofar as it could greet, maintain a theme, apply damage control, react appropriately to a cue, offer a cue, use an appropriate language style, and display a personality. It was non-human insofar as it used formal or unusual language, failed to respond to a specific question, failed to respond to a general question or implicit cue, and exhibited time delays and phrases delivered at inappropriate times.
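As context for the behaviors this study catalogs, here is a minimal, illustrative sketch of the rule-based pattern matching that drives Eliza-style chatbots. The rules below are invented for illustration (Eliza's actual script is larger and uses ranked decomposition rules), but the mechanism, matching a cue and reflecting the user's words back through a template, is what produces the greeting and theme-maintenance behaviors the abstract lists.

```python
import re

# Invented rules for illustration; not Eliza's real script.
RULES = [
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     "Hello. How are you feeling today?"),           # greeting
    (re.compile(r"\bi feel (.+)", re.I),
     "Why do you feel {0}?"),                        # maintain the theme
    (re.compile(r"\bmy (\w+)", re.I),
     "Tell me more about your {0}."),                # react to a cue
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    # Generic deflection: the kind of non-response the study flags as a
    # "non-human" cue when a specific question goes unanswered.
    return "Please go on."

print(respond("hi there"))                       # Hello. How are you feeling today?
print(respond("I feel anxious about my exams"))  # Why do you feel anxious about my exams?
```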
The 19 Unifying Questionnaire Constructs of Artificial Social Agents
Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, 2020
In this paper, we report on the multi-year Intelligent Virtual Agents (IVA) community effort, involving more than 80 researchers worldwide, to survey the IVA community's interests and practices in evaluating human interaction with an artificial social agent (ASA). The effort is driven by previous IVA workshops and plenary IVA discussions related to the methodological crisis in the evaluation of ASAs. A previous literature review showed a continuing practice of creating new questionnaires instead of reusing validated ones. We address this issue by examining questionnaire measurement constructs used in empirical studies published at the IVA conference between 2013 and 2018. We identified 189 constructs used in 89 questionnaires reported across 81 studies. Although these constructs have different names, they often measure the same thing. In this paper, we therefore present a unifying set of 19 constructs that captures more than 80% of the 189 constructs initially identified. We established this set in two steps. First, 49 researchers classified the constructs into broad, theoretically based categories. Next, 23 researchers grouped the constructs in each category by similarity. The resulting 19 groups form a unifying set of constructs, which will be the basis for a future questionnaire instrument for human-ASA interaction.
Improving social presence in human-agent interaction
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014
Humans have a tendency to treat media devices as social beings. Social agents and artificial opponents can be examined as one instance of this effect. With today's technology it is already possible to create artificial agents that are perceived as socially present. In this paper, we start by identifying the factors that influence perceptions of social presence in human-agent interactions. Taking these factors into account and following previously defined guidelines for building socially present artificial opponents, we created a case study in which a social robot plays the Risk board game against three human players. An experiment was performed to ascertain whether the agent created in this case study is perceived as socially present. The experiment suggested that following the guidelines for creating socially present artificial board game opponents improves users' perceived social presence of the artificial agent.
Non-Player Characters and Artificial Intelligence
Psychology, Pedagogy, and Assessment in Serious Games
Serious Games rely on interactive systems to provide an efficient communication medium between the tutor and the user. Designing and implementing such a medium is a multi-disciplinary task that aims to produce an environment that engages the user in a learning activity. User engagement is significantly related to the user's sense of immersion, or their willingness to accept the reality proposed by a game environment. This is a very relevant research topic for Artificial Intelligence (AI), since it requires computational systems to generate believable behaviors that promote the user's willingness to enter and engage with the game environment. To do this, AI research has drawn on the social sciences, in particular models from psychology and sociology, to ground the creation of computational models for non-player characters (NPCs) that behave according to users' expectations. In this chapter, the authors present some of the most relevant NPC research contributions following this approach.
Social and entertainment gratifications of gaming with robot, AI, and human partners
Proceedings of the 28th IEEE International Conference on Robot and Human Interactive Communication (Ro-Man 2019), 2019
As social robots' and AI agents' roles become more diverse, these machines increasingly function as sociable partners. This trend raises the question of whether the social gaming gratifications known to emerge in human-human co-play may (or may not) also manifest in human-machine co-play. In the present study, we examined social outcomes of playing a videogame with a human partner as compared to an ostensible social robot or AI (i.e., computer-controlled player) partner. Participants (N = 103) were randomly assigned to three experimental conditions in which they played a cooperative video game with either a human, an embodied robot, or a non-embodied AI. Results indicated that few statistically significant or meaningful differences existed between the partner types on perceived closeness with the partner, relatedness need satisfaction, or entertainment outcomes. However, qualitative data suggested that human and robot partners were both seen as more sociable, while AI partners were seen as more functional.
Entertainment Computing – ICEC 2017, 2017
Human-like behaviors are an important factor in achieving entertaining computer players. So far, work on human-like behavior has focused on in-game actions aimed at winning. However, human players also perform behaviors with purposes not directly related to the game's main objective. For example, in FPS games, some human players create illustrations or graffiti with a weapon (i.e., a gun). In cooperative online FPS games, when chat is not allowed in-game, some players shoot the nearest wall to warn an ally about danger. This kind of action with an indirect purpose is hard to reproduce with a computer player, but it is very important for simulating human behavior and entertaining human players. In this article, we present a survey of possible in-game actions that are not directly related to the game's main objective. Case studies of these behaviors are collected and classified by task and intention (i.e., warning, notification, provocation, greeting, expressing empathy, showing off, and self-satisfaction), and we discuss the possibility of reproducing such actions with a computer player. Furthermore, we show in experiments with multi-agent Q-learning that such actions with an indirect purpose can emerge naturally.
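As a rough illustration of that closing claim, here is a minimal sketch of how a tabular Q-learner can come to prefer a "signal" action (shooting a wall to warn an ally) that earns no direct reward but raises the shared team reward. Everything below, the one-state toy environment, the payoffs, and the scripted ally that collapses the multi-agent setting to a single learner, is an assumption for illustration, not the paper's setup.

```python
import random
from collections import defaultdict

# "signal" stands in for shooting the nearest wall to warn an ally: it has
# no direct payoff, but it changes the outcome for the team.
ACTIONS = ["advance", "signal"]

def play_round(scout_action):
    """Toy dynamics: danger appears half the time. A warned ally survives
    it (team reward +2); an unwarned ally walks into it (-2). With no
    danger, advancing scores +1 and signalling merely wastes the turn."""
    danger = random.random() < 0.5
    if not danger:
        return 1.0 if scout_action == "advance" else 0.0
    return 2.0 if scout_action == "signal" else -2.0

Q = defaultdict(float)     # single-state problem, so Q is indexed by action
alpha, epsilon = 0.1, 0.1  # learning rate and exploration rate

for _ in range(5000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)       # explore
    else:
        action = max(ACTIONS, key=lambda a: Q[a])  # exploit
    reward = play_round(action)
    Q[action] += alpha * (reward - Q[action])  # one-step tabular update

print({a: round(Q[a], 2) for a in ACTIONS})
# Expected values under these payoffs: advance = -0.5, signal = +1.0, so
# the indirectly useful warning action emerges as the learned preference.
```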