Intelligent Agents: Conversations from Human-Agent Imitation Games

Perception of Artificial Agents and Utterance Friendliness in Dialogue

The present contribution investigates the construction of dialogue structure for use in human-machine interaction, especially for robotic systems and embodied conversational agents. We present a methodology and findings from a pilot study on the design of task-specific dialogues. Specifically, we investigated effects of dialogue complexity on two levels: first, we examined the perception of the embodied conversational agent, and second, we studied participants' performance following HRI. To do so, we manipulated the agent's friendliness during a brief conversation with the user in a receptionist scenario. The paper presents an overview of the dialogue system, the process of dialogue construction, and initial evidence from an evaluation study with naïve users (N = 40). These users interacted with the system in a task-based dialogue in which they had to ask for directions in a building unknown to them. Afterwards, participants filled in a questionnaire. Our findings...

A Review on The Development and Effect of Conversational Agents and Social Robots

HCI addresses the concept of human-machine communication through, including but not limited to, AI-enabled embodied conversational software agents. The emergence of these agents has changed the history of computing and robotics once and for all. One of the most prominent social and intellectual qualities in humans is the ability to have conversations. Typically, a conversation takes place between people through verbal and non-verbal mediums, and languages play a vital role in these communications and conversations. Humanness and human-like interaction qualities have been at the core of human-computer interface design since the beginning of this research doctrine [1]. Programming languages have enabled computer scientists to establish a connection between humans and machines that lets the machine understand the instructions given. However, the widespread use of cell phones, computers and other smart gadgets has clearly made it a demand of the time that today's machines understand commands given in natural languages (e.g. English, German, Spanish), as the user base is no longer limited to computer scientists [1]. Hence, robotics, natural language processing, machine learning, artificial intelligence, and related fields have combined forces to bridge the communication gap between machines and users. From ELIZA [3] and Rea [4] to Siri, Amazon Alexa and Google Assistant, software interfaces have come a long way through a lengthy development process. They have proven to have enough influence to change social, economic and political outcomes through their intelligent behavior [2]. The boundary between human-like and bot-like behavior is greyer than it is black and white [2]. Software interfaces have changed their appearance over time, stripping away the ideals of face-to-face conversation. The chatbots (e.g. Twitter bots) found online have developed distinct social media ecosystems [2] in which humans and robots interact with each other on the same plane. To have conversations or interactions with machines, humans are being trained to accept and use a new set of vocabularies [1]. In this paper, I would like to discuss how these conversational agents and social robots are shaping our social media ecosystems. I will revisit the interrelation between humans and machines while focusing on the socio-cultural impact of these robots on our IoT-enabled smart homes and online virtual spaces.

Chatbots' Greetings to Human-Computer Communication

arXiv, 2016

Both dialogue systems and chatbots aim to enable communication between humans and computers. However, instead of focusing on sophisticated techniques to perform natural language understanding, as the former usually do, chatbots seek to mimic conversation. Since Eliza, the first chatbot ever, developed in 1966, many interesting ideas have been explored by the chatbots' community. More than just ideas, in fact: some chatbot developers also provide free resources, including tools and large-scale corpora. It is our opinion that this know-how and these materials should not be neglected, as they might be put to use in the human-computer communication field (and some authors already do so). Thus, in this paper we present a historical overview of chatbot development, we review what we consider to be the main contributions of this community, and we point to some possible ways of coupling these with current work in the human-computer communication research line.

Do conversational agents have a theory of mind? A single case study of ChatGPT with the Hinting, False Beliefs and False Photographs, and Strange Stories paradigms

Zenodo (CERN European Organization for Nuclear Research), 2023

In this short report we consider the possible manifestation of theory-of-mind skills by the recently proposed OpenAI ChatGPT conversational agent. To tap into these skills, we used an indirect speech understanding task (the hinting task), a new text version of a False Belief/False Photographs paradigm, and the Strange Stories paradigm. The hinting task is usually used to assess individuals with autism or schizophrenia by asking them to infer hidden intentions from short conversations involving two characters. Our results show that the artificial model has quite limited performance on the hinting task whether the original scoring or the revised SCOPE rating scales are used. To better understand this limitation, we introduced slightly modified versions of the hinting task in which either cues about the presence of a communicative intention were added or a specific question about the character's intentions was asked. Only the latter produced enhanced performance. In addition, the use of a False Belief/False Photographs paradigm to assess belief attribution skills demonstrates that ChatGPT keeps track of successive physical states of the world and may refer to a character's erroneous expectations about the world. No dissociation between the conditions was found. The Strange Stories were associated with correct performance, but we could not be sure that the algorithm had no prior knowledge of them. These findings suggest that ChatGPT may answer questions about a character's intentions or beliefs when the question focuses on these mental states, but does not make such references spontaneously on a regular basis. This may guide AI designers to improve inference models by privileging mental-state concepts in order to help chatbots have more natural conversations.
This work offers an illustration of the possible application of psychological constructs and paradigms to a cognitive entity of a radically new nature, and it prompts a reflection on experimental method: future evaluation tools should be designed to allow the comparison of human performance and strategies with those of the machine.

Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations

Computers in Human Behavior, 2015

This study analyzed how communication changes when people communicate with an intelligent agent as opposed to with another human. We compared 100 instant messaging conversations to 100 exchanges with the popular chatbot Cleverbot along seven dimensions: words per message, words per conversation, messages per conversation, word uniqueness, and use of profanity, shorthand, and emoticons. A MANOVA indicated that people communicated with the chatbot for longer durations (but with shorter messages) than they did with another human. Additionally, human–chatbot communication lacked much of the richness of vocabulary found in conversations among people, and exhibited greater profanity. These results suggest that while human language skills transfer easily to human–chatbot communication, there are notable differences in the content and quality of such conversations.
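The seven dimensions the study compares are all simple surface statistics of a conversation. As an illustrative sketch (not the authors' code), they could be computed per conversation roughly as follows; the profanity and shorthand lexicons here are placeholder assumptions, not the lists used in the study:

```python
import re

# Placeholder lexicons for illustration only (assumptions, not the study's lists).
PROFANITY = {"damn", "hell"}
SHORTHAND = {"lol", "brb", "u", "r"}
EMOTICON_RE = re.compile(r"[:;=8][-^']?[)(DPpOo/\\|]")  # rough emoticon matcher

def conversation_metrics(messages):
    """Compute per-conversation versions of the seven dimensions:
    messages per conversation, words per conversation, words per message,
    word uniqueness, and counts of profanity, shorthand, and emoticons."""
    words = [w.lower() for m in messages for w in re.findall(r"[a-zA-Z']+", m)]
    n_words, n_msgs = len(words), len(messages)
    return {
        "messages": n_msgs,
        "words": n_words,
        "words_per_message": n_words / n_msgs if n_msgs else 0.0,
        "word_uniqueness": len(set(words)) / n_words if n_words else 0.0,
        "profanity": sum(w in PROFANITY for w in words),
        "shorthand": sum(w in SHORTHAND for w in words),
        "emoticons": sum(len(EMOTICON_RE.findall(m)) for m in messages),
    }

convo = ["hey u there? :)", "lol yes", "what the hell happened"]
print(conversation_metrics(convo))
```

Feature vectors like these, computed for each of the 100 conversations in each condition, are what a MANOVA would then compare across the human-human and human-chatbot groups.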

Chatbots: Cybernetic Psychology and the Future of Conversation

Journal of Cinema and Media Studies, 2022

Looking back on the history of chatbot development, one Microsoft development team observed in 2018 that "with vastly more people being digitally connected, it is not surprising that social chatbots have been developed as an alternative means for engagement." What sort of "alternative" is presented when humans engage with chatbots? If the Fourth Industrial Revolution depends not only on the flow of goods and services but also on the flow of signals of assent (purchases, likes, shares), then the economy of conversation between users must be made seamless at any cost. Is the chatbot an alternative to the otherness of human beings? Are chatbots a patch for alterity? Alongside the psychologically meaningful dimensions attending the problem of our incommensurability with one another (our personhood), the disconcerting, unmanageable, merciful, and threatening separation between human beings presents a newly focalized economic problem in the digital age.

Human-like communication in conversational agents: a literature review and research agenda

Journal of Service Management, 2020

Purpose: Conversational agents (chatbots, avatars and robots) are increasingly substituting human employees in service encounters. Their presence offers many potential benefits, but customers are reluctant to engage with them. A possible explanation is that conversational agents do not make optimal use of communicative behaviors that enhance relational outcomes. The purpose of this paper is to identify which human-like communicative behaviors used by conversational agents have positive effects on relational outcomes and which additional behaviors could be investigated in future research.
Design/methodology/approach: This paper presents a systematic review of 61 articles that investigated the effects of communicative behaviors used by conversational agents on relational outcomes. A taxonomy is created of all behaviors investigated in these studies, and a research agenda is constructed on the basis of an analysis of their effects and a comparison with the literature on human-to-human servi...

The Ghost in the Machine – Emotionally Intelligent Conversational Agents and the Failure to Regulate ‘Deception by Design’

SCRIPT-ed

Google's Duplex illustrates the great strides made in AI toward providing synthetic agents with the capabilities for intuitive and seemingly natural human-machine interaction, fostering a growing acceptance of AI systems as social actors. Following BJ Fogg's captology framework, we analyse the persuasive and potentially manipulative power of emotionally intelligent conversational agents (EICAs). By definition, human-sounding conversational agents are 'designed to deceive'. They do so on the basis of vast amounts of information about the individual they are interacting with. We argue that although the current data protection and privacy framework in the EU offers some protection against manipulative conversational agents, the real upcoming issues are not yet acknowledged in regulation.