Why it is interesting to investigate how people talk to computers and robots: Introduction to the special issue

Consequences and Factors of Stylistic Differences in Human-Robot Dialogue

Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue

This paper identifies stylistic differences in instruction-giving observed in a corpus of human-robot dialogue. Differences in verbosity and structure (i.e., single-intent vs. multi-intent instructions) arose naturally without restrictions or prior guidance on how users should speak with the robot. Different styles were found to produce different rates of miscommunication, and correlations were found between style differences and individual user variation, trust, and interaction experience with the robot. Understanding potential consequences and factors that influence style can inform design of dialogue systems that are robust to natural variation from human users.
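The single-intent vs. multi-intent distinction above can be illustrated with a toy heuristic. This is a hypothetical sketch, not the paper's method: the connective list and the clause-splitting rule are assumptions made for illustration only.

```python
import re

# Crude proxy for instruction style: count clause-like segments separated
# by sequencing connectives. The connective inventory is an assumption.
CONNECTIVES = re.compile(r"\b(?:and then|then|and|after that)\b", re.IGNORECASE)

def count_intents(instruction: str) -> int:
    """Count segments separated by sequencing connectives."""
    return len([s for s in CONNECTIVES.split(instruction) if s.strip()])

def style_of(instruction: str) -> str:
    """Label an instruction as single-intent or multi-intent."""
    return "multi-intent" if count_intents(instruction) > 1 else "single-intent"
```

A real system would need parsing or intent classification; splitting on connectives over-triggers on coordination inside a single command (e.g., "pick up the cup and the plate"), which is exactly the kind of natural variation the paper argues dialogue systems must be robust to.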

How Do People Talk with a Robot? An Analysis of Human-Robot Dialogues in the Real World

This paper reports the preliminary results of a human-robot dialogue analysis in the real world, with the goal of understanding users' interaction patterns. We analyzed the dialogue log data of Roboceptionist, a robotic receptionist located in a high-traffic area in an academic building [2][3]. The results show that (i) the occupation and background (persona) of the robot help people establish common ground with the robot, and (ii) there is great variability in the extent to which users follow the social norms of human-human dialogue when talking with a robot. Based on these results, we describe implications for designing the dialogue of a social robot.

What makes speakers angry in human-computer conversation

2000

It often cannot be completely avoided that current human-computer conversation systems function in a way that is dissatisfactory for the user. This paper investigates what exactly makes speakers angry and how their linguistic behaviour may change globally, in accordance with their changing speaker attitude, and locally, in reaction to particular system malfunctions. The prosodic peculiarities of the speakers' utterances can serve as indicators of how much trouble a particular type of system malfunction may create; they can also show which types of interventions by system designers can be useful.
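One prosodic cue often associated with frustrated speech is raised pitch relative to a speaker's baseline. The following is a minimal sketch of that idea only; the 15% threshold and the use of mean F0 alone are assumptions for illustration, not findings from the paper.

```python
from statistics import mean

def flag_raised_pitch(baseline_f0: list[float],
                      utterance_f0: list[float],
                      ratio: float = 1.15) -> bool:
    """Flag an utterance whose mean F0 (Hz) exceeds the speaker's
    baseline mean by the given ratio. Threshold is an assumption."""
    return mean(utterance_f0) > ratio * mean(baseline_f0)
```

In practice, anger detection draws on many prosodic features (pitch range, energy, speaking rate, voice quality) and per-speaker normalization, not a single mean-F0 comparison.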

Ill-formedness and miscommunication in person-machine dialogue

Information and Software Technology, 1987

Abstract. This paper describes work carried out by the Communication Failure in Dialogue (CFID) project (ESPRIT contract no. AIP P527). The paper presents a broad classification of the types of ill-formedness that can occur in person-machine communication. A corpus of natural language dialogue is analysed from a variety of perspectives. Implications for the development of natural language dialogue interfaces are discussed. Keywords: information processing, natural language processing, man-machine interface, software techniques.

Expressive Speech Characteristics in the Communication with Artificial Agents

This paper deals with emotional speech characteristics in human-computer and human-robot interaction. The focus is on the users' involuntary expression of emotion in reaction to system malfunction, which may cause severe problems for automatic speech recognition and processing. Investigating different user groups is shown to be a useful method for determining what makes speakers respond emotionally and for understanding the interpersonal differences that can be observed in reaction to system malfunction. Which aspects may be involved is illustrated by discussing the example of the personal relationship between user and system, as evident from the different forms of address found in the corpora. We draw on corpora of human-computer and human-robot communication involving children and adults of both sexes. However, it will be demonstrated that the major factor that determines the users' expressive behaviour is their conceptualisation of the artificial ...

A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

Cooperativity in human‐machine and human‐human spoken dialogue

Discourse Processes, 1996

The paper presents principles of dialogue cooperativity derived from a corpus of task-oriented spoken human-machine dialogue. The corpus was recorded during the design of a dialogue model for a spoken language dialogue system. Analysis of the corpus produced a set of dialogue design principles intended to prevent users from having to initiate clarification and repair meta-communication which the system would not understand. Developed independently of Grice's work on cooperation in spoken dialogue, these principles provide an empirical test of the correctness and completeness of Grice's maxims of cooperativity in the case of human-machine dialogue. Whereas the maxims pass the test of correctness, they fail to provide a complete account of the principles of cooperative human-machine dialogue. A more complete set of aspects of cooperative task-oriented dialogue is proposed, together with the principles expressing those aspects. Transferability of the results to cooperative spoken human-human dialogue is discussed.