Reformulation Strategies of Repeated References in the Context of Robot Perception Errors in Situated Dialogue
Related papers
Robot perception errors and human resolution strategies in situated human–robot dialogue
Advanced Robotics, 2017
We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot's perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot's understanding of the environment. We quantify the impact of perception errors on the dialogues, and investigate resolution attempts by users at a structural level and at the level of referring expressions.
Evaluating description and reference strategies in a cooperative human-robot dialogue system
Proceedings of …, 2009
We present a human-robot dialogue system that enables a robot to work together with a human user to build wooden construction toys. We then describe a study which assessed the responses of naïve users to output that varied along two dimensions: the method of describing an assembly plan (pre-order or post-order), and the method of referring to objects in the world (basic and full). Varying both of these factors produced significant results: subjects using the system that employed a pre-order description ...
Proceedings of the 19th ACM International Conference on Multimodal Interaction, 2017
When uttering referring expressions in situated task descriptions, humans naturally use verbal and non-verbal channels to transmit information to their interlocutor. To develop mechanisms for robot architectures capable of resolving object references in such interaction contexts, we need to better understand the multi-modality of human situated task descriptions. Current computational models mainly include pointing gestures, eye gaze, and objects in the visual field as non-verbal cues, if any. We analyse reference resolution to objects in an object manipulation task and find that only up to 50% of all referring expressions to objects can be resolved using language, eye gaze, and pointing gestures. We therefore extract other non-verbal cues necessary for resolving object references, investigate the reliability of the different verbal and non-verbal cues, and formulate lessons for the design of a robot's natural language understanding capabilities.
Clarification Dialogues for Perception-based Errors in Situated Human-Computer Dialogues
We present an experiment on situated human-computer interaction. Participants interacted with a simulated robot system to complete a series of tasks in a situated environment. Errors were introduced into the robot's perception to produce misunderstandings. We recorded the interactions and attempted to identify the strategies the participants used to resolve the resulting problems.
Integrating miscommunication analysis in natural language interface design for a service robot
Intelligent Robots and …, 2006
Natural language user interfaces for cognitive robots should attempt to reduce the occurrence of miscommunication in order to be perceived as providing smooth and intuitive interaction to their users. This paper describes how we integrate miscommunication analysis into the design process. By analysing data from 12 sessions in which subjects interacted with a service robot in a home-like environment, we arrived at a set of observations: users misunderstand the robot's functionality; feedback is sometimes ill-timed with respect to the situation; and referencing objects is important with respect to lexical choice and deixis. One design implication of our analysis is that we need to equip our robots to provide more, and more relevant, feedback about the system's functionality. Another is to explore strategies that prime the user to respond in a way that can be handled by the robot system.
Towards a dialog strategy for handling miscommunication in human-robot dialog
19th International Symposium in Robot and Human Interactive Communication, 2010
This paper presents a first theoretical framework for a dialog strategy for handling miscommunication in natural language Human-Robot Interaction (HRI). On the one hand, the dialog strategy is derived from findings about human-human communication patterns and coping strategies for miscommunication. On the other hand, relevant cognitive theories concerning human perception serve as a conceptual basis for the dialog strategy. The novelty of the approach is, firstly, to combine these communication patterns with coping strategies and cognitive theories from human-human interaction (HHI) and, secondly, to transfer them to HRI as a general dialog strategy for handling miscommunication. The presented approach is applicable to any task-oriented dialog. In a first step, the conversational context is confined to route descriptions, since asking for directions is a restricted but nevertheless challenging example of task-oriented dialog between humans and a robot.
The effect of sensor errors in situated human-computer dialogue
Errors in perception are a problem for computer systems that use sensors to perceive the environment. If a computer system is engaged in dialogue with a human user, these perception problems lead to problems in the dialogue. We present two experiments: one in which participants interact through dialogue with a robot with perfect perception to fulfil a simple task, and a second in which the robot is affected by sensor errors. We compare the resulting dialogues to determine whether the sensor problems have an impact on dialogue success.
Resolving References in Natural Language Explanation Requests about Robot Behavior in HRI
Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
In HRI, users have been shown to request explanations when their interpretation of autonomous robot behavior fails. These requests can refer to the behavior either through open questions or via attributes of the behavior. The presented work aims to resolve these references in explanation requests by developing an episodic memory with a graph database that stores and queries representations of the internal execution. Reference resolution is performed by detecting temporal adverb and verb constraints in the syntactic dependency tree of the utterance, executing a query against the episodic memory, and scoring the resulting entries to find the referenced behavior. The explanation generation process of the original model is adapted to the new approach and can include additional information such as detected constraints, a failed execution state, and the distinction between running and completed executions.
Recovering from Non-Understanding Errors in a Conversational Dialogue System
2012
Spoken dialogue systems can encounter different types of errors, including non-understanding errors, where the system recognises that the user has spoken but does not understand the utterance. Strategies for dealing with this kind of error have been proposed and tested in the context of goal-driven dialogue systems, for example by Bohus with a system that helps reserve conference rooms (Bohus and Rudnicky, 2005). However, there has been little work on possible strategies in more conversational settings, where the dialogue has more open-ended intentions. This paper looks at recovery from non-understanding errors in the context of a robot tour guide and tests the strategies in a user trial. The results suggest that it is beneficial for user enjoyment to use strategies that attempt to move the dialogue on, rather than getting caught up in the error by asking users to repeat themselves.