Recovering from Non-Understanding Errors in a Conversational Dialogue System

Towards a dialog strategy for handling miscommunication in human-robot dialog

19th International Symposium on Robot and Human Interactive Communication, 2010

This paper presents a first theoretical framework for a dialog strategy handling miscommunication in natural language Human-Robot Interaction (HRI). On the one hand, the dialog strategy is deduced from findings about human-human communication patterns and coping strategies for miscommunication. On the other hand, relevant cognitive theories concerning human perception serve as a conceptual basis for the dialog strategy. The novel approach is firstly to combine these communication patterns with coping strategies and cognitive theories from human-human interaction (HHI), and secondly to transfer them to HRI as a general dialog strategy for handling miscommunication. The presented approach is applicable to any task-oriented dialog. As a first step, the conversational context is confined to route descriptions, given that asking for directions is a restricted but nevertheless challenging example of task-oriented dialog between humans and a robot.

Development of a Tour–Guide Robot Using Dialogue Models and a Cognitive Architecture

Lecture Notes in Computer Science, 2010

In this paper, we present the development of a tour–guide robot that conducts a poster session through spoken Spanish. The robot is able to navigate around its environment, visually identify informational posters, and explain sections of the posters that users request via pointing gestures. We specify the task by means of dialogue models. A dialogue model defines conversational situations, expectations and robot actions. Dialogue models are integrated into a novel cognitive architecture that allows us to coordinate both human–robot interaction and robot capabilities in a flexible and simple manner. Our robot also incorporates a confidence score on visual outcomes, the history of the conversation and error prevention strategies. Our initial evaluation of the dialogue structure shows the reliability of the overall approach, and the suitability of our dialogue model and architecture to represent complex human–robot interactions, with promising results.

Reformulation Strategies of Repeated References in the Context of Robot Perception Errors in Situated Dialogue

2015

We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot's perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot's understanding of the environment. In this work, we describe the effects that the robot's perceptual errors and the information request options available to the participant had on the reformulation of the referring expressions the participants used when resolving an unsuccessful reference.

Aspects of user specific dialog adaptation for an autonomous robot

2010

The paper gives a survey of multimodal dialog technology and highlights some specifics of human-machine dialog on an autonomous companion robot, especially for elderly, cognitively impaired people, to be developed in the CompanionAble [1] project. The central aspect is adaptation to the user and multi-modality of inputs and outputs, which is essential for natural and intuitive interaction. The paper first introduces a prototypical dialog system and identifies issues of possible user adaptation. Then, a new concept for a dialog system realizing three aspects of adaptation is described: learning the meaning of user inputs, adaptation of the dialog strategy according to the user's experience, and user-specific timing of proactive behaviors such as reminders.

Index Terms—multi-modal human machine dialog, input interpretation, fusion, timing, daytime management

Error Detection and Recovery in Spoken Dialogue Systems

North American Chapter of the Association for Computational Linguistics, 2004

This paper describes our research on both the detection and subsequent resolution of recognition errors in spoken dialogue systems. The paper consists of two major components. The first half concerns the design of the error detection mechanism for resolving city names in our MERCURY flight reservation system, and an investigation of the behavioral patterns of users in subsequent subdialogues involving keypad entry for disambiguation. An important observation is that, upon a request for keypad entry, users are frequently unresponsive to the extent of waiting for a time-out or hanging up the phone. The second half concerns a pilot experiment investigating the feasibility of replacing the solicitation of a keypad entry with that of a "speak-and-spell" entry. A novelty of our work is the introduction of a speech synthesizer to simulate the user, which facilitates development and evaluation of our proposed strategy. We have found that the speak-and-spell strategy is quite effective in simulation mode, but it remains to be tested in real user dialogues.

Balancing Efficiency and Coverage in Human-Robot Dialogue Collection

ArXiv, 2018

We describe a multi-phased Wizard-of-Oz approach to collecting human-robot dialogue in a collaborative search and navigation task. The data is being used to train an initial automated robot dialogue system to support collaborative exploration tasks. In the first phase, a wizard freely typed robot utterances to human participants. For the second phase, this data was used to design a GUI that includes buttons for the most common communications, and templates for communications with varying parameters. Comparison of the data gathered in these phases shows that the GUI enabled a faster pace of dialogue while still maintaining high coverage of suitable responses, enabling more efficient targeted data collection and improvements in natural language understanding using GUI-collected data. As a promising first step towards interactive learning, this work shows that our approach enables the collection of useful training data for navigation-based HRI tasks.

Robot perception errors and human resolution strategies in situated human–robot dialogue

Advanced Robotics, 2017

We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot's perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot's understanding of the environment. We quantify the impact of perception errors on the dialogues, and investigate resolution attempts by users at a structural level and at the level of referring expressions.

ScoutBot: A Dialogue System for Collaborative Navigation

Proceedings of ACL 2018, System Demonstrations, 2018

ScoutBot is a dialogue interface to physical and simulated robots that supports collaborative exploration of environments. The demonstration will allow users to issue unconstrained spoken language commands to ScoutBot. ScoutBot will prompt for clarification if the user's instruction needs additional input. It is trained on human-robot dialogue collected from Wizard-of-Oz experiments, where robot responses were initiated by a human wizard in previous interactions. The demonstration will show a simulated ground robot (Clearpath Jackal) in a simulated environment supported by ROS (Robot Operating System).

Using Knowledge about Misunderstandings to Increase the Robustness of Spoken Dialogue Systems

2010

This paper proposes a new technique to enhance the performance of spoken dialogue systems employing a method that automatically corrects semantic frames which are incorrectly generated by the semantic analyser of these systems. Experiments have been carried out using two spoken dialogue systems previously developed in our lab: Saplen and Viajero, which employ prompt-dependent and prompt-independent language models for speech recognition. The results obtained from 10,000 simulated dialogues show that the technique improves the performance of the two systems for both kinds of language modelling, especially for the prompt-independent language model. Using this type of model the Saplen system increased sentence understanding by 19.54%, task completion by 26.25%, word accuracy by 7.53%, and implicit recovery of speech recognition errors by 20.30%, whereas for the Viajero system these figures increased by 14.93%, 18.06%, 6.98% and 15.63%, respectively.