A Configurable Dialogue Platform for ASORO Robots
Related papers
A multi-modal dialog system for a mobile robot
Proc. Int. Conf. on Spoken Language …, 2004
A challenging domain for dialog systems is their use for communication with robotic assistants. In contrast to the classical use of spoken language for information retrieval, on a mobile robot multimodal dialogs and the dynamic interaction of the robot system with its environment have to be considered. In this paper we present the dialog system developed for BIRON, the Bielefeld Robot Companion. The system is able to handle multimodal dialogs by augmenting semantic interpretation structures derived from speech with hypotheses for additional modalities, e.g., speech-accompanying gestures. The architecture of the system is modular, with the dialog manager as the central component. In order to be aware of the dynamic behavior of the robot itself, the possible states of the robot control system are integrated into the dialog model. For flexible use and easy configuration, the communication between the individual modules as well as the declarative specification of the dialog model are encoded in XML. We present example interactions with BIRON from the "home-tour" scenario defined within the COGNIRON project.
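The fusion step this abstract describes can be pictured with a small sketch (hypothetical data structures and names, not BIRON's actual code): a semantic frame derived from speech is augmented with the best-scoring, temporally overlapping hypothesis from another modality, here a pointing gesture.

```python
def overlaps(a, b):
    """True if time intervals a = (start, end) and b = (start, end) overlap."""
    return a[0] < b[1] and b[0] < a[1]

def augment_with_gesture(frame, gesture_hypotheses, min_score=0.5):
    """Attach the best-scoring, temporally overlapping gesture to the frame."""
    candidates = [g for g in gesture_hypotheses
                  if g["score"] >= min_score and overlaps(frame["time"], g["time"])]
    if not candidates:
        return frame  # no usable gesture: keep the speech-only interpretation
    best = max(candidates, key=lambda g: g["score"])
    augmented = dict(frame)
    augmented["deictic_ref"] = best["target"]  # resolve "this" to the pointed-at object
    return augmented

speech = {"intent": "identify", "object": "this", "time": (1.0, 2.0)}
gestures = [{"target": "table", "score": 0.8, "time": (1.2, 1.8)},
            {"target": "door", "score": 0.3, "time": (1.0, 1.5)}]
print(augment_with_gesture(speech, gestures)["deictic_ref"])  # table
```

The speech-only frame is left untouched when no gesture passes the score and overlap checks, so the dialog manager always receives a well-formed interpretation.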
A spoken dialogue system to control robots
Projektarbeten 2002, 2002
Speech recognition is available on ordinary personal computers and is starting to appear in standard software applications. A known problem with speech interfaces is their integration into current graphical user interfaces.
Demonstration of a spoken dialogue interface for planning activities of a semi-autonomous robot
Proceedings of the Second International Conference on Human Language Technology Research, 2002
Planning and scheduling in the face of uncertainty and change pushes the capabilities of both planning and dialogue technologies by requiring complex negotiation to arrive at a workable plan. Planning for the use of semi-autonomous robots involves negotiation among multiple participants with competing scientific and engineering goals to co-construct a complex plan. In NASA applications this plan construction is done under severe time pressure, so having a dialogue interface to the plan-construction tools can aid rapid completion of the process. But this will put significant demands on spoken dialogue technology, particularly in the areas of dialogue management and generation. The dialogue interface will need to be able to handle the complex dialogue strategies that occur in negotiation dialogues, including hypotheticals and revisions, and the generation component will require an ability to summarize complex plans.
An Event-Based Conversational System for the Nao Robot
Parts of the research reported on in this paper were performed in the context of the EU-FP7 project ALIZ-E (ICT-248116), which develops embodied cognitive robots for believable any-depth affective interactions with young users over an extended and possibly discontinuous period.
A Dialogue Manager for an Intelligent Mobile Robot
Proceedings of the 4th International Workshop on Natural Language Processing and Cognitive Science, 2007
This paper focuses on a dialogue manager developed for Carl, an intelligent mobile robot. It uses the Information State (IS) approach and is based on a Knowledge Acquisition and Management (KAM) module that integrates information obtained from various interlocutors. This mixed-initiative dialogue manager handles pronoun resolution, is capable of asking different kinds of clarification questions, and comments on information based on the knowledge acquired so far.
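The Information State approach mentioned above can be sketched minimally (hypothetical field and rule names, not the actual Carl/KAM code): the dialogue state is a structured record, and each dialogue move is integrated by an update rule with a precondition and an effect.

```python
from dataclasses import dataclass, field

@dataclass
class InformationState:
    common_ground: dict = field(default_factory=dict)  # facts acquired so far
    agenda: list = field(default_factory=list)         # open questions; first is current

def integrate_answer(state, move):
    """Update rule: an answer to the current question enters the common
    ground and removes that question from the agenda."""
    if move.get("type") == "answer" and state.agenda:
        question = state.agenda.pop(0)
        state.common_ground[question] = move["content"]
    return state

state = InformationState(agenda=["user_name", "destination"])
integrate_answer(state, {"type": "answer", "content": "Ana"})
print(state.common_ground)  # {'user_name': 'Ana'}
```

A real IS-based manager has many such rules (grounding, clarification, topic shift), all reading and writing the same shared state record.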
Augmented Robotics Dialog System for Enhancing Human–Robot Interaction
Sensors, 2015
Augmented reality, augmented television and second screen are cutting-edge technologies that provide end users with extra, enhanced information related to certain events in real time. This enriched information helps users better understand such events, while providing a more satisfactory experience. In the present paper, we apply this idea to human-robot interaction (HRI), i.e., to how users and robots exchange information. The ultimate goal of this paper is to improve the quality of HRI by developing a new dialog manager system that incorporates enriched information from the semantic web. This work presents the augmented robotic dialog system (ARDS), which uses natural language understanding mechanisms to provide two features: (i) grammar-free multimodal (verbal and/or written) text input; and (ii) a contextualization of the information conveyed in the interaction. This contextualization is achieved by information enrichment techniques that link the information extracted from the dialog with extra information about the world available in semantic knowledge bases. This enriched or contextualized information (the terms information enrichment, semantic enhancement and contextualized information are used interchangeably in the rest of this paper) offers many possibilities in terms of HRI. For instance, it can enhance the robot's pro-activeness during a human-robot dialog (the enriched information can be used to propose new topics during the dialog, while ensuring a coherent interaction). Another possibility is to display additional multimedia content related to the enriched information on a visual device. This paper describes the ARDS and shows a proof of concept of its applications.
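The enrichment-then-proactiveness loop described in this abstract can be illustrated with a toy sketch (hypothetical data and names; ARDS queries real semantic web knowledge bases, whereas this uses an in-memory dictionary): terms extracted from the dialog are linked to extra facts, and those facts seed a coherent follow-up topic.

```python
KB = {  # stand-in for a semantic knowledge base
    "madrid": {"type": "city", "country": "Spain", "landmark": "Prado Museum"},
}

def enrich(extracted_terms, kb=KB):
    """Link each recognized term to the extra information the KB holds for it."""
    return {t: kb[t.lower()] for t in extracted_terms if t.lower() in kb}

def propose_topic(enriched):
    """Use enriched facts to keep the dialog going proactively."""
    for term, facts in enriched.items():
        if "landmark" in facts:
            return f"Speaking of {term.title()}, have you visited the {facts['landmark']}?"
    return None

info = enrich(["Madrid", "yesterday"])
print(propose_topic(info))
```

Terms the knowledge base does not cover ("yesterday" here) are simply dropped, so the proposed topic always stays anchored to something the user actually mentioned.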
Spoken Language Processing in a Conversational System for Child-Robot Interaction
Proceedings of WOCCI 2012 - Workshop on Child, Computer and Interaction Satellite Event of INTERSPEECH, 2012
We describe a conversational system for child-robot interaction built with an event-based integration approach using the Nao robot platform with the Urbi middleware within the ALIZ-E project. Our integrated system includes components for the recognition, interpretation and generation of speech and gestures, dialogue management and user modeling. We describe our approach to processing spoken input and output and highlight some practical implementation issues. We also present preliminary results from experiments where young Italian users interacted with the system.
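The event-based integration approach mentioned above can be sketched with a minimal publish/subscribe bus (hypothetical topic names; the actual system integrates components through the Urbi middleware on the Nao): components exchange events over a bus instead of calling each other directly.

```python
class EventBus:
    """Minimal publish/subscribe hub connecting independent components."""

    def __init__(self):
        self._handlers = {}

    def subscribe(self, topic, handler):
        self._handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._handlers.get(topic, []):
            handler(payload)

bus = EventBus()
log = []

# The dialogue manager reacts to recognized speech without knowing
# which component produced it.
bus.subscribe("speech.recognized", lambda utt: log.append(f"DM got: {utt}"))
bus.publish("speech.recognized", "ciao Nao")
print(log)  # ['DM got: ciao Nao']
```

The payoff is loose coupling: the speech recognizer, gesture interpreter and user model can be developed and restarted independently as long as they agree on event names.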
Dynamic Speech Interaction for Robotic Agents
Lecture Notes in Control and Information Sciences, 2008
Research in mobile service robotics aims at the development of intuitive speech interfaces for human-robot interaction. We see a service robot as part of an intelligent environment and propose a concept in which a robot does not only offer its own features via natural speech interaction but also becomes a transactive agent that exposes the interfaces of other services. The provided framework makes provisions for the dynamic registration of speech interfaces, allowing a loosely coupled, flexible and scalable environment. An intelligent environment can evolve out of multimedia devices, home automation, communication, security, and emergency technology. These appliances typically offer wireless or stationary control interfaces. The number of different control paradigms and differently laid-out control devices imposes a certain limit on usability. As speech interfaces offer a more natural way to interact intuitively with technology, we propose to centralize a general speech engine on a robotic unit. This has two reasons: the acceptance of talking to a mobile unit is estimated to be higher than that of talking to an ambient system where no communication partner is visible. Additionally, the devices or functionalities to be controlled in most cases do not provide a speech interface but offer only proprietary access.
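The dynamic-registration idea can be sketched as follows (a hypothetical API, not the paper's framework): appliances announce a small speech interface to the robot's central engine, and utterances are routed to the registered service whose keywords match best.

```python
class SpeechServiceRegistry:
    """Central engine with which environment services register at runtime."""

    def __init__(self):
        self._services = []

    def register(self, name, keywords, handler):
        """A device in the environment announces its speech interface."""
        self._services.append((name, {k.lower() for k in keywords}, handler))

    def dispatch(self, utterance):
        """Route the utterance to the best-matching registered service."""
        words = set(utterance.lower().split())
        best = max(self._services, key=lambda s: len(words & s[1]), default=None)
        if best is None or not (words & best[1]):
            return "no matching service"
        name, _, handler = best
        return handler(utterance)

registry = SpeechServiceRegistry()
registry.register("lights", {"light", "lamp"}, lambda u: "lights toggled")
registry.register("hifi", {"music", "volume"}, lambda u: "volume changed")
print(registry.dispatch("please turn the light on"))  # lights toggled
```

Because services register and deregister at runtime, new appliances extend the robot's speech coverage without rebuilding the central engine, which is the loose coupling the abstract argues for.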
Developing Human-Robot Dialogue Management Formally
2005
In shared-control systems, such as intelligent service robots, a human operator and an automated technical system are interdependently in charge of control. Natural language dialogues have long been acknowledged as a potentially fruitful modality for instructing, describing and negotiating in human-machine interfaces. Since shared-control systems are often embedded in safety-critical devices, formal methods are widely used for improving the quality of such systems. In this paper, we present a formal-method-based approach to dialogue management and show how it enhances the clarity of dialogue modelling, provides several engineering properties (e.g., validation, testing and simulation) and supports the generation of clarification subdialogues.
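A small sketch of what a formal treatment buys here (a hypothetical model, not the paper's notation): if the dialogue model is an explicit transition table, properties such as "every state can handle an unclear input by entering a clarification subdialogue" and "no transition leads to an undefined state" can be checked mechanically before deployment.

```python
DIALOGUE_MODEL = {
    "idle":    {"command": "confirm", "unclear": "clarify"},
    "confirm": {"yes": "execute", "no": "idle", "unclear": "clarify"},
    "clarify": {"answer": "confirm", "unclear": "clarify"},
    "execute": {"done": "idle", "unclear": "clarify"},
}

def validate(model):
    """Check two engineering properties of a dialogue transition table."""
    problems = []
    for state, edges in model.items():
        if "unclear" not in edges:          # clarification always possible
            problems.append(f"{state}: no clarification edge")
        for target in edges.values():       # no dangling transitions
            if target not in model:
                problems.append(f"{state}: undefined target {target}")
    return problems

print(validate(DIALOGUE_MODEL))  # []
```

The same table can drive simulation and test-case generation, which is the kind of engineering payoff the abstract attributes to formal dialogue modelling.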