What topic do you want to hear about? A bilingual talking robot using English and Japanese Wikipedias
Related papers
Multilingual WikiTalk: Wikipedia-based talking robots that switch languages
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2015
At SIGDIAL-2013 our talking robot demonstrated Wikipedia-based spoken information access in English. Our new demo shows a robot speaking different languages, getting content from different language Wikipedias, and switching languages to meet the linguistic capabilities of different dialogue partners.
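As a rough illustration of the language-switching idea this abstract describes, the sketch below picks the Wikipedia edition that matches the partner's language and fetches a topic summary from it. This is a minimal sketch, not the demo's actual code: it uses Wikipedia's public REST summary endpoint, and the language-detection step is stubbed out as an assumption.

    # Minimal sketch, not the demo's actual code. Wikipedia's public REST
    # summary endpoint is real; the detected user language is a stub.
    import json
    import urllib.parse
    import urllib.request

    def fetch_summary(lang, title):
        """Fetch the lead summary of `title` from the `lang` Wikipedia edition."""
        url = (f"https://{lang}.wikipedia.org/api/rest_v1/page/summary/"
               f"{urllib.parse.quote(title)}")
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["extract"]

    # Stub: in the demo the robot infers the partner's language itself.
    user_language = "ja"
    title = {"en": "Robot", "ja": "ロボット"}[user_language]
    print(fetch_summary(user_language, title))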
Advances in Wikipedia-based Interaction with Robots
Proceedings of the 2014 Workshop on Multimodal, Multi-Party, Real-World Human-Robot Interaction - MMRWHRI '14, 2014
The paper describes advances in Wikipedia-based human-robot interaction. After reviewing the current capabilities of talking robots that use Wikipedia as an information source, the paper presents methods that support new capabilities. These include language-switching and multimodal behaviour-switching when using Wikipedias in many languages, robot initiatives to suggest new topics from Wikipedia based on semantic similarity to the current topic, and the capability of the robot to listen to the user talking and to recognize entities mentioned by the user that have Wikipedia links.
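The topic-suggestion mechanism mentioned in this abstract can be illustrated with a toy sketch: candidate topics are the links in the current article, scored against the current topic by a bag-of-words cosine similarity. This is purely illustrative, not the authors' implementation; a real system would use proper semantic representations and live Wikipedia data, and all names and summaries below are placeholders.

    # Toy sketch of topic suggestion by semantic similarity (placeholder
    # data, not the authors' method; a real system would use embeddings).
    import math
    from collections import Counter

    def bow(text):
        """Lower-cased bag-of-words counts as a toy text representation."""
        return Counter(text.lower().split())

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors."""
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm = (math.sqrt(sum(v * v for v in a.values()))
                * math.sqrt(sum(v * v for v in b.values())))
        return dot / norm if norm else 0.0

    def suggest_topic(current_summary, candidates):
        """Return the linked topic whose summary best matches the current one."""
        cur = bow(current_summary)
        return max(candidates, key=lambda name: cosine(cur, bow(candidates[name])))

    candidates = {  # illustrative link targets and summaries
        "Humanoid robot": "a robot with a body built to resemble the human body",
        "Speech recognition": "speech technology that lets computers recognize spoken words",
        "Mount Fuji": "the highest mountain in Japan, an active stratovolcano",
    }
    print(suggest_topic("talking robots that recognize and produce speech", candidates))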
2016
The paper describes topic shifting in dialogues with a robot that provides information from Wikipedia. The work focuses on a double topical construction of dialogue coherence, which refers to discourse coherence on two levels: the evolution of dialogue topics via the interaction between the user and the robot system, and the creation of discourse topics via the content of the Wikipedia article itself. The user selects topics that are of interest to her, and the system builds a list of potential next topics from the links in the article and from keywords extracted from the article. The described system deals with Wikipedia articles, but could easily be adapted to other digital information-providing systems.
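A minimal sketch of how such a list of potential next topics could be assembled, under assumed data structures: candidates come from the links in the current article plus keywords extracted from its text. The naive frequency-based keyword extractor is illustrative only, not the described system's actual method.

    # Illustrative sketch only: merge article links with naively extracted
    # keywords to form the list of potential next topics.
    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "it", "that"}

    def extract_keywords(text, n=5):
        """Pick the n most frequent non-stopword tokens as rough keywords."""
        tokens = re.findall(r"[a-z]+", text.lower())
        return [w for w, _ in Counter(t for t in tokens
                                      if t not in STOPWORDS).most_common(n)]

    def potential_topics(article_text, article_links):
        """Merge article links and extracted keywords, links first, no duplicates."""
        topics = list(article_links)
        seen = {t.lower() for t in topics}
        for kw in extract_keywords(article_text):
            if kw not in seen:
                topics.append(kw)
                seen.add(kw)
        return topics

    links = ["Humanoid robot", "Dialogue system"]
    text = "The robot reads Wikipedia articles aloud and shifts topics smoothly."
    print(potential_topics(text, links))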
Open-Domain Information Access with Talking Robots
2013
The demo shows Wikipedia-based open-domain information access dialogues with a talking humanoid robot. The robot uses face-tracking, nodding and gesturing to support interaction management and the presentation of information to the partner.
WikiTalk human-robot interactions
Proceedings of the 15th ACM on International conference on multimodal interaction, 2013
The demo shows WikiTalk, a Wikipedia-based open-domain information access dialogue system implemented on a talking humanoid robot. The robot behaviour integrates speech, nodding, gesturing and face-tracking to support interaction management and the presentation of information to the partner.
Towards SamiTalk: A Sami-Speaking Robot Linked to Sami Wikipedia
Lecture Notes in Electrical Engineering
We describe our work towards developing SamiTalk, a robot application for the North Sami language. With SamiTalk, users will hold spoken dialogues with a humanoid robot that speaks and recognizes North Sami. The robot will access information from the Sami Wikipedia, talk about requested topics using the Wikipedia texts, and make smooth topic shifts to related topics using the Wikipedia links. SamiTalk will be based on the existing WikiTalk system for Wikipedia-based spoken dialogues, with newly developed speech components for North Sami.
Open-Domain Conversation with a NAO Robot
The paper presents a multimodal conversational interaction system for the Nao humanoid robot. The system was developed at the 8th International Summer Workshop on Multimodal Interfaces, Metz, 2012. We implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao. This greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics. In addition to speech interaction, we developed a wide range of multimodal interactive behaviours by the robot, including face-tracking, nodding, communicative gesturing, proximity detection and tactile interrupts. We made video recordings of user interactions and used questionnaires to evaluate the system. We further extended the robot's capabilities by linking Nao with Kinect.
Situated Interaction in a Multilingual Spoken Information Access Framework
Signals and Communication Technology, 2016
The paper describes aspects of situated interaction when a humanoid robot uses the WikiTalk system as a spoken language dialogue interface. WikiTalk is a speech-based open-domain information access system that enables the user to move around Wikipedia from topic to topic and have chunks of interesting articles read out aloud. The interactions with the robot are situated: they take place in a particular context and are driven according to the user's interest and focus of attention. The interactions are also multimodal as both user and robot extend their communicative repertoire with multimodal signals. The robot uses face-tracking, nodding and gesturing to support interaction management and the presentation of new information to the partner, while the user speaks, moves, and can touch the robot to interrupt it.
Integrating Pointing Gestures into a Spanish-spoken Dialog System for Conversational Service Robots
International Conference on Agents and Artificial Intelligence, 2010
In this paper we present our work on the integration of human pointing gestures into a spoken dialog system in Spanish for conversational service robots. The dialog system is composed of a dialog manager and an interpreter that guide the spoken dialog and robot actions in terms of user intentions and relevant environmental stimuli associated with the current conversational situation. We demonstrate our approach by developing a tour-guide robot that is able to move around its environment, visually recognize informational posters, and explain sections of the poster selected by the user via pointing gestures. This robot also incorporates simple methods to qualify confidence in its visual outcomes, to inform about its internal state, and to start error-prevention dialogs whenever necessary. Our results show the reliability of the overall approach for modeling complex multimodal human-robot interactions.
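As a hedged sketch of the general idea of fusing a pointing gesture with dialog state (all class and field names below are hypothetical, not the paper's API): when the speech recognizer yields an underspecified request, the gesture target resolves the referent, and low visual confidence triggers an error-prevention confirmation, echoing the paper's approach.

    # Hypothetical names throughout; a sketch of speech-gesture fusion,
    # not the paper's implementation.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PointingEvent:
        target: str        # e.g. the poster section the user points at
        confidence: float  # visual recognizer's confidence in the target

    def resolve_request(utterance: str, gesture: Optional[PointingEvent]) -> str:
        """Combine speech and gesture; confirm when vision is unsure."""
        if "that" in utterance and gesture is not None:
            if gesture.confidence < 0.5:
                return f"Did you mean the section '{gesture.target}'?"
            return f"Explaining section '{gesture.target}'."
        return "Which section would you like to hear about?"

    print(resolve_request("tell me about that section", PointingEvent("Results", 0.8)))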