Robot, asker of questions
Related papers
Collaboration, Dialogue, and Human-Robot Interaction
International Journal of Robotics Research, 2002
Teleoperation can be improved if humans and robots work as partners, exchanging information and assisting one another to achieve common goals. In this paper, we discuss the importance of collaboration and dialogue in human-robot systems. We then present collaborative control, a system model in which human and robot collaborate, and describe its use in vehicle teleoperation.
Collaboration, dialogue, human-robot interaction
Robotics Research, 2003
Teleoperation can be significantly improved if humans and robots work as partners. By adapting autonomy and human-robot interaction to the situation and the user, we can create systems that are easier to use and perform better. In this paper, we discuss the importance of collaboration and dialogue in human-robot systems. We then present a system based on collaborative control, a teleoperation model in which humans and robots collaborate to perform tasks. Finally, we describe our experiences using this system for vehicle teleoperation.
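To make the collaborative control model these two papers describe more concrete, here is a minimal Python sketch of the robot-asks-human dialogue loop: the robot posts a question to the operator and falls back to a safe default if no answer arrives in time, so an unresponsive human degrades performance rather than halting the system. The class and method names are hypothetical illustrations, not the authors' implementation.

```python
# A minimal sketch (hypothetical names, not the authors' code) of the
# collaborative-control dialogue pattern: the robot poses questions to the
# human when its own competence runs out, but falls back to a safe default
# and continues autonomously if no answer arrives in time.
import queue


class CollaborativeController:
    def __init__(self, answer_timeout_s: float = 5.0):
        self.questions = queue.Queue()  # robot -> human
        self.answers = queue.Queue()    # human -> robot
        self.answer_timeout_s = answer_timeout_s

    def ask_human(self, question: str, default: str) -> str:
        """Ask the operator; use the default if they do not respond."""
        self.questions.put(question)
        try:
            return self.answers.get(timeout=self.answer_timeout_s)
        except queue.Empty:
            return default  # operator busy: remain autonomous

    def answer(self, text: str) -> None:
        """Called from the operator interface when the human replies."""
        self.answers.put(text)


if __name__ == "__main__":
    ctrl = CollaborativeController(answer_timeout_s=0.1)
    # No operator answer arrives, so the robot takes the safe default.
    reply = ctrl.ask_human("Is the obstacle ahead traversable?", default="no")
    print(reply)  # -> "no"
```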
User Experience of the CoSTAR System for Instruction of Collaborative Robots
arXiv, 2017
How can we enable novice users to create effective task plans for collaborative robots? Must there be a tradeoff between generalizability and ease of use? To answer these questions, we conducted a user study with the CoSTAR system, which integrates perception and reasoning into a Behavior Tree-based task plan editor. In our study, we asked novice users to perform simple pick-and-place assembly tasks under varying perception and planning capabilities. Our study shows that users found Behavior Trees to be an effective way of specifying task plans. Furthermore, users were able to author task plans more quickly, more effectively, and more generally with the addition of CoSTAR's planning, perception, and reasoning capabilities. Despite these improvements, users rated the concepts associated with these capabilities as less usable, and our results suggest a direction for further refinement.
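To make the Behavior Tree idea concrete, here is a minimal sketch of the kind of pick-and-place plan a novice might author in a BT editor. The Sequence semantics (succeed only if every child succeeds) follow standard BT conventions; the leaf actions are stubs and their names are hypothetical, not CoSTAR's actual blocks.

```python
# A minimal Behavior Tree sketch for a pick-and-place task plan.
# Sequence semantics are standard BT conventions; leaf names are
# hypothetical, not CoSTAR's actual perception/planning blocks.
from typing import Callable, List

SUCCESS, FAILURE = "success", "failure"


class Sequence:
    """Tick children in order; fail on the first failing child."""
    def __init__(self, children: List[Callable[[], str]]):
        self.children = children

    def __call__(self) -> str:
        for child in self.children:
            if child() == FAILURE:
                return FAILURE
        return SUCCESS


def action(name: str, ok: bool = True) -> Callable[[], str]:
    """Wrap a primitive robot skill as a BT leaf (stubbed for illustration)."""
    def leaf() -> str:
        print(f"executing: {name}")
        return SUCCESS if ok else FAILURE
    return leaf


pick_and_place = Sequence([
    action("detect object"),   # perception
    action("move to grasp"),   # planning
    action("close gripper"),
    action("move to bin"),
    action("open gripper"),
])

print(pick_and_place())  # -> success, after all five steps execute
```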
Human-Robot Conversational Interaction (HRCI)
Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction
Conversation is one of the primary methods of interaction between humans and robots. It provides a natural way of communicating with a robot and lowers the barriers posed by other interfaces (e.g., text or touch) that can be difficult for certain populations, such as the elderly or those with disabilities, thereby promoting inclusivity in Human-Robot Interaction (HRI). Work in HRI has contributed significantly to the design, understanding, and evaluation of human-robot conversational interactions. Concurrently, the Conversational User Interfaces (CUI) community has developed with similar aims, though with a wider focus on conversational interactions across a range of devices and platforms. This workshop aims to bring together the CUI and HRI communities to outline key shared opportunities and challenges in developing conversational interactions with robots, resulting in collaborative publications targeted at the CUI 2023 provocations track.
Collaborative Autonomy: Human–Robot Interaction to the Test of Intelligent Help
Electronics
A big challenge in human–robot interaction (HRI) is the design of autonomous robots that collaborate effectively with humans, exhibiting behaviors similar to those humans exhibit when they interact with each other. Indeed, robots are part of daily life in multiple environments (e.g., cultural heritage sites, hospitals, offices, tourist venues and so on). In these contexts, robots have to coexist and interact with a wide spectrum of users who are not necessarily able or willing to adapt their interaction level to the kind requested by a machine: users need to deal with artificial systems whose behaviors must be adapted as much as possible to the goals/needs of the users themselves, or, more generally, to their mental states (beliefs, goals, plans and so on). In this paper, we introduce a cognitive architecture for adaptive and transparent human–robot interaction. The architecture allows a social robot to dynamically adjust its level of collaborative autonomy by restricting or expanding it.
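A minimal sketch of the adjustable-autonomy idea, assuming three illustrative help levels and simple predicates for the user's receptiveness and goals; the level names and the update rule are hypothetical, not the paper's architecture.

```python
# An illustrative sketch (not the paper's architecture) of the core idea:
# the robot expands or restricts its collaborative autonomy depending on
# an estimate of the user's goals and willingness to accept help.
from enum import IntEnum


class AutonomyLevel(IntEnum):
    LITERAL_HELP = 0   # do exactly what the user asked
    EXTENDED_HELP = 1  # also complete implied follow-up steps
    CRITICAL_HELP = 2  # may adjust a request that conflicts with user goals


def adjust_autonomy(user_accepts_help: bool,
                    request_conflicts_with_goal: bool,
                    current: AutonomyLevel) -> AutonomyLevel:
    """Expand autonomy for receptive users; restrict it otherwise."""
    if request_conflicts_with_goal:
        return AutonomyLevel.CRITICAL_HELP
    if user_accepts_help and current < AutonomyLevel.EXTENDED_HELP:
        return AutonomyLevel(current + 1)
    if not user_accepts_help and current > AutonomyLevel.LITERAL_HELP:
        return AutonomyLevel(current - 1)
    return current


level = adjust_autonomy(user_accepts_help=True,
                        request_conflicts_with_goal=False,
                        current=AutonomyLevel.LITERAL_HELP)
print(level.name)  # -> EXTENDED_HELP
```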
Toward Human-Robot Collaboration
Robots have recently been deployed as tour guides in museums, as lawnmowers, as in-home vacuum cleaners, and as remotely operated machines in so-called distant, dangerous, and dirty applications. While methods to endow robots with a degree of autonomy have been a strong research focus, methods for human-machine control have received less attention. As autonomous robots become more ubiquitous, the methods we use to communicate task specifications to them become more crucial. This thesis presents a methodology and a system for the supervisory collaborative control of a remote semi-autonomous mobile robot. The presentation centers on three main aspects of the work and offers a description of the system and the motivations behind its design. The supervisory system for human specification of robot tasks is based on a Collaborative Virtual Environment (CVE), which provides an effective framework for scalable robot autonomy, interaction, and environment visualization. The system affords the specification of deictic commands to the semi-autonomous robot via the spatial CVE interface. Spatial commands can be specified in a manner that takes into account specific everyday notions of collaborative task activity. Visualization of the remote environment is accomplished by combining a virtual model of that environment with video from the robot's camera. Finally, the system was evaluated in a user study that explored design and interaction issues in the context of a remote search task. Study issues centered on the presentation of the CVE, understanding robot competence, presence, control, and interaction. One goal of the system presented in this thesis is to chart a direction in human-machine interaction from direct control toward genuine human-machine collaboration.
Initiative in robot assistance during collaborative task execution
2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 2016
Collaborative robots are quickly gaining momentum in real-world settings. This has motivated many new research questions in human-robot collaboration. In this paper, we address the questions of whether and when a robot should take initiative during joint human-robot task execution. We develop a system capable of autonomously tracking and performing table-top object manipulation tasks with humans and we implement three different initiative models to trigger robot actions. Human-initiated help gives control of robot action timing to the user; robot-initiated reactive help triggers robot assistance when it detects that the user needs help; and robot-initiated proactive help makes the robot help whenever it can. We performed a user study (N=18) to compare these trigger mechanisms in terms of task performance, usage characteristics, and subjective preference. We found that people collaborate best with a proactive robot, yielding better team fluency and high subjective ratings. However, they prefer having control of when the robot should help, rather than working with a reactive robot that only helps when it is needed.
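The three trigger mechanisms compared in the study can be summarized as simple policies. In this sketch, the predicates (user_requested_help, user_is_struggling, robot_can_act) are hypothetical stand-ins for the paper's autonomous tracking system, not its actual interface.

```python
# A sketch of the three initiative models compared in the study.
# The boolean predicates are hypothetical stand-ins for the paper's
# tracking system; each policy decides whether the robot acts now.


def human_initiated(user_requested_help: bool, user_is_struggling: bool,
                    robot_can_act: bool) -> bool:
    """Robot acts only when the user explicitly asks."""
    return user_requested_help and robot_can_act


def reactive(user_requested_help: bool, user_is_struggling: bool,
             robot_can_act: bool) -> bool:
    """Robot acts when it detects that the user needs help."""
    return user_is_struggling and robot_can_act


def proactive(user_requested_help: bool, user_is_struggling: bool,
              robot_can_act: bool) -> bool:
    """Robot helps whenever it can."""
    return robot_can_act


if __name__ == "__main__":
    state = dict(user_requested_help=False, user_is_struggling=True,
                 robot_can_act=True)
    for policy in (human_initiated, reactive, proactive):
        print(policy.__name__, policy(**state))
    # -> human_initiated False, reactive True, proactive True
```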
Whose Job Is It Anyway? A Study of Human-Robot Interaction in a Collaborative Task
Human-Computer Interaction, 2004
Pamela Hinds is a researcher in the Department of Management Science and Engineering at Stanford University. Teresa Roberts is a professional in human-computer interaction, with interests in user-centered design and computer-mediated communication; she has most recently been a senior interaction designer at PeopleSoft and at Sun Microsystems. Hank Jones is a robotics researcher with an interest in user-centered design of human-robot interactions; he recently completed his doctorate in Aeronautics and Astronautics at Stanford University in the Aerospace Robotics Laboratory.