Human Multi-robots Interaction with High Virtual Reality Abstraction Level
Related papers
Virtual and augmented reality tools for teleoperation: improving distant immersion and perception
Transactions on Edutainment II, 2009
This paper reports on the development of a collaborative system that enables the tele-operation of groups of robots. The general aim is to allow a group of tele-operators to share the control of robots. This system enables the joint team of operators and robots to achieve complex tasks such as inspecting an area or exploring unknown parts of an environment. Thanks to virtual and augmented reality techniques, a Virtual and Augmented Collaborative Environment (VACE) is built. The latter supports an N * M * K scheme: N ≥ 1 tele-operators control M ≥ 1 robots at K ≥ 1 abstraction levels. Indeed, our VACE allows N people to control any robot at different abstraction levels (from individual actuator control, K = 1, to final status specification, K = 3). On the other hand, the VACE makes it possible to build synthetic representations of the robots and their world. Robots may appear to tele-operators as individuals or be reduced to a single virtual entity. We present in this paper an overview of this system and an application, namely a museum visit. We show how visitors can control robots and improve their immersion using a head-tracking system combined with a VR helmet to control the active vision systems on the remote mobile robots. We also introduce the ability to control the remote robots' configuration at a group level. We finally show how Augmented and Virtual Reality add-ons are included to ease the execution of remote tasks.
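The N * M * K scheme can be pictured as a simple mapping from operators to robots and abstraction levels. The sketch below is illustrative only: the robot and operator identifiers are arbitrary, the name of the K = 2 level is assumed, and the paper does not describe an implementation.

```python
# Minimal sketch (not from the paper) of the N * M * K control scheme:
# N tele-operators share control of M robots, each addressable at K levels.
from dataclasses import dataclass, field
from enum import IntEnum

class Level(IntEnum):
    ACTUATOR = 1   # K = 1: individual actuator control (from the abstract)
    BEHAVIOUR = 2  # K = 2: intermediate level (name assumed)
    GOAL = 3       # K = 3: final status specification (from the abstract)

@dataclass
class Robot:
    name: str
    log: list = field(default_factory=list)

    def command(self, operator: str, level: Level, payload: str) -> None:
        # Any operator may address this robot at any abstraction level.
        self.log.append((operator, level, payload))

# N tele-operators, M robots, K levels: every combination is allowed.
robots = {name: Robot(name) for name in ("r1", "r2", "r3")}  # M = 3
robots["r1"].command("operator_a", Level.ACTUATOR, "left wheel +0.2 rad/s")
robots["r1"].command("operator_b", Level.GOAL, "reach room B")
```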
Advances in Computer- …, 2009
This paper deals with the usage of Virtual Reality and Scenario Languages in the field of teleoperation: how to enable a group of teleoperators to control, in a collaborative way, groups of real robots, in turn collaborating with each other to achieve complex tasks; such tasks include inspecting a dangerous area or exploring a partially unknown environment. The main goal is to obtain efficient, natural and innovative interactions in such a context. We first present the usage of Collaborative Virtual Environments (CVE) to obtain a unified, simplified, virtual abstraction of distributed, complex, real robots. We show how this virtual environment offers a peculiar ability: to free teleoperators from space and time constraints. Then we present our original usage of Scenario Languages to describe complex and collaborative tasks in a natural and flexible way. Finally, we validate the proposed framework through our Teleoperation platform ViRAT.
Towards multimodal human-robot interaction in large scale virtual environment
2008
Human Operators (HO) of telerobotics systems may be able to achieve complex operations with robots. Designing usable and effective Human-Robot Interaction (HRI) is very challenging for system developers and human factors specialists. The search for new metaphors and techniques for HRI adapted to telerobotics systems has led to the conception of Multimodal HRI (MHRI). MHRI allows users to interact naturally and easily with robots thanks to the combination of many devices and an efficient Multimodal Management System (MMS). Such a system should bring a new user experience in terms of natural interaction, usability, efficiency, and flexibility to HRI systems. Good management of multimodality is therefore essential. Moreover, the MMS must be transparent to the user in order to be efficient and natural.
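As an illustration of what a Multimodal Management System might do, the sketch below fuses two hypothetical modalities (speech and pointing gestures) into single commands using a simple time-window rule. The fusion strategy, event format, and modality names are assumptions, not the paper's design.

```python
# Hypothetical MMS fusion rule: pair each speech event with the nearest
# gesture event in time, within a short window. Illustrative only.
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "speech" or "gesture"
    value: str      # e.g. "go there" or "target: (3, 4)"
    t: float        # timestamp in seconds

def fuse(events: list[Event], window: float = 1.0) -> list[str]:
    """Combine co-occurring speech and gesture events into commands."""
    speech = [e for e in events if e.modality == "speech"]
    gestures = [e for e in events if e.modality == "gesture"]
    commands = []
    for s in speech:
        near = [g for g in gestures if abs(g.t - s.t) <= window]
        if near:
            g = min(near, key=lambda g: abs(g.t - s.t))
            commands.append(f"{s.value} @ {g.value}")
    return commands

print(fuse([Event("speech", "go there", 0.4),
            Event("gesture", "target: (3, 4)", 0.6)]))
# ['go there @ target: (3, 4)']
```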
Virtual Reality-Based Interface for Advanced Assisted Mobile Robot Teleoperation
Applied Sciences
This work proposes a new interface for the teleoperation of mobile robots based on virtual reality that allows a natural and intuitive interaction and cooperation between the human and the robot, which is useful for many situations, such as inspection tasks, the mapping of complex environments, etc. Contrary to previous works, the proposed interface does not seek the realism of the virtual environment but provides the minimum elements necessary to allow the user to carry out the teleoperation task in a more natural and intuitive way. The teleoperation is carried out in such a way that the human user and the mobile robot cooperate in a synergistic way to properly accomplish the task: the user guides the robot through the environment in order to benefit from the intelligence and adaptability of the human, whereas the robot is able to automatically avoid collisions with the objects in the environment in order to benefit from its fast response. The latter is carried out using the ...
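The described cooperation, where the user supplies the guidance and the robot automatically avoids collisions, can be illustrated with a simple shared-control rule: attenuate the component of the operator's commanded velocity that points at a nearby obstacle. This is a generic sketch under assumed 2D kinematics, not the controller from the paper.

```python
# Generic shared-control sketch (an assumption, not the paper's method):
# the operator's velocity command is deflected away from close obstacles.
import math

def safe_velocity(user_v, obstacles, robot_pos, safe_dist=1.0):
    """user_v: (vx, vy) from the operator; obstacles: list of (x, y)."""
    vx, vy = user_v
    for ox, oy in obstacles:
        dx, dy = ox - robot_pos[0], oy - robot_pos[1]
        d = math.hypot(dx, dy)
        if 0 < d < safe_dist:
            # Remove the velocity component pointing at the obstacle,
            # scaled by how deep the robot is inside the safety radius.
            toward = (vx * dx + vy * dy) / d
            if toward > 0:
                k = toward * (1.0 - d / safe_dist)
                vx -= k * dx / d
                vy -= k * dy / d
    return (vx, vy)

# The operator drives straight at an obstacle; the command is deflected.
print(safe_velocity((1.0, 0.0), obstacles=[(0.5, 0.1)], robot_pos=(0.0, 0.0)))
```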
A Distributed Architecture for Collaborative Teleoperation using Virtual Reality and Web Platforms
2009 6th IEEE Consumer Communications and Networking Conference, 2009
Augmented Reality (AR) can provide a Human Operator (HO) with real help in achieving complex tasks, such as remote control of robots and cooperative tele-assistance. Using appropriate augmentations, the HO can interact faster, more safely, and more easily with the remote real world. In this paper, we present an extension of an existing distributed software and network architecture for collaborative teleoperation based on networked human-scaled mixed reality and a mobile platform. The first teleoperation system was composed of a VR application and a Web application. However, the two systems could not be used together, and it was impossible to control a distant robot from both simultaneously. Our goal is to update the teleoperation system to permit heterogeneous collaborative teleoperation between the two platforms. An important feature of this interface is the use of different Virtual Reality platforms and different mobile platforms to control one or many robots. The first aim of this work is to develop collaborative robot teaching with the use of a virtual or real robot.
Virtual Reality in Cooperative Teleoperation
2002
Virtual reality makes it possible to build systems that enhance communication by facilitating information display. In teleoperation applications, virtual reality improves task performance thanks to intuitive manipulation and exploration of the working environment. Moreover, an interactive 3D simulation makes it possible to give "virtual assistance" to the operator by adding guides or actors, present or not in the real world, which aid the operator in task learning or execution. Our research in telerobotics systems involves the integration of (real or virtual) autonomous entities as assistants for teleoperation missions. This paper describes a platform for prototyping teleoperation systems that is able to manage teleoperated and autonomous entities. This platform, called ASSET, includes control of interaction and acting devices, 3D rendering, and easy integration of user components. Our system is analyzed in this paper and a concrete example of its applicability in cooperative teleoperation is shown.
Team-centered virtual interactive presence for adjustable autonomy
Space, 2005
Current solutions for autonomy plan creation, monitoring and modification are complex, resulting in loss of flexibility and safety. The size of ground control operations and the number of aviation accidents involving automation are clear indicators of this problem. The solution is for humans to naturally collaborate with autonomous systems. Visual spatial information is a common reference for increased team situation awareness between humans and adaptive autonomous systems such as robots, satellites, and agents. We use a team-centered, augmented reality spatial dialog approach to improve human-automation interaction for collaborative assembly, repair, and exploration. With our spatial dialog approach, we provide a data-rich virtual interactive presence for collaborating with autonomous systems by incorporating the spatial context along with the spoken context. Spatial Dialog is an interaction and communication technology that uses a deeper understanding of spatial context and a richer spatial vocabulary to realize improved human-computer symbiosis. Spatial context is useful for communicating, understanding and remembering information. It includes the location of objects and people, and the place in which events are occurring. Using computers that see, we can pick out faces in a scene, track people, and follow where people are pointing and the objects they use. Using augmented reality and a real-time understanding of the spatial scene, we can overlay information on the real world or virtual models as a key spatial component of the dialog process, especially for remote virtual tele-presence, tele-supervision, and tele-science cases.
2011
This research proposes a human-multirobot system with semi-autonomous ground robots and a UAV view for contaminant localization tasks. A novel Augmented Reality based operator interface has been developed. The interface uses an over-watch camera view of the robotic environment and allows the operator to direct each robot individually or in groups. It uses an A* path planning algorithm to ensure obstacles are avoided and to free the operator for higher-level tasks. It also displays sensor information from each individual robot directly on the robot in the video view. In addition, a combined sensor view can be displayed, which helps the user pinpoint source information. The sensors on each robot monitor the contaminant levels, and a virtual display of the levels is given to the user, allowing them to direct the multiple ground robots towards the hidden target. This paper reviews the user interface and describes several initial usability tests that were performed. This research demonst...
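Since the interface relies on A* for obstacle-avoiding paths, here is a generic grid-based A* sketch, assuming a 4-connected grid, unit step costs, and a Manhattan heuristic; the paper's actual map representation and costs are not specified.

```python
# Generic A* on an occupancy grid (illustrative assumptions, see above).
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in seen:
                    heapq.heappush(frontier,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
# [(0,0), (0,1), (0,2), (1,2), (2,2), (2,1), (2,0)]
```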
Robot task execution with telepresence using virtual reality technology
1998
Robotic manipulators are widely used to replace human operators in tasks that are repetitive in nature. However, there are many tasks that are non-repetitive, unpredictable, or hazardous to the human operators. With teleoperation, or remote control, such tasks can still be performed using robotic manipulators. A suitable platform with visual and mechanical feedback is deemed necessary to simplify the operation of such a system. This paper describes the design and implementation of a telepresent robot control system using virtual reality (VR) instruments. The system includes a stereo Head-Eye Module (HEM) with 3 degrees of freedom, a high-resolution stereo Head Mounted Display (HMD) for remote supervision, and a 6-degree-of-freedom articulated robotic manipulator. The motion of the operator's head and hand is tracked using 6-degree-of-freedom magnetic trackers. Implementation of the system includes the mechanical design and fabrication of the HEM, and the overall software and hardware integration. The operation of the system was subsequently demonstrated by performing two specific tasks.