Immersive telepresence on the operation of unmanned vehicles

Examining the Effects of Perceived Telepresence, Interactivity, and Immersion on Pilot Situation Awareness During a Virtual Reality Flight Exercise

Flight simulators that use virtual reality (VR) displays are becoming more prevalent in the aviation domain, enabling situation awareness (SA) training and assessment paradigms that integrate a broad range of aircraft types and flight environments. Research has identified three key components of user psychological experiences during VR exposure that may affect cognitive performance: immersion, telepresence, and interactivity. The primary objective of this study was to validate and quantify the effects of these VR experience constructs on pilot SA at the levels of information processing and comprehension. Effects of age and experience were also explored, as these factors are known to influence achievement of SA. Findings from this research will provide insight into the ways in which pilots' psychological experiences of VR flight simulators affect their cognitive processing. Moreover, the results of this work will inform the design of improved VR systems for the training and assessment of pilot SA.

Comparing situation awareness for two unmanned aerial vehicle human interface approaches

2006

Our goal is to improve the design of human-Unmanned Aerial Vehicle (UAV) interaction so operators can have better situation awareness (SA) of conditions pertaining to the UAVs. We developed a UAV interaction design approach that uses pre-loaded terrain data to augment real-time video data sensed by the UAVs. We hypothesized that augmentation of the video in this manner would provide better SA than a video stream alone. To test the hypothesis, we performed a counterbalanced within-subjects experiment in which the independent variable was video presentation approach. Our results show an increase in comprehension of 3D spatial relationships between the UAV and points on the earth when experiment participants were given an augmented video presentation, as evidenced by a statistically significant difference in participants' mapping accuracy. We believe our results will generalize to situations beyond UAVs to those situations in which people must monitor and comprehend real-time, map-based information.
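The abstract does not detail how the pre-loaded terrain data is registered with the live video. One plausible ingredient of such a pipeline, sketched minimally here under an assumed pinhole camera model, is projecting georeferenced 3D terrain points into the current video frame so they can be drawn as overlays. All names and parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def project_terrain_points(points_world, cam_pos, R, fx, fy, cx, cy):
    """Project 3D terrain points (world frame) into pixel coordinates
    using a simple pinhole camera model.

    points_world : (N, 3) array of terrain points in the world frame
    cam_pos      : (3,) camera position in the world frame
    R            : (3, 3) rotation taking world-frame vectors to the camera frame
    fx, fy       : focal lengths in pixels; cx, cy: principal point
    """
    # Transform world points into the camera frame.
    pts_cam = (R @ (points_world - cam_pos).T).T
    # Keep only points in front of the camera (positive depth).
    pts = pts_cam[pts_cam[:, 2] > 0]
    # Perspective division, then scale/offset into pixel coordinates.
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)
```

In a real system the camera pose would come from the UAV's navigation solution, and the projected points would be rendered as a wireframe or labeled landmarks on top of the video stream.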

Utility of situation awareness and attention for describing telepresence experiences in a virtual teleoperation task

Proceedings of the International …, 2001

The current study assessed the utility of measures of situation awareness (SA) and attention allocation for objectively describing telepresence. The concepts of SA and attention have been identified as cognitive constructs potentially underlying telepresence experiences. The motivation for this research was to establish an objective measure of telepresence and to further investigate the relationship between telepresence and teleoperation task performance. Research subjects performed a virtual demining task at varying levels of difficulty using a simulated rover equipped with mine neutralization tools. At the same time, they completed two secondary monitoring tasks. Teleoperation task performance, SA, secondary task performance (attention), and telepresence were measured during the experiment. Results demonstrated level of difficulty (LOD) effects on performance and telepresence. Regression analysis revealed that LOD and attention explained significant portions of the variance in telepresence. Correlation analyses revealed significant relationships between teleoperation task performance and subjective ratings of telepresence, as well as between telepresence and both SA and attention.
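As a hedged illustration of the kind of regression analysis the abstract describes (synthetic data, not the study's actual measurements or model), the sketch below fits telepresence ratings on LOD and attention with ordinary least squares and computes the proportion of variance explained:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40

# Hypothetical predictors: level of difficulty (1-3) and an
# attention score derived from secondary-task performance.
lod = rng.integers(1, 4, size=n).astype(float)
attention = rng.normal(0.7, 0.1, size=n)

# Hypothetical telepresence ratings driven by both predictors plus noise.
telepresence = 1.5 * lod + 4.0 * attention + rng.normal(0.0, 0.5, size=n)

# Ordinary least squares: design matrix with an intercept column.
X = np.column_stack([np.ones(n), lod, attention])
beta, _, _, _ = np.linalg.lstsq(X, telepresence, rcond=None)

# R^2: fraction of variance in telepresence explained by the predictors.
pred = X @ beta
ss_res = np.sum((telepresence - pred) ** 2)
ss_tot = np.sum((telepresence - telepresence.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

With data generated this way, most of the variance in the ratings is attributable to the two predictors, mirroring the structure (not the numbers) of the reported result.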

“Intelligent” telepresence: Introducing virtual reality in advanced robots

Lecture Notes in Computer Science, 1993

The paper discusses some issues about a telepresence system to be used for remote driving of robots that operate in hazardous and/or hostile environments. The emphasis is on the problem of efficiently using and integrating information from different sensors, in order to provide the remote operator with readily understandable and usable data. Solution hypotheses, which take advantage of virtual reality techniques, are presented.

Sensor augmented virtual reality based teleoperation using mixed autonomy

Journal of Computing and Information Science in …, 2009

A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic reestablishment of the vehicle's teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive when compared with other interfaces.
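The abstract outlines, but does not specify, the logic that temporarily hands control to the vehicle near obstacles and then returns it to the operator. One plausible minimal sketch, with hypothetical thresholds and a grid-indexed risk map standing in for the paper's a priori risk maps, is a two-state mode switch with hysteresis:

```python
# Hypothetical sketch of a mixed-autonomy mode switch. The mode names,
# thresholds, and risk-map representation are illustrative assumptions,
# not details taken from the paper.

TELEOP, AUTONOMOUS = "teleop", "autonomous"

def next_mode(mode, risk_map, cell, enter_thresh=0.7, exit_thresh=0.3):
    """Enter autonomous local navigation when the risk at the vehicle's
    current grid cell exceeds enter_thresh; hand control back to the
    operator once risk drops below exit_thresh. The gap between the two
    thresholds (hysteresis) prevents rapid mode flapping near a boundary."""
    risk = risk_map[cell]
    if mode == TELEOP and risk > enter_thresh:
        return AUTONOMOUS
    if mode == AUTONOMOUS and risk < exit_thresh:
        return TELEOP
    return mode
```

A real implementation would also fold in live obstacle detections from the stereo vision system, but the hysteresis pattern above is the standard way to make such hand-offs stable.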

The application of telepresence and virtual reality to subsea exploration

1994

The operation of remote science exploration vehicles benefits greatly from the application of advanced telepresence and virtual reality operator interfaces. Telepresence, or the projection of the human sensory apparatus into a remote location, can provide scientists with a much greater intuitive understanding of the environment in which they are working than simple camera-display systems. Likewise, virtual reality, or the use of highly interactive three-dimensional computer graphics, can both enhance an operator's situational awareness of an environment and also compensate (to some degree) for low bandwidth and/or long time delays in the communications channel between the operator and the vehicle. These advanced operator interfaces are important for terrestrial science and exploration applications, and are critical for missions involving the exploration of other planetary surfaces, such as on Mars. The undersea environment provides an excellent terrestrial analog to science exploration and operations on another planetary surface.

FIGURE 4. An operator in a helmet-mounted display. The operator can choose either live stereo video, computer-generated graphics, or a mixture of the two to be displayed in the helmet.
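One standard way VR interfaces compensate for communication delay, consistent with the abstract's claim though not taken from this paper, is a predictive display: dead-reckoning the last known vehicle state forward across the delay so the graphics show a predicted rather than stale pose. A minimal sketch with illustrative parameter names:

```python
import math

def predict_pose(x, y, heading, speed, yaw_rate, delay_s, dt=0.05):
    """Dead-reckon a planar vehicle pose forward across the communication
    delay. Integrates a constant speed and yaw rate in small steps of dt
    seconds; the result is what the graphics should display instead of the
    delayed telemetry."""
    steps = int(delay_s / dt)
    for _ in range(steps):
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += yaw_rate * dt
    return x, y, heading
```

For long delays the prediction error grows, which is why such displays typically render the prediction as a ghosted overlay alongside the last confirmed state.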

Team-centered virtual interactive presence for adjustable autonomy

Space, 2005

Current solutions for autonomy plan creation, monitoring, and modification are complex, resulting in loss of flexibility and safety. The size of ground control operations and the number of aviation accidents involving automation are clear indicators of this problem. The solution is for humans to collaborate naturally with autonomous systems. Visual spatial information is a common reference for increased team situation awareness between humans and adaptive autonomous systems such as robots, satellites, and agents. We use a team-centered, augmented reality spatial dialog approach to improve human-automation interaction for collaborative assembly, repair, and exploration. With our spatial dialog approach, we provide a data-rich virtual interactive presence for collaborating with autonomous systems by incorporating the spatial context along with the spoken context. Spatial dialog is an interaction and communication technology that uses a deeper understanding of spatial context and a richer spatial vocabulary to realize improved human-computer symbiosis. Spatial context is useful for communicating, understanding, and remembering information. It includes the location of objects and people, and the place in which events are occurring. Using computers that see, we can pick out faces in a scene, track people, and follow where people are pointing and the objects they use. Using augmented reality and a real-time understanding of the spatial scene, we can overlay information on the real world or on virtual models as a key spatial component of the dialog process, especially for remote virtual telepresence, tele-supervision, and tele-science cases.

Virtual reality and telepresence

Robotica, 1992

SUMMARY: The UK Advanced Robotics Research Centre's VERDEX Project (Virtual Environment Remote Driving Experiment) is an experimental test bed for investigating telepresence and virtual reality technologies in the design of human-system interfaces for telerobots. The achievements of the Project to date include the transformation of scanning laser rangefinder output to stereo virtual imagery (viewed using the VPL EyePhone™), the Teletact® Tactile Feedback Glove (for use with the VPL DataGlove™), a high-speed, head-slaved stereo TV system, and a T800/i860 SuperVision™ graphics/video parallel processing system.