HandsIn3D: Supporting Remote Guidance with Immersive Virtual Environments

3D presence - a system concept for multi-user and multi-party immersive 3D video conferencing

IET 5th European Conference on Visual Media Production (CVMP 2008), 2008

Traditional set-top camera video-conferencing systems still fail to meet the 'telepresence challenge' of providing a viable alternative to physical business travel, which is nowadays characterized by unacceptable delays, costs, inconvenience, and an increasingly large ecological footprint. Even recent high-end commercial solutions, while partially removing some of these traditional shortcomings, still suffer from poor scalability, expensive implementations, the lack of life-sized 3D representations of remote participants, and only very limited support for eye contact and gesture-based interaction. The European FP7 project 3DPresence will develop a multi-party, high-end 3D videoconferencing concept that tackles the problem of transmitting the feeling of physical presence in real time to multiple remote locations in a transparent and natural way. In this paper, we present the overall concept, including the geometrical design of the whole prototype demonstrator, the arrangement of the cameras and displays, and the general multi-view video analysis chain. The driving force behind the design strategy is to fulfil the requirements of a novel 3D immersive videoconferencing system, including directional eye gaze and gesture awareness.
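
The abstract leaves the multi-view video analysis chain unspecified. As a hedged illustration of one typical stage in such a chain, the sketch below estimates dense depth from a rectified stereo camera pair with OpenCV; the calibration values and matcher parameters are assumptions, not details of the 3DPresence prototype.

```python
# Illustrative sketch only: depth-from-stereo is one typical stage in a
# multi-view analysis chain for 3D videoconferencing. All parameters are
# assumptions, not taken from the 3DPresence project.
import cv2
import numpy as np

# Rectified left/right frames from two of the conferencing cameras (assumed).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Semi-global block matching; numDisparities must be divisible by 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth (meters) from disparity, given focal length f (px) and baseline b (m).
f, b = 1000.0, 0.12  # hypothetical calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * b / disparity[valid]
```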

A seamless solution for 3D real-time interaction: design and evaluation

Virtual Reality, 2014

This paper proposes and evaluates a markerless solution for capturing hand movements in real time to allow 3D interactions in virtual environments (VEs). Tools such as keyboards and mice are not sufficient for interacting in a 3D VE, and current motion capture systems are expensive and require the user to wear equipment. We developed a solution that allows more natural interaction with objects and the VE for navigation and manipulation tasks. We conducted an experimental study involving 20 participants. The goal was to perform object manipulation (moving, orienting, scaling) and navigation tasks in the VE. We compared our solution (Microsoft Kinect-based) with data gloves and magnetic sensors (3DGloves) against two criteria: performance and acceptability. Results demonstrate similar performance (precision, execution time) but better overall acceptability for our solution. Participants' preferences are mostly in favor of the 3DCam, mainly for the criteria of comfort, freedom of movement, and handiness. Our solution can be considered a real alternative to conventional systems for object manipulation in virtual reality.
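
The paper's 3DCam pipeline is Kinect-based and its internals are not given here; the following minimal sketch uses MediaPipe Hands purely as a stand-in to show what markerless, real-time hand capture for 3D interaction looks like in practice.

```python
# Minimal sketch of markerless hand capture for 3D interaction.
# The paper's 3DCam is Kinect-based; MediaPipe Hands is used here only as a
# readily available stand-in for the same markerless idea.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            wrist = hand.landmark[0]  # normalized (x, y) plus relative depth z
            # Map the normalized wrist position onto a virtual 3D cursor here,
            # e.g. to drive the moving/orienting/scaling tasks of the study.
            print(wrist.x, wrist.y, wrist.z)
cap.release()
```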

3D User Interfaces for Collaborative Work

Desktop environments have proven to be a powerful user interface and have served as the de facto standard human-computer interaction paradigm for over 20 years. However, there is a rising demand for 3D applications dealing with complex datasets, which exceed the possibilities provided by traditional devices and two-dimensional displays. These domains require more immersive and intuitive interfaces. But to gain users' acceptance, technology-driven solutions that require inconvenient instrumentation, e.g., stereo glasses or tracked gloves, should be avoided. Autostereoscopic display environments equipped with tracking systems enable users to experience 3D virtual environments more naturally, without annoying devices, for instance via gestures. Currently, however, these approaches are only applied in specially designed or adapted applications and lack universal usability. Although these systems provide enough space to support multiple users, additional costs and inconvenient instrumentation hinder the acceptance of such user interfaces. In this chapter we introduce new collaborative 3D user interface concepts for such setups, requiring only minimal instrumentation of the user so that the strategies can be easily integrated into everyday working environments. We propose an interaction system and framework that allows displaying and interacting with both monoscopic and stereoscopic content in parallel. Furthermore, the setup enables multiple users to view the same data simultaneously. The challenges for combined mouse-, keyboard- and gesture-based input paradigms in such an environment are pointed out, and novel interaction strategies are introduced.
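
One standard ingredient of head-tracked stereoscopic displays like those described above is an asymmetric (off-axis) projection computed per eye. The sketch below shows that calculation; the screen dimensions, viewer position, and interocular distance are illustrative assumptions, not values from the chapter.

```python
# Sketch of the standard off-axis (asymmetric-frustum) projection used to
# render per-eye views on a tracked stereo display. Screen size and eye
# positions are illustrative assumptions.
import numpy as np

def off_axis_frustum(eye, half_w, half_h, near):
    """Frustum bounds at the near plane for an eye at (x, y, z), z > 0,
    relative to the center of the physical screen."""
    ex, ey, ez = eye
    scale = near / ez
    left   = (-half_w - ex) * scale
    right  = ( half_w - ex) * scale
    bottom = (-half_h - ey) * scale
    top    = ( half_h - ey) * scale
    return left, right, bottom, top

# 0.52 m x 0.32 m screen, viewer 0.6 m away, 65 mm interocular (assumed).
ipd, head = 0.065, np.array([0.0, 0.0, 0.6])
for label, offset in (("left", -ipd / 2), ("right", ipd / 2)):
    eye = head + np.array([offset, 0.0, 0.0])
    print(label, off_axis_frustum(eye, 0.26, 0.16, near=0.1))
```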

3D in-world Telepresence With Camera-Tracked Gestural Interaction

The 14th EUROGRAPHICS Workshop on Graphics and Cultural Heritage (GCH 2016), 2016

While many education institutes use Skype, Google Chat or other commercial video-conferencing applications, these applications are not suitable for presenting architectural or urban design or archaeological information, as they don't integrate the presenter with interactive 3D media. Nor do they allow spatial or component-based interaction controlled by the presenter in a natural and intuitive manner, without needing to sit or stoop over a mouse or keyboard. A third feature that would be very useful is to mirror the presenter's gestures and actions so that the presenter does not have to try to face both audience and screen. To meet these demands we developed a prototype camera-tracking application using a Kinect camera sensor and multi-camera Unity windows for teleconferencing that required the presentation of interactive 3D content along with the speaker (or an avatar that mirrored the gestures of the speaker). Cheaply available commercial software and hardware, coupled with a large display screen (in this case an 8-meter-wide curved screen), allow participants to have their gestures, movements and group behavior fed into the virtual environment either directly or indirectly. Allowing speakers to present 3D virtual worlds to remotely located audiences while appearing to be inside those virtual worlds has immediate practical uses for teaching and long-distance collaboration.
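
A core feature named above is mirroring the presenter's gestures so that the speaker can face the audience rather than the screen. A minimal sketch of that idea follows, assuming skeleton joints arrive as 3D positions in sensor space (the actual prototype would obtain these from the Kinect sensor inside Unity):

```python
# Minimal sketch of the "mirroring" idea: reflect tracked skeleton joints
# across the vertical (YZ) plane so an on-screen avatar moves like a mirror
# image and the presenter can keep facing the audience. The joint layout is
# an assumption; a Kinect SDK or Unity-side script would supply real joints.
import numpy as np

def mirror_joints(joints):
    """joints: (N, 3) array of (x, y, z) positions in sensor space."""
    mirrored = joints.copy()
    mirrored[:, 0] *= -1.0  # negate x: left hand drives avatar's right, etc.
    return mirrored

# Two hypothetical joints: head and raised right hand.
joints = np.array([[0.0, 1.6, 2.0],
                   [0.4, 1.4, 1.8]])
print(mirror_joints(joints))
```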

Hand gesture interaction with a 3D virtual environment

The Research Bulletin of Jordan ACM, ISSN 2078-7952, Volume II(III), 2011

Virtual Environment (VE) systems offer a natural and intelligent user interface. Hand gesture recognition provides more efficient and easier interaction in a VE than human-computer interface (HCI) devices such as keyboards and mice. We propose a hand gesture recognition interface that generates commands to control objects directly in a game. Our novel hand gesture recognition system utilizes both Bag-of-Features and a Support Vector Machine (SVM) to realize user-friendly interaction between humans and computers. The HCI based on hand gesture recognition interacts with objects in a 3D virtual environment. With this interface, the user can control and direct a helicopter through a set of hand gesture commands governing its movements. Our system shows that a hand gesture recognition interface can provide more intuitive and flexible interaction for the user than other HCI devices.
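
The abstract names Bag-of-Features plus an SVM but does not spell out the descriptors or training details. The sketch below is one conventional realization of that pipeline, using SIFT descriptors, a k-means vocabulary, and an RBF-kernel SVM, all of which are assumptions rather than the paper's confirmed choices.

```python
# Hedged sketch of a Bag-of-Features + SVM gesture classifier.
# The paper does not specify its descriptors; SIFT is assumed here.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift = cv2.SIFT_create()

def descriptors(image_gray):
    _, desc = sift.detectAndCompute(image_gray, None)
    return desc  # (n_keypoints, 128) or None

def fit_vocabulary(train_images, k=100):
    # Cluster all training descriptors into k "visual words".
    all_desc = np.vstack([d for img in train_images
                          if (d := descriptors(img)) is not None])
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bof_histogram(image_gray, vocab):
    # Represent an image as a normalized histogram of visual-word counts.
    desc = descriptors(image_gray)
    k = vocab.n_clusters
    if desc is None:
        return np.zeros(k)
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / (hist.sum() or 1.0)

# train_images / labels would come from recorded gesture frames (assumed):
# vocab = fit_vocabulary(train_images)
# clf = SVC(kernel="rbf").fit([bof_histogram(i, vocab) for i in train_images], labels)
# command = clf.predict([bof_histogram(new_frame, vocab)])  # e.g. a helicopter command
```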

Real-Time Hand Gesture-Based Interaction with Objects in 3D Virtual Environments

International Journal of Multimedia and Ubiquitous Engineering, 2013

This paper introduces a hand-gesture-based user interface that enables users to control virtual objects in three-dimensional (3D) virtual environments. Users can move, rotate, scale, and select objects with their hands in the real world. A gesture-based virtual simulation system for vehicle maintenance is developed, incorporating several object-manipulation features (such as opening a car door, sitting in a seat, replacing tires, and disassembling the engine) to evaluate the effectiveness of this approach. Our results indicate that the proposed hand-gesture-based user interface could potentially be employed in place of a mouse or keyboard to interact with objects in 3D virtual environments.
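
One common way to derive the moving, rotating, and scaling operations described above from tracked hand positions is to compare the two hands across frames: the midpoint drives translation, the inter-hand distance drives scale, and the hand-axis angle drives rotation. A hedged sketch, with the tracking source and gesture vocabulary assumed:

```python
# Illustrative sketch of mapping two tracked hand positions to the move /
# rotate / scale manipulations the abstract describes. The tracking source
# and gesture vocabulary are assumptions.
import numpy as np

def two_hand_transform(prev_l, prev_r, cur_l, cur_r):
    """Derive translation, rotation (about the vertical axis), and scale
    from the motion of left/right hand positions (3-vectors)."""
    prev_mid = (prev_l + prev_r) / 2
    cur_mid = (cur_l + cur_r) / 2
    translation = cur_mid - prev_mid

    prev_span = prev_r - prev_l
    cur_span = cur_r - cur_l
    scale = np.linalg.norm(cur_span) / max(np.linalg.norm(prev_span), 1e-6)

    # Yaw change of the hand axis, projected onto the horizontal plane.
    yaw = (np.arctan2(cur_span[2], cur_span[0])
           - np.arctan2(prev_span[2], prev_span[0]))
    return translation, yaw, scale

l0, r0 = np.array([-0.2, 1.0, 0.5]), np.array([0.2, 1.0, 0.5])
l1, r1 = np.array([-0.3, 1.1, 0.5]), np.array([0.3, 1.1, 0.5])
print(two_hand_transform(l0, r0, l1, r1))  # moved up, scaled up 1.5x
```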

A Unified Framework for Remote Collaboration Using Interactive AR Authoring and Hands Tracking

Distributed, Ambient and Pervasive Interactions, 2016

In this paper, we present a unified framework for remote collaboration using interactive augmented reality (AR) authoring and hand tracking methods. The proposed framework enables a local user to organize digital AR content into a shared working environment and lets multiple distant users collaborate within it. To build the framework, we combine two core technologies: (i) an interactive AR authoring method that utilizes a smart input device to create a shared working space, and (ii) a hand-augmented object interaction method based on tracking both hands in an egocentric camera view. We implement a prototype of the proposed remote collaboration framework to test its feasibility in an indoor environment. In the end, we expect our framework to enable collaboration with a sense of co-presence among remote users in a user-friendly AR working space.
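
The framework's egocentric two-hand tracker is only named, not detailed. As a rough illustration of one classic approach to the same subproblem, the sketch below segments skin-colored regions in an egocentric camera frame and takes the two largest blobs as candidate hands; the HSV thresholds are assumptions that would need per-setup tuning.

```python
# Rough sketch of one classic way to find two hands in an egocentric camera
# view: skin-color segmentation plus largest contours. The actual framework's
# hand tracker is not specified in detail; thresholds here are assumptions.
import cv2
import numpy as np

def find_two_hands(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Loose skin-tone band in HSV (illustrative; needs per-setup tuning).
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Take the two largest blobs as candidate hands.
    hands = sorted(contours, key=cv2.contourArea, reverse=True)[:2]
    centroids = []
    for c in hands:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids  # pixel positions to drive hand-object interaction
```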

Improving awareness for 3D virtual collaboration by embedding the features of users’ physical environments and by augmenting interaction tools with cognitive feedback cues

Journal on Multimodal User Interfaces, 2013

The feeling of presence is essential for efficient interaction within Virtual Environments (VEs). When a user is fully immersed in a VE through a large immersive display system, his/her feeling of presence can be altered by disturbing interactions with the physical environment, such as collisions with hardware parts of the system or loss of tracking. This alteration can be avoided by taking into account the physical features of the user as well as those of the system hardware and embedding them in the VE. Moreover, the 3D abstract representation of these physical features can also be useful for collaboration between distant users, because it can make a user aware of the physical limitations of the others he/she is collaborating with. In this paper we present how we use the ...
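
Although the abstract is cut off, its premise, embedding the physical limits of the display system into the VE to prevent collisions and tracking loss, can be illustrated with a minimal sketch: compare the tracked user position against assumed workspace bounds and fade in a feedback cue near a physical wall.

```python
# Minimal sketch of the awareness idea: compare the tracked user's position
# against the known physical bounds of the immersive display and raise a
# feedback cue before a collision or tracking loss. Bounds are assumptions.
import numpy as np

WORKSPACE_MIN = np.array([-1.5, 0.0, -1.5])  # meters, hypothetical CAVE-like cell
WORKSPACE_MAX = np.array([ 1.5, 2.5,  1.5])
WARN_DISTANCE = 0.3  # start warning 30 cm before a physical wall

def boundary_cue(user_pos):
    """Return 0..1 warning intensity as the user nears a physical boundary."""
    margin = np.minimum(user_pos - WORKSPACE_MIN, WORKSPACE_MAX - user_pos)
    nearest = float(margin.min())
    return float(np.clip(1.0 - nearest / WARN_DISTANCE, 0.0, 1.0))

print(boundary_cue(np.array([1.35, 1.7, 0.0])))  # 0.5: fade in the wall overlay
```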