Augmenting 3D interactions with haptic guide in a large scale virtual environment

The Use of Haptic Guide with 3D Interactions in a Large Scale Virtual Environment

Interaction techniques play a vital role in enriching virtual environments (VEs) and have profound effects on the user's performance, sense of presence, and the realism of the VE. In this paper we present new haptic guide models for object selection, used to augment the Follow-Me 3D interaction technique dedicated to object selection and manipulation. We divide the VE into three zones: in the first zone the user can navigate freely and needs no guidance; the second zone provides visual guidance when the user is near an object; and the third zone adds haptic guidance very near the object. The haptic and visual guides assist the user in object selection. The paper presents two different models of the haptic guides, one for free, multidirectional selection and the other for precise, single-direction selection. The evaluation and comparison of these haptic guides are investigated.
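The three-zone split described in this abstract reduces to a simple distance test between the cursor and the target. A minimal sketch, where the zone radii are hypothetical values and not ones reported in the paper:

```python
import math

# Hypothetical zone radii (illustrative, not from the paper): distances
# are measured from the user's cursor to the target object's centre.
VISUAL_RADIUS = 2.0   # within 2 m of the object: visual-guide zone
HAPTIC_RADIUS = 0.5   # within 0.5 m: haptic-guide zone

def classify_zone(cursor, target):
    """Return which assistance zone the cursor is in: 'free', 'visual' or 'haptic'."""
    dist = math.dist(cursor, target)
    if dist <= HAPTIC_RADIUS:
        return "haptic"   # haptic + visual guidance, very near the object
    if dist <= VISUAL_RADIUS:
        return "visual"   # visual guidance only
    return "free"         # free navigation, no guidance

print(classify_zone((0.0, 0.0, 0.0), (0.1, 0.0, 0.0)))  # haptic
print(classify_zone((0.0, 0.0, 0.0), (1.5, 0.0, 0.0)))  # visual
print(classify_zone((0.0, 0.0, 0.0), (5.0, 0.0, 0.0)))  # free
```

The interaction loop would call such a classifier every frame and enable the matching guide, so guidance is only active close to the object and never interferes with free navigation.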

Augmenting 3D Interactions with Haptic Guide in Large Scale Virtual Environments

2009

Abstract—3D interaction techniques play a vital role in the enrichment of virtual environments (VEs) and have profound effects on the user's performance and sense of presence. In this context, haptic feedback may be useful to assist the user in complex tasks, particularly in the absence of efficient depth cues. In this paper we propose and evaluate two haptic guides (spherical and conical) for object selection. These guides are meant to improve the Follow-Me 3D interaction technique. Twenty volunteers were instructed to reach for and select four spherical targets in a large scale VE. Three different interaction zones (free manipulation, visual and haptic zones) are defined. Each one is characterized by a specific interaction granularity which defines different assistance levels (visual and haptic). We observed for both types of haptic guides a reduction in task completion time. All participants reported that it was easier to select objects using the spherical guide with visual feedback only. How...

Evaluation of the follow-me technique for grabbing virtual objects in semi-immersive virtual environment

The implementation of an interaction between a user and a Virtual Environment (VE) in Virtual Reality (VR) may use various techniques. However, in some cases the interaction must be very precise and comfortable for the user. In this paper, we evaluate the FOLLOW-ME selection and manipulation technique. The model associated with this technique splits the virtual environment into three zones, in each of which a specific interaction model is used: a free manipulation zone, a scaled manipulation zone and a precise manipulation zone. Each of the three zones is characterized by a specific interaction granularity which defines the properties of the interaction in that zone. This splitting is designed to provide precision near the object to be reached or manipulated (the scaled and precise manipulation zones) while maintaining realistic and free interaction elsewhere in the VE (the free manipulation zone). In the precise manipulation zone, we use the concept of virtual guides to assist the user in achieving the task. The aim of this paper is both to compare the FOLLOW-ME technique with classical interaction techniques for a selection task in semi-immersive VR and to give some insights about the conditions under which virtual guides are useful in a selection task. The preliminary results show that the FOLLOW-ME technique is more efficient for selecting remote and dynamic objects than other techniques such as Go-Go and Ray-casting. In all the situations we have tested, the FOLLOW-ME technique makes the selection process easier.

The Effect of Haptic Guides on Human Performance in Virtual Environments

Computer Graphics Theory and Applications, 2009

In order to make virtual environments (VEs) more realistic and to increase human performance, the inclusion of the haptic modality becomes more important. In this paper we present two new haptic guides, fundamentally aimed at assisting users with object selection in VEs. We divide the virtual environment into three zones. In the first zone the user can move freely and does not use any sort of guide. In the second zone, the user is given visual guidance, and the third zone contains haptic guides along with visual guides. The paper presents two different models of the haptic guides, one for free and multidirectional selection and the second for precise and single-direction selection. We not only study the effect of these guides on human task performance in the VE but also compare the two haptic models.
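The two guide models can be pictured as force fields around the target: a multidirectional guide pulls toward the target from any approach direction, while a single-direction guide only corrects the error orthogonal to a chosen approach axis, funnelling the cursor along that axis. A sketch under illustrative assumptions (the spring gain and the vector formulation below are not the paper's equations):

```python
import numpy as np

K_ATTRACT = 50.0  # hypothetical spring gain (N/m), not from the paper

def spherical_guide_force(cursor, target):
    """Multidirectional guide: spring pull toward the target centre,
    regardless of the direction the cursor approaches from."""
    d = np.asarray(target, float) - np.asarray(cursor, float)
    return K_ATTRACT * d

def conical_guide_force(cursor, target, axis):
    """Single-direction guide: correct only the error orthogonal to the
    approach axis, leaving motion along the axis free."""
    axis = np.asarray(axis, float)
    axis = axis / np.linalg.norm(axis)
    d = np.asarray(target, float) - np.asarray(cursor, float)
    lateral = d - np.dot(d, axis) * axis  # component off the approach axis
    return K_ATTRACT * lateral
```

With this formulation, a cursor already on the approach axis feels no conical-guide force at all, which is what makes that guide suited to precise, single-direction selection.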

Haptic interaction in virtual environments

Proceedings of the 1997 IEEE/RSJ International Conference on Intelligent Robot and Systems. Innovative Robotics for Real-World Applications. IROS '97

We present a haptic rendering framework that allows the tactile display of complex virtual environments. This framework allows surface constraints, surface shading, friction, texture and other effects to be modeled solely by updating the position of a representative object, the "virtual proxy." This abstraction reduces the task of the haptic servo control loop to the minimization of the error between the user's position and that of the proxy. This framework has been implemented in a system that is able to haptically render virtual environments of a complexity that is near, and often in excess of, the capabilities of current interactive graphics systems.
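The proxy idea can be sketched in a few lines: the proxy tracks the device but is kept on the free side of surface constraints, and the rendered force is simply a spring on the proxy-device error. This is a minimal illustration assuming a single planar constraint and a hypothetical stiffness value, not the full framework:

```python
import numpy as np

K_PROXY = 800.0  # hypothetical servo stiffness (N/m), not from the paper

def constrain_proxy(device, surface_point, surface_normal):
    """New proxy position: the device position, projected back onto the
    surface if it has penetrated it (a single plane, for brevity)."""
    p = np.asarray(device, float)
    n = np.asarray(surface_normal, float)
    n = n / np.linalg.norm(n)
    depth = np.dot(p - np.asarray(surface_point, float), n)
    if depth < 0.0:            # device has crossed the constraint plane
        p = p - depth * n      # project it back onto the plane
    return p

def render_force(proxy, device):
    """The servo loop minimizes the proxy-device error; the rendered
    force is a spring pulling the device toward the proxy."""
    return K_PROXY * (np.asarray(proxy, float) - np.asarray(device, float))
```

In free space the proxy coincides with the device and the force is zero; when the device penetrates a surface, the proxy stays on the surface and the spring produces the contact force.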

Local Model of Interaction for Haptic Manipulation of Rigid Virtual Worlds

International Journal of Robotic Research, 2005

This paper proposes a local model of rigid body interaction that provides users with convincing kinesthetic feedback while they manipulate a virtual tool within a rigid multibody virtual world. The virtual tool can be a rigid object or a linkage. The local model can interface a haptic device to any virtual environment simulation that provides the contact position, the contact normal direction, and the penetration depth of the virtual tool into the virtual world at the virtual tool contacts. The local model includes a proxy of the virtual tool that embeds an approximation of its geometry and of its dynamics, and that eliminates force discontinuities at model updates via proxy deformation. In addition, the model includes active and predicted virtual tool contacts. Predicted contacts are used to maintain force continuity and to better constrain users to tight virtual spaces. Experiments performed within a planar virtual world demonstrate that, compared to prior local models of rigid body interaction, the proposed model enables users to feel stiffer contacts and improves users' perception of free space in locally cluttered virtual environments.

A survey of 3D object selection techniques for virtual environments

Special Section on Touching the 3rd Dimension

Computer graphics applications controlled through natural gestures are gaining increasing popularity these days due to recent developments in low-cost tracking systems and gesture recognition technologies. Although interaction techniques through natural gestures have already demonstrated their benefits in manipulation, navigation and avatar-control tasks, effective selection with pointing gestures remains an open problem. In this paper we survey the state-of-the-art in 3D object selection techniques. We review important findings in human control models, analyze major factors influencing selection performance, and classify existing techniques according to a number of criteria. Unlike other components of the application's user interface, pointing techniques need a close coupling with the rendering pipeline, introducing new elements to be drawn, and potentially modifying the object layout and the way the scene is rendered. Conversely, selection performance is affected by rendering issues such as visual feedback, depth perception, and occlusion management. We thus review existing literature paying special attention to those aspects in the boundary between computer graphics and human-computer interaction.

3D interaction technique to enhance telemanipulation tasks using virtual environment

2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2006

This paper gives preliminary results about the use of an interaction technique called FOLLOW-ME to speed up the selection task in a teleoperation system. The implementation of an interaction between a user and a Virtual Environment (VE) in Virtual Reality (VR) may use various techniques. However, in the case of teleoperation, the interaction must be very precise and comfortable for the user. The model associated with the FOLLOW-ME technique splits the virtual environment into three zones, in each of which a specific interaction model is used: a free manipulation zone, a scaled manipulation zone and a precise manipulation zone. Each of the three zones is characterized by a specific interaction granularity. In the precise manipulation zone, we use the concept of virtual guides to assist the user in achieving the task. In this paper, our aim is to show that the FOLLOW-ME technique is well suited for selection in teleoperation tasks. To do this, we first compared the FOLLOW-ME technique with classical interaction techniques in a virtual environment where different targets are situated at different depths and may move. The preliminary results show that our technique is more efficient than the classical Go-Go and Ray-casting techniques, in the sense that the task is more reproducible and easier to accomplish by the user. In a second stage, we use this result to design selection procedures for the ARITI teleoperation system and show that the use of FOLLOW-ME provides benefits for the user.

Z-Clutching: Interaction Technique for Navigating 3D Virtual Environment Using a Generic Haptic Device

Journal of Computing Science and Engineering, 2016

Navigating a large 3D virtual environment using a generic haptic device can be challenging, since the haptic device is usually bounded by its own physical workspace. Mouse interaction, on the other hand, easily handles this situation with a clutching mechanism: simply lifting the mouse and repositioning it in physical space. Since the haptic device is used for both input and output at the same time, in many cases its freedom needs to be limited to accommodate such a situation. In this paper, we propose a new mechanism called Z-Clutching for 3D navigation of a virtual environment using only the haptic device, without any interruption or sacrifice of the given degrees of freedom of the device's handle. We define the clutching state, which is engaged by pulling the haptic handle back in space. It acts similarly to lifting the mouse off the desk. In this way, the user naturally feels haptic feedback based on depth (the z-direction) while manipulating the haptic device and moving the view as desired. We conducted a user study to evaluate the proposed interaction technique, and the results are promising in terms of the usefulness of the proposed mechanism.
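A rough sketch of the clutching state: the assumptions here (a fixed z-threshold engages the clutch, and clutched handle motion pans the view) are illustrative simplifications, not the paper's exact design:

```python
# Hypothetical threshold (not from the paper): pulling the handle back
# past this z-coordinate engages the clutch, like lifting a mouse.
CLUTCH_Z = -0.05

class ZClutchNavigator:
    """View translation accumulates only while the handle is pulled back
    (clutched); otherwise the handle controls the cursor normally."""

    def __init__(self):
        self.camera_offset = [0.0, 0.0, 0.0]
        self.prev = None

    def update(self, handle_pos):
        """Process one handle sample; return True while clutched."""
        x, y, z = handle_pos
        clutched = z < CLUTCH_Z
        if clutched and self.prev is not None:
            # While clutched, handle motion pans the view instead of
            # moving the cursor.
            px, py, _ = self.prev
            self.camera_offset[0] += x - px
            self.camera_offset[1] += y - py
        self.prev = handle_pos
        return clutched
```

Because the clutch is driven by the depth axis the device already senses, no button or extra degree of freedom is consumed, which is the point of the technique.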

A comparison of three techniques to interact in large virtual environments using haptic devices with limited workspace

Advances in Computer …, 2006

This paper describes an experiment that was conducted to evaluate three interaction techniques aiming at interacting with large virtual environments using haptic devices with limited workspace: the Scaling technique, the Clutching technique, and the Bubble technique. Participants were asked to paint a virtual model as fast and as precisely as possible inside a CAVE, using a "desktop" haptic device. The results showed that the Bubble technique enabled both the quickest and the most precise paintings. It was also the most appreciated technique.
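The Bubble technique is a hybrid position/rate control: inside a sphere around the workspace centre the device controls the cursor directly, and when the device crosses the sphere's boundary the view drifts at a speed proportional to the penetration. A minimal sketch, where the bubble radius and rate gain are hypothetical values rather than ones from the paper:

```python
import numpy as np

# Hypothetical parameters (illustrative, not from the paper)
BUBBLE_RADIUS = 0.10  # m, sphere around the device workspace centre
RATE_GAIN = 5.0       # maps boundary penetration to drift speed (1/s)

def bubble_drift_velocity(device_pos):
    """Inside the bubble: pure position control, no drift.
    Outside: drift along the penetration direction, with speed
    proportional to how far the device has crossed the boundary."""
    p = np.asarray(device_pos, float)
    dist = np.linalg.norm(p)
    if dist <= BUBBLE_RADIUS:
        return np.zeros(3)                                  # position control
    direction = p / dist
    return RATE_GAIN * (dist - BUBBLE_RADIUS) * direction   # rate control
```

This combination is what lets a desktop haptic device with a small physical workspace cover a CAVE-sized scene: fine work happens under position control, and large displacements come from the rate-controlled drift.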