HuMoRS

Virtual Datagloves: Interacting with Virtual Environments Through Computer Vision

1996

This paper outlines the design and implementation of a 3D input device for graphical applications. It is shown how computer vision can be used to track a user's movements within the image frame, allowing interaction with 3D worlds and objects. Point Distribution Models (PDMs) have been shown to be successful at tracking deformable objects. This system demonstrates how these ‘smart snakes’ can be used in real time in real-world applications, showing how computer vision can provide a low-cost, intuitive interface that has few hardware constraints. The compact mathematical model behind the PDM allows simple static gesture recognition to be performed, providing the means to communicate with an application. It is shown how movement of both the hand and face can be used to drive 3D engines. The system is based upon Open Inventor and designed for use with Silicon Graphics Indy Workstations, but allowances have been made to facilitate the inclusion of the tracker within third party...
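As background for how such a compact model enables gesture recognition: a PDM expresses any plausible shape as the mean training shape plus a weighted sum of principal modes of variation, so a hand pose reduces to a short weight vector. A minimal NumPy sketch of that idea follows; the training data, dimensions and function names are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

# Hypothetical training set: 50 example shapes, each with 20 landmark
# points flattened to a 40-vector (x1, y1, ..., x20, y20).
shapes = np.random.rand(50, 2 * 20)          # placeholder data

mean_shape = shapes.mean(axis=0)

# Principal modes of variation via PCA on the aligned training shapes.
cov = np.cov(shapes - mean_shape, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]            # largest variance first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

t = 5                                        # keep the first t modes
P = eigvecs[:, :t]                           # modes of variation

def synthesize(b):
    """Generate a plausible shape x = mean + P b, clamping each weight
    to +/- 3 standard deviations so the shape stays hand-like."""
    b = np.clip(b, -3 * np.sqrt(eigvals[:t]), 3 * np.sqrt(eigvals[:t]))
    return mean_shape + P @ b

def project(x):
    """Approximate an observed shape by the model: b = P^T (x - mean).
    Static gesture recognition can then be as simple as a
    nearest-neighbour lookup in this low-dimensional b-space."""
    return P.T @ (x - mean_shape)
```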

QuickTime VR: an image-based approach to virtual environment navigation

1995

Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time. This approach usually requires laborious modeling and expensive special purpose rendering hardware. The rendering quality and scene complexity are often limited because of the real-time constraint. This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment. The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming. The panoramic images can be created with computer rendering, specialized panoramic cameras or by "stitching" together overlapping photographs taken with a regular camera. Walking in a space is currently accomplished by "hopping" to different panoramic points. The image-based approach has been used in the commercial product QuickTime VR, a virtual reality extension to Apple Computer's QuickTime digital multimedia framework. The paper describes the architecture, the file format, the authoring process and the interactive players of the VR system. In addition to panoramic viewing, the system includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
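The key operation is the on-the-fly warp from the cylindrical panorama to a planar perspective view for the current pan angle and zoom. Below is a minimal Python/NumPy sketch of that geometry; it is an illustration, not Apple's implementation, and the function name and nearest-neighbour sampling are my own simplifications:

```python
import numpy as np

def render_view(pano, pan, fov, out_w, out_h):
    """Warp a 360-degree cylindrical panorama (H x W x 3 array) into a
    planar perspective view at pan angle `pan` (radians) with horizontal
    field of view `fov`. Zooming is simply a change of `fov`."""
    H, W = pano.shape[:2]
    f = (out_w / 2) / np.tan(fov / 2)         # focal length of the view
    f_pano = W / (2 * np.pi)                  # panorama pixels per radian
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    x, y = np.meshgrid(xs, ys)

    # Cast a ray through each output pixel and find where it meets the
    # unit cylinder: azimuth theta and height v above the horizon.
    theta = pan + np.arctan2(x, f)
    v = y / np.sqrt(x**2 + f**2)

    u_idx = (theta * f_pano).astype(int) % W  # wrap around the seam
    v_idx = np.clip((H / 2 + v * f_pano).astype(int), 0, H - 1)
    return pano[v_idx, u_idx]
```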

The GrImage Platform: A Mixed Reality Environment for Interactions

In this paper, we present a scalable architecture to compute, visualize and interact with 3D dynamic models of real scenes. This architecture is designed for mixed reality applications requiring such dynamic models, tele-immersion for instance. Our system consists of three main parts: acquisition, based on standard FireWire cameras; computation, based on a distribution scheme over a cluster of PCs and using a recent shape-from-silhouette algorithm that yields optimally precise 3D models; and visualization, achieved on a multiple-display wall. The proposed distribution scheme ensures the scalability of the system and thereby allows control over the number of cameras used for acquisition, the frame rate, or the number of projectors used for high-resolution visualization. To our knowledge this is the first completely scalable vision architecture for real-time 3D modeling, from acquisition through computation to visualization. Experimental results show that this framework is very promising for real-time 3D interactions.
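For intuition about the shape-from-silhouette step: a point belongs to the visual hull only if it projects inside the silhouette in every camera. The sketch below illustrates this constraint with a simple voxel grid; the paper's own algorithm computes an exact surface model rather than voxels, so treat this purely as a conceptual illustration with assumed inputs:

```python
import numpy as np

def visual_hull(silhouettes, projections, grid):
    """Voxel-based illustration of shape-from-silhouette: keep a voxel
    only if it projects inside the silhouette in every camera.

    silhouettes: list of binary H x W masks, one per camera
    projections: list of 3x4 camera projection matrices
    grid:        N x 3 array of voxel centres in world coordinates
    """
    inside = np.ones(len(grid), dtype=bool)
    hom = np.hstack([grid, np.ones((len(grid), 1))])   # homogeneous coords
    for mask, P in zip(silhouettes, projections):
        uvw = hom @ P.T                                # project all voxels
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(grid), dtype=bool)
        hit[ok] = mask[v[ok], u[ok]] > 0
        inside &= hit                                  # carve away misses
    return grid[inside]
```

Distributing this kind of computation over a cluster is natural: each PC can process an independent slab of the grid, which is one way a scheme like the paper's can scale with the number of cameras and the frame rate.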

3D in-world Telepresence With Camera-Tracked Gestural Interaction

The 14th EUROGRAPHICS Workshop on Graphics and Cultural Heritage (GCH 2016)

While many educational institutes use Skype, Google Chat or other commercial video-conferencing applications, these applications are not suitable for presenting architectural, urban design or archaeological information, as they don't integrate the presenter with interactive 3D media. Nor do they allow spatial or component-based interaction controlled by the presenter in a natural and intuitive manner, without needing to sit or stoop over a mouse or keyboard. A third feature that would be very useful is to mirror the presenter's gestures and actions so that the presenter does not have to try to face both audience and screen. To meet these demands we developed a prototype camera-tracking application using a Kinect camera sensor and multi-camera Unity windows for teleconferencing that required the presentation of interactive 3D content along with the speaker (or an avatar that mirrored the gestures of the speaker). Cheaply available commercial software and hardware, coupled with a large display screen (in this case an 8-meter-wide curved screen), allow participants to have their gestures, movements and group behavior fed into the virtual environment either directly or indirectly. Allowing speakers to present 3D virtual worlds to remotely located audiences while appearing to be inside those worlds has immediate practical uses for teaching and long-distance collaboration.
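The mirroring requirement reduces to reflecting the tracked skeleton across a vertical plane before it drives the avatar, so the presenter can face the audience while appearing to face the content. A minimal Python sketch of that reflection, with hypothetical joint names and data (the prototype itself is built in Unity):

```python
# Mirror camera-tracked joints across the vertical (YZ) plane so an
# on-screen avatar mirrors the presenter the way a mirror would.
# Joint names, coordinates and units are hypothetical.

def mirror_skeleton(joints):
    """joints: dict of name -> (x, y, z) in sensor space.
    Negate x and swap left/right labels so raising the right hand
    raises the avatar's screen-left hand, as in a mirror."""
    def swap_side(name):
        if name.startswith("left_"):
            return "right_" + name[len("left_"):]
        if name.startswith("right_"):
            return "left_" + name[len("right_"):]
        return name

    return {swap_side(n): (-x, y, z) for n, (x, y, z) in joints.items()}

# Example: the presenter's right hand becomes the avatar's left hand.
pose = {"right_hand": (0.4, 1.2, 2.0), "head": (0.0, 1.6, 2.1)}
print(mirror_skeleton(pose))
# {'left_hand': (-0.4, 1.2, 2.0), 'head': (0.0, 1.6, 2.1)}
```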

Mobile phone based AR scene assembly

Proceedings of the 4th …, 2005

In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities of such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera-tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone, we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application.
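A 6 DOF isomorphic technique of this kind applies the tracked rigid motion of the phone one-to-one to the selected object. A sketch of that composition with 4x4 homogeneous transforms; this is my own formulation of the idea, not the paper's code:

```python
import numpy as np

def apply_phone_motion(obj_T, cam_T_prev, cam_T_now):
    """Isomorphic 6 DOF manipulation: the rigid motion of the
    camera-tracked phone between two frames (both poses given as 4x4
    matrices in the marker's coordinate frame) is applied one-to-one
    to the selected object's pose `obj_T`."""
    delta = cam_T_now @ np.linalg.inv(cam_T_prev)  # phone motion
    return delta @ obj_T                           # object follows it

# Example: while a button is held, the phone translates 5 cm along
# the marker's x axis and the grabbed object follows exactly.
prev = np.eye(4)
now = np.eye(4); now[0, 3] = 0.05
print(apply_phone_motion(np.eye(4), prev, now))
```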

InReach: Manipulating 3D Objects Remotely Using Your Body

How can we integrate digital information into our physical lives? How can we dive into the full digital experience with our whole body and all of our senses? InReach represents one approach to including proprioception and embodied interaction in an environment for remote collaboration. It explores how remote collaborators can "reach into" a shared digital workspace where they can manipulate virtual objects and data. The collaborators see their live three-dimensional (3D) recreated mesh in a shared virtual space on a large screen in front of them and can use their bodies to "inhabit" and interact with 3D models. They can grab digital objects with their bare hands and translate, scale, and rotate them. In contrast with the traditional view for remote conferencing, InReach is particularly useful for situations in which users can benefit from seeing their own and their collaborator's body in relation to the data and can use their bodies to navigate and manipulate the data.
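Bare-hand translate/scale/rotate of a grabbed object can be derived from how the pair of hand positions moves between frames, much like a two-finger pinch generalized to 3D. A minimal sketch under that assumption (not the InReach source; hand tracking is taken as given):

```python
import numpy as np

def two_hand_transform(l0, r0, l1, r1):
    """Derive translation, uniform scale and yaw rotation for a grabbed
    object from the previous (l0, r0) and current (l1, r1) 3D positions
    of the left and right hands."""
    c0, c1 = (l0 + r0) / 2, (l1 + r1) / 2
    translation = c1 - c0                             # midpoint motion
    d0, d1 = r0 - l0, r1 - l1
    scale = np.linalg.norm(d1) / np.linalg.norm(d0)   # hands apart/together
    # Yaw: change in heading of the hand-to-hand vector in the
    # horizontal (x, z) plane.
    yaw = np.arctan2(d1[2], d1[0]) - np.arctan2(d0[2], d0[0])
    return translation, scale, yaw

# Example: hands move apart symmetrically -> pure scale of 1.5.
l0, r0 = np.array([-0.2, 1.0, 0.5]), np.array([0.2, 1.0, 0.5])
l1, r1 = np.array([-0.3, 1.0, 0.5]), np.array([0.3, 1.0, 0.5])
print(two_hand_transform(l0, r0, l1, r1))
```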

Interacting with Virtual Reality models on mobile devices

This paper discusses alternative approaches for interacting with virtual reality models on mobile devices, based upon work conducted as part of the locus project [6]. First, mobile virtual reality models are discussed within the established reality-virtuality continuum to demonstrate how this new mixed reality paradigm compares with augmented and virtual reality generally. Next, three prototypes are introduced that adopt different interaction paradigms for mobile virtual reality models: interaction can be via the screen only, via movement and gestures within the real world environment, or via a mixture of these two approaches. Finally, this paper describes some potential applications of mobile virtual reality, including navigation and the display of georeferenced information. The paper concludes by suggesting that interaction via movement and gestures within the real world may be a more intuitive approach for mobile virtual reality models.

The perceptive workbench: Toward spontaneous and natural interaction in semi-immersive virtual environments

2000

The Perceptive Workbench enables a spontaneous, natural, and unimpeded interface between the physical and virtual world. It is built on vision-based methods for interaction that remove the need for wired input devices and wired tracking. Objects are recognized and tracked when placed on the display surface. Through the use of multiple light sources, the object's 3D shape can be captured and inserted into the virtual interface. This ability permits spontaneity as either preloaded objects or those selected on the spot by the user can become physical icons. Integrated into the same vision-based interface is the ability to identify 3D hand position, pointing direction, and sweeping arm gestures. Such gestures can support selection, manipulation, and navigation tasks. In this paper the Perceptive Workbench is used for augmented reality gaming and terrain navigation applications, which demonstrate the utility and capability of the interface.
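Pointing-based selection of this kind typically comes down to casting a ray through two tracked arm points and intersecting it with the display surface. A generic sketch, assuming the tabletop is the horizontal plane y = table_y (the paper's actual vision-based estimation is more involved):

```python
import numpy as np

def pointing_target(elbow, hand, table_y=0.0):
    """Estimate where the user points on the workbench: cast a ray
    from the elbow through the hand and intersect it with the
    horizontal display plane y = table_y. Returns None if the user
    points away from the surface."""
    d = hand - elbow
    if abs(d[1]) < 1e-9:
        return None                  # arm parallel to the table
    t = (table_y - elbow[1]) / d[1]
    if t <= 0:
        return None                  # pointing away from the plane
    return elbow + t * d

# Example: elbow above the table, hand lower and forward.
print(pointing_target(np.array([0.0, 0.6, 0.0]),
                      np.array([0.1, 0.5, 0.2])))
```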

Mobile Cross Reality for cultural heritage

2013 Digital Heritage International Congress (DigitalHeritage), 2013

Widespread adoption of smartphones and tablets has enabled people to multiplex their physical reality, where they engage in face-to-face social interaction, with Web-based social networks and apps, whilst emerging 3D Web technologies hold promise for networks of parallel 3D virtual environments. Although current technologies allow this multiplexing of physical reality and the 2D Web, in a situation called PolySocial Reality, the same cannot yet be achieved with 3D content. Cross Reality was proposed to address this issue; however, so far it has focused on the use of fixed links between physical and virtual environments in closed lab settings, limiting investigation of its explorative and social aspects. This paper presents an architecture and implementation that address these shortcomings, using a tablet computer and the Pangolin virtual world viewer to provide a mobile interface to a corresponding 3D virtual environment. Motivation for this project stemmed from a desire to enable students to interact with existing virtual reconstructions of cultural heritage sites in tandem with exploration of the corresponding real locations, avoiding the temporal separation otherwise caused by interacting with the virtual content only within the classroom. The accuracy of GPS tracking emerged as a constraint on this style of interaction.
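Tying the tablet's physical position to the matching viewpoint in the virtual reconstruction requires projecting GPS fixes into the virtual world's local frame. A common equirectangular approximation is sketched below, with a hypothetical site origin; note that the several-metre GPS error the paper identifies passes straight through such a mapping:

```python
import math

EARTH_R = 6371000.0                  # mean Earth radius, metres

def gps_to_local(lat, lon, origin_lat, origin_lon):
    """Project a GPS fix to metres east/north of a reference origin
    using the equirectangular approximation, which is adequate over
    the few hundred metres of a typical heritage site."""
    east = math.radians(lon - origin_lon) * EARTH_R * math.cos(math.radians(origin_lat))
    north = math.radians(lat - origin_lat) * EARTH_R
    return east, north

# Hypothetical site origin and a fix roughly 100 m north of it.
print(gps_to_local(51.5010, -0.1410, 51.5001, -0.1420))
```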