HuMoRS: huge models mobile rendering system
ABSTRACT: We present HuMoRS, a networked 3D graphics system for interactively streaming and exploring massive 3D mesh models on mobile devices. The system integrates a networked architecture for adaptive on-device rendering of multiresolution surfaces with a simple and effective interactive camera controller customized for touch-enabled mobile devices. During interaction, knowledge of the currently rendered scene is exploited to automatically center a rotation pivot and to propose context-dependent precomputed viewpoints. Both the object of interest and the viewpoint database reside on a web server, and adaptive transmission is demonstrated over wireless and phone connections in a Cultural Heritage application for the exploration of sub-millimetric colored reconstructions of stone statues. We also report on a preliminary user study comparing the performance of our camera navigation method with the most popular Virtual TrackBall implementations, with and without pivoting.
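The abstract does not spell out how the rotation pivot is centered; one common way to realize such a feature, sketched below under that assumption, is to read the depth buffer under the touch point and unproject it back to world space, using the resulting surface point as the orbit center. All function and parameter names here are hypothetical, not taken from the paper.

# Illustrative sketch (not the paper's code) of picking an orbit pivot from the
# currently rendered scene: unproject the depth sample under the touch point.
import numpy as np

def pivot_from_touch(touch_xy, depth_buffer, view, proj, viewport):
    """Return a world-space pivot under a touch point, or None on background."""
    x, y = touch_xy
    w, h = viewport
    d = depth_buffer[int(y), int(x)]           # depth in [0, 1]
    if d >= 1.0:                               # touched empty background
        return None
    # Normalized device coordinates of the touched fragment (screen y flipped).
    ndc = np.array([2.0 * x / w - 1.0, 1.0 - 2.0 * y / h, 2.0 * d - 1.0, 1.0])
    # Unproject by inverting the combined projection * view transform.
    world = np.linalg.inv(proj @ view) @ ndc
    return world[:3] / world[3]                # perspective divide

If the touch lands on background, a controller of this kind can fall back to the previous pivot or to the object's bounding-box center.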
Related papers
Interactive Scene Walkthrough Using a Physically-Based Virtual Camera
1997
One of the most powerful results of recent advances in graphics hardware is the ability of a computer user to interactively explore a virtual building or landscape. The newest three-dimensional input devices, together with high-speed 3D graphics workstations, make it possible to view and move through a 3D scene by interactively controlling the motion of a virtual camera. In this paper, we describe how natural and intuitive control of a building walkthrough can be achieved by using a physically-based model of the virtual camera's behavior. Using the laws of classical mechanics to create an abstract physical model of the camera, we then simulate the virtual camera motion in real time in response to force data from the various 3D input devices (e.g. the Spaceball and Polhemus 3Space Digitizer). The resulting interactive behavior of the model is determined by several physical parameters such as mass, moment of inertia, and various friction coefficients which can all be varied interactively, and by constraints on the camera's degrees of freedom. This allows us to explore a continuous range of physically-based metaphors for controlling the camera motion. We present the results of experiments using several of these metaphors for virtual camera motion and describe the effects of the various physical parameters.
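As a rough illustration of the approach, the sketch below integrates a damped Newtonian camera driven by force and torque input, with the mass, moment of inertia and friction coefficients mentioned in the abstract exposed as tunable parameters. It is a minimal explicit-Euler stand-in with Euler-angle orientation, not the authors' implementation.

# Minimal sketch of a physically-based camera: a rigid body with mass, moment
# of inertia and viscous friction, stepped in real time from force/torque input
# (e.g. a Spaceball axis reading).
import numpy as np

class PhysicalCamera:
    def __init__(self, mass=1.0, inertia=1.0, lin_friction=2.0, ang_friction=4.0):
        self.mass, self.inertia = mass, inertia
        self.lin_friction, self.ang_friction = lin_friction, ang_friction
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)
        self.orientation = np.zeros(3)     # Euler angles, kept simple here
        self.ang_velocity = np.zeros(3)

    def step(self, force, torque, dt):
        # Newton's second law with a velocity-proportional friction term.
        lin_acc = (force - self.lin_friction * self.velocity) / self.mass
        ang_acc = (torque - self.ang_friction * self.ang_velocity) / self.inertia
        self.velocity += lin_acc * dt
        self.ang_velocity += ang_acc * dt
        self.position += self.velocity * dt
        self.orientation += self.ang_velocity * dt

Raising the friction coefficients gives tightly damped, precise motion, while lowering them produces a gliding feel; varying such parameters interactively is what yields the continuous range of physically-based metaphors described above.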
Mixed-Reality for Object-Focused Remote Collaboration
The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings
In this paper we outline the design of a mixed-reality system to support object-focused remote collaboration. Here, being able to adjust collaborators' perspectives on the object, as well as to understand one another's perspective, is essential to support effective collaboration over distance. We propose a low-cost mixed-reality system that allows users to: (1) quickly align and understand each other's perspective; (2) explore objects independently from one another; and (3) render gestures in the remote workspace. In this work, we focus on the expert's role and introduce an interaction technique allowing users to quickly manipulate 3D virtual objects in space.
2016
In this work we present a framework for the rapid development of remotely controlled 3D digital heritage applications. The framework relies on standard technologies and has no special hardware requirements, since just a smartphone, a remote 3D application and a wireless connection are needed to set up the system, making it ideal for live environments where attendees can use their own smartphones to interact with remote 3D digital content. The framework provides a set of single-handed interaction techniques for smartphones, as well as connection capabilities that allow remote interaction with 3D content by combining the devices' modern 3D sensing capabilities with touch-screen input. The prototype was set up and presented at the Digital Heritage 2015 Exposition, where attendees were able to remotely interact with 3D heritage models.
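The framework's actual wire format is not described here; purely as an illustration, a smartphone controller of this kind could stream device orientation and touch deltas to the remote 3D application over the shared wireless network, e.g. as JSON messages on a plain socket. The endpoint, field names and gesture vocabulary below are assumptions.

# Hypothetical sketch of the kind of message a smartphone controller could send
# to a remote 3D application: motion-sensor orientation plus touch-screen deltas.
import json, socket

def send_interaction(sock, orientation_quat, touch_dx, touch_dy, gesture):
    msg = {
        "orientation": list(orientation_quat),   # device attitude (x, y, z, w)
        "touch": {"dx": touch_dx, "dy": touch_dy},
        "gesture": gesture,                      # e.g. "pan", "pinch", "tap"
    }
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))

# Usage: connect over the shared wireless network and stream events.
# sock = socket.create_connection(("192.168.0.10", 9000))
# send_interaction(sock, (0, 0, 0, 1), 12.5, -3.0, "pan")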
Camera-Based Virtual Environment Interaction on Mobile Devices
Lecture Notes in Computer Science, 2006
Mobile virtual environments, with real-time 3D and 2D graphics, are now possible on smart phones and other camera-enabled devices. Using computer vision, the camera sensor can be treated as an input modality in applications by analyzing the incoming live video. We present our tracking algorithm and several mobile virtual environment and gaming prototypes, including a 3D first-person shooter, a 2D puzzle game and a simple action game. Camera-based interaction provides a user experience that is not possible through traditional means, and maximizes the use of the limited display size.
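The paper's own tracking algorithm is not reproduced here; the sketch below is a generic stand-in that estimates the dominant image motion between two consecutive greyscale frames by block matching a central patch, which is enough to turn the camera into a 2D "joystick" for an application.

# Rough stand-in for camera-as-input tracking (not the authors' algorithm):
# find the shift of a central patch between frames and use it as a motion value.
import numpy as np

def estimate_camera_motion(prev, curr, patch=32, search=8):
    h, w = prev.shape
    cy, cx = h // 2, w // 2
    ref = prev[cy - patch:cy + patch, cx - patch:cx + patch].astype(np.float32)
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = curr[cy - patch + dy:cy + patch + dy,
                        cx - patch + dx:cx + patch + dx].astype(np.float32)
            err = np.mean((cand - ref) ** 2)     # mean squared difference
            if err < best_err:
                best, best_err = (dx, dy), err
    return best   # the phone moved opposite to the apparent image motion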
Virtual Datagloves: Interacting with Virtual Environments Through Computer Vision
1996
This paper outlines a system design and implementation of a 3D input device for graphical applications. It is shown how computer vision can be used to track a user's movements within the image frame, allowing interaction with 3D worlds and objects. Point Distribution Models (PDMs) have been shown to be successful at tracking deformable objects. This system demonstrates how these ‘smart snakes’ can be used in real time with real world applications, demonstrating how computer vision can provide a low-cost, intuitive interface that has few hardware constraints. The compact mathematical model behind the PDM allows simple static gesture recognition to be performed, providing the means to communicate with an application. It is shown how movement of both the hand and face can be used to drive 3D engines. The system is based upon Open Inventor and designed for use with Silicon Graphics Indy Workstations but allowances have been made to facilitate the inclusion of the tracker within third party...
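For reference, the Point Distribution Model behind these ‘smart snakes’ represents any plausible shape as the mean of the aligned training shapes plus a weighted sum of the principal modes of variation, x = x_mean + P b, with each weight usually clamped to about three standard deviations. The sketch below shows that textbook formulation, not the paper's code.

# Textbook Point Distribution Model: PCA over aligned landmark shapes.
import numpy as np

def build_pdm(shapes, n_modes=5):
    """shapes: (N, 2*L) array of N aligned shapes, each with L landmarks."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]    # largest modes first
    return mean, evecs[:, order], evals[order]

def synthesize(mean, P, evals, b):
    # Clamp the shape parameters so the synthesized shape stays plausible.
    b = np.clip(b, -3 * np.sqrt(evals), 3 * np.sqrt(evals))
    return mean + P @ b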
QuickTime VR: an image-based approach to virtual environment navigation
1995
Traditionally, virtual reality systems use 3D computer graphics to model and render virtual environments in real-time. This approach usually requires laborious modeling and expensive special purpose rendering hardware. The rendering quality and scene complexity are often limited because of the real-time constraint. This paper presents a new approach which uses 360-degree cylindrical panoramic images to compose a virtual environment. The panoramic image is digitally warped on-the-fly to simulate camera panning and zooming. The panoramic images can be created with computer rendering, specialized panoramic cameras or by "stitching" together overlapping photographs taken with a regular camera. Walking in a space is currently accomplished by "hopping" to different panoramic points. The image-based approach has been used in the commercial product QuickTime VR, a virtual reality extension to Apple Computer's QuickTime digital multimedia framework. The paper describes the architecture, the file format, the authoring process and the interactive players of the VR system. In addition to panoramic viewing, the system includes viewing of an object from different directions and hit-testing through orientation-independent hot spots.
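The on-the-fly warp that simulates panning and zooming can be illustrated as follows: each pixel of the desired perspective view is mapped back to a column and height on the panoramic cylinder and sampled there. The sketch below uses nearest-neighbour lookup and assumes the panorama's vertical scale equals its cylinder radius in pixels; QuickTime VR's actual warper is certainly more elaborate.

# Worked sketch of rendering a perspective view from a cylindrical panorama.
import numpy as np

def render_view(pano, pan, view_w, view_h, fov_deg):
    """pano: (H, W, 3) cylindrical panorama; pan: view azimuth in radians."""
    H, W, _ = pano.shape
    f = (view_w / 2) / np.tan(np.radians(fov_deg) / 2)   # view focal length (zoom)
    r = W / (2 * np.pi)                                   # cylinder radius in pixels
    ys, xs = np.mgrid[0:view_h, 0:view_w]
    dx, dy = xs - view_w / 2, ys - view_h / 2
    theta = pan + np.arctan2(dx, f)                       # azimuth of each view ray
    u = (theta % (2 * np.pi)) * r                         # panorama column
    v = H / 2 + dy * r / np.sqrt(dx**2 + f**2)            # cylindrical height
    u = np.clip(u, 0, W - 1).astype(int)
    v = np.clip(v, 0, H - 1).astype(int)
    return pano[v, u]

Panning only changes pan and zooming only changes fov_deg; the source panorama itself is never re-rendered, which is what keeps the interaction real-time.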
The GrImage Platform: A Mixed Reality Environment for Interactions
In this paper, we present a scalable architecture to compute, visualize and interact with 3D dynamic models of real scenes. This architecture is designed for mixed reality applications requiring such dynamic models, for instance tele-immersion. Our system consists of three main parts: the acquisition, based on standard firewire cameras; the computation, based on a distribution scheme over a cluster of PCs and using a recent shape-from-silhouette algorithm that yields optimally precise 3D models; and the visualization, which is achieved on a multiple display wall. The proposed distribution scheme ensures scalability of the system and thereby allows control over the number of cameras used for acquisition, the frame rate, and the number of projectors used for high-resolution visualization. To our knowledge this is the first completely scalable vision architecture for real-time 3D modeling, from acquisition to visualization through computation. Experimental results show that this framework is very promising for real-time 3D interactions.
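As a toy illustration of the shape-from-silhouette principle (the specific algorithm and its distribution over the cluster are not reproduced here), a voxel belongs to the visual hull only if it projects inside the foreground silhouette in every camera:

# Naive visual-hull carving from calibrated silhouettes.
import numpy as np

def visual_hull(voxels, cameras, silhouettes):
    """
    voxels:      (N, 3) candidate voxel centers in world space
    cameras:     list of 3x4 projection matrices
    silhouettes: list of (H, W) boolean foreground masks, one per camera
    """
    keep = np.ones(len(voxels), dtype=bool)
    homog = np.hstack([voxels, np.ones((len(voxels), 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                          # project voxel centers
        z = proj[:, 2]
        safe_z = np.where(z > 0, z, 1.0)            # avoid division by bad depths
        u = (proj[:, 0] / safe_z).astype(int)
        v = (proj[:, 1] / safe_z).astype(int)
        H, W = mask.shape
        inside = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(len(voxels), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]]
        keep &= hit                                 # carve away non-silhouette voxels
    return voxels[keep]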
While many educational institutions use Skype, Google Chat or other commercial video-conferencing applications, these applications are not suitable for presenting architectural, urban design or archaeological information, as they don't integrate the presenter with interactive 3D media. Nor do they allow spatial or component-based interaction controlled by the presenter in a natural and intuitive manner, without needing to sit or stoop over a mouse or keyboard. A third feature that would be very useful is to mirror the presenter's gestures and actions so that the presenter does not have to try to face both audience and screen. To meet these demands we developed a prototype camera-tracking application using a Kinect camera sensor and multi-camera Unity windows for teleconferencing that required the presentation of interactive 3D content along with the speaker (or an avatar that mirrored the gestures of the speaker). Cheaply available commercial software and hardware, coupled with a large display screen (in this case an 8-meter-wide curved screen), allows participants to have their gestures, movements and group behavior fed into the virtual environment either directly or indirectly. Allowing speakers to present 3D virtual worlds to remotely located audiences while appearing to be inside the virtual worlds has immediate practical uses for teaching and long-distance collaboration.
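The gesture-mirroring feature mentioned above can be pictured, under assumptions about the skeleton data, as reflecting the tracked joints about a vertical plane and swapping left/right labels before they drive the avatar; the joint layout below is hypothetical.

# Small sketch of mirroring Kinect-tracked joints so the presenter can face the
# audience while the avatar appears to face the screen content.
def mirror_skeleton(joints):
    """joints: dict of joint name -> (x, y, z) in sensor space."""
    swap = {"Left": "Right", "Right": "Left"}
    mirrored = {}
    for name, (x, y, z) in joints.items():
        # Reflect about the x = 0 plane and swap left/right joint labels.
        side = next((s for s in swap if name.startswith(s)), None)
        new_name = swap[side] + name[len(side):] if side else name
        mirrored[new_name] = (-x, y, z)
    return mirrored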
Mobile phone based AR scene assembly
Proceedings of the 4th …, 2005
In this paper we describe a mobile phone based Augmented Reality application for 3D scene assembly. Augmented Reality on mobile phones extends the interaction capabilities of such handheld devices. It adds a 6 DOF isomorphic interaction technique for manipulating 3D content. We give details of an application that we believe to be the first where 3D content can be manipulated using both the movement of a camera-tracked mobile phone and a traditional button interface as input for transformations. By centering the scene in a tangible marker space in front of the phone we provide a means for bimanual interaction. We describe the implementation, the interaction techniques we have developed and initial user response to trying the application.
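The 6 DOF isomorphic manipulation can be pictured as applying the phone's rigid motion, expressed in the marker space, one-to-one to the grabbed object; the sketch below assumes 4x4 pose matrices and is not the authors' code.

# Isomorphic 6 DOF manipulation: the phone's motion since grabbing the object
# is applied unchanged to the object's pose in the shared marker space.
import numpy as np

def manipulate(object_pose, phone_pose_at_grab, phone_pose_now):
    """All poses are 4x4 rigid transforms expressed in marker space."""
    delta = phone_pose_now @ np.linalg.inv(phone_pose_at_grab)
    return delta @ object_pose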