Panoramic Spherical Video — The Space Ball
Related papers
Panoramic 360-Degree Video Application Design, Development and Evaluation
2003
Research Goal: The acquisition and presentation of high-resolution panoramic video (PV) presents a number of technical difficulties, as well as excellent User Centered Sciences research opportunities. We are using a five-camera 360-degree PV system that acquires high-resolution (>3K×480) panoramic video images. These images are recorded at 30 Hz frame rates and played back for later viewing. During playback, users can wear a head-mounted display (HMD) with a head-tracking device that allows them to turn their heads freely to observe the desired portions of the panoramic scene. Continued user studies are planned to explore the usability and usefulness of these types of realistic virtual environments.
Composition of Panoramic Images for Telepresence Systems
This work presents a visual telepresence system that captures and projects panoramic images into immersive environments in real time. A number of images are captured with a ring of cameras (a special mounting of multiple digital cameras that captures synchronized images of a 360-degree scene). These images are processed by a PC cluster before transmission. A panoramic image is locally rendered by the proposed optimized stitching method, which computes a positioning map for all pixels. A complete cylindrical image is then sent over the Internet to a multi-projection virtual environment called CAVERNA Digital®. Since the panoramic view is projected at natural size onto the screens of the virtual environment, the user has the sensation of being present at the remote scene.
A multicamera setup for generating stereo panoramic video
2005
Abstract: Traditional visual communication systems convey only two-dimensional (2-D), fixed field-of-view (FOV) video information. The viewer is presented with a series of flat, nonstereoscopic images, which fail to provide a realistic sense of depth. Furthermore, traditional video is restricted to only a small part of the scene, chosen at the director's discretion, and the user is not allowed to "look around" in the environment.
All around the map: Online spherical panorama construction
Computers & Graphics, 2009
One of the main goals of anywhere augmentation is the development of automatic algorithms for scene acquisition in augmented reality systems. In this paper, we present Envisor, a system for online construction of environment maps in new locations. To accomplish this, Envisor uses vision-based frame-to-frame and landmark orientation tracking for long-term, drift-free registration. For additional robustness, a gyroscope/compass orientation unit can optionally be used for hybrid tracking. The tracked video is then projected into a cubemap frame by frame. Feedback is presented to the user to help avoid gaps in the cubemap, while any remaining gaps are filled by texture diffusion. The resulting environment map can be used for a variety of applications, including shading of virtual geometry and remote presence.
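Projecting tracked video into a cubemap, as Envisor does, hinges on mapping view rays to cube faces. A minimal sketch of that mapping is below; the face ordering and sign conventions are illustrative assumptions, not Envisor's actual implementation:

```python
import numpy as np

def ray_to_cubemap(d):
    # Map a unit direction vector d = (x, y, z) to (face, u, v),
    # with u, v in [0, 1]^2. Face selection picks the axis with the
    # largest absolute component; sign picks the +/- face.
    ax = np.abs(d)
    if ax[0] >= ax[1] and ax[0] >= ax[2]:
        face = 0 if d[0] > 0 else 1          # +X / -X
        u = -d[2] / ax[0] * np.sign(d[0])
        v = -d[1] / ax[0]
    elif ax[1] >= ax[2]:
        face = 2 if d[1] > 0 else 3          # +Y / -Y
        u = d[0] / ax[1]
        v = d[2] / ax[1] * np.sign(d[1])
    else:
        face = 4 if d[2] > 0 else 5          # +Z / -Z
        u = d[0] / ax[2] * np.sign(d[2])
        v = -d[1] / ax[2]
    # shift from [-1, 1] to [0, 1] texture coordinates
    return face, (u + 1) / 2, (v + 1) / 2
```

In a system like the one described, each incoming frame's pixels would be rotated by the tracked head orientation and written through such a mapping into the six face textures.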
The ultimate immersive experience: panoramic 3d video acquisition
2012
The paper presents a new approach to an omni-directional, omnistereo multi-camera system that allows the recording of high-resolution, high-quality panoramic 3D video and its display in stereo 3D on a cylindrical screen. It has been developed in the framework of the TiME Lab at Fraunhofer HHI, an experimental platform for immersive media and related content creation. The new system uses a mirror rig to enable a multi-camera constellation that is close to the concept of concentric mosaics. A proof of concept has shown that the systematic approximation error related to concentric mosaics is negligible in practice and that parallax-free stitching of stereoscopic video panoramas can be achieved with high 3D quality, for arbitrary scenes with depth ranges from 2 meters to infinity.
2005
In this paper we present an MPEG-4 compliant system for viewing 360° high-resolution spherical panoramic images and videos. The system allows user navigation within an MPEG-4 3D scene description. Two different scene geometries were implemented: one consisting of a spherical geometry around the viewpoint and the other of a cylindrical geometry, each with associated texture patches. The textures are either real-world static high-resolution scene images or movie textures. The scene dynamically deals with problems like bandwidth and video size to provide a real-time viewing experience by dividing the texture into a number of tiles; thus, only the visible parts of the scene need to be rendered. Furthermore, visibility sensors are associated with the texture tiles so that pre-fetching of the tiles next to the visible ones is enabled during scene navigation. By varying the tile size and visibility sensor size, experiments on the real-time capability of the rendering environment were performed. Additionally, a combination of a head-tracking device and a head-mounted display was investigated for more comfortable scene navigation.
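The tile/visibility-sensor scheme described above can be sketched as a simple angular test: render tiles inside the field of view, and pre-fetch tiles just outside it. Function names, thresholds, and the angular-distance criterion are illustrative assumptions, not the MPEG-4 system's actual API:

```python
import numpy as np

def tiles_to_fetch(view_dir, tile_centers, fov_half_deg, margin_deg):
    """Classify texture tiles by angular distance from the view direction.

    Tiles within fov_half_deg are rendered; tiles within an extra
    margin_deg band are pre-fetched (the "visibility sensor" idea).
    tile_centers are direction vectors to each tile's center on the
    sphere/cylinder.
    """
    view_dir = view_dir / np.linalg.norm(view_dir)
    visible, prefetch = [], []
    for i, c in enumerate(tile_centers):
        c = c / np.linalg.norm(c)
        ang = np.degrees(np.arccos(np.clip(np.dot(view_dir, c), -1.0, 1.0)))
        if ang <= fov_half_deg:
            visible.append(i)
        elif ang <= fov_half_deg + margin_deg:
            prefetch.append(i)
    return visible, prefetch
```

Enlarging the sensor band (margin_deg) trades memory and bandwidth for fewer visible pop-ins during fast head turns, mirroring the tile-size/sensor-size experiments the paper reports.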
Panoramic Images Composition with Positioning Maps for Virtual Telepresence
2007
The virtual telepresence system we are working on investigates the composition of panoramic images to be projected within immersive environments in real time. A number of images are captured by multiple synchronized digital cameras covering a 360-degree scene. From these, a panoramic image can be rendered by our optimized stitching method, called Positioning Maps. This method allows several operations, such as lens distortion correction, cylindrical projection, and image alignment, to be concatenated into a single map of pixels for each camera. We conclude that the Positioning Maps method has great potential for faster image processing, which is relevant in real-time telepresence systems.
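A minimal sketch of the positioning-map idea, under simplifying assumptions (pinhole camera, cylindrical projection only, no lens-distortion term; all names are illustrative, not the paper's implementation): the warp is precomputed once per camera, so stitching each video frame reduces to a single gather through the table.

```python
import numpy as np

def build_positioning_map(h, w, f, yaw=0.0):
    """Precompute, per panorama pixel, the source-camera pixel to sample.

    Composes cylindrical back-projection and a yaw alignment offset into
    one lookup table; further per-pixel operations (e.g. distortion
    correction) could be folded into the same table.
    """
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    theta = (xs - w / 2) / f - yaw               # cylinder angle per column
    x_cam = f * np.tan(theta) + w / 2            # pinhole back-projection
    y_cam = (ys - h / 2) / np.cos(theta) + h / 2
    return np.stack([x_cam, y_cam], axis=-1)

def remap(src, pmap):
    """Per-frame stitching step: nearest-neighbour gather via the map."""
    xi = np.clip(np.rint(pmap[..., 0]).astype(int), 0, src.shape[1] - 1)
    yi = np.clip(np.rint(pmap[..., 1]).astype(int), 0, src.shape[0] - 1)
    return src[yi, xi]
```

Because all geometric work happens once in build_positioning_map, the per-frame cost is a table lookup per output pixel, which is what makes the approach attractive for real-time telepresence.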
PANORAMIC IMMERSIVE VIDEOS - 3D Production and Visualization Framework
Proceedings of the International Conference on Signal Processing and Multimedia Applications, 2009
Panoramic immersive video is a new technology that allows the user to interact with a video beyond the traditional linear production, because it enables navigation in the scene from different points of view. Although many devices for the production of panoramic videos have been proposed, they are still expensive. In this paper, a framework for the production of virtual panoramic immersive videos using 3D production software is presented. The framework is composed of two stages: panoramic video production and immersive visualization. In the former, the traditional 3D scene is taken as input and two outputs are generated: the panoramic video and sound paths for immersive audio reproduction. In the latter, a desktop CAVE assembly is proposed in order to provide an immersive display.
Computer Graphics Forum, 2008
Many video sequences consist of a locally dynamic background containing moving foreground subjects. In this paper we propose a novel way of re-displaying these sequences, by giving the user control over a virtual camera frame. Based on video mosaicing, we first compute a static high quality background panorama. After segmenting and removing the foreground subjects from the original video, the remaining elements are merged into a dynamic background panorama, which seamlessly extends the original video footage. We then re-display this augmented video by warping and cropping the panorama. The virtual camera can have an enlarged field-of-view and a controlled camera motion. Our technique is able to process videos with complex camera motions, reconstructing high quality panoramas without parallax artefacts, visible seams or blurring, while retaining repetitive dynamic elements.