Rendering multi-view plus depth data on light-field displays
Related papers
A qualitative comparison of MPEG view synthesis and light field rendering
2014 3DTV-Conference: The True Vision - Capture, Transmission and Display of 3D Video (3DTV-CON), 2014
Free Viewpoint Television (FTV) is a new modality in next-generation television that provides the viewer with free navigation through the scene, using image-based view synthesis from a small number of camera view inputs. The recently developed MPEG reference software technology is, however, restricted to narrow baselines and linear camera arrangements. Its reference software currently implements stereo matching and interpolation techniques designed mainly to support three camera inputs (middle-left and middle-right stereo). Especially in view of future use-case scenarios for multi-scopic 3D displays, where hundreds of output views are generated from a limited number (tens) of wide-baseline input views, it becomes mandatory to exploit all input camera information to its full potential. We therefore revisit existing view interpolation techniques to support dozens of camera inputs for better view synthesis performance. In particular, we show that Light Fields yield average PSNR gains of approximately 5 dB over MPEG's existing depth-based multiview video technology, even in the presence of large baselines.
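The reported gains are per-view PSNR figures against ground-truth camera images. As a point of reference, the sketch below shows how such a per-view PSNR is typically computed; the function and variable names are illustrative and not taken from the MPEG reference software.

import numpy as np

def psnr(reference, synthesized, peak=255.0):
    # Peak signal-to-noise ratio (dB) between a ground-truth view and a synthesized view.
    mse = np.mean((reference.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# A ~5 dB average gain then simply means that, averaged over the synthesized views,
# psnr(gt, light_field_view) - psnr(gt, depth_based_view) is about 5.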
Visual enhancements for improved interactive rendering on light field displays
2011
Rendering of complex scenes on a projector-based light field display requires 3D content adaptation in order to provide comfortable viewing experiences in all conditions. In this paper we report about our approach to improve visual experiences while coping with the limitations in the effective field of depth and the angular field of view of the light field display. We present adaptation methods employing non-linear depth mapping and depth of field simulation which leave large parts of the scene unmodified, while modifying the other ...
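The abstract mentions non-linear depth mapping into the display's limited field of depth. The exact mapping is not given in this excerpt; the sketch below is a hypothetical power-law compression that only illustrates the idea of remapping scene depth into the display's comfortable range.

import numpy as np

def remap_depth(z, z_near, z_far, display_near, display_far, gamma=0.5):
    # Remap scene depth non-linearly into the display's comfortable depth range.
    # gamma < 1 allocates more of the display depth budget to content near z_near
    # and compresses depth differences in the distance.
    t = np.clip((z - z_near) / (z_far - z_near), 0.0, 1.0)  # normalized scene depth
    t = t ** gamma                                          # non-linear compression
    return display_near + t * (display_far - display_near)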
Performance analysis of a parallel multi-view rendering architecture using light fields
The Visual Computer, 2009
Multiple view rendering is a common problem for applications where multiple users visualize a common dataset, as in multi-player games and collaborative engineering tools. Parallel processing is an attractive technique for rendering a large number of views efficiently at interactive rates. In this work, we present the implementation of a pipelined multi-view light field renderer using a cluster with GPUs and MPI. We discuss the parallelization model and the problem of partitioning the tasks of the pipeline among the cluster machines, based on the pipeline model and the costs of the stages. Our solution achieves 83% efficiency with ten machines, compared with only 11% for a naive parallelization.
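Parallel efficiency here has its usual meaning of speedup divided by machine count; the names below are illustrative, not from the paper.

def parallel_efficiency(t_serial, t_parallel, n_machines):
    # Efficiency = speedup / number of machines.
    speedup = t_serial / t_parallel
    return speedup / n_machines

# The reported 83% efficiency on ten machines corresponds to roughly an 8.3x speedup,
# versus roughly 1.1x (11%) for the naive parallelization.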
Real-time Depth of Field Rendering via Dynamic Light Field Generation and Filtering
Computer Graphics Forum, 2010
We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene, and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit the atomic operations to resolve visibility when multiple pixels warp to the same image location. We then directly synthesize DoF effects from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
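The abstract outlines the pipeline: forward-warp a single rasterized reference view (with per-pixel depth) to nearby aperture samples, resolve visibility when several pixels land on the same target, and average the resulting views to obtain the depth-of-field image. The sketch below is a simplified, single-threaded illustration of that idea; the paper does this in CUDA with atomic operations and adds a MIP-map-based filter against spatial undersampling, neither of which is reproduced here.

import numpy as np

def warp_to_view(color, depth, dx, dy, focal_depth):
    # Forward-warp the reference view to one aperture sample (dx, dy).
    # Disparity is proportional to (1/z - 1/z_focus), so points at the focal depth
    # do not move (stay sharp) while others spread out (blur). Visibility conflicts
    # are resolved by keeping the closer fragment.
    h, w, _ = color.shape
    out = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disparity = 1.0 / depth - 1.0 / focal_depth
    for y in range(h):
        for x in range(w):
            tx = int(round(x + dx * disparity[y, x]))
            ty = int(round(y + dy * disparity[y, x]))
            if 0 <= tx < w and 0 <= ty < h and depth[y, x] < zbuf[ty, tx]:
                zbuf[ty, tx] = depth[y, x]
                out[ty, tx] = color[y, x]
    return out

def depth_of_field(color, depth, focal_depth, aperture_samples):
    # Average the warped views over the synthetic aperture to get the DoF image.
    acc = np.zeros(color.shape, dtype=np.float64)
    for dx, dy in aperture_samples:
        acc += warp_to_view(color, depth, dx, dy, focal_depth)
    return (acc / len(aperture_samples)).astype(color.dtype)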
Rendering Synthetic Objects into Full Panoramic Scenes using Light-depth Maps
This photorealistic rendering solution addresses the insertion of computer-generated elements into a captured panorama environment. The pipeline specifically targets productions aimed at spherical displays (e.g., fulldomes). Full panoramas have been used in computer graphics for years, yet they are commonly used only as environment lighting and reflection maps for conventional displays. With a keen eye on what may be the next trend in the filmmaking industry, we address the particularities of those productions, proposing a new representation of the space that stores depth together with the light maps in a full panoramic light-depth map. Another novelty of our rendering pipeline is a one-pass solution that blends real and synthetic objects simultaneously, without the need for post-processing effects.
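The light-depth map pairs each panorama texel with a depth value, which is what makes a single-pass composite of real and synthetic content possible: along each viewing direction, the rendered fragment simply wins or loses against the captured scene by depth. A minimal sketch of that comparison, assuming both layers share the same equirectangular parameterization (array names are illustrative):

import numpy as np

def composite_light_depth(pano_rgb, pano_depth, cg_rgb, cg_depth, cg_coverage):
    # One-pass depth composite of rendered (CG) elements over a captured panorama.
    # pano_depth and cg_depth hold per-texel distances along each viewing direction.
    cg_in_front = (cg_coverage > 0) & (cg_depth < pano_depth)
    out = pano_rgb.copy()
    out[cg_in_front] = cg_rgb[cg_in_front]
    return out

This ignores partial coverage, shadows, and lighting interaction between the layers; it only illustrates the depth test that the light-depth representation enables.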
Rendering real-world objects using view interpolation
Proceedings of IEEE International Conference on Computer Vision, 1995
This paper presents a new approach to rendering arbitrary views of real-world 3-D objects of complex shapes. We propose to represent an object by a sparse set of corresponding 2-D views, and to construct any other view as a combination of these reference views. We show that this combination can be linear, assuming proximity of the views, and we suggest how the visibility of constructed points can be determined. Our approach makes it possible to avoid difficult 3-D reconstruction, assuming only rendering is required. Moreover, almost no calibration of views is needed. We present preliminary results on real objects, indicating that the approach is feasible.
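The core claim is that, for nearby reference views, an intermediate view can be built by linearly combining the image coordinates of corresponding points. A minimal two-view sketch of that combination (visibility handling, which the paper also addresses, is omitted):

import numpy as np

def interpolate_points(points_a, points_b, alpha):
    # points_a, points_b: (N, 2) arrays of corresponding image coordinates
    # in two nearby reference views; alpha in [0, 1] selects the in-between view.
    return (1.0 - alpha) * points_a + alpha * points_b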
Rendering for an interactive 360° light field display
ACM Transactions on Graphics, 2007
Figure 1: A 3D object shown on the display is photographed by two stereo cameras (seen in the middle image). The two stereo viewpoints sample the 360° field of view around the display. The right pair is from a vertically-tracked camera position and the left pair is from an untracked position roughly horizontal to the center of the display. The stereo pairs are left-right reversed for cross-fused stereo viewing.
Unstructured lumigraph rendering
Proceedings of the 28th annual conference on Computer graphics and interactive techniques - SIGGRAPH '01, 2001
We describe an image based rendering approach that generalizes many image based rendering algorithms currently in use including light field rendering and view-dependent texture mapping. In particular it allows for lumigraph style rendering from a set of input cameras that are not restricted to a plane or to any specific manifold. In the case of regular and planar input camera positions, our algorithm reduces to a typical lumigraph approach. In the case of fewer cameras and good approximate geometry, our algorithm behaves like view-dependent texture mapping. Our algorithm achieves this flexibility because it is designed to meet a set of desirable goals that we describe. We demonstrate this flexibility with a variety of examples.
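For each desired ray, unstructured lumigraph rendering blends the contributions of the best input cameras using a smooth penalty that, in the full method, combines angular deviation, resolution, and field-of-view terms. The sketch below keeps only the angular term and lets weights fall to zero at the penalty of the (k+1)-th best camera; treat it as an illustration rather than the reference formulation.

import numpy as np

def ulr_weights(desired_dir, camera_dirs, k=4):
    # desired_dir: unit vector from a surface point toward the desired viewpoint.
    # camera_dirs: (N, 3) unit vectors from the same surface point toward each camera.
    penalties = np.arccos(np.clip(camera_dirs @ desired_dir, -1.0, 1.0))
    order = np.argsort(penalties)[:k + 1]
    threshold = max(penalties[order[-1]], 1e-6)   # penalty of the (k+1)-th best camera
    idx = order[:-1]                              # the k cameras actually blended
    w = np.maximum(1.0 - penalties[idx] / threshold, 0.0)
    return idx, w / max(w.sum(), 1e-6)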
Recent results in rendering massive models on horizontal parallax-only light field displays
2009
In this contribution, we report on specialized out-of-core multiresolution real-time rendering systems able to render massive surface and volume models on a special class of horizontal parallax-only light field displays. The displays are based on a specially arranged array of projectors emitting light beams onto a holographic screen, which then makes the necessary optical transformation to compose these beams into a continuous 3D view. The rendering methods employ state-of-the-art out-of-core multiresolution techniques able to correctly project geometries onto the display and to dynamically adapt model resolution by taking into account the particular spatial accuracy characteristics of the display. The programmability of latest-generation graphics architectures is exploited to achieve interactive performance. As a result, multiple freely moving naked-eye viewers can inspect and manipulate virtual 3D objects that appear to them floating at fixed physical locations. The approach provides rapid visual understanding of complex multi-gigabyte surface models and volumetric data sets.
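The display-aware part of this approach is that a projector-based horizontal parallax-only display can only resolve detail down to a size that grows with distance from the screen plane, so the renderer may pick coarser multiresolution nodes for deeper geometry. A hypothetical sketch of such an accuracy-driven level-of-detail choice (the accuracy model and names are illustrative, not the paper's):

def display_tolerance(depth_from_screen, pixel_size_on_screen, beam_divergence):
    # Smallest detail the display can reproduce at a given distance from the screen plane;
    # it grows roughly linearly with that distance because of beam divergence.
    return pixel_size_on_screen + abs(depth_from_screen) * beam_divergence

def select_lod(errors_coarse_to_fine, depth_from_screen, pixel_size, divergence):
    # Pick the coarsest multiresolution level whose geometric error stays below
    # what the display can actually show at that depth.
    tolerance = display_tolerance(depth_from_screen, pixel_size, divergence)
    for level, err in enumerate(errors_coarse_to_fine):
        if err <= tolerance:
            return level
    return len(errors_coarse_to_fine) - 1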