Adapting stereoscopic movies to the viewing conditions using depth-preserving and artifact-free novel view synthesis
Related papers
2011
The 3D shape perceived from viewing a stereoscopic movie depends on the viewing conditions, most notably on the screen size and distance, and depth and size distortions appear because of the differences between the shooting and viewing geometries. When the shooting geometry is constrained, or when the same stereoscopic movie must be displayed with different viewing geometries (e.g. in a movie theater and on a 3DTV), these depth distortions may be reduced by novel view synthesis techniques. They usually involve three steps: computing the stereo disparity, computing a disparity-dependent 2D mapping from the original stereo pair to the synthesized views, and finally composing the synthesized views. In this paper, we focus on the second and third steps: we examine how to generate new views so that the perceived depth is similar to the original scene depth, and we propose a method to detect and reduce, in the third step, the artifacts created by errors in the disparity computed in the first step.
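To make the dependence on viewing geometry concrete, here is a minimal sketch (not from the paper) of the standard relation between on-screen disparity and perceived depth; the function name and all parameter values are illustrative assumptions.

```python
# A minimal sketch of the geometric relationship the abstract refers to:
# how on-screen disparity maps to perceived depth for a given viewing
# geometry. Not the paper's implementation; values are illustrative.

def perceived_depth(disparity_px, screen_width_m, screen_width_px,
                    viewing_distance_m, eye_separation_m=0.065):
    """Depth (in metres from the viewer) perceived from a pixel disparity.

    Positive disparity = uncrossed (point appears behind the screen);
    disparity equal to the eye separation pushes the point to infinity.
    """
    d = disparity_px * screen_width_m / screen_width_px  # physical disparity
    if d >= eye_separation_m:
        return float("inf")  # at or beyond the divergence limit
    # Similar triangles: (Z - D) / Z = d / b  =>  Z = b * D / (b - d)
    return eye_separation_m * viewing_distance_m / (eye_separation_m - d)

# The same pixel disparity yields very different depths on a 3DTV and on a
# theater screen, which is the distortion the paper sets out to correct.
print(perceived_depth(10, 1.2, 1920, 3.0))    # living-room 3DTV
print(perceived_depth(10, 10.0, 1920, 15.0))  # movie-theater screen
```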
New view synthesis for stereo cinema by hybrid disparity remapping
2010
The 3-D shape perceived from viewing a stereoscopic movie depends on the viewing conditions, most notably on the screen size and distance, and depth and size distortions appear because of the differences between the shooting and viewing geometries. When the shooting geometry is constrained, or when the same stereoscopic movie must be displayed with different viewing geometries (e.g. in a movie theater and on a 3DTV), these depth distortions may be reduced by new view synthesis techniques. They usually involve three steps: computing the stereo disparity, computing a disparity-dependent 2-D mapping from the original stereo pair to the synthesized views, and finally composing the synthesized views. In this paper, we compare different disparity-dependent mappings in terms of perceived shape distortion and alteration of the images, and we propose a hybrid mapping which does not distort depth and minimizes modifications of the image content.
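As a point of comparison with the hybrid mapping proposed above, the sketch below shows what the simplest disparity-dependent mapping, a purely linear remapping of the disparity range, looks like in code. The function and parameter names are hypothetical and this is not the paper's method.

```python
import numpy as np

# A hypothetical baseline for contrast with the paper's hybrid mapping: a
# linear disparity-dependent mapping that rescales the disparity range of the
# original pair to a range suited to the target display. It only illustrates
# what "a disparity-dependent mapping" means operationally.

def linear_disparity_remap(disparity, src_range, dst_range):
    """Affinely map disparities from src_range=(lo, hi) to dst_range=(lo, hi)."""
    s_lo, s_hi = src_range
    d_lo, d_hi = dst_range
    scale = (d_hi - d_lo) / (s_hi - s_lo)
    return d_lo + (disparity - s_lo) * scale

# Each pixel is then shifted horizontally by the difference between the new
# and the old disparity (a per-pixel, disparity-dependent warp offset).
disp = np.random.uniform(-20, 40, size=(4, 6))            # toy disparity map (px)
new_disp = linear_disparity_remap(disp, (-20, 40), (-10, 20))
horizontal_shift = new_disp - disp                        # per-pixel warp offsets
```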
Novel view synthesis for stereoscopic cinema
Proceedings of the 1st international workshop on 3D video processing - 3DVP '10, 2010
Novel view synthesis methods consist in using several images or video sequences of the same scene, and creating new images of this scene, as if they were taken by a camera placed at a different viewpoint. They can be used in stereoscopic cinema to change the camera parameters (baseline, vergence, focal length...) a posteriori, or to adapt a stereoscopic broadcast that was shot for given viewing conditions (such as a movie theater) to a different screen size and distance (such as a 3DTV in a living room) [3]. View synthesis from stereoscopic movies usually proceeds in two phases [11]: first, disparity maps and other viewpoint-independent data (such as scene layers and matting information) are extracted from the original sequences, and second, these data and the original images are used to synthesize the new sequence, given geometric information about the synthesized viewpoints. Unfortunately, since no known stereo method gives perfect results in all situations, the results of the first phase will most probably contain errors, which will result in 2D or 3D artifacts in the synthesized stereoscopic movie. We propose to add a third phase where these artifacts are detected and removed in each stereoscopic image pair, while keeping the perceived quality of the stereoscopic movie close to the original.
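The paper's third phase detects artifacts in the synthesized pair itself. As a rough illustration of where such artifacts originate, the sketch below flags disparity estimates that fail a left-right consistency check, a standard (and here simplified) way to mark pixels likely to produce 2D or 3D artifacts after synthesis; it is not the paper's detector, and the names and tolerance are assumptions.

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1.0):
    """Return a boolean mask of pixels whose left/right disparities disagree."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    # Follow the left disparity into the right image and read the disparity there.
    x_in_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_back = np.take_along_axis(disp_right, x_in_right, axis=1)
    return np.abs(disp_left - disp_back) > tol   # True = inconsistent / suspect

# Toy example: a noisy right disparity map disagrees with the left one in places.
disp_l = np.random.uniform(0, 30, size=(5, 8))
disp_r = disp_l + np.random.normal(0, 0.5, size=(5, 8))
print(lr_consistency_mask(disp_l, disp_r).sum(), "suspect pixels")
```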
A stereoscopic movie player with real-time content adaptation to the display geometry
2012
3D shape perception in a stereoscopic movie depends on several depth cues, including stereopsis. For a given content, the depth perceived from stereopsis highly depends on the camera setup as well as on the display size and distance. This can lead to disturbing depth distortions such as the cardboard effect or the puppet-theater effect. As more and more stereoscopic 3D content is produced (feature movies, documentaries, sports broadcasts), a key point is to get the same 3D experience on any display. For this purpose, perceived depth distortions can be resolved by performing view synthesis. We propose a real-time implementation of a stereoscopic player based on the open-source software Bino, which is able to adapt a stereoscopic movie to any display, based on user-provided camera and display parameters.
Depth Mapping for Stereoscopic Videos
2013
Stereoscopic videos have become very popular in recent years. Most of these videos are developed primarily for viewing on large screens located at some distance from the viewer. If we watch these videos on a small screen located close to us, the depth range of the videos will be seriously reduced, which can significantly degrade their 3D effects. To address this problem, we propose a linear depth mapping method to adjust the depth range of a stereoscopic video according to the viewing configuration, including pixel density and distance to the screen. Our method tries to minimize the distortion of stereoscopic image content after depth mapping, by preserving the relationship of neighboring features and preventing line and plane bending. It also considers depth and motion coherence. While depth coherence ensures smooth changes of the depth field across frames, motion coherence ensures smooth content changes across frames. Our experimental results show that the ...
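The abstract derives the target depth range from the viewing configuration (pixel density and distance to the screen). The sketch below is not the paper's optimization; it only inverts the standard viewing geometry, under assumed parameter values, to obtain the pixel disparity that places a point at a chosen perceived depth, which is one way a target range for such a linear mapping could be chosen.

```python
# A hedged sketch: pixel disparity needed to place a point at a given
# perceived depth, for a display described by its pixel density and viewing
# distance. Names and numbers are illustrative assumptions.

def disparity_for_depth(depth_m, viewing_distance_m, pixels_per_m,
                        eye_separation_m=0.065):
    """Pixel disparity that makes a point appear at depth_m from the viewer."""
    # From Z = b*D / (b - d):  d = b * (Z - D) / Z   (physical units)
    d_m = eye_separation_m * (depth_m - viewing_distance_m) / depth_m
    return d_m * pixels_per_m

# Target disparity range for a tablet-like display held at 0.5 m, asking for
# perceived depth between 0.45 m and 0.6 m (a toy example; real comfort
# limits are tighter than the pure geometry suggests).
lo = disparity_for_depth(0.45, 0.5, pixels_per_m=8000)
hi = disparity_for_depth(0.60, 0.5, pixels_per_m=8000)
print(round(lo, 1), round(hi, 1))  # crossed (negative) to uncrossed (positive)
```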
Iterative depth recovery for multi-view video synthesis from stereo videos
Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific, 2014
We propose a novel depth map refinement algorithm and generate multi-view video sequences from two-view video sequences for modern autostereoscopic displays. In order to generate realistic content for virtual views, high-quality depth maps are critical to the view synthesis results. We propose an iterative depth refinement approach combining joint error detection and correction to refine depth maps that are estimated by an existing stereo matching method or provided by a depth capturing device. Error detection targets two types of error: across-view color-depth-inconsistency errors and local color-depth-inconsistency errors. The detected error pixels are then corrected by searching for appropriate candidates under several constraints. A trilateral filter that incorporates intensity, spatial, and temporal terms into the filter weighting is included in the refinement process to enhance consistency across frames. The proposed view synthesis framework features a disparity-based view interpolation method to alleviate translucent artifacts and a directional filter to reduce aliasing around object boundaries. Experimental results show that the proposed algorithm effectively fixes errors in the depth maps. In addition, we show that the refined depth maps, together with the proposed view synthesis framework, significantly improve novel view synthesis on several benchmark datasets.
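As an illustration of the kind of trilateral weighting the abstract describes (spatial, intensity and temporal terms applied to a depth map), here is a simplified sketch. The paper's actual filter, candidate search and constraints are not reproduced; the function name, the way temporal samples are weighted, and the sigma values are all assumptions.

```python
import numpy as np

def trilateral_refine(depth, depth_prev, gray, gray_prev, radius=2,
                      sigma_s=2.0, sigma_i=10.0, sigma_t=10.0):
    """Weighted average of current- and previous-frame depth samples."""
    h, w = depth.shape
    out = depth.copy()
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))      # spatial term
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            sl = (slice(y - radius, y + radius + 1), slice(x - radius, x + radius + 1))
            w_i  = np.exp(-(gray[sl] - gray[y, x])**2 / (2 * sigma_i**2))       # intensity
            w_ip = np.exp(-(gray_prev[sl] - gray[y, x])**2 / (2 * sigma_i**2))  # intensity, prev frame
            w_t  = np.exp(-(gray_prev[sl] - gray[sl])**2 / (2 * sigma_t**2))    # temporal consistency
            wgt  = np.concatenate([(w_s * w_i).ravel(), (w_s * w_ip * w_t).ravel()])
            vals = np.concatenate([depth[sl].ravel(), depth_prev[sl].ravel()])
            out[y, x] = (wgt * vals).sum() / wgt.sum()
    return out

# Toy usage with random data standing in for a depth map and grayscale frames.
depth = np.random.uniform(0, 64, size=(20, 20))
gray = np.random.uniform(0, 255, size=(20, 20))
refined = trilateral_refine(depth, depth.copy(), gray, gray.copy())
```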
View Synthesis for Advanced 3D Video Systems
EURASIP Journal on Image and Video Processing, 2008
Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but it requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions; these are treated separately and fused into the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was implemented generically, that is, it does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view autostereoscopic display as well as other stereo- and multiscopic displays are presented, which proves the suitability of our approach for advanced 3DV systems.
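For readers unfamiliar with the "image-based 3D warping" step mentioned above, here is a minimal depth-image-based rendering sketch: pixels are shifted horizontally in proportion to their disparity, occlusions are resolved with a z-test, and remaining holes are filled from a neighbor. The paper's layered boundary handling and hole-filling methods are not reproduced; names and the simple fill rule are assumptions.

```python
import numpy as np

def warp_view(image, disparity, alpha=0.5):
    """Forward-warp a view by alpha * disparity (alpha=0.5 -> middle view)."""
    h, w = image.shape[:2]
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disparity[y, x]))
            if 0 <= xt < w and disparity[y, x] > zbuf[y, xt]:  # nearer pixel wins
                zbuf[y, xt] = disparity[y, x]
                out[y, xt] = image[y, x]
    # Naive hole filling: copy from the left neighbour where nothing was warped.
    for y in range(h):
        for x in range(1, w):
            if zbuf[y, x] == -np.inf:
                out[y, x] = out[y, x - 1]
    return out

# Toy usage: synthesize a middle view from one image and its disparity map.
img = np.random.rand(10, 12, 3)
disp = np.full((10, 12), 4.0)
middle = warp_view(img, disp, alpha=0.5)
```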
Stereoscopic image generation based on depth images for 3D TV
IEEE Transactions on Broadcasting, 2005
A depth-image-based rendering system for generating stereoscopic images is proposed. One important aspect of the proposed system is that the depth maps are pre-processed using an asymmetric filter to smooth the sharp changes in depth at object boundaries. In addition to ameliorating the effects of blocky artifacts and other distortions contained in the depth maps, the smoothing reduces or completely removes newly exposed (disocclusion) areas where potential artifacts can arise from the image warping needed to generate images from new viewpoints. The asymmetric nature of the filter reduces the amount of geometric distortion that might otherwise be perceived. We present results to show that the proposed system improves the image quality of stereoscopic virtual views while maintaining reasonably good depth quality.
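A hedged sketch of the pre-smoothing idea described above: blurring depth discontinuities before warping shrinks the disoccluded areas that appear in the synthesized view. "Asymmetric" is illustrated here simply as a different smoothing strength along each image axis; the paper's exact filter design is not reproduced, and the sigma values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_depth(depth, sigma_vertical=10.0, sigma_horizontal=3.0):
    """Anisotropic Gaussian smoothing of a depth map (per-axis sigma)."""
    return gaussian_filter(depth, sigma=(sigma_vertical, sigma_horizontal))

# Toy example: a sharp depth edge (object boundary) becomes a gentle ramp,
# so the warped view exposes a much smaller hole behind the object.
depth = np.zeros((100, 100), dtype=float)
depth[:, 50:] = 64.0
smoothed = smooth_depth(depth)
print(np.abs(np.diff(smoothed, axis=1)).max())  # largest remaining depth step
```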
Disparity depth map layers representation for image view synthesis
2014
This paper presents a method that jointly performs stereo matching and inter-view interpolation to obtain the depth map and the virtual view image. A novel view synthesis method based on a depth map layers representation of the stereo image pairs is proposed. The main idea of this approach is to separate the depth map into several depth layers based on the disparity of the corresponding points. The novel view can then be interpolated independently for each depth layer by masking that layer, and the final novel view is obtained by flattening all the layers into a single image. Since the view synthesis is performed in separate layers, an extracted virtual object can be superimposed onto another 3D scene. The method is useful for free-viewpoint video applications with a small number of cameras. Experimental results show that the algorithm improves the efficiency of depth map estimation and of synthesizing new virtual views. Index Terms: depth map, free viewpoint video, stereo camera, stereo matching algorithms, view synthesis.
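As an illustration of the layering idea described above, the sketch below splits a disparity map into depth layers by thresholding, so each layer can be warped or composited independently, and then flattens the layers back, near over far. The layer count, thresholds and function names are illustrative assumptions, not the paper's method.

```python
import numpy as np

def split_into_layers(disparity, boundaries):
    """Return a list of boolean masks, one per disparity interval (far to near)."""
    edges = [-np.inf] + list(boundaries) + [np.inf]
    return [(disparity >= lo) & (disparity < hi)
            for lo, hi in zip(edges[:-1], edges[1:])]

def flatten_layers(layer_images, layer_masks):
    """Composite per-layer renderings back into one image, far to near."""
    out = np.zeros_like(layer_images[0])
    for img, mask in zip(layer_images, layer_masks):   # ordered far -> near
        out[mask] = img[mask]
    return out

# Toy example: three depth layers, with constant images standing in for the
# per-layer renderings that a real pipeline would produce by warping.
disp = np.random.uniform(0, 48, size=(6, 6))
masks = split_into_layers(disp, boundaries=[16, 32])
assert sum(m.sum() for m in masks) == disp.size        # every pixel in one layer
imgs = [np.full((6, 6), i, dtype=float) for i in range(3)]
composite = flatten_layers(imgs, masks)
```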
Depth-level-adaptive view synthesis for 3D video
2010
In the multiview video plus depth (MVD) representation for 3D video, a depth map sequence is coded for each view. At the decoding end, a view synthesis algorithm is used to generate virtual views from the depth map sequences. Many of the known view synthesis algorithms introduce rendering artifacts, especially at object boundaries. In this paper, a depth-level-adaptive view synthesis algorithm is presented to reduce the amount of artifacts and to improve the quality of the synthesized images.