An Object-Based System for Stereoscopic Videoconferencing with Viewpoint Adaptation
Related papers
Systems for Disparity-based Multiple-view Interpolation
1998
Viewpoint adaptation from multiple-viewpoint video captures is an important tool for creating a telepresence illusion in stereoscopic presentations of natural scenes, and for integrating real-world video objects into virtual 3D worlds. This paper describes different low-complexity approaches for the generation of virtual-viewpoint camera signals. These approaches are based on disparity-processing techniques and can hence be implemented with much lower complexity than a full 3D analysis of natural objects or scenes. A real-time hardware system based on one of our algorithms has already been developed.
Dense wide-baseline disparities from conventional stereo for immersive videoconferencing
Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., 2004
We propose an algorithm that creates consistent, dense disparity maps from the incomplete disparity data generated by a conventional stereo system used in a wide-baseline configuration. The reference application is IBR-oriented immersive videoconferencing, in which disparities are used by a view-synthesis module to create instantaneous views of remote speakers consistent with the local speaker's viewpoint. We perform spline-based disparity interpolation within non-overlapping regions. The regions are defined by discontinuity boundaries identified in the incomplete disparity map. We demonstrate very good results on significantly incomplete disparity data computed by a conventional correlation-based stereo algorithm on a real wide-baseline stereo pair acquired by an immersive videoconferencing system.
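The region-wise interpolation idea can be sketched in one dimension. The snippet below is a simplified stand-in for the paper's method: it uses linear rather than spline interpolation, and treats a large jump between the valid values flanking a hole as a region (discontinuity) boundary across which no filling is done. All names and the `jump` threshold are illustrative, not from the paper.

```python
def fill_disparity_row(disp, jump=8):
    """Fill missing disparities (None) in a scanline, but only within a
    region: if the valid values on either side of a hole differ by more
    than `jump`, the hole straddles a discontinuity and is left empty."""
    out = list(disp)
    n = len(out)
    i = 0
    while i < n:
        if out[i] is None:
            l = i - 1           # last valid sample to the left
            r = i
            while r < n and out[r] is None:
                r += 1          # first valid sample to the right
            if l >= 0 and r < n and abs(out[r] - out[l]) <= jump:
                # same region: interpolate linearly across the hole
                for k in range(l + 1, r):
                    t = (k - l) / (r - l)
                    out[k] = out[l] + t * (out[r] - out[l])
            i = r
        else:
            i += 1
    return out
```

A hole inside a smooth region is filled, while a hole at a depth discontinuity stays empty and would be handled by the region-wise fit in the full 2-D method.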
A New Depth and Disparity Visualization Algorithm for Stereoscopic Camera Rig
Journal of information and communication convergence engineering
In this paper, we present the effect of binocular cues, which play a crucial role in the visualization of a stereoscopic (3D) image. This study is useful for extracting depth and disparity information with image-processing techniques. A linear relation between the object distance and the image distance is presented to discuss the cause of cybersickness. In the experimental results, a three-dimensional view of the depth map between the 2D images is shown. A median filter is used to reduce the noise present in the disparity-map image. After the median filter, two filter algorithms, a Gabor filter and a Canny filter, are tested for disparity visualization between the two images. The Gabor filter estimates the disparity by texture extraction and discrimination between the two images, and the Canny filter visualizes the disparity by edge detection on the two color images obtained from the stereoscopic cameras. The Canny filter is the better choice for estimating the disparity because it is much more efficient than the Gabor filter at detecting edges: it converts the color images directly into color edges without first converting them to grayscale. As a result, clearer edges of the stereo images are obtained than with edge detection by the Gabor filter. Since the main goal of the research is to estimate the horizontal disparity of all possible regions and edges of the images, the Canny filter is proposed for a decipherable visualization of the disparity.
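The median-filtering step used to clean the disparity map can be illustrated in one dimension. This is a generic sliding-window median, not code from the paper; window size `k` is an assumed parameter.

```python
import statistics

def median_filter(vals, k=3):
    """Slide a window of k samples over vals and replace each sample with
    the window median; isolated outliers (speckle noise typical of raw
    disparity maps) are suppressed while step edges are preserved."""
    h = k // 2
    out = []
    for i in range(len(vals)):
        win = vals[max(0, i - h): i + h + 1]  # truncated at the borders
        out.append(statistics.median(win))
    return out
```

A single spurious disparity spike is removed, which is exactly why a median (rather than a mean) filter is the usual choice before edge detection on a disparity map.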
retrieved and printed on …, 2010
This paper presents a novel, real-time disparity algorithm developed for immersive teleconferencing. The algorithm combines the Census transform with a hybrid block- and pixel-recursive matching scheme. Computational effort is minimised by the efficient selection of a small number of candidate vectors, guaranteeing both spatial and temporal consistency of the disparities. The latter aspect is crucial for 3D videoconferencing applications, where novel views of the remote conferees must be synthesised with the correct motion parallax. This application requires video processing at ITU-R Rec. 601 resolution. The algorithm generates good-quality disparity maps in real time, and in both directions (left-to-right and right-to-left), on an 800 MHz Pentium III processor.
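The Census transform at the core of such matchers is simple to sketch: each pixel is encoded by comparing it with its neighbours, and candidate disparities are then scored by the Hamming distance between signatures, which is robust to brightness differences between the two cameras. The 3x3 window and the helper names below are illustrative, not taken from the paper.

```python
def census_3x3(img, x, y):
    """8-bit Census signature of pixel (x, y): one bit per 3x3 neighbour,
    set when the neighbour is darker than the centre pixel."""
    c = img[y][x]
    sig = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue  # the centre pixel itself carries no bit
            sig = (sig << 1) | (1 if img[y + dy][x + dx] < c else 0)
    return sig

def hamming(a, b):
    """Matching cost between two Census signatures: number of differing bits."""
    return bin(a ^ b).count("1")
```

Because only the ordering of intensities matters, the cost is invariant to gain and offset changes, which is what makes Census-based matching attractive for multi-camera rigs.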
A stereoscopic movie player with real-time content adaptation to the display geometry
2012
3D shape perception in a stereoscopic movie depends on several depth cues, including stereopsis. For a given content, the depth perceived from stereopsis depends strongly on the camera setup as well as on the display size and distance. This can lead to disturbing depth distortions such as the cardboard effect and the puppet-theater effect. As more and more stereoscopic 3D content is produced (feature movies, documentaries, sports broadcasts), a key point is to get the same 3D experience on any display. For this purpose, perceived depth distortions can be resolved by performing view synthesis. We propose a real-time stereoscopic player, based on the open-source software Bino, which can adapt a stereoscopic movie to any display from user-provided camera and display parameters.
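Why the same movie yields different depth on different displays follows from simple viewing geometry. For eye separation b, screen distance D, and on-screen disparity d (all in metres, d positive for uncrossed disparity), triangle similarity gives the perceived depth Z = bD/(b - d). The sketch below is illustrative geometry only, not code from the Bino-based player; the default values for b and D are assumptions.

```python
def perceived_depth(d, b=0.065, D=2.0):
    """Perceived depth (metres from the viewer) of a point shown with
    on-screen disparity d, for eye separation b and screen distance D.
    d = 0 puts the point on the screen plane; d -> b pushes it to infinity."""
    return b * D / (b - d)
```

The same disparity in *pixels* becomes a larger metric disparity d on a larger screen, so Z changes with the display, which is precisely the distortion that disparity-dependent view synthesis compensates for.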
Stereo video coding based on interpolated motion and disparity estimation
In this paper, a new optimised method for coding stereoscopic image sequences is presented and compared with already known methods. The two basic methods for coding a stereoscopic image sequence are the compatible and the joint method. The first uses MPEG to code the left channel and exploits the spatial disparity redundancy between the two sequences to code the right channel. The second also employs MPEG for the left channel, but exploits both the temporal redundancy among the right-channel frames and the spatial redundancy between the corresponding frames of the two channels. The proposed method, called IMDE, estimates the P- and B-type frames of the right channel with an interpolative scheme that takes into account both the temporal and the disparity characteristics. Investigating the effectiveness of joint motion and disparity vector estimation, as well as the choice of the weighting factors that participate in the proposed interpolative scheme, optimises the w...
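The interpolative idea of blending a temporal and a disparity predictor can be sketched per block. The weight w stands in for the weighting factors the paper optimises; the function and its arguments are illustrative, not from the paper.

```python
def interpolative_prediction(mc_pred, dc_pred, w=0.5):
    """Predict a right-channel block as a weighted blend of its
    motion-compensated predictor mc_pred (from the previous right frame)
    and its disparity-compensated predictor dc_pred (from the same-time
    left frame). w = 1 falls back to pure temporal prediction, w = 0 to
    pure disparity prediction."""
    return [w * m + (1 - w) * d for m, d in zip(mc_pred, dc_pred)]
```

Only the small prediction residual is then encoded, so a well-chosen w, adapted to how well each redundancy holds for the block, directly reduces the bit rate.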
Stereoscopic Displays and Applications XXII, 2011
The 3D shape perceived from viewing a stereoscopic movie depends on the viewing conditions, most notably on the screen size and distance, and depth and size distortions appear because of the differences between the shooting and viewing geometries. When the shooting geometry is constrained, or when the same stereoscopic movie must be displayed with different viewing geometries (e.g. in a movie theater and on a 3DTV), these depth distortions may be reduced by novel view synthesis techniques. They usually involve three steps: computing the stereo disparity, computing a disparity-dependent 2D mapping from the original stereo pair to the synthesized views, and finally composing the synthesized views. In this paper, we focus on the second and third steps: we examine how to generate new views so that the perceived depth is similar to the original scene depth, and we propose a method to detect and reduce, in the third and last step, the artifacts created by errors in the disparity computed in the first step.
This work [9] aims at determining dense motion and disparity fields from a stereoscopic sequence of images for the construction of stereo-interpolated images. At each time instant, the two dense motion fields, for the left and the right sequences, and the disparity field of the next stereoscopic pair are jointly estimated. The disparity field of the current stereoscopic pair is considered known. The disparity field of the first stereoscopic pair is estimated separately. For both problems, multi-scale iterative relaxation algorithms are used. Stereo occlusions and motion occlusions/disclosures are detected using error-confidence measures. For the reconstruction of intermediate views, a disparity-compensated linear interpolation algorithm is used. Results are given for real stereoscopic data.
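Disparity-compensated linear interpolation of an intermediate view can be sketched on a single scanline. The convention below (left pixel x corresponds to right pixel x - disp[x]) and the occlusion fallback are illustrative simplifications: the blend (1-alpha)*L[x] + alpha*R[x-d] is computed, while the resampling of that value to its position x - alpha*d in the virtual view is omitted for brevity.

```python
def interpolate_views(left, right, disp, alpha=0.5):
    """1-D disparity-compensated linear interpolation between a left
    (alpha = 0) and a right (alpha = 1) scanline. Pixels whose match
    falls outside the right scanline are treated as occluded and copy
    the left sample."""
    n = len(left)
    out = []
    for x in range(n):
        xr = x - disp[x]            # corresponding right-image position
        if 0 <= xr < n:
            out.append((1 - alpha) * left[x] + alpha * right[xr])
        else:
            out.append(left[x])     # occlusion fallback
    return out
```

With alpha swept from 0 to 1, the same machinery yields a continuum of virtual camera positions along the baseline, which is the basis of the stereo-interpolated images discussed above.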