Dataset and Pipeline for Multi-view Light-Field Video
Related papers
Immersive light field video with a layered mesh representation
ACM Transactions on Graphics, 2020
We present a system for capturing, reconstructing, compressing, and rendering high quality immersive light field video. We accomplish this by leveraging the recently introduced DeepView view interpolation algorithm, replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells that are better suited for representing panoramic light field content. We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser. We demonstrate light field video results using data from the 16-camera rig of [Pozo et al. 2019] as well as a new low-cost hemispherical array made from 46 synchronized ac...
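As a rough illustration of how such a small, fixed set of RGBA layers is turned back into an image, the sketch below composites view-aligned RGBA layers back to front with the standard "over" operator. This is only a sketch: it assumes the layers have already been warped to the target view and ignores the depth channel, spherical shell geometry, and mesh machinery used in the actual system.

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front 'over' compositing of view-aligned RGBA layers.

    layers: list of (H, W, 4) float arrays, ordered far to near,
            with straight (non-premultiplied) alpha.
    """
    height, width, _ = layers[0].shape
    out = np.zeros((height, width, 3), dtype=np.float32)
    for layer in layers:                       # far -> near
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)  # standard alpha 'over'
    return out

# Example: composite 8 random layers into one rendered view
layers = [np.random.rand(4, 4, 4).astype(np.float32) for _ in range(8)]
image = composite_layers(layers)               # (4, 4, 3)
```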
Robust and dense depth estimation for light field images
IEEE Transactions on Image Processing, 2017
We propose a depth estimation method for light field images. Light field images can be considered as a collection of 2D images taken from different viewpoints arranged in a regular grid. We exploit this configuration and compute disparity maps between specific pairs of views. This computation is carried out by a state-of-the-art two-view stereo method that provides a non-dense disparity estimate. We propose a disparity interpolation method that increases the density and improves the accuracy of this initial estimate. Disparities obtained from several pairs of views are fused to obtain a single, robust estimation. Finally, experiments on synthetic and real images show that the proposed method outperforms state-of-the-art results.
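The fusion of several pairwise disparity maps can be pictured with the sketch below, which takes a per-pixel median over the available (non-dense) estimates and reports how many views contributed at each pixel. The median is one possible robust combination, not necessarily the fusion rule used in the paper.

```python
import numpy as np

def fuse_disparities(disparity_maps):
    """Fuse several pairwise disparity maps into one robust estimate.

    disparity_maps: list of (H, W) arrays; missing (non-dense) pixels
    are marked with NaN.
    """
    stack = np.stack(disparity_maps, axis=0)       # (N, H, W)
    fused = np.nanmedian(stack, axis=0)            # robust to outliers
    density = np.mean(~np.isnan(stack), axis=0)    # fraction of maps voting
    return fused, density

# Example with three sparse 2x2 disparity maps
maps = [np.array([[1.0, np.nan], [2.0, 3.0]]),
        np.array([[1.1, 2.0], [np.nan, 3.2]]),
        np.array([[0.9, 2.1], [2.2, np.nan]])]
fused, density = fuse_disparities(maps)
```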
4D Temporally Coherent Light-field Video
3D Vision (3DV), 2017
Light-field video has recently been used in virtual and augmented reality applications to increase realism and immersion. However, existing light-field methods are generally limited to static scenes due to the requirement to acquire a dense scene representation. The large amount of data and the absence of methods to infer temporal coherence pose major challenges in storage, compression and editing compared to conventional video. In this paper, we propose the first method to extract a spatio-temporally coherent light-field video representation. A novel method to obtain Epipolar Plane Images (EPIs) from a sparse light-field camera array is proposed. EPIs are used to constrain scene flow estimation to obtain 4D temporally coherent representations of dynamic light-fields. Temporal coherence is achieved on a variety of light-field datasets. Evaluation of the proposed light-field scene flow against existing multi-view dense correspondence approaches demonstrates a significant improvement in the accuracy of temporal coherence.
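For context, an Epipolar Plane Image is a 2D slice of the light field in which one spatial and one angular coordinate vary; scene points trace lines whose slope encodes disparity (and hence depth). The sketch below extracts a horizontal EPI from a rectified row of views, which assumes a dense row of cameras rather than the sparse array handled in the paper.

```python
import numpy as np

def horizontal_epi(views, row):
    """Build an Epipolar Plane Image from a horizontal row of views.

    views: (U, H, W, 3) array, U cameras along a horizontal baseline.
    row:   image row index y at which to slice.
    Returns a (U, W, 3) EPI in which scene points appear as slanted lines.
    """
    return views[:, row, :, :]

# Example: 9 views of size 64x96, sliced at row 32
views = np.random.rand(9, 64, 96, 3)
epi = horizontal_epi(views, row=32)     # shape (9, 96, 3)
```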
5D Light Field Synthesis from a Monocular Video
2021
Commercially available light field cameras have difficulty in capturing 5D (4D + time) light field videos: they can only capture still light field images, or are too expensive for ordinary users to capture light field video with. To tackle this problem, we propose a deep learning-based method for synthesizing a light field video from a monocular video. Because no light field video dataset is available, we propose a new synthetic light field video dataset of photorealistic scenes rendered with Unreal Engine. The proposed deep learning framework synthesizes the light field video with a full set (9 × 9) of sub-aperture images from a normal monocular video. The proposed network consists of three sub-networks, namely, feature extraction, 5D light field video synthesis, and temporal consistency refinement. Experimental results show that our model can successfully synthesize the light field video for synthetic and real scenes and outperforms the previous frame-by-frame method quantita...
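The three-stage structure (feature extraction, 5D light field synthesis, temporal consistency refinement) can be pictured with the hypothetical skeleton below. It only sketches the data flow from a monocular clip to a 9 × 9 set of sub-aperture videos; it is not the authors' network, and every layer, size, and name is illustrative.

```python
import torch
import torch.nn as nn

class LightFieldVideoSynth(nn.Module):
    """Hypothetical three-stage sketch: features -> synthesis -> refinement."""
    def __init__(self, angular=9, feat=32):
        super().__init__()
        self.angular = angular
        # per-frame feature extraction from the monocular input
        self.features = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU())
        # predict all angular x angular sub-aperture images at once
        self.synth = nn.Conv2d(feat, 3 * angular * angular, 3, padding=1)
        # temporal consistency refinement along the time axis
        self.refine = nn.Conv3d(3, 3, kernel_size=3, padding=1)

    def forward(self, video):                      # video: (B, T, 3, H, W)
        b, t, c, h, w = video.shape
        frames = video.reshape(b * t, c, h, w)
        lf = self.synth(self.features(frames))     # (B*T, 3*A*A, H, W)
        lf = lf.reshape(b, t, self.angular ** 2, 3, h, w)
        refined = []
        for a in range(self.angular ** 2):
            clip = lf[:, :, a].permute(0, 2, 1, 3, 4)        # (B, 3, T, H, W)
            refined.append(self.refine(clip).permute(0, 2, 1, 3, 4))
        return torch.stack(refined, dim=2)         # (B, T, A*A, 3, H, W)

# Example: a 4-frame monocular clip becomes 81 sub-aperture videos
model = LightFieldVideoSynth()
out = model(torch.rand(1, 4, 3, 32, 32))           # (1, 4, 81, 3, 32, 32)
```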
Camera Animation for Immersive Light Field Imaging
Electronics
Among novel capture and visualization technologies, light field has made significant progress in the current decade, bringing its emergence in everyday use cases closer. Unlike many other forms of 3D displays and devices, light field visualization does not depend on any viewing equipment. Regarding its potential use cases, light field is applicable to both cinematic and interactive content. Such content often relies on camera animation, a frequent tool for the creation and presentation of 2D content. However, while common 3D camera animation is often rather straightforward, light field visualization has certain constraints that must be considered before implementing any variation of such techniques. In this paper, we introduce our work on camera animation for light field visualization. Different types of conventional camera animation were applied to light field content, which produced an interactive simulation. The simulation was visualized and assessed on a real light fi...
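As a minimal illustration of applying conventional camera animation under light-field constraints, the sketch below linearly interpolates camera positions between keyframes and clamps the horizontal coordinate to a permitted viewing zone. The zone limits, path, and interpolation scheme are purely illustrative and are not taken from the paper.

```python
import numpy as np

def animate_camera(keyframes, steps, x_limits=(-0.5, 0.5)):
    """Linearly interpolate camera positions between keyframes and clamp
    the horizontal coordinate to the display's valid viewing zone.

    keyframes: (K, 3) array of camera positions (x, y, z).
    steps:     number of in-between positions per keyframe segment.
    """
    path = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            p = (1.0 - t) * a + t * b            # linear interpolation
            p[0] = np.clip(p[0], *x_limits)      # respect the viewing zone
            path.append(p)
    last = keyframes[-1].copy()
    last[0] = np.clip(last[0], *x_limits)
    path.append(last)
    return np.stack(path)

# Example: a simple dolly move through three keyframes
keys = np.array([[-1.0, 0.0, 2.0], [0.0, 0.0, 1.5], [1.0, 0.0, 2.0]])
trajectory = animate_camera(keys, steps=10)      # (21, 3) camera positions
```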