Intrinsic textures for relightable free-viewpoint video

Multiview Intrinsic Images of Outdoors Scenes with an Application to Relighting

ACM Transactions on Graphics, 2015

We introduce a method to compute intrinsic images for a multiview set of outdoor photos with cast shadows, taken under the same lighting. We use an automatic 3D reconstruction from these photos and the sun direction as input and decompose each image into reflectance and shading layers, despite the inaccuracies and missing data of the 3D model. Our approach is based on two key ideas. First, we progressively improve the accuracy of the parameters of our image formation model by performing iterative estimation and combining 3D lighting simulation with 2D image optimization methods. Second, we use the image formation model to express reflectance as a function of discrete visibility values for shadow and light, which allows us to introduce a robust visibility classifier for pairs of points in a scene. This classifier is used for shadow labeling, allowing us to compute high-quality reflectance and shading layers. Our multiview intrinsic decomposition is of sufficient quality to allow relighting...
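The second key idea, expressing reflectance as a function of discrete visibility, can be illustrated with a toy sketch. The image-formation model and the constants below are simplified assumptions for illustration, not the paper's exact equations: a pixel's intensity is its reflectance times a shading term that switches between sun-plus-sky and sky-only depending on a binary sun visibility.

```python
# Toy image-formation model assumed here (not the paper's exact equations):
# I = R * (v * E_sun + E_sky), with v in {0, 1} the discrete sun visibility.
E_SUN, E_SKY = 0.8, 0.2

def reflectance_under(intensity, v):
    """Candidate reflectance if the pixel has sun visibility v."""
    return intensity / (v * E_SUN + E_SKY)

def consistent(i_p, i_q, v_p, v_q, tol=1e-3):
    """Pairwise visibility test: do points p and q admit the same
    reflectance under the hypothesised visibility labels?"""
    return abs(reflectance_under(i_p, v_p) - reflectance_under(i_q, v_q)) < tol

# Same material, one point in shadow and one in sunlight:
R = 0.5
i_lit, i_shadow = R * (E_SUN + E_SKY), R * E_SKY
print(consistent(i_lit, i_shadow, 1, 0))  # True: labels explain both pixels
print(consistent(i_lit, i_shadow, 0, 1))  # False: swapped labels do not
```

A pairwise test of this kind is what makes robust shadow labeling possible: wrong visibility hypotheses produce inconsistent reflectance estimates for same-material point pairs.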

Example-based reflectance estimation for capturing relightable models of people

IET 5th European Conference on Visual Media Production (CVMP 2008), 2008

We present a new approach to reflectance estimation for dynamic scenes. Non-parametric image statistics are used to transfer reflectance properties from a static example set to a dynamic image sequence. The approach allows reflectance estimation for surface materials with inhomogeneous appearance, such as those which commonly occur with patterned or textured clothing. Material reflectance properties are initially estimated from static images of the subject under multiple directional illuminations using photometric stereo. The estimated reflectance, together with the corresponding image under uniform ambient illumination, forms a prior set of reference material observations. Material reflectance properties are then estimated for video sequences of a moving person captured under uniform ambient illumination by matching the observed local image statistics to the reference observations. Results demonstrate that the transfer of reflectance properties enables estimation of the dynamic surface normals and subsequent relighting. This approach overcomes limitations of previous work on material transfer and relighting of dynamic scenes, which was limited to surfaces with regions of homogeneous reflectance. We evaluate the approach by relighting 3D model sequences reconstructed from multi-view video. Comparison to previous model relighting demonstrates improved reproduction of detailed texture and shape dynamics.
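The initial estimation step uses classical photometric stereo. The sketch below shows the standard three-light Lambertian formulation (intensity = albedo times normal-dot-light, solved per pixel); the light directions and albedo value are made-up example numbers, not the paper's setup.

```python
import numpy as np

# Classic three-light Lambertian photometric stereo, illustrating the
# "initial reflectance from static images" step. Light directions are
# illustrative example values.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])   # rows: light directions

def photometric_stereo(intensities):
    """Recover albedo and surface normal from 3 intensity measurements."""
    g = np.linalg.solve(L, intensities)   # g = albedo * normal
    albedo = np.linalg.norm(g)
    return albedo, g / albedo

# Simulate a pixel with known albedo/normal, then recover them.
n_true = np.array([0.0, 0.0, 1.0])
rho_true = 0.7
I = rho_true * L @ n_true
rho, n = photometric_stereo(I)
print(round(rho, 3), np.round(n, 3))  # 0.7 [0. 0. 1.]
```

With three non-coplanar lights the per-pixel system is exactly determined; more lights would turn `solve` into a least-squares fit.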

Inverse Rendering and Relighting From Multiple Color Plus Depth Images

IEEE Transactions on Image Processing

We propose a novel relighting approach that takes advantage of multiple color plus depth images acquired from a consumer camera. Assuming distant illumination and Lambertian reflectance, we model the reflected light field in terms of spherical harmonic coefficients of the Bi-directional Reflectance Distribution Function (BRDF) and lighting. We make use of the noisy depth information together with color images taken under different illumination conditions to refine surface normals inferred from depth. We first perform refinement on the surface normals using the first order spherical harmonics. We initialize this non-linear optimization with a linear approximation to greatly reduce computation time. With surface normals refined, we formulate the recovery of albedo and lighting in a matrix factorization setting, involving second order spherical harmonics. Albedo and lighting coefficients are recovered up to a global scaling ambiguity. We demonstrate our method on both simulated and real data, and show that it can successfully recover both illumination and albedo to produce realistic relighting results.
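The normal-refinement stage relies on the first-order spherical-harmonic shading model, in which irradiance is linear in the surface normal: E(n) = c0 + c·n. The sketch below shows why this yields a linear estimation problem; the normals and lighting coefficients are synthetic, and albedo is fixed to one for simplicity.

```python
import numpy as np

# First-order spherical-harmonic (SH) Lambertian shading: irradiance is
# linear in the normal, so lighting follows from linear least squares.
rng = np.random.default_rng(0)

# Synthetic unit normals and a ground-truth first-order lighting vector.
normals = rng.normal(size=(100, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
coeffs_true = np.array([0.9, 0.1, 0.2, 0.5])  # [c0, cx, cy, cz]

B = np.hstack([np.ones((100, 1)), normals])   # SH basis up to order 1
intensities = B @ coeffs_true                 # albedo assumed 1 here

coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
print(np.allclose(coeffs, coeffs_true))  # True
```

In the paper, this linear structure motivates the linear initialization of the non-linear refinement; the subsequent albedo/lighting recovery uses second-order harmonics and is resolved only up to a global scale, as the abstract notes.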

Spatio-temporal reflectance sharing for relightable 3D video

2007

In our previous work [21], we have shown that by means of a model-based approach, relightable free-viewpoint videos of human actors can be reconstructed from only a handful of multi-view video streams recorded under calibrated illumination. To achieve this purpose, we employ a marker-free motion capture approach to measure dynamic human scene geometry. Reflectance samples for each surface point are captured by exploiting the fact that, due to the person's motion, each surface location is, over time, exposed to the acquisition sensors under varying orientations. Although this is the first setup of its kind to measure surface reflectance from footage of arbitrary human performances, our approach may lead to a biased sampling of surface reflectance since each surface point is only seen under a limited number of half-vector directions. We thus propose in this paper a novel algorithm that reduces the bias in BRDF estimates of a single surface point by cleverly taking into account reflectance samples from other surface locations made of similar material. We demonstrate the improvements achieved with this spatio-temporal reflectance sharing approach both visually and quantitatively.
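The core idea, pooling reflectance samples from same-material points to widen a single point's angular coverage, can be sketched with a deliberately simple model. The Lambertian fit and the sample values below are illustrative stand-ins for the paper's BRDF estimation, not its actual algorithm.

```python
import numpy as np

# Reflectance-sharing sketch: a point seen under few half-vector directions
# borrows samples from other points classified as the same material before
# fitting. A Lambertian fit stands in for the paper's BRDF model.
def fit_albedo(samples):
    """samples: list of (intensity, cos_theta) pairs; least-squares fit of
    the Lambertian model I = albedo * cos_theta."""
    I = np.array([s[0] for s in samples])
    c = np.array([s[1] for s in samples])
    return float((I @ c) / (c @ c))

own = [(0.5 * 0.9, 0.9)]                                 # one angle only
shared = own + [(0.5 * c, c) for c in (0.3, 0.6, 0.8)]   # same-material pool
print(round(fit_albedo(shared), 3))  # 0.5
```

For a single-lobe Lambertian model one sample already suffices; the pooling matters for the paper's full BRDF, where an isolated point's narrow range of half-vector directions would bias the specular-lobe estimate.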

Multi-view Multi-illuminant Intrinsic Dataset

British Machine Vision Conference

This paper proposes a novel high-resolution multi-view dataset of complex multi-illuminant scenes with precise reflectance and shading ground truth as well as raw depth and 3D point clouds. Our dataset challenges intrinsic image methods by providing complex coloured cast shadows, highly textured and colourful surfaces, and specularity. This is the first publicly available multi-view real-photo dataset of such complexity with pixel-wise intrinsic ground truth. To help evaluate different intrinsic image methods, we propose a new perception-inspired metric based on reflectance consistency. We provide an evaluation of three intrinsic image methods using our dataset and metric.
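The abstract does not give the metric's formula, so the sketch below only illustrates the underlying idea of reflectance consistency: reflectance estimated for the same scene point should agree across views and illuminants. The score used here (mean per-point spread across views) is a hypothetical stand-in, not the paper's metric.

```python
import numpy as np

# Hypothetical reflectance-consistency score, illustrating the idea only:
# the same scene point's estimated reflectance should not vary with the
# view or illuminant it was estimated from.
def reflectance_consistency(estimates):
    """estimates: array (num_views, num_points) of reflectance at
    corresponding points. Lower spread across views = more consistent."""
    return float(np.mean(np.std(estimates, axis=0)))

perfect = np.tile([[0.3, 0.7, 0.5]], (4, 1))    # identical in all views
noisy = perfect + np.random.default_rng(1).normal(0, 0.05, perfect.shape)
print(reflectance_consistency(perfect) < reflectance_consistency(noisy))
```

Multi-view ground truth is what makes such a metric computable: correspondences between views come from the dataset's depth and point clouds.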

Inverse global illumination: Recovering reflectance models of real scenes from photographs

1999

In this paper we present a method for recovering the reflectance properties of all surfaces in a real scene from a sparse set of photographs, taking into account both direct and indirect illumination. The result is a lighting-independent model of the scene's geometry and reflectance properties, which can be rendered with arbitrary modifications to structure and lighting via traditional rendering methods.

Coherent Texture Synthesis for Photograph Relighting and Texture Transfer

2006

This paper presents a texture synthesis method for relighting and texture transfer. The method is based on the observation that many points on the surface of a textured object share the same material but have different normals. Once we identify these points, we can collect many samples of a material under different illuminations, which lets us relight the object by interpolating similar lighting conditions. Furthermore, we can transfer the texture to other objects under different illuminations. We acquire the source textured object by taking a few photographs from a fixed viewpoint under controlled lighting. Stereo is used to estimate the normal map and the unshaded material map of the source object. Points of the same material are then recognized by neighborhood matching of pixel values in the unshaded material map. Finally, we can transfer the texture to another object and render it under different illumination conditions using these clustered texels.
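The central observation, that one material observed under many normals forms a reusable sample set, can be reduced to a nearest-orientation lookup. The data and the single-sample lookup below are a synthetic simplification of the paper's interpolation over clustered texels.

```python
import numpy as np

# Sketch of the core observation: pixels of one material seen under many
# normals form a sample set; a target pixel is relit by reusing the sample
# whose normal is closest to the target's. Data is synthetic.
def relight_pixel(target_n, samples):
    """samples: list of (normal, intensity) for one material cluster."""
    normals = np.array([n for n, _ in samples])
    idx = int(np.argmax(normals @ target_n))   # most similar orientation
    return samples[idx][1]

light = np.array([0.0, 0.0, 1.0])
cluster = [(n, float(n @ light)) for n in
           (np.array([0.0, 0.0, 1.0]), np.array([0.6, 0.0, 0.8]))]
print(relight_pixel(np.array([0.59, 0.0, 0.81]), cluster))  # 0.8
```

The paper interpolates between similar lighting conditions rather than snapping to one sample; the hard `argmax` here just makes the lookup structure explicit.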

Capture and Synthesis of 3D Surface Texture

International Journal of Computer Vision, 2005

We present and compare five approaches for capturing, synthesising and relighting real 3D surface textures. Unlike 2D texture synthesis techniques, they allow the captured textures to be relit using illumination conditions that differ from those of the original. We adapted a texture quilting method due to Efros and combined this with five different relighting representations, comprising: a set of three photometric images; surface gradient and albedo maps; polynomial texture maps; and two eigen-based representations using 3 and 6 base images. We used twelve real textures to perform quantitative tests on the relighting methods in isolation. We developed a qualitative test for the assessment of the complete synthesis systems. Ten observers were asked to rank the images obtained from the five methods using five real textures. Statistical tests were applied to the rankings. The six-base-image eigen method produced the best quantitative relighting results and in particular was better able to cope with specular surfaces. However, in the qualitative tests there were no significant performance differences detected between it and the other two top performers. Our conclusion is therefore that the cheaper gradient and three-base-image eigen methods should be used in preference, especially where the surfaces are Lambertian or near Lambertian.
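The "surface gradient and albedo maps" representation recommended in the conclusion is simple to state: store per-texel height gradients and albedo, then shade with a Lambertian model under any new light. The sketch below uses a flat synthetic patch as illustrative input; it is not the paper's implementation.

```python
import numpy as np

# Relighting from gradient and albedo maps (the "cheap" representation the
# paper recommends for near-Lambertian textures). Inputs are synthetic.
def relight(p, q, albedo, light):
    """p, q: per-texel height gradients dz/dx, dz/dy; Lambertian shading."""
    n = np.dstack([-p, -q, np.ones_like(p)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    l = np.asarray(light, float)
    l /= np.linalg.norm(l)
    return albedo * np.clip(n @ l, 0.0, None)

p = np.zeros((2, 2))
q = np.zeros((2, 2))                     # flat patch
albedo = np.full((2, 2), 0.6)
img = relight(p, q, albedo, light=[0.0, 0.0, 1.0])
print(img)  # flat patch under frontal light -> uniform 0.6
```

Storing (p, q, albedo) is three scalars per texel, which is why this representation is cheaper than the six-base-image eigen method while matching it qualitatively on Lambertian surfaces.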

Estimating the surface radiance function from single images

Graphical Models, 2005

This paper describes a simple method for estimating the surface radiance function from single images of smooth surfaces made of materials whose reflectance function is isotropic and monotonic. The method makes implicit use of the Gauss map between the surface and a unit sphere. We assume that the material brightness is monotonic with respect to the angle between the illuminant direction and the surface normal. Under conditions in which the light source and the viewer directions are identical, we show how a tabular representation of the surface radiance function can be estimated using the cumulative distribution of image gradients. Using this tabular representation of the radiance function, surfaces may be rendered under varying light source direction by rotating the corresponding reflectance map on the Gauss sphere about the specular spike direction. We present a sensitivity study on synthetic and real-world imagery. We also present two applications which make use of the estimated radiance function. The first of these illustrates how the radiance function estimates can be used to render objects when the light and viewer directions are no longer coincident. The second application involves applying corrected Lambertian radiance to rough and shiny surfaces.
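Once the tabular radiance function has been estimated, rendering under a new light direction amounts to a table lookup in the angle between the normal and the illuminant. The table below is a made-up monotonic falloff standing in for an estimated one; only the lookup mechanics are illustrated.

```python
import numpy as np

# Tabular radiance function: rendering is interpolation into a table
# indexed by the normal/illuminant angle. The falloff here is a made-up
# monotonic example, not an estimated radiance function.
angles = np.linspace(0.0, np.pi / 2, 64)      # angle between n and l
radiance_table = np.cos(angles) ** 1.5        # any monotonic falloff

def shade(cos_theta):
    """Intensity from the tabular radiance function via interpolation."""
    theta = np.arccos(np.clip(cos_theta, 0.0, 1.0))
    return float(np.interp(theta, angles, radiance_table))

print(round(shade(1.0), 3))  # frontal incidence: the table's peak, 1.0
```

Monotonicity of brightness in the angle is exactly the assumption the paper states, and it is what allows the table to be recovered from the cumulative distribution of image gradients in the first place.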

Determining reflectance parameters and illumination distribution from a sparse set of images for view-dependent image synthesis

Proceedings Eighth IEEE International Conference on Computer Vision. ICCV 2001

A framework for photo-realistic view-dependent image synthesis of a shiny object from a sparse set of images and a geometric model is proposed. Each image is aligned with the 3D model and decomposed into two images with regard to the reflectance components, based on the intensity variation of object surface points. The view-independent surface reflection (diffuse reflection) is stored as one texture map. The view-dependent reflection (specular reflection) images are used to recover an initial approximation of the illumination distribution, and then a two-step numerical minimization algorithm utilizing a simplified Torrance-Sparrow reflection model is used to estimate the reflectance parameters and refine the illumination distribution. This provides a very compact representation of the data necessary to render synthetic images from arbitrary viewpoints. We have conducted experiments with real objects to synthesize photorealistic view-dependent images within the proposed framework.
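A simplified Torrance-Sparrow model of the kind fitted in the minimization combines a Lambertian diffuse term with a Gaussian specular lobe around the half vector. The formulation and parameter values below are a common textbook simplification used for illustration; the paper's exact variant may differ.

```python
import math

# Simplified Torrance-Sparrow reflection model: Lambertian diffuse term
# plus a Gaussian specular lobe. Parameter values are illustrative.
def torrance_sparrow(cos_ni, cos_nr, alpha, kd, ks, sigma):
    """cos_ni, cos_nr: cosines of incidence/reflection angles w.r.t. the
    normal; alpha: angle between the normal and the half vector;
    kd, ks: diffuse/specular coefficients; sigma: surface roughness."""
    diffuse = kd * cos_ni
    specular = (ks / cos_nr) * math.exp(-alpha ** 2 / (2.0 * sigma ** 2))
    return diffuse + specular

# Mirror configuration (half vector == normal, alpha = 0): peak specular.
print(round(torrance_sparrow(1.0, 1.0, 0.0, kd=0.4, ks=0.3, sigma=0.1), 3))
# 0.7
```

The two-step fit in the paper alternates between reflectance parameters (kd, ks, sigma here) and the illumination distribution, since each is easy to optimize with the other held fixed.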