Variable albedo surface reconstruction from stereo and shape from shading

Combining Shape from Shading and Stereo: A Variational Approach for the Joint Estimation of Depth, Illumination and Albedo

Proceedings of the British Machine Vision Conference, 2016

Shape from shading (SfS) and stereo are two fundamentally different strategies for image-based 3-D reconstruction. While approaches for SfS infer the depth solely from pixel intensities, methods for stereo are based on a matching process that establishes correspondences across images. In this paper we propose a joint variational method that combines the advantages of both strategies. By integrating recent stereo and SfS models into a single minimisation framework, we obtain an approach that exploits shading information to improve upon the reconstruction quality of robust stereo methods. To this end, we fuse a Lambertian SfS approach with a robust stereo model and supplement the resulting energy functional with a detail-preserving anisotropic second-order smoothness term. Moreover, we extend the novel model in such a way that it jointly estimates depth, albedo and illumination. This in turn makes it applicable to objects with non-uniform albedo as well as to scenes with unknown illumination. Experiments for synthetic and real-world images show the advantages of our combined approach: while the stereo part overcomes the albedo-depth ambiguity inherent to all SfS methods, the SfS part improves the level of detail of the reconstruction compared to pure stereo methods.
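
As a rough schematic (not the paper's exact functional), a joint energy of the kind described couples a robust stereo matching term, a Lambertian shading term and a second-order smoothness term over the unknown depth z, albedo ρ and light direction l; here Ψ stands for a robust penaliser, Φ for the anisotropic second-order regulariser, and d(z) for the depth-induced displacement between the two views:

```latex
E(z,\rho,\mathbf{l}) \;=\; \int_{\Omega}
   \underbrace{\Psi\!\left( \big| I_2(\mathbf{x}+\mathbf{d}(z)) - I_1(\mathbf{x}) \big|^{2} \right)}_{\text{robust stereo matching}}
 + \alpha\, \underbrace{\left( I_1(\mathbf{x}) - \rho(\mathbf{x})\,\max\{0,\;\mathbf{n}(z)\!\cdot\!\mathbf{l}\} \right)^{2}}_{\text{Lambertian shading}}
 + \beta\, \underbrace{\Phi\!\left( \nabla^{2} z \right)}_{\text{2nd-order smoothness}}
 \;\mathrm{d}\mathbf{x}
```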

Estimation of Illuminant Direction, Albedo, and Shape from Shading

IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991

A robust approach to recovery of shape from shading information is presented. Assuming uniform albedo and Lambertian surface for the imaging model, we first present methods for the estimation of illuminant direction and surface albedo. The illuminant azimuth is estimated by averaging local estimates. The illuminant elevation and surface albedo are estimated from image statistics. Using the estimated reflectance map parameters, we then compute the surface shape using a new procedure, which implements the smoothness constraint by enforcing the gradients of reconstructed intensity to be close to the gradients of the input image. Typical results on real images are given to illustrate the usefulness of our approach.
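
A minimal sketch of the averaging idea behind the azimuth (tilt) estimate, assuming a simple local estimator based on image gradient directions; the paper's own local estimator and its statistics-based slant/albedo estimates differ in detail:

```python
import numpy as np

def estimate_illuminant_azimuth(image):
    """Illustrative only: average local gradient directions to obtain a
    global estimate of the illuminant azimuth (tilt)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > 1e-6                      # ignore flat regions
    cx = np.mean(gx[mask] / mag[mask])     # mean unit gradient, x component
    cy = np.mean(gy[mask] / mag[mask])     # mean unit gradient, y component
    return np.arctan2(cy, cx)              # azimuth in radians, image coordinates
```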

Shape from Shading Under Various Imaging Conditions

2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007

Most shape from shading (SFS) algorithms have been developed under the simplifying assumptions of a Lambertian surface, an orthographic projection, and a distant light source. Due to the difficulty of the SFS problem, only a small number of algorithms have been proposed for surfaces with non-Lambertian reflectance, and among those, only very few are applicable to surfaces with both specular and diffuse reflectance. In this paper we propose a unified framework that is capable of solving the SFS problem under various settings of imaging conditions, i.e., Lambertian or non-Lambertian reflectance, orthographic or perspective projection, and distant or nearby light source. The proposed algorithm represents the image irradiance equation of each setting as an explicit Partial Differential Equation (PDE). In our implementation we use the Lax-Friedrichs sweeping method to solve this PDE. To demonstrate the efficiency of the proposed algorithm, several comparisons with state-of-the-art SFS methods are given.
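
A toy instance of Lax-Friedrichs sweeping, assuming the simplest setting (orthographic projection, frontal distant light, unit albedo), where the irradiance equation reduces to I(x,y)·sqrt(1 + |∇z|²) = 1; the paper's Hamiltonians for perspective projection, nearby lights and non-Lambertian reflectance are more general, and alternating sweep orderings are omitted here for brevity:

```python
import numpy as np

def lf_sweep_sfs(I, h=1.0, n_sweeps=200):
    """Toy Lax-Friedrichs fixed-point iteration for
    I(x,y) * sqrt(1 + z_x**2 + z_y**2) - 1 = 0 with z = 0 on the boundary."""
    I = np.clip(I, 1e-3, 1.0)
    ny, nx = I.shape
    z = np.zeros((ny, nx))
    sigma = 1.0                              # artificial viscosity >= max |dH/dp|, |dH/dq|
    for _ in range(n_sweeps):
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                p = (z[i, j + 1] - z[i, j - 1]) / (2 * h)   # central z_x
                q = (z[i + 1, j] - z[i - 1, j]) / (2 * h)   # central z_y
                residual = I[i, j] * np.sqrt(1.0 + p * p + q * q) - 1.0
                avg = (z[i, j + 1] + z[i, j - 1] + z[i + 1, j] + z[i - 1, j]) / 4.0
                z[i, j] = avg - (h / (2.0 * sigma)) * residual   # LF update
    return z
```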

A New Formulation for Shape from Shading for Non-Lambertian Surfaces

Lambert's model for diffuse reflection is a main assumption in most of the shape from shading (SFS) literature. Even with this simplified model, SFS is still a difficult problem. Nevertheless, Lambert's model has been proven to be an inaccurate approximation of the diffuse component of surface reflectance. In this paper, we propose a new solution of the SFS problem based on a more comprehensive diffuse reflectance model: the Oren and Nayar model. In this work, we slightly modify this more realistic model in order to take into account the attenuation of the illumination due to distance. Using the modified non-Lambertian reflectance, we design a new explicit Partial Differential Equation (PDE) and then solve it using the Lax-Friedrichs sweeping method. Our experiments on synthetic data show that the proposed modeling gives a unique solution without any information about the height at the singular points of the surface. Additional results for real data are presented ...
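
For reference, the standard (qualitative) Oren-Nayar model the paper starts from is given below, with θ_i, θ_r the incidence and viewing zenith angles, φ_i, φ_r the corresponding azimuths, ρ the albedo, E_0 the source irradiance and σ the surface roughness; the paper's modification additionally attenuates the irradiance by the inverse squared distance to the source:

```latex
L_r \;=\; \frac{\rho}{\pi}\, E_0 \cos\theta_i
      \Big( A + B \,\max\{0,\cos(\varphi_r-\varphi_i)\}\, \sin\alpha \,\tan\beta \Big),
\qquad
A = 1 - 0.5\,\frac{\sigma^2}{\sigma^2 + 0.33}, \quad
B = 0.45\,\frac{\sigma^2}{\sigma^2 + 0.09}, \quad
\alpha = \max(\theta_i,\theta_r), \quad
\beta  = \min(\theta_i,\theta_r)
```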

Direct Differential Photometric Stereo Shape Recovery of Diffuse and Specular Surfaces

Journal of Mathematical Imaging and Vision, 2016

Recovering the 3D shape of an object from shading is a challenging problem due to the complexity of modeling light propagation and surface reflections. Photometric Stereo (PS) is broadly considered a suitable approach for high-resolution shape recovery, but its functionality is restricted to a limited set of object surfaces and controlled lighting setups. In particular, PS models generally consider reflection from objects as purely diffuse, with specularities being regarded as a nuisance that breaks down shape reconstruction. This is a serious drawback for implementing PS approaches, since most common materials have prominent specular components. In this paper, we propose a PS model that solves the problem for both diffuse and specular components, aimed at shape recovery of generic objects, with the approach being independent of the albedo values.
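
For contrast, a textbook Lambertian photometric stereo baseline (Woodham-style least squares) is sketched below; the paper's direct differential approach avoids this per-pixel albedo/normal factorisation and additionally handles specular reflection:

```python
import numpy as np

def lambertian_photometric_stereo(images, lights):
    """Baseline only, not the paper's method.
    images: (m, H, W) intensities under m lights; lights: (m, 3) unit directions."""
    m, H, W = images.shape
    I = images.reshape(m, -1)                          # (m, H*W)
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)     # solve lights @ G = I, G = rho * n
    rho = np.linalg.norm(G, axis=0)                    # per-pixel albedo
    N = G / np.maximum(rho, 1e-8)                      # unit surface normals
    return N.reshape(3, H, W), rho.reshape(H, W)
```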

A method for shape-from-shading using multiple images acquired under different viewing and lighting conditions

Computer Vision and Pattern Recognition, …, 1989

A new formulation for shape-from-shading from multiple images acquired under different viewing and lighting conditions is presented. The method is based on using an explicit image formation model to create renditions of the surface being estimated, which are synthetic versions of the observed images. It is applicable in a variety of imaging situations, including those involving unknown non-uniform albedo. A probabilistic model is developed based on typical characteristics of the surface and minimizing the difference between the synthetic and observed images. This model is used to arrive at a Bayesian formulation of the shape-from-shading problem. Techniques are presented to compute an estimate that is statistically optimal in the sense that it is the expected value of the surface, given the set of observations derived from it. The method is applied to Viking imagery of Mars.
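
A minimal sketch of the forward model such a formulation renders and compares against the observations, assuming a Lambertian image formation model; the discretisation and function names are illustrative, and the Bayesian machinery (priors, expected-value estimate) is not shown:

```python
import numpy as np

def render_lambertian(z, albedo, light, h=1.0):
    """Render a synthetic Lambertian image from a height map z and an albedo map."""
    zy, zx = np.gradient(z, h)
    n = np.dstack([-zx, -zy, np.ones_like(z)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)          # unit normals
    return albedo * np.clip(n @ np.asarray(light, float), 0.0, None)

def data_misfit(z, albedo, observations):
    """Sum of squared differences between observed and synthetic images,
    one (image, light) pair per viewing/lighting condition."""
    return sum(np.sum((img - render_lambertian(z, albedo, light)) ** 2)
               for img, light in observations)
```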

Differential algorithm for the determination of shape from shading using a point light source

Image and Vision Computing, 1992

This paper presents a linear algorithm for recovering shape information of Lambertian surfaces from the shading information inherent in a single 2D image. The local orientation of a planar Lambertian surface patch is determined from a single video image when both the camera and a light source are near to the surface. A simple linear relationship is derived between the ratios of the measured image intensities and their first partial derivatives and the components of the surface normal vector. An advantage of the formulation is that the resulting equations are independent of the intensity strength of the illuminating source as well as the surface reflection coefficient (albedo). Results for noisy synthetic images as well as real images are presented.
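
A small sketch of the quantities such a formulation works with, under the assumption of a Lambertian patch illuminated by a nearby point source (I proportional to ρ·s·(n·l)/r²): ratios of the partial derivatives to the intensity itself cancel both the source strength s and the albedo ρ. The linear system relating these ratios to the surface normal is derived in the paper and not reproduced here:

```python
import numpy as np

def normalized_gradients(image, eps=1e-8):
    """Return I_x / I and I_y / I, the albedo- and source-strength-invariant
    measurements used as input to a linear shape recovery step."""
    gy, gx = np.gradient(image.astype(float))
    return gx / (image + eps), gy / (image + eps)
```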

Integrating Stereo and Shape from Shading

1999

This paper presents a new method for integrating different low-level vision modules, stereo and shape from shading, in order to improve the 3D reconstruction of visible surfaces of objects from intensity images. The integration process is based on correcting the 3D visible surface obtained from shape from shading using the sparse depth measurements from the stereo module, by fitting a surface to the difference between the two data sets. A feedforward neural network is used to fit a surface to the error difference. An extended Kalman filter algorithm is used for the network learning. It is found that the integration of sparse depth measurements has greatly enhanced the 3D visible surface obtained from shape from shading in terms of metric measurements.
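
A sketch of the integration idea: fit a smooth correction surface to the difference between sparse stereo depths and the dense SfS surface, then add it back. The paper fits this surface with a feedforward network trained by an extended Kalman filter; here a generic scattered-data interpolant stands in for it as an explicit substitution:

```python
import numpy as np
from scipy.interpolate import griddata

def correct_sfs_with_stereo(z_sfs, stereo_points, stereo_depths):
    """z_sfs: (H, W) dense SfS depth; stereo_points: (n, 2) integer (row, col)
    locations of sparse stereo measurements; stereo_depths: (n,) depths."""
    rows, cols = stereo_points[:, 0], stereo_points[:, 1]
    diff = stereo_depths - z_sfs[rows, cols]          # sparse error samples
    H, W = z_sfs.shape
    grid_r, grid_c = np.mgrid[0:H, 0:W]
    correction = griddata(stereo_points, diff, (grid_r, grid_c),
                          method='linear', fill_value=0.0)
    return z_sfs + correction
```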

Shape and Illumination from Shading using the Generic Viewpoint Assumption

The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special. Thus, any estimated parameters from an observation should be stable under small perturbations such as object, viewpoint or light positions. The GVA has been analyzed and quantified in previous works, but has not been put to practical use in actual vision tasks. In this paper, we show how to utilize the GVA to estimate shape and illumination from a single shading image, without the use of other priors. We propose a novel linearized Spherical Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose unknowns are shape and SH illumination. The model parameters are estimated using the Alternating Direction Method of Multipliers embedded in a multi-scale estimation framework. In this prior-free framework, we obtain competitive shape and illumination estimation results under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.
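
A minimal sketch of the forward shading model underlying such approaches, using only first-order spherical harmonics (four coefficients) for simplicity; the paper works with a linearised SH model in the depth unknowns and adds the GVA term, solved with ADMM, none of which is shown here:

```python
import numpy as np

def sh1_shade(z, light_coeffs, h=1.0):
    """First-order SH shading: I = l0 + l1*nx + l2*ny + l3*nz,
    with normals computed from the depth map z."""
    zy, zx = np.gradient(z, h)
    denom = np.sqrt(1.0 + zx ** 2 + zy ** 2)
    n = np.dstack([-zx / denom, -zy / denom, 1.0 / denom])   # unit normals
    l0, l1, l2, l3 = light_coeffs                            # 4 SH lighting coefficients
    return l0 + l1 * n[..., 0] + l2 * n[..., 1] + l3 * n[..., 2]
```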