Training Many-Parameter Shape-from-Shading Models Using a Surface Database

Model-based shape recovery from single images of general and unknown lighting

2009 16th IEEE International Conference on Image Processing (ICIP), 2009

We present a new statistical shape-from-shading framework for images under unknown illumination. The object (e.g., a face) to be reconstructed is described by a parametric model. To deal with arbitrary illumination, the framework exploits recent results showing that, for convex Lambertian objects, general lighting can be expressed using low-order spherical harmonics. The classical shape-from-shading equation is modified accordingly. Results show accurate shape recovery with respect to ground-truth data.
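
Concretely, the low-order result says that for a convex Lambertian object the image irradiance is well approximated by a linear combination of the nine real spherical harmonics of order at most two, evaluated at the surface normal. Below is a minimal numpy sketch of that nine-parameter lighting model; the function names and the coefficient vector `sh_coeffs` are illustrative, not the paper's notation.

```python
import numpy as np

def sh_basis_order2(n):
    """Nine real spherical-harmonic basis functions evaluated at unit
    normals n of shape (..., 3).  Under the low-order SH lighting model,
    the irradiance of a convex Lambertian surface is approximately a
    linear combination of these nine values."""
    x, y, z = n[..., 0], n[..., 1], n[..., 2]
    return np.stack([
        0.282095 * np.ones_like(x),      # Y_0,0
        0.488603 * y,                    # Y_1,-1
        0.488603 * z,                    # Y_1,0
        0.488603 * x,                    # Y_1,1
        1.092548 * x * y,                # Y_2,-2
        1.092548 * y * z,                # Y_2,-1
        0.315392 * (3.0 * z**2 - 1.0),   # Y_2,0
        1.092548 * x * z,                # Y_2,1
        0.546274 * (x**2 - y**2),        # Y_2,2
    ], axis=-1)

def render_irradiance(normals, albedo, sh_coeffs):
    """Shading predicted by the nine-parameter model:
    I = albedo * <basis(n), sh_coeffs>."""
    return albedo * (sh_basis_order2(normals) @ sh_coeffs)
```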

Surface Reconstruction from Intensity Image using Illumination Model based Morphable Modeling

We present a new method for reconstructing the depth of a known object from a single still image, using a deformed underlying sign matrix from a similar object. Existing Shape from Shading (SFS) methods try to establish a relationship between the intensity values of a still image and the surface normals of the corresponding depth, but most resort to error-minimization approaches. Because these reconstruction approaches are fundamentally ill-posed, they have had limited success on surfaces such as the human face. Photometric Stereo (PS) and Shape from Motion (SfM) based methods extend SFS by adding extra information or constraints about the target. Our goal is identical to that of SFS; however, we tackle the problem by building a relationship between the gradient of depth and the intensity value at the corresponding location of the same object. This formulation is simplified and approximated to handle different materials and lighting conditions, and the underlying sign matrix is obtained by resizing/deforming a Region of Interest (ROI) with respect to its counterpart on a similar object. The target object is then reconstructed from its still image. In addition, delicate surface details are rebuilt using a Gabor Wavelet Network (GWN) on different ROIs. Finally, to merge the patches together, a Self-Organizing Map (SOM) based method is used to retrieve and smooth the boundary parts of the ROIs. Compared with state-of-the-art SFS-based methods, the proposed method yields promising results on both widely used benchmark datasets and images in the wild.
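
The classical relationship the abstract builds on is the Lambertian irradiance equation, which ties image intensity to the depth gradient (p, q) = (dz/dx, dz/dy). Here is a minimal sketch of that forward (rendering) direction under a distant point light; it is not the paper's sign-matrix inversion, and `light` and `albedo` are assumed inputs.

```python
import numpy as np

def lambertian_intensity(depth, light, albedo=1.0):
    """Render intensity from a depth map under a distant point light.
    The normal at each pixel comes from the depth gradient:
    n = (-p, -q, 1) / sqrt(p^2 + q^2 + 1)."""
    q, p = np.gradient(depth)           # dz/dy, dz/dx
    denom = np.sqrt(p**2 + q**2 + 1.0)
    shading = (-p * light[0] - q * light[1] + light[2]) / denom
    return albedo * np.clip(shading, 0.0, None)   # clamp attached shadows
```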

Variable albedo surface reconstruction from stereo and shape from shading

2000

We present a multiview method for the computation of object shape and reflectance characteristics based on the integration of shape from shading (SFS) and stereo, for non-constant albedo and non-uniformly Lambertian surfaces. First we perform stereo fitting on the input stereo pairs or image sequences. When the images are uncalibrated, we recover the camera parameters using bundle adjustment. Based on the stereo result, we can automatically segment the albedo map (which is taken to be piecewise constant) using a Minimum Description Length (MDL) based metric, to identify areas suitable for SFS (typically smooth textureless areas) and to derive illumination information. The shape and the illumination parameter estimates are refined using a deformable model SFS algorithm, which iterates between computing shape and illumination parameters. Our method takes into account the viewing-angle-dependent foreshortening and specularity effects, and compensates as much as possible by utilizing information from more than one image. We demonstrate that we can extend the applicability of SFS algorithms to real-world situations when some of their traditional assumptions are violated. We demonstrate our method by applying it to face shape reconstruction. Experimental results indicate a significant improvement over SFS-only or stereo-only based reconstruction. Model accuracy and detail are improved, especially in areas of low texture detail. Albedo information is retrieved and can be used to accurately re-render the model under different illumination conditions.
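
For the MDL-based albedo segmentation, one common two-part coding scheme scores a piecewise-constant hypothesis as model bits (one constant per region) plus residual bits (a Gaussian coding length per region). The sketch below is a schematic of that trade-off, not the paper's exact metric; `model_bits` and the labeling are assumptions.

```python
import numpy as np

def mdl_cost(albedo_map, labels, model_bits=32.0):
    """Schematic two-part MDL score for a piecewise-constant albedo
    hypothesis: cheaper models (fewer regions) trade off against lower
    residual variance; merging two regions is accepted when it lowers
    the total score."""
    cost = 0.0
    for r in np.unique(labels):
        vals = albedo_map[labels == r]
        var = max(vals.var(), 1e-12)    # guard against zero variance
        cost += model_bits + 0.5 * vals.size * np.log2(var)
    return cost
```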

Direct method for reconstructing shape from shading

1992

A new approach to shape from shading is described, based on a connection with a calculus of variations/optimal control problem. An explicit representation is given for the surface corresponding to a shaded image; uniqueness of the surface (under suitable conditions) is an immediate consequence. The approach leads naturally to an algorithm for shape reconstruction that is simple, fast, provably convergent, and, in many cases, provably convergent to the correct solution.
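
To see the optimal-control connection concretely: under a frontal light and unit albedo, the irradiance equation reduces to an eikonal Hamilton-Jacobi PDE, and the surface is the value function of a path-cost minimization. The display below is a schematic of that reduction (one branch of the usual concave/convex ambiguity), not a restatement of the paper's exact representation.

```latex
% Frontal light l = (0,0,1), unit albedo: the irradiance equation
% becomes an eikonal PDE in the depth z,
\[
  I(x,y) = \frac{1}{\sqrt{1 + |\nabla z(x,y)|^2}}
  \quad\Longleftrightarrow\quad
  |\nabla z(x,y)| = \sqrt{I(x,y)^{-2} - 1},
\]
% whose viscosity solution admits a value-function representation over
% paths \gamma running from x to the boundary data:
\[
  z(x) = \inf_{\gamma(0) = x}
         \Big\{ z\big(\gamma(T)\big)
         + \int_0^T \sqrt{I\big(\gamma(s)\big)^{-2} - 1}\,
           |\dot\gamma(s)| \, ds \Big\}.
\]
```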

A method for shape-from-shading using multiple images acquired under different viewing and lighting conditions

Computer Vision and Pattern Recognition, …, 1989

A new formulation for shape-from-shading from multiple images acquired under different viewing and lighting conditions is presented. The method is based on using an explicit image formation model to create renditions of the surface being estimated, which are synthetic versions of the observed images. It is applicable in a variety of imaging situations, including those involving unknown non-uniform albedo. A probabilistic model is developed, based on typical characteristics of the surface and on minimizing the difference between the synthetic and observed images. This model is used to arrive at a Bayesian formulation of the shape-from-shading problem. Techniques are presented to compute an estimate that is statistically optimal in the sense that it is the expected value of the surface given the set of observations derived from it. The method is applied to Viking imagery of Mars.
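
The estimation target is the posterior mean of the surface, but the heart of the formulation is the energy that scores a candidate surface against all observations at once. A minimal sketch of that negative log-posterior follows; `render_fns` (one hypothetical renderer per viewing/lighting condition) and the simple gradient-penalty prior are stand-ins for the paper's image formation model and surface statistics.

```python
import numpy as np

def neg_log_posterior(z, images, render_fns, beta=1.0):
    """Bayesian SFS energy, schematically: squared difference between
    each observed image and a synthetic rendition of the surface z,
    plus a smoothness prior on z."""
    data = sum(np.sum((img - render(z))**2)
               for img, render in zip(images, render_fns))
    gy, gx = np.gradient(z)
    prior = beta * np.sum(gx**2 + gy**2)
    return data + prior
```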

Shape Reconstruction by Learning Differentiable Surface Representations

2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Generative models that produce point clouds have emerged as a powerful tool to represent 3D surfaces, and the best current ones rely on learning an ensemble of parametric representations. Unfortunately, they offer no control over the deformations of the surface patches that form the ensemble and thus fail to prevent them from either overlapping or collapsing into single points or lines. As a consequence, computing shape properties such as surface normals and curvatures becomes difficult and unreliable. In this paper, we show that we can exploit the inherent differentiability of deep networks to leverage differential surface properties during training so as to prevent patch collapse and strongly reduce patch overlap. Furthermore, this lets us reliably compute quantities such as surface normals and curvatures. We demonstrate on several tasks that this yields more accurate surface reconstructions than state-of-the-art methods, in terms of both normal estimation and the number of collapsed and overlapping patches.
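
The differential quantity at stake is simple: for a parametric patch f(u, v) -> R^3, the normal is the cross product of the two tangent vectors, and a vanishing cross product signals a collapsed patch. The sketch below uses central finite differences in place of network autograd; the decoder `f` is a toy stand-in for a learned patch.

```python
import numpy as np

def patch_normals(f, uv, eps=1e-4):
    """Unit normals of a parametric patch f: (u, v) -> R^3 as the cross
    product of df/du and df/dv.  A near-zero norm flags a collapsed
    patch, which differential regularizers can penalize in training."""
    du = (f(uv + [eps, 0.0]) - f(uv - [eps, 0.0])) / (2 * eps)
    dv = (f(uv + [0.0, eps]) - f(uv - [0.0, eps])) / (2 * eps)
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-12)

# toy saddle patch standing in for a learned decoder:
f = lambda uv: np.stack([uv[..., 0], uv[..., 1],
                         uv[..., 0]**2 - uv[..., 1]**2], axis=-1)
normals = patch_normals(f, np.random.rand(5, 2))
```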

Improving surface reconstruction in shape from shading using easy-to-set boundary conditions

International Journal of Computational Vision and Robotics, 2013

Minimization techniques are commonly adopted for retrieving a 3D surface from its shaded representation (image), i.e. for solving the widely known shape from shading (SFS) problem. Unfortunately, depending on the imaged object to be reconstructed, the retrieved surfaces often turn out to be completely different from the expected ones. In recent years, a number of interactive methods have been explored with the aim of improving surface reconstruction; however, since most of these methods require user interaction on a tentatively reconstructed surface that is often significantly different from the desired one, it is advisable to increase the quality of the surface to be further processed as much as possible. Inspired by such techniques, the present work describes a new method for the interactive retrieval of a shaded object's surface. The proposed approach recovers the expected surface using easy-to-set boundary conditions, so that the human-computer interaction primarily takes place prior to surface retrieval. The method, tested on a set of case studies, proves effective in achieving sufficiently accurate reconstructions of scenes under both front and side illumination.
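
Mechanically, "easy-to-set boundary conditions" come down to pinning user-specified heights and re-imposing them throughout the minimization, so the solver cannot drift toward a wrong basin. A generic sketch follows; `update_step` is a hypothetical placeholder for one sweep of whichever minimization scheme is used.

```python
import numpy as np

def constrained_sfs(z0, boundary_mask, boundary_vals, update_step,
                    iters=500):
    """Iterative SFS with Dirichlet conditions: the user-set boundary
    heights are imposed up front and re-clamped after every relaxation
    sweep, keeping the reconstruction anchored to them."""
    z = z0.copy()
    z[boundary_mask] = boundary_vals
    for _ in range(iters):
        z = update_step(z)              # one minimization sweep
        z[boundary_mask] = boundary_vals
    return z
```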

Generalised Perspective Shape from Shading with Oren-Nayar Reflectance

Proceedings of the British Machine Vision Conference, 2013

In spite of significant advances in Shape from Shading (SfS) over recent years, it is still a challenging task to design SfS approaches that are flexible enough to handle a wide range of input scenes. In this paper, we address this lack of flexibility by proposing a novel model that extends the range of possible applications. To this end, we consider the class of modern perspective SfS models formulated via partial differential equations (PDEs). By combining a recent spherical surface parametrisation with the advanced non-Lambertian Oren-Nayar reflectance model, we obtain a robust approach that can deal with an arbitrary position of the light source while handling rough surfaces, and thus more realistic objects, at the same time. To our knowledge, the resulting model is currently the most advanced and most flexible approach in the literature on PDE-based perspective SfS. Apart from deriving our model, we also show how the corresponding set of sophisticated Hamilton-Jacobi equations can be efficiently solved by a specifically tailored fast marching scheme. Experiments with medical real-world data demonstrate that our model works in practice and that it offers the desired flexibility.
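
The Oren-Nayar model the paper couples to its perspective PDE is standard: a roughness parameter sigma interpolates between the Lambertian law (sigma = 0) and the flatter appearance of rough surfaces. Below is a sketch of the simplified (qualitative) Oren-Nayar reflectance; the angle parametrization is an assumption of this snippet, not the paper's PDE formulation.

```python
import numpy as np

def oren_nayar(theta_i, theta_r, dphi, sigma, albedo=1.0):
    """Simplified Oren-Nayar reflectance.  theta_i/theta_r are the
    incident/viewing zenith angles, dphi the azimuth difference, and
    sigma the roughness in radians; sigma = 0 recovers Lambert."""
    s2 = sigma**2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = np.maximum(theta_i, theta_r)
    beta = np.minimum(theta_i, theta_r)
    return (albedo / np.pi) * np.cos(theta_i) * (
        A + B * np.maximum(0.0, np.cos(dphi)) * np.sin(alpha) * np.tan(beta))
```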

Shape and Illumination from Shading using the Generic Viewpoint Assumption

The Generic Viewpoint Assumption (GVA) states that the position of the viewer or the light in a scene is not special. Thus, any parameters estimated from an observation should be stable under small perturbations of object, viewpoint, or light position. The GVA has been analyzed and quantified in previous works, but has not been put to practical use in actual vision tasks. In this paper, we show how to utilize the GVA to estimate shape and illumination from a single shading image, without the use of other priors. We propose a novel linearized Spherical Harmonics (SH) shading model which enables us to obtain a computationally efficient form of the GVA term. Together with a data term, we build a model whose unknowns are shape and SH illumination. The model parameters are estimated using the Alternating Direction Method of Multipliers, embedded in a multi-scale estimation framework. In this prior-free framework, we obtain competitive shape and illumination estimates under a variety of models and lighting conditions, requiring fewer assumptions than competing methods.
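
The computational payoff of a linearized SH model is that, with only the lowest-order harmonics, intensity becomes an affine function of the surface normal, so perturbation (GVA) terms stay linear in the shape unknowns. The sketch below shows just that first-order affine structure; the paper's actual linearization and ADMM updates are more involved, and the coefficient vector `l` is illustrative.

```python
import numpy as np

def sh1_shading(normals, l):
    """First-order SH shading: I = l0 + l1*nx + l2*ny + l3*nz,
    affine in the unit normal n."""
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    return l[0] + l[1] * nx + l[2] * ny + l[3] * nz
```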