Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo
Related papers
Neural apparent BRDF fields for multiview photometric stereo
arXiv (Cornell University), 2022
Figure 1: We replace the view-conditioned black-box radiance predicted by a NeRF [25] with a physically-based image formation model. The geometry network predicts density and surface normal direction at each position in the volume. A neural BRDF and shadow network predict reflected scene radiance for the point, which is then volume rendered using the predicted density as in NeRF. The resulting model is relightable while also improving geometric information using multi-light observations.
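As a rough illustration of the volume-rendering step described in the caption above, the following NumPy sketch composites per-sample radiance along one ray with the standard NeRF quadrature rule. The BRDF and shadow networks are stubbed out as placeholder radiance values, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def render_ray(sigmas, deltas, radiances):
    """Composite per-sample radiance along one ray using NeRF-style quadrature.

    sigmas:    (S,)   predicted volume densities at the S ray samples
    deltas:    (S,)   distances between consecutive samples
    radiances: (S, 3) per-sample RGB radiance, e.g. from a BRDF + shadow model
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)                          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # transmittance up to each sample
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * radiances).sum(axis=0)                # rendered RGB for the ray

# Toy usage: in the full model the constant radiance below would come from the
# neural BRDF and shadow networks evaluated at each sample point.
sigmas = np.linspace(0.0, 2.0, 64)
deltas = np.full(64, 0.05)
radiances = np.tile([0.8, 0.6, 0.4], (64, 1))
print(render_ray(sigmas, deltas, radiances))
```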
Uncalibrated Neural Inverse Rendering for Photometric Stereo of General Surfaces
2021
This paper presents an uncalibrated deep neural network framework for the photometric stereo problem. To train models for this problem, existing neural network-based methods require either exact light directions or ground-truth surface normals of the object, or both. In practice, however, it is challenging to procure either of these precisely, which restricts the broader adoption of photometric stereo algorithms in vision applications. To bypass this difficulty, we propose an uncalibrated neural inverse rendering approach to this problem. Our method first estimates the light directions from the input images and then optimizes an image reconstruction loss to calculate the surface normals, bidirectional reflectance distribution function values, and depth. Additionally, our formulation explicitly models the concave and convex parts of a complex surface to account for the effects of interreflections in the image formation process. Extensive evaluation of the proposed method ...
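A minimal PyTorch sketch of the second stage described above: given light directions estimated in a first stage, per-pixel unknowns are optimized against an image reconstruction loss. A plain Lambertian shading term stands in for the paper's learned BRDF, interreflection, and depth models, and all shapes and names are placeholders.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: N images of P pixels; lights come from the first estimation stage.
N, P = 10, 4096
images = torch.rand(N, P)                         # observed intensities (placeholder data)
lights = F.normalize(torch.rand(N, 3), dim=-1)    # stage-1 light direction estimates

# Per-pixel unknowns (the paper optimizes a richer BRDF/depth parameterization).
normals = torch.nn.Parameter(torch.rand(P, 3))
albedo = torch.nn.Parameter(torch.rand(P))

opt = torch.optim.Adam([normals, albedo], lr=1e-2)
for step in range(200):
    n = F.normalize(normals, dim=-1)
    shading = (lights @ n.t()).clamp(min=0.0)     # (N, P) Lambertian n.l term
    rendered = albedo * shading                   # albedo broadcast over all images
    loss = F.mse_loss(rendered, images)           # image reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```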
CNN-PS: CNN-Based Photometric Stereo for General Non-convex Surfaces
Lecture Notes in Computer Science, 2018
Most conventional photometric stereo algorithms inversely solve a BRDF-based image formation model. However, the actual imaging process is often far more complex due to global light transport on non-convex surfaces. This paper presents a photometric stereo network that directly learns relationships between the photometric stereo input and the surface normals of a scene. To handle an unordered, arbitrary number of input images, we merge all the input data into an intermediate representation called the observation map, which has a fixed shape and can be fed into a CNN. To improve both training and prediction, we take into account the rotational pseudo-invariance of the observation map, which is derived from the isotropy constraint. To train the network, we create a synthetic photometric stereo dataset generated by a physics-based renderer, so that global light transport is taken into account. Our experimental results on both synthetic and real datasets show that our method outperforms conventional BRDF-based photometric stereo algorithms, especially when scenes are highly non-convex.
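The sketch below illustrates the general idea of an observation map: one pixel's intensities under K lights are scattered onto a fixed-size grid indexed by the projected light direction. The exact normalization and collision handling in the paper may differ; this version simply keeps the maximum per cell, and the grid size is an arbitrary choice.

```python
import numpy as np

def observation_map(intensities, light_dirs, w=32):
    """Scatter one pixel's observations onto a fixed w x w observation map.

    intensities: (K,)   normalized intensities of the pixel under K lights
    light_dirs:  (K, 3) unit light directions (lx, ly, lz) with lz > 0
    """
    obs = np.zeros((w, w), dtype=np.float32)
    # Project each light onto the (lx, ly) plane and map [-1, 1] -> [0, w-1] grid cells.
    u = np.clip(((light_dirs[:, 0] + 1.0) * 0.5 * (w - 1)).round().astype(int), 0, w - 1)
    v = np.clip(((light_dirs[:, 1] + 1.0) * 0.5 * (w - 1)).round().astype(int), 0, w - 1)
    np.maximum.at(obs, (v, u), intensities)   # keep the max when two lights share a cell
    return obs
```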
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000
This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialize a multiview photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: First, we describe a robust technique to estimate light directions and intensities and, second, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and, hence, allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. Here, a quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multiview stereo results.
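As a rough sketch of the light-estimation idea above, the following NumPy snippet shows the textbook Lambertian least-squares step that such pipelines build on: given approximate normals (e.g. sampled from the visual hull) and observed intensities, a scaled light vector is recovered per image. The paper's actual estimator is more robust; this is only the basic form.

```python
import numpy as np

def estimate_light(normals, intensities):
    """Least-squares estimate of a single scaled light vector s = E * l.

    normals:     (P, 3) approximate unit normals (e.g. sampled from the visual hull)
    intensities: (P,)   observed pixel intensities at those surface points
    Assumes Lambertian shading I ~ rho * n . (E l); albedo is folded into the scale.
    """
    s, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    intensity = np.linalg.norm(s)        # light intensity (up to the albedo scale)
    direction = s / intensity            # unit light direction
    return direction, intensity
```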
Learning conditional photometric stereo with high-resolution features
Computational Visual Media
Photometric stereo aims to reconstruct 3D geometry by recovering the dense surface orientation of a 3D object from multiple images under differing illumination. Traditional methods normally adopt simplified reflectance models to make the surface orientation computable. However, the complex reflectance of real surfaces greatly limits the applicability of such methods to real-world objects. While deep neural networks have been employed to handle non-Lambertian surfaces, these methods are subject to blurring and errors, especially in high-frequency regions (such as crinkles and edges), caused by spectral bias: neural networks favor low-frequency representations and thus exhibit a bias towards smooth functions. In this paper, therefore, we propose a self-learning conditional network with multi-scale features for photometric stereo, avoiding blurred reconstruction in such regions. Our explorations include: (i) a multi-scale feature fusion architecture, which keeps high-resolution representations and deep ...
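A toy PyTorch sketch of the multi-scale idea mentioned above: a full-resolution branch preserves high-frequency detail while a downsampled branch adds larger context, and the two are fused. This is only a generic two-scale fusion block under assumed layer sizes, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleFusion(nn.Module):
    """Full-resolution branch plus a coarse branch, upsampled and fused by a 1x1 conv."""

    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.high = nn.Conv2d(in_ch, feat, 3, padding=1)   # full-resolution features
        self.low = nn.Conv2d(in_ch, feat, 3, padding=1)    # coarse, larger-context features
        self.fuse = nn.Conv2d(2 * feat, feat, 1)           # fuse both scales

    def forward(self, x):
        h = F.relu(self.high(x))
        l = F.relu(self.low(F.avg_pool2d(x, 2)))
        l_up = F.interpolate(l, size=h.shape[-2:], mode='bilinear', align_corners=False)
        return self.fuse(torch.cat([h, l_up], dim=1))

# features = TwoScaleFusion()(torch.rand(1, 3, 64, 64))  # -> (1, 32, 64, 64)
```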
Fusing Multiview and Photometric Stereo for 3D Reconstruction under Uncalibrated Illumination
We propose a method to obtain a complete and accurate 3D model from multiview images captured under a variety of unknown illuminations. Based on recent results showing that, for Lambertian objects, general illumination can be approximated well using low-order spherical harmonics, we develop a robust alternating approach to recover surface normals. Surface normals are initialized using a multi-illumination multiview stereo algorithm, then refined using a robust alternating optimization method based on the ℓ1 metric. Erroneous normal estimates are detected using a shape prior. Finally, the computed normals are used to improve the preliminary 3D model. The reconstruction system achieves watertight and robust 3D reconstruction while requiring neither manual interaction nor any constraints on the illumination. Experimental results on both real-world and synthetic data show that the technique can acquire accurate 3D models for Lambertian surfaces, and even tolerates small violations of the Lambertian assumption.
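The snippet below sketches the low-order spherical harmonics fit that this kind of method builds on: with the standard real SH basis up to order 2, nine lighting coefficients are fit per image by least squares so that intensity is approximated as a function of the surface normal. The paper wraps such a step in a robust alternating ℓ1 scheme with a shape prior; this is only the basic fit.

```python
import numpy as np

def sh_basis(n):
    """Real spherical harmonics basis (orders 0-2) evaluated at unit normals n of shape (P, 3)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    return np.stack([
        0.282095 * np.ones_like(x),
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ], axis=1)                                     # (P, 9)

def fit_sh_lighting(normals, intensities):
    """Least-squares fit of 9 lighting coefficients c so that I ~ sh_basis(n) @ c."""
    B = sh_basis(normals)
    c, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return c
```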
Multi-view Photometric Stereo using Semi-isometric Mappings
2012
Classical uncalibrated photometric stereo approaches are mostly constrained by the static-view assumption, which requires the camera to be fixed in front of an object illuminated by different light sources. Attempts to extend photometric stereo to multiple views have delivered methods that are only robust to images taken with short camera baselines. In this paper, we present a new uncalibrated Multi-View Photometric Stereo (MVPS) method that can obtain a dense 3D reconstruction from views subject to strong baseline variations and extreme changes in illumination conditions. The approach is intrinsically photo-geometric, obtaining robust results using a combination of multi-view geometry and photometry. At the core of the algorithm is an efficient planar embedding and local image pixel registration among views that renders the problem tractable and computationally solvable. In the experiments, the results are evaluated and compared with existing methods as well as ground truth, showing that the method is able to deal with highly complex objects and lighting conditions.
Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo
2021 International Conference on 3D Vision (3DV), 2021
Estimating surface shape from observed reflectance properties remains a challenging task in computer vision. The presence of global illumination effects such as inter-reflections or cast shadows makes the task particularly difficult for non-convex real-world surfaces. State-of-the-art methods for calibrated photometric stereo address these issues using convolutional neural networks (CNNs) that primarily aim to capture either the spatial context among adjacent pixels or the photometric one formed by illuminating a sample from adjacent directions. In this paper, we bridge these two objectives and introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously. In contrast to existing approaches that rely on standard 2D CNNs and regress directly to surface normals, we argue that using separable 4D convolutions and regressing to 2D Gaussian heat-maps severely reduces the size of the network and makes inference more efficient. Our experimental results on a real-world photometric stereo benchmark show that the proposed approach outperforms the existing methods both in efficiency and accuracy.
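To make the heat-map regression idea concrete, the following NumPy sketch decodes a 2D heat-map over an assumed (nx, ny) grid into a unit normal via a soft-argmax. The actual normal parameterization and heat-map definition in the paper may differ; this is only a plausible decoding step for illustration.

```python
import numpy as np

def heatmap_to_normal(heatmap):
    """Decode a 2D heat-map over an (nx, ny) grid spanning [-1, 1] into a unit normal.

    heatmap: (H, W) non-negative scores, e.g. a predicted 2D Gaussian.
    Uses a soft-argmax so the estimate is sub-cell accurate.
    """
    h, w = heatmap.shape
    p = heatmap / (heatmap.sum() + 1e-8)
    ys, xs = np.mgrid[0:h, 0:w]
    nx = (p * xs).sum() / (w - 1) * 2.0 - 1.0
    ny = (p * ys).sum() / (h - 1) * 2.0 - 1.0
    nz = np.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))   # camera-facing hemisphere
    return np.array([nx, ny, nz])
```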
Practical 3D Reconstruction Based on Photometric Stereo
Studies in Computational Intelligence, 2010
Photometric Stereo is a powerful image-based 3D reconstruction technique that has recently been used to obtain very high quality reconstructions. However, in its classic form, Photometric Stereo suffers from two main limitations. Firstly, one needs to obtain images of the 3D scene under multiple different illuminations; as a result, the 3D scene needs to remain static during illumination changes, which prohibits the reconstruction of deforming objects. Secondly, the images obtained must be from a single viewpoint, leading to depth-map based 2.5D reconstructions instead of full 3D surfaces. The aim of this chapter is to show how these limitations can be alleviated, leading to the derivation of two practical 3D acquisition systems. The first, based on the powerful Coloured Light Photometric Stereo method, can be used to reconstruct moving objects such as cloth or human faces. The second permits the complete 3D reconstruction of challenging objects such as porcelain vases. In addition to algorithmic details, the chapter pays attention to practical issues such as setup calibration and the detection and correction of self and cast shadows. We provide several evaluation experiments as well as reconstruction results.
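A minimal sketch of the coloured-light idea mentioned above: with three differently coloured lights, the R, G, and B channels of a single frame provide three shading measurements, so a Lambertian normal can be solved per pixel per frame. The 3x3 calibration matrix is assumed to be obtained offline, as in the setup calibration the chapter discusses; its construction is not shown here.

```python
import numpy as np

def colour_ps_normal(rgb, M):
    """Single-shot coloured-light photometric stereo for one Lambertian pixel.

    rgb: (3,)   observed R, G, B values of the pixel in one frame
    M:   (3, 3) calibration matrix mixing the three coloured light directions
                with the sensor/albedo response (assumed known from calibration)
    Solves rgb = M @ n for the scaled normal, then normalizes.
    """
    n = np.linalg.solve(M, rgb)
    return n / (np.linalg.norm(n) + 1e-8)
```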
Exploration of Photometric Stereo Technology Applies to 3D Model Reconstruction
International journal of engineering research and technology, 2013
An efficient method is presented to achieve an accurate, dense 3D reconstruction of objects using photometric stereo. The task of recovering three-dimensional geometry from two-dimensional views of a scene is called 3D reconstruction. Photometric Stereo is a powerful image-based 3D reconstruction technique that has recently been used to obtain very high quality reconstructions. The Photometric Stereo (PS) technique uses several images of the same surface taken from the same viewpoint but under illumination from different directions. The illumination conditions refer to the light source direction and intensity, while the reflectance properties describe the type of surface under consideration, i.e. Lambertian or non-Lambertian. The algorithm has been tested on synthetic as well as real datasets, and very encouraging results have been obtained.
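For reference, the classic calibrated Lambertian photometric stereo solve that this setup describes reduces to a per-pixel least-squares problem: with at least three known, non-coplanar light directions, the albedo-scaled normal is recovered directly. A minimal NumPy sketch follows; variable names are illustrative.

```python
import numpy as np

def lambertian_ps(L, I):
    """Classic calibrated photometric stereo for one pixel.

    L: (K, 3) known unit light directions (K >= 3, non-coplanar)
    I: (K,)   observed intensities under those lights
    Solves I = L @ (rho * n) in the least-squares sense.
    """
    b, *_ = np.linalg.lstsq(L, I, rcond=None)   # b = albedo * normal
    rho = np.linalg.norm(b)                     # Lambertian albedo
    n = b / (rho + 1e-8)                        # unit surface normal
    return n, rho
```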