Deep Shape from Polarization
Related papers
Shape-from-Polarisation: A Nonlinear Least Squares Approach
2017 IEEE International Conference on Computer Vision Workshops (ICCVW), 2017
In this paper we present a new type of approach for estimating surface height from polarimetric data, i.e., a sequence of images in which a linear polarising filter is rotated in front of a camera. In contrast to all previous shape-from-polarisation methods, we do not first transform the observed data into a polarisation image. Instead, we minimise the sum of squared residuals between predicted and observed intensities over all pixels and polariser angles. This is a nonlinear least squares optimisation problem in which the unknown is the surface height. The forward prediction is a series of transformations for which we provide analytical derivatives, allowing the overall problem to be efficiently optimised using Gauss-Newton-type methods with an analytical Jacobian matrix. The method is very general and can incorporate any (differentiable) illumination, reflectance or polarisation model. We also propose a variant of the method which uses image ratios to remove dependence on illumination and albedo. We demonstrate our methods on glossy objects, including ones with albedo variations, and provide a comparison to a state-of-the-art approach.
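To make the "predicted vs. observed intensities" idea concrete, the sketch below fits the standard transmitted-radiance sinusoid (TRS) to intensities observed under a rotating polariser for a single pixel, using nonlinear least squares. This is only an illustration of the residual being minimised, not the paper's full method, where the unknown is the surface height map and these per-pixel quantities are functions of it; the synthetic pixel values are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Transmitted-radiance sinusoid (TRS): the intensity seen through a linear
# polariser at angle phi_pol is
#   I(phi_pol) = i_un * (1 + rho * cos(2*phi_pol - 2*phi))
# with i_un the unpolarised intensity, rho the degree of polarisation and
# phi the phase (azimuth) angle.
def trs(params, phi_pol):
    i_un, rho, phi = params
    return i_un * (1.0 + rho * np.cos(2.0 * phi_pol - 2.0 * phi))

# Synthetic observations at several polariser angles (hypothetical pixel).
rng = np.random.default_rng(0)
phi_pol = np.deg2rad(np.arange(0, 180, 20))
true = (0.8, 0.3, np.deg2rad(40))
obs = trs(true, phi_pol) + rng.normal(0, 1e-3, phi_pol.shape)

# Minimise the sum of squared residuals between predicted and observed
# intensities, as in the paper's formulation (there, over all pixels and
# polariser angles jointly, with analytical Jacobians).
res = least_squares(lambda p: trs(p, phi_pol) - obs, x0=(1.0, 0.1, 0.0))
i_un, rho, phi = res.x
```

Note the fitted `(rho, phi)` pair is only defined up to the usual sign/90-degree flip of the sinusoid, one source of the ambiguities later entries discuss.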
Accurate Polarimetric BRDF for Real Polarization Scene Rendering
Computer Vision – ECCV 2020, 2020
Polarization has been used to solve many computer vision tasks, such as Shape from Polarization (SfP), but existing methods suffer from the inherent ambiguities of polarization. To overcome these, some works have proposed using Convolutional Neural Networks (CNNs); however, acquiring a large-scale dataset with polarization information is very difficult. Given an accurate model of the complicated phenomenon of polarization, synthetic polarized images covering various situations can easily be produced to train a CNN. In this paper, we propose a new polarimetric BRDF (pBRDF) model. We demonstrate its accuracy by fitting it to data measured under a variety of light and camera conditions. We render polarized images using this model and use them to estimate surface normals. Experiments show that a CNN trained on our polarized images is more accurate than one trained on RGB images only.
Robust Shape from Polarisation and Shading
2010
In this paper, we present an approach to robust estimation of shape from single-view multi-spectral polarisation images. The developed technique tackles the problem of recovering the azimuth angle of surface normals in a manner robust to image noise and a low degree of polarisation. We note that linear least-squares estimation results in a considerable phase shift from the ground truth in the presence of noise and weak polarisation in multispectral and hyperspectral imaging. This paper discusses the utility of robust statistics ...
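For reference, the linear least-squares baseline this paper analyses (and improves on with robust statistics) rewrites the polarisation sinusoid as I(phi) = a + b cos 2phi + c sin 2phi, so that (a, b, c) are linear in the observations and the azimuth phase follows from atan2. A minimal sketch, with hypothetical noiseless measurements:

```python
import numpy as np

# Linear least-squares phase estimation: the sinusoid
#   I(phi) = a + b*cos(2*phi) + c*sin(2*phi)
# is linear in (a, b, c); the phase (azimuth, mod pi) is 0.5*atan2(c, b)
# and the degree of polarisation is hypot(b, c) / a.
def fit_phase(phi_pol, intensities):
    A = np.column_stack([np.ones_like(phi_pol),
                         np.cos(2.0 * phi_pol),
                         np.sin(2.0 * phi_pol)])
    a, b, c = np.linalg.lstsq(A, intensities, rcond=None)[0]
    phase = 0.5 * np.arctan2(c, b)
    dop = np.hypot(b, c) / a
    return phase, dop

# Hypothetical noiseless observations at four polariser angles.
phi_pol = np.deg2rad(np.array([0.0, 45.0, 90.0, 135.0]))
true_phase, true_dop, i_un = np.deg2rad(30.0), 0.2, 1.0
obs = i_un * (1 + true_dop * np.cos(2 * phi_pol - 2 * true_phase))
phase, dop = fit_phase(phi_pol, obs)
```

Under noise and weak polarisation, this estimator exhibits exactly the phase bias the paper reports, which motivates the robust alternative.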
Shape and Light Directions From Shading and Polarization
2015
We introduce a method to recover the shape of a smooth dielectric object from polarization images taken with a light source from different directions. We present two constraints, on shading and on polarization, and use both in a single optimization scheme. This integration is motivated by the fact that photometric stereo and polarization-based methods have complementary abilities. The polarization-based method can give strong cues for the surface orientation and refractive index, which are independent of the light direction. However, it suffers a two-fold ambiguity in the surface orientation, an ambiguity in the relationship between refractive index and zenith angle (observing angle), and limited performance for surface points with small zenith angles, where the polarization effect is weak. In contrast, the photometric stereo method with multiple light sources can disambiguate the surface orientation and gives a strong relationship between the surface normals and light directions. However, it has limited performance for large zenith angles and for refractive index estimation, and faces an ambiguity when the light direction is unknown. Combining their advantages, our proposed method can recover the surface normals for both small and large zenith angles, the light directions, and the refractive index of the object. The proposed method is successfully evaluated in simulation and real-world experiments.
2021
Reconstructing the 3D geometry of objects from images is a fundamental problem in computer vision. This thesis focuses on shape from polarisation, where the goal is to reconstruct a dense depth map from a sequence of polarisation images. Firstly, we propose a linear differential constraints approach to depth estimation from polarisation images. We demonstrate that colour images can deliver more robust polarimetric measurements than monochrome images. We then explore different constraints by taking the polarisation images under two different light conditions with a fixed view, and show that a dense depth map, albedo map and refractive index can be recovered. Secondly, we propose an end-to-end nonlinear method to reconstruct depth. We re-parameterise a polarisation reflectance model with respect to the depth map, and predict an optimum depth map by minimising an energy cost function between the prediction from the reflectance model and the observed data using nonlinear least ...
PANDORA: Polarization-Aided Neural Decomposition Of Radiance
2022
Reconstructing an object's geometry and appearance from multiple images, also known as inverse rendering, is a fundamental problem in computer graphics and vision. Inverse rendering is inherently ill-posed because the captured image is an intricate function of unknown lighting conditions, material properties and scene geometry. Recent progress in representing scene properties as coordinate-based neural networks has facilitated neural inverse rendering, resulting in impressive geometry reconstruction and novel-view synthesis. Our key insight is that polarization is a useful cue for neural inverse rendering: polarization strongly depends on surface normals and is distinct for diffuse and specular reflectance. With the advent of commodity on-chip polarization sensors, capturing polarization has become practical. Thus, we propose PANDORA, a polarimetric inverse rendering approach based on implicit neural representations. From multi-view polarization images of an object, PANDORA jointly extracts the object's 3D geometry, separates the outgoing radiance into diffuse and specular components, and estimates the illumination incident on the object. We show that PANDORA outperforms state-of-the-art radiance decomposition techniques. PANDORA outputs clean surface reconstructions free from texture artefacts, models strong specularities accurately, and estimates illumination under practical unstructured scenarios.
Reconstructing Polarisation Components from Unpolarised Images
In this paper, we develop a method for reconstructing the polarisation components from unpolarised imagery. Our approach rests on a model of polarisation which accounts for reflection from rough surfaces illuminated at moderate and large angles of incidence. Departing from the microfacet structure of rough surfaces, we relate the maximal and minimal polarimetric intensities to the diffuse and specular components of an unpolarised image via the Fresnel reflection theory. This allows us to reconstruct the polarimetric components from a single unpolarised image. Thus, the model presented here provides a link between the microfacet structure and polarisation of light upon reflection from rough surfaces. We evaluate the accuracy of the reconstructed polarisation components and illustrate the utility of the method for the simulation of a polarising filter on real-world images.
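The "simulation of a polarising filter" mentioned at the end has a simple closed form once the maximal and minimal polarimetric intensities and the phase angle are in hand: the filtered intensity follows the standard polarisation sinusoid. A minimal per-pixel sketch, with hypothetical reconstructed values (this does not reproduce the paper's microfacet/Fresnel decomposition itself):

```python
import numpy as np

# Given per-pixel maximal/minimal polarimetric intensities (i_max, i_min)
# and phase angle, the image seen through a linear polariser at angle
# phi_pol is the standard sinusoid:
#   I(phi_pol) = (i_max + i_min)/2 + (i_max - i_min)/2 * cos(2*(phi_pol - phase))
def simulate_filter(i_max, i_min, phase, phi_pol):
    mean = 0.5 * (i_max + i_min)
    amplitude = 0.5 * (i_max - i_min)
    return mean + amplitude * np.cos(2.0 * (phi_pol - phase))

# Hypothetical reconstructed components for a 1x1 "image".
i_max = np.array([[0.9]])
i_min = np.array([[0.5]])
phase = np.deg2rad(np.array([[30.0]]))

# Filter aligned with the phase angle passes the maximal intensity.
out = simulate_filter(i_max, i_min, phase, np.deg2rad(30.0))
```

Sweeping `phi_pol` over [0, 180) degrees then renders the rotating-filter effect on an ordinary unpolarised photograph.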
Training Many-Parameter Shape-from-Shading Models Using a Surface Database
Shape-from-shading (SFS) methods tend to rely on models with few parameters because these parameters need to be hand-tuned. This limits the number of different cues that the SFS problem can exploit. In this paper, we show how machine learning can be applied to an SFS model with a large number of parameters. Our system learns a set of weighting parameters that use the intensity of each pixel in the image to gauge the importance of that pixel in the shape reconstruction process. We show empirically that this leads to a significant increase in the accuracy of the recovered surfaces. Our learning approach is novel in that the parameters are optimized with respect to the actual surfaces output by the system.
Shape from Diffuse Polarisation
British Machine Vision Conference, 2004
When unpolarised light is reflected from a smooth dielectric surface, it is spontaneously partially polarised. This process applies to both specular and diffuse reflection, although the effect is greatest for specular reflection. This paper is concerned with exploiting this phenomenon by processing images of smooth dielectric objects to recover surface normals and hence height. The paper presents the underlying physics of polarisation by reflection, starting with the Fresnel equations. It is explained how these equations can be used to obtain the shape of objects and some experimental results are presented to illustrate the usefulness of the theory.
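The Fresnel-based link between polarisation and shape that this paper exploits can be sketched for the diffuse case: the degree of polarisation of diffuse reflection from a smooth dielectric is a monotonic function of the zenith angle of the surface normal, so a measured degree of polarisation can be inverted for the zenith angle. The closed form below is the standard diffuse-polarisation expression derived from the Fresnel transmission coefficients; the refractive index n = 1.5 and the lookup-table inversion are illustrative assumptions.

```python
import numpy as np

# Degree of polarisation of diffuse reflection from a smooth dielectric
# (n = refractive index, theta = zenith angle of the surface normal):
#   rho(theta) = (n - 1/n)^2 sin^2(theta) /
#       (2 + 2n^2 - (n + 1/n)^2 sin^2(theta)
#        + 4 cos(theta) sqrt(n^2 - sin^2(theta)))
def diffuse_dop(theta, n=1.5):
    s2 = np.sin(theta) ** 2
    num = (n - 1.0 / n) ** 2 * s2
    den = (2.0 + 2.0 * n ** 2 - (n + 1.0 / n) ** 2 * s2
           + 4.0 * np.cos(theta) * np.sqrt(n ** 2 - s2))
    return num / den

# rho(theta) is monotonic on [0, pi/2], so a measured degree of
# polarisation can be inverted for the zenith angle by table lookup.
thetas = np.linspace(0.0, np.pi / 2, 10_000)

def zenith_from_dop(rho, n=1.5):
    return thetas[np.argmin(np.abs(diffuse_dop(thetas, n) - rho))]

theta_true = np.deg2rad(40.0)
theta_est = zenith_from_dop(diffuse_dop(theta_true))
```

The azimuth of the normal comes from the phase of the polarisation sinusoid (up to a 180-degree ambiguity), and zenith plus azimuth together give the surface normal field from which height is integrated.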