Single Image BRDF Parameter Estimation with a Conditional Adversarial Network

BRDF Estimation of Complex Materials with Nested Learning

2019 IEEE Winter Conference on Applications of Computer Vision (WACV), 2019

The estimation of the optical properties of a material from RGB images is an important but extremely ill-posed problem in computer graphics. While recent works have successfully approached this problem even from just a single photograph, significant simplifications of the material model are assumed, limiting the usability of such methods. The detection of complex material properties such as anisotropy or the Fresnel effect remains an unsolved challenge. We propose a novel method that predicts the model parameters of an artist-friendly, physically-based BRDF from only two low-resolution shots of the material. Thanks to a novel combination of deep neural networks in a nested architecture, we are able to handle the ambiguities caused by the non-orthogonality and non-convexity of the parameter space. To train the network, we generate a novel dataset of physically-based synthetic images. We show that our model can recover new properties such as anisotropy, index of refraction and a second reflectance color, for materials that have tinted specular reflections or whose albedo changes at glancing angles.

Deep Reflectance Maps

Undoing the image formation process, and thereby decomposing appearance into its intrinsic properties, is a challenging task due to the under-constrained nature of this inverse problem. While significant progress has been made on inferring shape, materials and illumination from images only, progress in an unconstrained setting is still limited. We propose a convolutional neural architecture to estimate \emph{reflectance maps} of specular materials in natural lighting conditions. We achieve this in an end-to-end learning formulation that directly predicts a reflectance map from the image itself. We show how to improve estimates by facilitating additional supervision in an indirect scheme that first predicts surface orientation and afterwards predicts the reflectance map by a learning-based sparse data interpolation. In order to analyze performance on this difficult task, we propose a new challenge of Specular MAterials on SHapes with complex IllumiNation (SMASHINg) using both synthetic and real images. Furthermore, we show the application of our method to a range of image-based editing tasks on real images.

Reflectance and texture of real-world surfaces

ACM Transactions on Graphics, 1999

In this work, we investigate the visual appearance of real-world surfaces and the dependence of appearance on the geometry of imaging conditions. We discuss a new texture representation called the BTF (bidirectional texture function) which captures the variation in texture with illumination and viewing direction. We present a BTF database with image textures from over 60 different samples, each observed with over 200 different combinations of viewing and illumination directions. We describe the methods involved in collecting the database as well as the importance and uniqueness of this database for computer graphics. A related quantity to the BTF is the familiar BRDF (bidirectional reflectance distribution function). The measurement methods involved in the BTF database are conducive to simultaneous measurement of the BRDF. Accordingly, we also present a BRDF database with reflectance measurements for over 60 different samples, each observed with over 200 different combinations of v...
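For readers unfamiliar with the quantity these databases tabulate: a BRDF maps a pair of incoming light and outgoing view directions to a reflectance ratio. A minimal sketch of evaluating the simplest analytic case, the constant Lambertian BRDF, follows; the function names and the single-directional-light setup are illustrative assumptions, not part of the measurement protocol described above.

```python
import numpy as np

def lambertian_brdf(albedo):
    """Ideal diffuse surface: a constant BRDF of albedo / pi per steradian,
    independent of the light and view directions."""
    return albedo / np.pi

def reflected_radiance(albedo, normal, light_dir, light_radiance):
    """Outgoing radiance for one directional light (hypothetical helper):
    L_o = f_r * L_i * max(0, n . l), i.e. the BRDF times the incident
    radiance times the cosine foreshortening term."""
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    cos_theta = max(0.0, float(n @ l))
    return lambertian_brdf(albedo) * light_radiance * cos_theta

# Light directly overhead: cos term is 1, so L_o = 0.8 / pi
print(reflected_radiance(0.8, [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], 1.0))
```

A measured BRDF replaces the constant `lambertian_brdf` with a table indexed by the sampled direction pairs, which is exactly what the database above provides.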

Neural-PIL: Neural Pre-Integrated Lighting for Reflectance Decomposition

2021

Decomposing a scene into its shape, reflectance and illumination is a fundamental problem in computer vision and graphics. Neural approaches such as NeRF have achieved remarkable success in view synthesis, but do not explicitly perform decomposition and instead operate exclusively on radiance (the product of reflectance and illumination). Extensions to NeRF, such as NeRD, can perform decomposition but struggle to accurately recover detailed illumination, thereby significantly limiting realism. We propose a novel reflectance decomposition network that can estimate shape, BRDF and per-image illumination given a set of object images captured under varying illumination. Our key technique is a novel illumination integration network called Neural-PIL that replaces a costly illumination integral operation in the rendering with a simple network query. In addition, we also learn deep low-dimensional priors on BRDF and illumination representations using novel smooth manifold auto-encoders. Ou...

Perceptual quality of BRDF approximations: dataset and metrics

2021

Bidirectional Reflectance Distribution Functions (BRDFs) are pivotal to the perceived realism in image synthesis. While measured BRDF datasets are available, reflectance functions are usually approximated by analytical formulas for reasons of storage efficiency. These approximations are often obtained by minimizing metrics such as L2 or weighted quadratic distances, but these metrics do not usually correlate well with perceptual quality when the BRDF is used in a rendering context, which motivates a perceptual study. The contributions of this paper are threefold. First, we perform a large-scale user study to assess the perceptual quality of 2026 BRDF approximations, resulting in 84,138 judgments across 1005 unique participants. We explore this dataset and analyze perceptual scores based on material type and illumination. Second, we assess nine analytical BRDF models in their ability to approximate tabulated BRDFs. Third, we assess several image-based and BRDF-based (Lp, optimal...
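The fitting objectives this study critiques can be made concrete. The sketch below computes a plain and a weighted quadratic distance between two tabulated BRDFs; the flat array layout and the example weights are assumptions for illustration, not the paper's exact protocol.

```python
import numpy as np

def brdf_l2(brdf_a, brdf_b, weights=None):
    """(Weighted) L2 distance between two tabulated BRDFs sampled on the
    same direction grid. weights is an optional per-sample array, e.g. a
    cosine term to emphasize directions that dominate renderings."""
    diff = np.asarray(brdf_a, float) - np.asarray(brdf_b, float)
    if weights is None:
        weights = np.ones_like(diff)
    return float(np.sqrt(np.sum(weights * diff ** 2)))

measured = np.array([0.9, 0.5, 0.1])  # toy tabulated BRDF samples
fitted   = np.array([0.8, 0.5, 0.3])  # toy analytical approximation
print(brdf_l2(measured, fitted))                                    # plain L2
print(brdf_l2(measured, fitted, weights=np.array([1.0, 1.0, 0.1]))) # weighted
```

Down-weighting the last sample shrinks the reported error even though the pointwise fit is unchanged, which illustrates why such metrics can diverge from perceived rendering quality.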

BRDF Estimation for Faces from a Sparse Dataset Using a Neural Network

Lecture Notes in Computer Science, 2013

We present a novel five-source near-infrared photometric stereo 3D face capture device. The accuracy of the system is demonstrated by a comparison with ground truth from a commercial 3D scanner. We also use the data from the five captured images to model the Bi-directional Reflectance Distribution Function (BRDF) in order to synthesise images from novel lighting directions. A comparison of these synthetic images, created by modelling the BRDF with a three-layer neural network, a linear interpolation method and the Lambertian model, is given, which shows that the neural network proves to be the most photo-realistic.

SfSNet: Learning Shape, Reflectance and Illuminance of Faces 'in the Wild'

2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition

We present SfSNet, an end-to-end learning framework for producing an accurate decomposition of an unconstrained human face image into shape, reflectance and illuminance. SfSNet is designed to reflect a physical Lambertian rendering model. SfSNet learns from a mixture of labeled synthetic and unlabeled real-world images. This allows the network to capture low-frequency variations from synthetic images and high-frequency details from real images through the photometric reconstruction loss. SfSNet consists of a new decomposition architecture with residual blocks that learns a complete separation of albedo and normal. This is used along with the original image to predict lighting. SfSNet produces significantly better quantitative and qualitative results than state-of-the-art methods for inverse rendering and independent normal and illumination estimation.
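The Lambertian recombination that such a decomposition network reflects, together with a photometric reconstruction loss, can be sketched as follows. This is a simplified stand-in with a single directional light; SfSNet itself predicts a richer lighting representation, so treat the function names and shapes as illustrative assumptions.

```python
import numpy as np

def lambertian_render(albedo, normal, light_dir, light_intensity=1.0):
    """Recombine predicted albedo and normals under the Lambertian model:
    pixel = albedo * intensity * max(0, n . l).
    albedo: (H, W, 3), normal: (H, W, 3) unit normals, light_dir: (3,)."""
    l = np.asarray(light_dir, float)
    l = l / np.linalg.norm(l)
    shading = np.clip(np.einsum('hwc,c->hw', np.asarray(normal, float), l), 0.0, None)
    return np.asarray(albedo, float) * (light_intensity * shading)[..., None]

def photometric_loss(rendered, observed):
    """Mean absolute reconstruction error between the re-rendered image
    and the input photograph; this is the self-supervised signal that
    lets a network learn from unlabeled real images."""
    return float(np.mean(np.abs(rendered - observed)))

# Flat frontal patch lit head-on: shading is 1 everywhere, image == albedo.
albedo = np.full((2, 2, 3), 0.5)
normal = np.zeros((2, 2, 3)); normal[..., 2] = 1.0
image = lambertian_render(albedo, normal, [0.0, 0.0, 1.0])
print(photometric_loss(image, albedo))
```

The key property is that the loss is computed on the recombined image, so albedo, normals and lighting need only be consistent with the photograph, not individually labeled.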

NeRD: Neural Reflectance Decomposition from Image Collections

2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021

Figure 1: Neural Reflectance Decomposition for Relighting. Multi-view images are encoded into the NeRD volume and decomposed into BRDF parameters (basecolor, metallic, roughness, normal), enabling relighting, view synthesis, and a textured mesh. We encode multiple views of an object under varying or fixed illumination into the NeRD volume. We decompose each given image into geometry, spatially-varying BRDF parameters and a rough approximation of the incident illumination in a globally consistent manner. We then extract a relightable textured mesh that can be re-rendered under novel illumination conditions in real time.

Predicting Surface Reflectance Properties of Outdoor Scenes Under Unknown Natural Illumination

ArXiv, 2021

Estimating and modelling the appearance of an object under outdoor illumination conditions is a complex process. Although there have been several studies on illumination estimation and relighting, very few of them focus on estimating the reflectance properties of outdoor objects and scenes. This paper addresses this problem and proposes a complete framework to predict surface reflectance properties of outdoor scenes under unknown natural illumination. Uniquely, we recast the problem into its two constituent components involving the BRDF's incoming light and outgoing view directions: (i) surface points' radiance captured in the images, together with the outgoing view directions, is aggregated and encoded into reflectance maps, and (ii) a neural network trained on reflectance maps of renders of a unit sphere under arbitrary light directions infers a low-parameter reflection model representing the reflectance properties at each surface in the scene. Our model is based on a combination of phenomenologi...