Interactive modification of real and virtual lights for augmented reality

Interactive virtual relighting of real scenes

IEEE Transactions on Visualization and Computer Graphics, 2000

Computer augmented reality (CAR) is a rapidly emerging field that enables users to mix real and virtual worlds. Our goal is to provide interactive tools for common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (modification of real and virtual light sources). In particular, we concentrate on virtually modifying real light source intensities and on inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of the real scene geometry using semi-automatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of the real scene's light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures, which contained real lighting effects such as shadows cast by real lights. Each such texture is then modulated by the ratio of the radiosity (which can be changed) to a display factor, which corresponds to the radiosity computed with occlusion ignored. Since our goal is a convincing relighting effect rather than an accurate solution, we present a heuristic correction process that results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities. Our results show that we can virtually relight real scenes interactively, including modification and addition of virtual light sources and objects.
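
A minimal sketch of the modulation step described above, using hypothetical array names: `unoccluded_texture` is the texture with real shadows removed, `B_new` is the patch radiosity after the lights are modified, and `D` is the display factor (the radiosity computed with occlusion ignored). The paper performs this inside a hierarchical radiosity system; this only illustrates the per-patch ratio B/D.

```python
import numpy as np

def relight_patch(unoccluded_texture, B_new, D, eps=1e-6):
    """Modulate an unoccluded texture (H x W x 3) by the ratio B_new / D,
    where B_new and D are per-channel values of shape (3,)."""
    ratio = B_new / np.maximum(D, eps)   # guard against division by zero
    return unoccluded_texture * ratio    # texture actually displayed for this patch
```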

Interactive Virtual Relighting and Remodeling of Real Scenes

Eurographics, 1999

Lighting design is tedious because it requires the physical manipulation of real light sources and objects. As an alternative, we present an interactive system to virtually modify the lighting and geometry of scenes containing both real and synthetic objects, including mixed real/virtual lighting and shadows. In our method, real-scene geometry is first approximately reconstructed from photographs. Additional images are taken with a real light placed at different positions, from the same viewpoint, to estimate reflectance. A filtering process compensates for modeling errors, and per-image reflectances are averaged to generate an approximate reflectance image for the given viewpoint, removing shadows in the process. This estimate is used to initialise a global illumination hierarchical radiosity system representing real-world secondary illumination; the system is optimized for interactive updates. Direct illumination from the lights is calculated separately using ray-casting and a table for efficient reuse where appropriate.
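
A rough sketch (not the authors' code) of the per-image reflectance averaging described above. It assumes that for every photograph, taken with the real light in a known position, an irradiance map predicted from the reconstructed geometry is available; the names and the simple validity weighting are illustrative.

```python
import numpy as np

def estimate_reflectance(images, irradiances, eps=1e-4):
    """images      : list of H x W x 3 radiance images from one viewpoint
       irradiances : list of H x W x 3 predicted irradiance maps
       returns     : approximate reflectance image for that viewpoint."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, E in zip(images, irradiances):
        valid = E > eps                               # skip pixels in shadow
        r = np.where(valid, img / np.maximum(E, eps), 0.0)
        num += r
        den += valid.astype(float)
    # Averaging over several light positions removes the shadows cast by any
    # single light, yielding an approximately shadow-free reflectance image.
    return num / np.maximum(den, 1.0)
```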

Low-Cost Photometric Calibration for Interactive Relighting

Computer augmented reality is a rapidly emerging field that allows users to mix virtual and real worlds. Our interest is in allowing relighting and remodelling of real scenes using a reflectance estimation method. Most previous work has focused on the quality of the results without considering the cost of computation and the price of the acquisition equipment. In this paper, we present a low-cost photometric calibration method that improves the reflectance estimate of real scenes. This is achieved by adapting high-dynamic-range image creation to a low-cost camera and by using an iterative approach that corrects the reflectance estimate, with a radiosity algorithm computing the indirect light.
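
A minimal sketch of high-dynamic-range assembly from bracketed exposures, in the spirit of the low-cost calibration above. It assumes the camera response has already been inverted (pixel values are linear in radiance) and that exposure times are known; the paper's exact weighting and response-recovery steps may differ.

```python
import numpy as np

def merge_exposures(linear_images, exposure_times, eps=1e-6):
    """linear_images  : list of H x W x 3 arrays in [0, 1], linearized
       exposure_times : matching list of shutter times in seconds
       returns        : H x W x 3 relative radiance map."""
    num = np.zeros_like(linear_images[0], dtype=float)
    den = np.zeros_like(linear_images[0], dtype=float)
    for img, t in zip(linear_images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: distrust clipped pixels
        num += w * img / t
        den += w
    return num / np.maximum(den, eps)
```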

Outdoor Illumination Estimation for Mobile Augmented Reality: Real-time Analysis of Shadow and Lit Surfaces to Measure the Daylight Illumination

Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 2020

A realistic illumination model in Augmented Reality (AR) applications is crucial for perceiving virtual objects as real. In order to correctly blend digital content with the physical world it is necessary to measure, in real time, the illumination present in the scene surrounding the user. The paper proposes a novel solution for real-time estimation of outdoor illumination conditions, based on the video stream from the camera on handheld devices. The problem is formulated in a radiometric framework, showing how the reflected radiance from the surface maps to pixel values, and how the reflected radiance relates to surface reflectance and the illumination environment. From this we derive how to estimate the color and intensity of the sun and sky illumination, respectively, using areas in the video stream that are in direct sunlight and in shadow. The presented approach allows for rendering augmentations that adapt in real-time to dynamically changing outdoor illumination conditions.
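
A simplified version of the reasoning in the abstract, assuming a Lambertian surface with the same albedo $\rho$ visible both in direct sunlight and in shadow, and neglecting the portion of the sky blocked by the shadow caster:

$$
L_{\text{lit}} \approx \frac{\rho}{\pi}\left(E_{\text{sun}} + E_{\text{sky}}\right),
\qquad
L_{\text{shadow}} \approx \frac{\rho}{\pi}\,E_{\text{sky}}
\quad\Longrightarrow\quad
\frac{E_{\text{sun}}}{E_{\text{sky}}} \approx \frac{L_{\text{lit}}}{L_{\text{shadow}}} - 1 .
$$

The ratio of lit to shadowed pixel values therefore constrains the sun-to-sky intensity ratio independently of the unknown albedo; applying it per color channel also yields the relative colors of the two sources.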

Exact radiosity reconstruction and shadow computation using vertex tracing

2000

Methods for the exact computation of irradiance and form factors associated with polygonal objects have ultimately relied on a formula, attributed to Lambert, for the differential-area-to-polygon form factor. This paper introduces an alternative to Lambert's formula: an analytical expression that is independent of the vertex order of the polygon. In this formulation, irradiance values in a scene consisting of partially occluded, uniformly emitting polygons can be computed exactly by examining only the set of apparent vertices visible from the point of evaluation; no vertex ordering is required. The method is particularly applicable to radiosity reconstruction, in which all the scene polygons are diffuse emitters, and also in environments where efficiency structures have already been established.
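
For context, the classical differential-area-to-polygon form factor attributed to Lambert can be written in standard radiosity notation (this is the baseline expression, not the paper's new one):

$$
F_{dA \to P} \;=\; \frac{1}{2\pi} \sum_{i=1}^{n} \gamma_i \,\left(\mathbf{n} \cdot \boldsymbol{\Gamma}_i\right),
$$

where $\gamma_i$ is the angle subtended at $dA$ by the polygon edge $(v_i, v_{i+1})$, $\boldsymbol{\Gamma}_i$ is the unit normal of the plane spanned by $dA$ and that edge, and $\mathbf{n}$ is the surface normal at $dA$. Evaluating this sum requires the vertices in a consistent order, which is exactly the requirement the paper's alternative expression removes.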

Interactive Relighting of Virtual Objects under Environment Lighting

Current relighting applications often constrain one or several factors of the rendering equation to keep the rendering speed real-time. For example, visibility is often precalculated and animations are not allowed, changes in lighting are limited to simple rotations, or the lighting is not very detailed. Other techniques compromise on quality and often coarsely tabulate BRDF functions. To address these problems, some techniques have started to use spherical radial basis functions (SRBFs); however, solving the triple product integral does not by itself guarantee interactivity. To dynamically change lighting conditions or alter scene geometry and materials, all three factors (lighting, visibility, and BRDF) need to be converted to the SRBF representation quickly. This paper presents a method to perform the SRBF data construction and rendering in real time. To support dynamic high-frequency lighting, a multiscale residual transformation algorithm is applied. Area lights are detected through a peak detection algorithm.
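
For reference, the integral from which the abstract's three factors come can be stated generically (the paper's specific SRBF construction and residual transform are not reproduced here):

$$
L_o(\mathbf{x}, \omega_o) \;=\; \int_{S^2} L(\omega)\;\tilde{\rho}(\mathbf{x}, \omega, \omega_o)\; V(\mathbf{x}, \omega)\, d\omega ,
$$

where $L$ is the environment lighting, $\tilde{\rho}$ the cosine-weighted BRDF, and $V$ the visibility. Representing each factor as a sum of spherical radial basis functions (e.g., spherical Gaussians) turns the integral into a triple product of expansion coefficients, which is what must be rebuilt quickly whenever lighting, geometry, or materials change.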

The multi-frame lighting method: A monte carlo based solution for radiosity in dynamic environments

Rendering Techniques' 96, 1996

In this paper we present a method for radiosity computation in dynamic scenes. The algorithm is intended for animations in which the motion of the objects is known in advance. Radiosity is computed using a Monte Carlo approach. Instead of computing each frame separately, we propose to compute the lighting simulation of a whole sequence of frames in a single process. This is achieved by merging the whole sequence of frames into a single scene, so that each moving object is replicated as many times as there are frames. We present results that show the performance of the proposed method, which is especially interesting for sequences with a significant number of frames. We also present an analysis of the algorithm's complexity. An important feature of the algorithm is that the accuracy of the image in each frame is the same as what we would obtain by computing each frame separately.
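
A schematic illustration of the frame-merging idea, under the assumption that a patch is either static (present in every frame) or a per-frame replica of a moving object; the Monte Carlo transport itself and the frame-dependent visibility tests are omitted, and the names are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Schematic only: static patches exist in all frames; each moving object is
# replicated once per frame and tagged with that frame index. When a random
# walk carries energy between two patches, the contribution is accumulated
# only for the frames in which both endpoints exist.
@dataclass
class Patch:
    frame: Optional[int] = None                            # None => static patch
    radiosity: List[float] = field(default_factory=list)   # one entry per frame

def frames_in_common(p: Patch, q: Patch, num_frames: int):
    """Frames for which an energy transfer between p and q is counted."""
    if p.frame is None and q.frame is None:
        return list(range(num_frames))            # static-static: every frame
    if p.frame is None:
        return [q.frame]                          # static vs. replica
    if q.frame is None:
        return [p.frame]
    return [p.frame] if p.frame == q.frame else []  # replicas: same frame only
```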

Improved explicit radiosity method for calculating non-Lambertian reflections

The Visual Computer, 1993

We present an improved radiosity method that accounts for non-Lambertian reflections. The method explicitly calculates the radiance distribution leaving each non-Lambertian surface. It differs from previous explicit radiosity methods in two respects. First, non-Lambertian surfaces are discretized adaptively based on their effect on other surfaces, rather than on their own spatial radiance distribution. Second, the calculation of the radiance distribution for surfaces that are neither Lambertian nor mirror-like is made more efficient using the ideas of hemi-cube pixel groups and the reflectance hemisphere. The method is well suited to serving as the first pass in a multi-pass rendering method.
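
A generic continuous form of the directional transport that explicit non-Lambertian radiosity methods discretize (the paper's particular hemi-cube pixel groups and reflectance hemisphere are not shown):

$$
L(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o) \;+\; \int_{\Omega}
f_r(\mathbf{x}, \omega_i \!\to\! \omega_o)\, L\big(\mathbf{y}(\mathbf{x},\omega_i), -\omega_i\big)\,
\cos\theta_i \, d\omega_i ,
$$

with the outgoing radiance of each non-Lambertian patch stored over a set of directional bins rather than as a single diffuse radiosity value.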

Analysis of illumination and reflectance properties using image collections (Analyse de l'illumination et des propriétés de réflectance en utilisant des collections d'images)

www.theses.fr, 2011

The main objective of this thesis is to exploit the photometric information available in large photo collections of outdoor scenes to infer characteristics of the illumination, the objects, and the cameras. To achieve this goal, two problems are addressed. In a preliminary work, we explore optimal representations for the sky and compare images based on its appearance. Much of the information perceived in outdoor scenes is due to the illumination coming from the sky. Solar beams are reflected and refracted in the atmosphere, creating a global illumination ambience; in turn, this environment determines the way we perceive objects in the real world. Given the importance of the sky as an illumination source, we formulate a generic three-step process for comparing images based on sky appearance. The three stages are segmentation, modeling, and comparison of the sky pixels. Different approaches are adopted for the modeling and comparison phases, and the performance of the algorithms is validated by finding similar images in large photo collections. The second part of the thesis aims to exploit additional geometric information in order to deduce the photometric characteristics of the scene. From a 3D structure recovered using available multi-view stereo methods, we trace back the image formation process and estimate models for the components involved in it. Since photo collections are usually acquired with different cameras, our formulation emphasizes estimating the radiometric calibration of all the cameras at the same time, using a strong prior on the space of possible camera response functions. Then, in a joint estimation framework, we also propose a robust computation of the global illumination for each image, the surface albedo for the 3D structure, and the radiometric calibration for all the cameras.
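
One plausible way to write the joint estimation the abstract describes (the symbols are illustrative and not taken from the thesis): with $I_{i,p}$ the observed intensity of 3D point $p$ in image $i$, $f_{c(i)}$ the response of the camera that took image $i$, $\rho_p$ the point's albedo, and $E(\cdot)$ the irradiance predicted from the per-image illumination $\ell_i$ and the surface normal $\mathbf{n}_p$,

$$
\min_{\{f_c\},\, \{\ell_i\},\, \{\rho_p\}} \;\; \sum_{i,\,p}
\Big\| \, I_{i,p} \;-\; f_{c(i)}\!\big(\rho_p \, E(\ell_i, \mathbf{n}_p)\big) \Big\|^2
\;+\; \lambda\, \Phi\big(\{f_c\}\big),
$$

where $\Phi$ is the strong prior that restricts the response functions to a low-dimensional space of plausible camera curves.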