AN AUGMENTED REALITY SYSTEM BASED ON LIGHT FIELDS

Photorealistic Rendering of Images Formed by Augmented Reality Optical Systems

Programming and Computer Software, 2018

Stochastic ray tracing is used to render photorealistic images formed by augmented reality optical systems, which combine the image generated by an optoelectronic device with the image of the environment. Methods for improving the efficiency of stochastic ray tracing while preserving the physical correctness of the simulation are proposed. Using a head-up display (HUD) as an example, it is shown that forward stochastic ray tracing methods are sometimes more efficient than backward stochastic ray tracing methods for the visual simulation of augmented reality images. Approaches that make it possible to combine forward, backward, and bidirectional ray tracing in a unified simulation procedure are also proposed. The results are illustrated by synthesized images produced by the optical system of a head-up display.
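To illustrate how forward and backward estimators can coexist in one unified procedure, here is a minimal toy sketch (not the paper's algorithm): two Monte Carlo sampling strategies for the same integral are combined with balance-heuristic multiple-importance-sampling weights, the same principle bidirectional methods use to merge camera-side and light-side samples. The integrand and the two densities are illustrative stand-ins.

```python
import math
import random

def f(x):                       # toy integrand standing in for pixel radiance
    return math.exp(-x) * x     # true integral over [0, inf) is 1

def pdf_fwd(x):                 # "forward" strategy, e.g. light-side sampling
    return math.exp(-x)         # Exp(1) density

def pdf_bwd(x):                 # "backward" strategy, e.g. camera-side sampling
    return 2.0 * math.exp(-2.0 * x)  # Exp(2) density

def mis_estimate(n_samples):
    """Unbiased estimate of the integral of f using both strategies,
    combined with balance-heuristic weights."""
    total = 0.0
    for _ in range(n_samples):
        xf = random.expovariate(1.0)   # one forward sample
        xb = random.expovariate(2.0)   # one backward sample
        wf = pdf_fwd(xf) / (pdf_fwd(xf) + pdf_bwd(xf))
        wb = pdf_bwd(xb) / (pdf_fwd(xb) + pdf_bwd(xb))
        total += wf * f(xf) / pdf_fwd(xf) + wb * f(xb) / pdf_bwd(xb)
    return total / n_samples

print(mis_estimate(100_000))    # converges to 1.0
```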

A light field camera for image based rendering

2000

The cost of building a digitizing system for image-based rendering can be prohibitive. Furthermore, the physical size, weight, and complexity of these systems have, in effect, limited their use to small objects and indoor scenes. The primary motivations for this project are to reduce the acquisition cost of light-field capture devices and to create a portable system suitable for acquiring outdoor scenes. This paper describes the design of such an apparatus using readily available parts. One strategy for reducing the system cost has been to rely on software to correct as many of the geometric and photometric inaccuracies as possible. The resulting light-field acquisition device can be built for under $200. The presented light-field acquisition system employs a modified low-cost flatbed scanner. The scanner is interfaced to a standard desktop PC for indoor use or a laptop for outdoor experiments. Focused onto the glass of the scanner is an 8-by-11 grid assembly of one-inch plastic lenses. To make the scanner mobile, the DC power supply is replaced with a 12 V lead-acid battery. Although the construction of our light-field capture system is simple, the most significant challenges involve the processing of the raw scanner output. The necessary adjustments are color correction and radial distortion removal.
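As a rough illustration of the radial-distortion step, the sketch below resamples an image through a simple Brown-model distortion: for each corrected output pixel it computes where the forward model would have placed it in the raw scan and copies that source pixel. The coefficients k1 and k2, the principal point, and the nearest-neighbor resampling are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

def remove_radial_distortion(image, k1, k2, cx=None, cy=None):
    """Undo radial distortion via inverse resampling (assumed k1, k2)."""
    h, w = image.shape[:2]
    cx = w / 2.0 if cx is None else cx          # principal point (assumed center)
    cy = h / 2.0 if cy is None else cy
    ys, xs = np.indices((h, w), dtype=np.float64)
    x, y = xs - cx, ys - cy
    r2 = (x * x + y * y) / (max(w, h) ** 2)     # normalized r^2 keeps k1, k2 small
    scale = 1.0 + k1 * r2 + k2 * r2 * r2        # forward model: r' = r * scale
    src_x = np.clip(np.rint(x * scale + cx), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(y * scale + cy), 0, h - 1).astype(int)
    return image[src_y, src_x]                  # nearest-neighbor lookup
```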

Camera Animation for Immersive Light Field Imaging

Electronics

Among novel capture and visualization technologies, light field has made significant progress in the current decade, bringing it closer to emergence in everyday use cases. Unlike many other forms of 3D displays and devices, light field visualization does not depend on any viewing equipment. Regarding its potential use cases, light field is applicable to both cinematic and interactive contents. Such contents often rely on camera animation, which is a frequent tool for the creation and presentation of 2D contents. However, while common 3D camera animation is often rather straightforward, light field visualization has certain constraints that must be considered before implementing any variation of such techniques. In this paper, we introduce our work on camera animation for light field visualization. Different types of conventional camera animation were applied to light field contents, which produced an interactive simulation. The simulation was visualized and assessed on a real light field...
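As a toy illustration of one such constraint, the sketch below generates a conventional orbit animation but keeps every camera position inside a hypothetical display's valid viewing cone. The max_angle_deg limit and the path itself are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

def orbit_animation(center, radius, n_frames, max_angle_deg=35.0):
    """Orbit camera path clamped to an assumed valid viewing cone.

    Returns a list of (eye_position, look_at) pairs, one per frame,
    sweeping only within +/- max_angle_deg around the display normal.
    """
    center = np.asarray(center, dtype=float)
    half = np.radians(max_angle_deg)
    angles = np.linspace(-half, half, n_frames)   # stay inside the viewing zone
    frames = []
    for a in angles:
        eye = center + radius * np.array([np.sin(a), 0.0, np.cos(a)])
        frames.append((eye, center))              # camera always faces the display
    return frames
```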

Image-based rendering for mixed reality

Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), 2001

In this paper, we propose an image-based approach to synthesizing novel view images for mixed reality (MR) systems. In theory, image-based methods are well suited to synthesizing realistic images, but they make interactive handling of the object difficult. As a solution, we propose a new method based on the "surface light field rendering" technique. With this method, we can synthesize objects with arbitrary deformation and illumination changes. To demonstrate the efficiency of this method, we describe successful experiments performed on objects with non-rigid effects (e.g., velvet and tatami carpet), which are difficult to render correctly with general model-based rendering techniques.
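For intuition, a surface light field stores view-dependent radiance at points on the object's surface; the minimal sketch below blends the k stored samples whose capture directions best match the query view direction. The data layout and the cosine-based weighting are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def surface_lf_radiance(sample_dirs, sample_rgb, view_dir, k=3):
    """Query a surface light field at one surface point.

    sample_dirs: (N, 3) unit capture directions for this point (assumed layout)
    sample_rgb:  (N, 3) radiance recorded along each direction
    view_dir:    (3,)   direction from the surface point toward the eye
    """
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    cos_sim = sample_dirs @ d              # angular similarity to each sample
    nearest = np.argsort(-cos_sim)[:k]     # k best-aligned capture directions
    w = np.clip(cos_sim[nearest], 1e-6, None)
    w /= w.sum()
    return w @ sample_rgb[nearest]         # weighted radiance blend
```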

Light Field Estimation and Control Using a Graphical Rendering Engine

State-of-the-art feedback control of lighting depends on point-sensor measurements for light field generation. However, since the occupant's perception depends on the entire light field in the room rather than the illumination at a limited set of points, the performance of these lighting control systems may be unsatisfactory. It is therefore critical to reconstruct the light field in the room from point-sensor measurements and use it for feedback control of the lights. This paper presents a framework for using graphical rendering tools along with point-sensor measurements to estimate the light field and use these estimates for feedback control. Computer graphics software is used to efficiently and accurately model building spaces, while a game engine renders different lighting conditions for the space on the fly. These real-time renderings are then used together with sensor measurements to estimate and control the light field in the room via an optimization-based feedback control approach. We present a set of estimation algorithms for this purpose and analyze their convergence and performance limitations. Finally, we demonstrate closed-loop lighting control systems that use these estimation algorithms and compare their relative performance, highlighting their benefits and disadvantages.
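A minimal sketch of the idea, under an assumed linear model: a renderer supplies each fixture's contribution to the sensors and to the full field, a least-squares fit recovers the current dimming levels from the few sensor readings, and a projected-gradient step drives the estimated field toward a target. The matrices, names, and update rule are all hypothetical, not the paper's algorithms.

```python
import numpy as np

# Assumed setup: illuminance is linear in fixture dimming levels.
# A_sensor[i, j]: rendered reading at sensor i with fixture j at full power.
# A_field[p, j]:  rendered illuminance at field point p from fixture j.

def estimate_levels(A_sensor, readings):
    """Least-squares estimate of fixture levels from sparse sensor data."""
    levels, *_ = np.linalg.lstsq(A_sensor, readings, rcond=None)
    return np.clip(levels, 0.0, 1.0)

def feedback_step(A_field, target_field, levels, gain=1.0):
    """One projected-gradient update toward the desired full-room field."""
    residual = target_field - A_field @ levels           # field-level error
    step = gain * A_field.T @ residual / np.linalg.norm(A_field, 2) ** 2
    return np.clip(levels + step, 0.0, 1.0)              # respect dimmer limits

# Hypothetical usage: estimate from sensors, then iterate toward the target.
# levels = estimate_levels(A_sensor, sensor_readings)
# for _ in range(50):
#     levels = feedback_step(A_field, target_field, levels)
```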

Rendering for an interactive 360° light field display

ACM Transactions on Graphics, 2007

Figure 1: A 3D object shown on the display is photographed by two stereo cameras (seen in the middle image). The two stereo viewpoints sample the 360° field of view around the display. The right pair is from a vertically tracked camera position and the left pair is from an untracked position roughly horizontal to the center of the display. The stereo pairs are left-right reversed for cross-fused stereo viewing.

Introducing Extended and Augmented Light Fields for Autostereoscopic Displays

2006

Autostereoscopic displays have recently received a lot of attention because they allow multiple users to view true 3D images of the same object. These devices usually display either 3D volumetric data or 4D light-field data. In this paper we address the issue of representing, building and rendering 4D light-field models like those used in autostereoscopic displays. We present a representation...
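To make the 4D light-field representation concrete: in the common two-plane parameterization, a ray is indexed by where it crosses a camera plane (u, v) and an image plane (s, t), and rendering reduces to interpolated lookups in a 4D sample array. The sketch below performs a quadrilinear lookup; the array layout is an assumption for illustration, not the paper's representation.

```python
import numpy as np

def lf_lookup(L, u, v, s, t):
    """Quadrilinear interpolation in a 4D light field L[u, v, s, t, rgb].

    (u, v) indexes the camera plane, (s, t) the image plane; the
    continuous coordinates are in array units (assumed layout).
    """
    coord = np.array([u, v, s, t])
    lo = np.clip(np.floor(coord).astype(int), 0, np.array(L.shape[:4]) - 2)
    f = coord - lo                          # fractional position in the 4D cell
    rgb = np.zeros(L.shape[-1])
    for corner in range(16):                # blend the 16 corners of the cell
        idx, w = [], 1.0
        for d in range(4):
            if (corner >> d) & 1:
                idx.append(lo[d] + 1)
                w *= f[d]
            else:
                idx.append(lo[d])
                w *= 1.0 - f[d]
        rgb += w * L[tuple(idx)]
    return rgb
```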

A Generalized Light-Field API and Management System

2000

Light fields are a computer graphics modeling technique that represents objects using radiance samples instead of geometry. Radiance samples may be stored as sets of images or as 4D arrays of pixel values. Light fields have several advantages: their rendering complexity depends only on the resolution of the output image, they can represent sophisticated illumination effects, and they are well suited for display on autostereoscopic displays.
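A minimal sketch of what such a generalized API might look like, under an assumed two-plane parameterization: the light field object hides its storage (a 4D array here; a set of images would serve equally) behind a single ray-sampling call, so a renderer's cost scales only with the number of output pixels. All class and method names are hypothetical.

```python
import numpy as np

class LightField:
    """Hypothetical light-field object: radiance samples behind a ray query."""

    def __init__(self, data, uv_z=0.0, st_z=1.0):
        self.data = data        # 4D sample array: data[u, v, s, t] -> rgb
        self.uv_z = uv_z        # z of the camera (u, v) plane
        self.st_z = st_z        # z of the image (s, t) plane

    def sample_ray(self, origin, direction):
        """Radiance along a ray: intersect both planes, look up the sample."""
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        u, v = (o + (self.uv_z - o[2]) / d[2] * d)[:2]
        s, t = (o + (self.st_z - o[2]) / d[2] * d)[:2]
        idx = np.clip(np.rint([u, v, s, t]).astype(int),
                      0, np.array(self.data.shape[:4]) - 1)
        return self.data[tuple(idx)]   # nearest sample; interpolation optional

def render(lf, eye, pixel_dirs):
    """Per-pixel loop: cost depends only on the output image size."""
    return np.array([lf.sample_ray(eye, d) for d in pixel_dirs])
```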

3-D Rendering of objects using Augmented Reality

Journal of emerging technologies and innovative research, 2020

With the expeditious evolution of 3-D rendering and processing capabilities, augmented reality experiences built on advanced technology have seen a swift elevation alongside real-world experiences. Augmented Reality Core (ARCore) permits interaction with virtual as well as real-time applications and gives the user a natural experience. First, an introduction covering past and present trends and advancements in the technology is given. Then, the process of creating a mobile application in Android Studio is described; the application provides the user with several day-to-day furniture items that can be visualized in a 3-D environment. ARCore cannot be considered an SDK because it does not directly create AR experiences; rather, it is an engine that helps SDKs render objects. To expose this capability of ARCore, Google launched the Sceneform SDK so that developers could use this functionality to develop AR-enabled application...