Interactive HDR image-based rendering from unstructured LDR photographs
Related papers
Real-Time Image Based Rendering from Uncalibrated Images
Fifth International Conference on 3-D Digital Imaging and Modeling (3DIM'05), 2005
We present a novel real-time image-based rendering system for generating realistic novel views of complex scenes from a set of uncalibrated images. A combination of structure-and-motion and stereo techniques is used to obtain calibrated cameras and dense depth maps for all recorded images. These depth maps are converted into restricted quadtrees, which allow for adaptive, view-dependent tessellations while storing per-vertex quality. When rendering a novel view, a subset of suitable cameras is selected based upon a ranking criterion. In the spirit of the unstructured lumigraph rendering approach, a blending field is evaluated, although the implementation is adapted in several respects. We alleviate the need for the creation of a geometric proxy for each novel view, while the camera blending field is sampled in an optimized, non-uniform way and combined with the per-vertex quality to reduce texture artifacts. To make real-time visualization possible, all critical steps of the visualization pipeline are programmed in a highly optimized way on commodity graphics hardware using the OpenGL Shading Language. The proposed system can handle both complex scenes, such as large outdoor scenes, and small objects with a large number of acquired images.
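The abstract's camera ranking and blending field can be illustrated with a small sketch in the style of unstructured lumigraph blending: each candidate camera is penalized by the angular deviation of its viewing ray from the novel ray, and the k best penalties are converted into normalized weights using the (k+1)-th penalty as a cutoff. This is a generic illustration, not the paper's implementation; all names and the choice of a purely angular penalty are assumptions.

```python
import numpy as np

def blending_weights(point, novel_cam, cams, k=4):
    """Rank candidate cameras for a surface point by angular deviation
    from the novel viewing ray, then turn the k best penalties into
    normalized blend weights (ULR-style relative weighting).

    point, novel_cam: 3-vectors; cams: (N, 3) array of camera centers.
    """
    d_new = (novel_cam - point) / np.linalg.norm(novel_cam - point)
    dirs = cams - point
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # Angular penalty: larger when a source camera sees the point
    # from a direction far from the novel viewing ray.
    penalty = np.arccos(np.clip(dirs @ d_new, -1.0, 1.0))
    order = np.argsort(penalty)[: k + 1]
    thresh = max(penalty[order[-1]], 1e-12)  # (k+1)-th penalty as cutoff
    w = np.zeros(len(cams))
    w[order[:k]] = 1.0 - penalty[order[:k]] / thresh
    s = w.sum()
    return w / s if s > 0 else w
```

Cameras beyond the k best receive zero weight, so the blend stays local and transitions smoothly as cameras enter and leave the top-k set.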
From capture to immersive viewing of 3D HDR point clouds
HAL (Le Centre pour la Communication Scientifique Directe), 2022
The collaborators of the ReVeRY project address the design of a specific grid of cameras, a cost-efficient system that acquires several viewpoints at once, possibly under several exposures, and the conversion of the multiview, multiexposed video stream into a high-quality 3D HDR point cloud. In the last two decades, industries and researchers have proposed significant advances in media content acquisition systems in three main directions: increased resolution and image quality with the new ultra-high-definition (UHD) standard; stereo capture for 3D content; and high-dynamic-range (HDR) imaging. Compression, representation, and interoperability of these new media are active research fields aiming to reduce data size while remaining perceptually accurate. The originality of the project is to address both HDR and depth through the entire pipeline. Creativity is enhanced by several tools, which answer challenges at the different stages of the pipeline: camera setup, data processing, capture visualisation, virtual camera controller, compression, and perceptually guided immersive visualisation. It is the experience acquired by the researchers of the project that is exposed in this tutorial.
CCS Concepts: • Computing methodologies → Computational photography; Image processing; Virtual reality; Perception; 3D imaging.
1. Rig capture still presents major challenges, both in terms of equipment setup and data-flow management.
2. Depth and HDR content is now predominant in many applications, but higher resolution should not be neglected.
A framework for depth image-based modeling and rendering
2003
We present an image-based system for automatic modeling and interactive rendering of 3D objects. We describe our contribution to image-based modeling and interactive multi-resolution rendering algorithms. Our representation is based on images with depth, which allows it to be compact and flexible, makes it suitable for static and animated scenes, and simplifies streaming network transmission. The representation has been proposed and accepted into MPEG-4 AFX (Animation Framework eXtension).
HDR multiview image sequence generation: toward 3D HDR video
Creating High Dynamic Range (HDR) images of static scenes by combining several Low Dynamic Range (LDR) images is a common procedure nowadays. However, 3D HDR video content creation and management is an active, unsolved research topic. This work analyzes the latest advances in 3D HDR imaging and proposes a method to build stereo HDR images from LDR input image sequences acquired with a multi-view camera. Our method is based on the PatchMatch algorithm, which has been adapted to take advantage of the epipolar geometry constraints of multi-view cameras. Our method does not require traditional stereo matching for disparity map calculation to obtain accurate matching between the stereo images. Geometric calibration is not required either. We use an 8-view LDR camera from which we generate an 8-view HDR output. The eight tonemapped HDR images are used to feed an auto-stereoscopic display. Experimental results show accurate registration and HDR reconstruction for each LDR view.
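The key idea of restricting PatchMatch with the epipolar constraint can be sketched for the rectified case, where correspondences lie on the same scanline, so the search collapses from 2-D to 1-D. The toy implementation below (random initialization, propagation from left/top neighbours, and random search) is illustrative only; function names, parameters, and the simple absolute-difference patch cost are assumptions, not the paper's method.

```python
import numpy as np

def epipolar_patch_match(left, right, max_disp=8, patch=3, iters=3, seed=0):
    """Toy PatchMatch along epipolar lines for a rectified grayscale pair.

    For rectified stereo the epipolar constraint confines each match to
    the same row, so candidate correspondences are parameterized by a
    single disparity value per pixel.
    """
    rng = np.random.default_rng(seed)
    h, w = left.shape
    r = patch // 2
    disp = rng.integers(0, max_disp + 1, size=(h, w)).astype(int)

    def cost(y, x, d):
        # Sum of absolute differences between patches; invalid near borders.
        if x - d < r or x < r or x >= w - r or y < r or y >= h - r:
            return np.inf
        a = left[y - r:y + r + 1, x - r:x + r + 1]
        b = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
        return float(np.abs(a - b).sum())

    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, c_best = disp[y, x], cost(y, x, disp[y, x])
                cands = []
                if x > 0:            # propagation from the left neighbour
                    cands.append(int(disp[y, x - 1]))
                if y > 0:            # propagation from the top neighbour
                    cands.append(int(disp[y - 1, x]))
                # Random search keeps exploring the 1-D disparity range.
                cands.extend(int(c) for c in rng.integers(0, max_disp + 1, size=2))
                for cand in cands:
                    c = cost(y, x, cand)
                    if c < c_best:
                        best, c_best = cand, c
                disp[y, x] = best
    return disp
```

Because good disparities propagate along scanlines, a handful of sweeps usually suffices even with very few random samples per pixel.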
Why HDR is Important for 3DTV Model Acquisition
2008
Mechanisms for automatically acquiring high-quality 3D content of both indoor and outdoor scenes are essential research topics within 3D Television (3DTV). The envisioned goal is that photo-realistic 3D real-time rendering from the actual, potentially arbitrary viewpoint of the beholder watching 3DTV becomes possible. Such scenes include movie sets in studios, e.g., for talk shows, TV series and blockbuster movies, but also outdoor scenes, e.g., buildings in a neighborhood for a car chase or cultural heritage sites for a documentary. The goal of 3D model acquisition is to provide the 3D background models in which potential 3D actors can be embedded. Although model acquisition platforms have become available, the dynamic range of the cameras used is too limited to fully cover real-world environments. We present two multi-sensor systems for 3D model acquisition, demonstrate the need for high dynamic range (HDR) capable acquisition platforms, and propose a system that performs HDR computation to produce 3D models with tonemapped HDR information. Combining depth and HDR color information, a Markov Random Field based technique is used to get the best of both worlds: the high spatial resolution of the tonemapped HDR image with the depth field of a time-of-flight camera.
HDR image construction from multi-exposed stereo LDR images
2010 IEEE International Conference on Image Processing, 2010
In this paper, we present an algorithm that generates high dynamic range (HDR) images from multi-exposed low dynamic range (LDR) stereo images. The vast majority of cameras on the market capture only a limited dynamic range of a scene. Our algorithm first computes the disparity map between the stereo images. The disparity map is used to compute the camera response function, which in turn yields the scene radiance maps. A refinement step for the disparity map is then applied to eliminate edge artifacts in the final HDR image. Existing methods generate HDR images of good quality for still or slow-motion scenes, but produce defects when the motion is fast. Our algorithm can deal with images taken during fast-motion scenes and tolerates saturation and radiometric changes better than other stereo matching algorithms.
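The final step described above, turning registered multi-exposed LDR images into a radiance map, can be sketched with the classic weighted log-domain merge. This sketch assumes aligned images and a linear camera response (the paper recovers the true response from disparity-matched pixels); the hat-shaped weighting downweights saturated and underexposed pixels.

```python
import numpy as np

def merge_radiance(images, exposures):
    """Merge aligned, multi-exposed 8-bit LDR images into a radiance map.

    images: list of float arrays with values in [0, 255];
    exposures: corresponding shutter times in seconds.
    Assumes a linear response for simplicity; returns relative radiance.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for im, t in zip(images, exposures):
        w = np.minimum(im, 255.0 - im)      # hat weight, peaks at mid-gray
        num += w * np.log((im + 1.0) / t)   # +1 avoids log(0)
        den += w
    log_e = num / np.maximum(den, 1e-8)
    return np.exp(log_e)                    # per-pixel scene radiance
```

Averaging in the log domain keeps the estimate stable across exposures that differ by orders of magnitude, which is exactly the regime where a single LDR shot clips.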
Rendering from unstructured collections of images
2002
Computer graphics researchers recently have turned to image-based rendering to achieve the goal of photorealistic graphics. Instead of constructing a scene with millions of polygons, the scene is represented by a collection of photographs along with a greatly simplified geometric model. This simple representation allows traditional light transport simulations to be replaced with basic image-processing routines that combine multiple images together to produce never-before-seen images from new vantage points. This thesis presents a new image-based rendering algorithm called unstructured lumigraph rendering (ULR). ULR is an image-based rendering algorithm that is specifically designed to work with unstructured (i.e., irregularly arranged) collections of images. The algorithm is unique in that it is capable of using any amount of geometric or image information that is available about a scene. Specifically, the research in this thesis makes the following contributions:
* An enumeration of image-based rendering properties that an ideal algorithm should attempt to satisfy. An algorithm that satisfies these properties should work as well as possible with any configuration of input images or geometric knowledge.
* An optimal formulation of the basic image-based rendering problem, the solution to which is designed to satisfy the aforementioned properties.
* The unstructured lumigraph rendering algorithm, which is an efficient approximation to the optimal image-based rendering solution.
* A non-metric ULR algorithm, which generalizes the basic ULR algorithm to work with uncalibrated images.
* A time-dependent ULR algorithm, which generalizes the basic ULR algorithm to work with time-dependent data.
This paper presents our attempt to explore and utilize High Dynamic Range Images (HDRI) as a method to generate photorealistic renderings, particularly in local 3D architectural visualizations. First, we present an overview of HDRI: its definition, applications, and advantages. For the purpose of the experiment, a database of HDR images has been collected and generated. It consists of indoor and outdoor images, which have been converted to HDRI using special software known as HDRIshop. An experiment has been carried out to compare the results of HDRI rendering and normal (traditional) rendering in terms of quality, rendering time, and memory usage. The results show that HDRI rendering has major advantages over normal rendering: it produces more realistic renderings with less rendering time and memory usage. Finally, the authors conclude on the rendering quality and the effectiveness of the methods discussed and suggest some recommendations to further explore a...
A high dynamic range rendering pipeline for interactive applications
The Visual Computer, 2010
Realistic images can be computed at interactive frame rates for Computer Graphics applications. Meanwhile, High Dynamic Range (HDR) rendering has growing success in video games and virtual reality applications, as it improves image quality and the player’s feeling of immersion. In this paper, we propose a new method, based on a physical lighting model, to compute HDR illumination of virtual environments in real time. Our method allows existing virtual environments to be re-used as input and computes HDR images in photometric units. From these HDR images, displayable 8-bit images are rendered with a tone mapping operator and shown on a standard display device. The HDR computation and the tone mapping are implemented in OpenSceneGraph with pixel shaders. The lighting model, together with a perceptual tone mapping, improves the perceptual realism of the rendered images at low cost. The method is illustrated with a practical application where the dynamic range of the virtual environment is a key rendering issue: night-time driving simulation.
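The tone mapping stage of such a pipeline, compressing HDR luminance in photometric units into a displayable 8-bit range, can be sketched with a generic global Reinhard-style operator. This is a stand-in for the paper's perceptual operator, not its actual method; the key value `a` and all names are assumptions.

```python
import numpy as np

def reinhard_tonemap(lum, a=0.18, eps=1e-6):
    """Global Reinhard-style tone mapping of HDR luminance to 8-bit.

    lum: array of scene luminances (arbitrary positive units);
    a: the "key", i.e. the target mid-gray of the mapped image.
    """
    lum = np.asarray(lum, dtype=np.float64)
    # Log-average luminance summarizes the scene's overall brightness.
    l_avg = np.exp(np.mean(np.log(lum + eps)))
    l_scaled = a * lum / l_avg
    l_display = l_scaled / (1.0 + l_scaled)   # maps [0, inf) into [0, 1)
    return np.round(255.0 * l_display).astype(np.uint8)
```

The compressive x / (1 + x) curve guarantees that arbitrarily bright radiances still fit the display range, which is why such operators work even for extreme scenes like night-time driving with oncoming headlights.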