A Progressive Rendering Algorithm Using an Adaptive Perceptually Based Image Metric

A Perceptual Adaptive Image Metric for Computer Graphics

2004

This paper presents two contributions: a new, simple color vision model and an adaptive way to compute an image metric based on a vision model. Metrics are very useful in computer graphics; applications include perceptually based rendering and image comparison for photorealism. Usual vision-model-based metrics make expensive use of memory and CPU resources, mainly for two reasons. First, the vision model is a pipeline of nonlinear functions applied to a multi-scale decomposition of the image. Second, the model is computed for every single pixel of the picture. In this paper, we design a very simple mono-scale vision model that takes into account many perceptual issues such as masking effects and adaptation. We also propose an adaptive approach to distance computation: the image-plane sampling scheme is designed to be denser where the distance variation is greater. This method is usable with any vision model and only uses two parameters, making it very easy to configure. By combining it with our simp...
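
The abstract describes refining the image-plane sampling where the perceptual distance varies most, controlled by two parameters. As a rough illustration (not the paper's actual algorithm), the sketch below uses a quadtree-style subdivision driven by a hypothetical `dist_fn(x, y)` that evaluates the vision-model distance at a pixel; the two tuning parameters are the minimum block size and the variation threshold.

```python
import numpy as np

def adaptive_distance_map(dist_fn, width, height, min_block=4, var_threshold=0.05):
    """Hypothetical sketch: evaluate a perceptual distance dist_fn(x, y) on a sparse,
    adaptive grid that is refined only where the distance varies strongly."""
    result = np.zeros((height, width))

    def refine(x0, y0, x1, y1):
        # Sample the four corners of the current block.
        corners = [dist_fn(x, y) for x, y in ((x0, y0), (x1 - 1, y0),
                                              (x0, y1 - 1), (x1 - 1, y1 - 1))]
        variation = max(corners) - min(corners)
        small = (x1 - x0) <= min_block or (y1 - y0) <= min_block
        if variation <= var_threshold or small:
            # Distance is locally smooth: fill the block with the corner average.
            result[y0:y1, x0:x1] = sum(corners) / 4.0
        else:
            # Strong variation: split into four sub-blocks and recurse.
            xm, ym = (x0 + x1) // 2, (y0 + y1) // 2
            refine(x0, y0, xm, ym); refine(xm, y0, x1, ym)
            refine(x0, ym, xm, y1); refine(xm, ym, x1, y1)

    refine(0, 0, width, height)
    return result
```

In this kind of scheme the expensive vision model is evaluated only at block corners, with the sampling converging toward pixel resolution only in regions where the metric changes quickly.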

Selective rendering: Computing only what you see

Proceedings - GRAPHITE 2006: 4th International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia, 2006

The computational requirements of a full physically-based global illumination solution are significant, currently precluding its computation on even a powerful modern PC in reasonable, let alone real, time.

Interactive diffuse global illumination discretization methods for dynamic environments

2012

This research has been conducted in the context of computer graphics, in particular in the field of realistic image synthesis. The goal of this research is to develop new algorithms that improve the quality of the illumination in large, fully dynamic, complex environments or accelerate the expensive computation of existing algorithms, producing illumination effects that are as realistic as possible in shorter computation time.

Perception-driven Accelerated Rendering

Computer Graphics Forum

Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, spectral effects, etc., especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or lightfield displays). These developments cause significant unsolved technical challenges due to aspects such as limited compute power and bandwidth. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present the open problems and promising future research targeting the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.

Efficient Approximations for Global Illumination

"An Approximate Global Illumination System for Computer Generated Films" presents for the first time a complete rendering framework with Global Illumination capabilities used to produce Shrek 2. Although this article does not introduce significant innovations for the research area, the introduced framework allows artists to easily incorporate Global Illumination effects and quickly gives them visual feedback. To achieve this amazing feat, Dreamworks and PDI have modified existing acceleration techniques and made geometrical and physical simplifications. This document reviews them and presents alternative or complementary published techniques that may also be used as efficient simplifications.

A Texture-Based Appearance Preserving Level of Detail Algorithm for Real-time Rendering of High Quality Images

2011

In this paper we present an appearance preserving Level of Detail (LOD) algorithm that treats attributes as maps on surfaces. Using this approach it is possible to render high quality images of extremely complex models at high frame rates. It is based on the Hierarchical Dynamic Simplification (HDS) framework for LOD proposed by Luebke and Erikson [LE97]. Our algorithm consists of a pre-process and a runtime phase. In the pre-process, each object of a scene is segmented into so-called components by normal clustering. A component represents a set of nearly coplanar and partially non-adjacent triangles that do not occlude each other from an appropriate viewpoint. For each component, textures containing color and normal information are generated in a hardware-accelerated way. Furthermore, a mapping between object space and texture space is determined for each component. This allows dynamic, hardware-accelerated computation of texture coordinates at runtime. Finally, a dynamic multiresolution representation of ...
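
The pre-process segments each object into components by normal clustering. The paper does not spell out the clustering in this excerpt, so the following is only a minimal sketch of one plausible greedy variant: triangles are assigned to a component whose representative normal lies within a fixed cone angle of their own face normal, keeping each component nearly coplanar.

```python
import numpy as np

def cluster_by_normal(normals, cone_angle_deg=30.0):
    """Hypothetical sketch of normal clustering: group triangle indices into
    components whose face normals lie within a common cone, so each component
    is nearly coplanar and can later be captured in a single color/normal texture."""
    cos_limit = np.cos(np.radians(cone_angle_deg))
    components = []                         # list of (representative_normal, triangle_indices)
    for idx, n in enumerate(normals):
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        for rep, members in components:
            if np.dot(rep, n) >= cos_limit: # normal falls inside this component's cone
                members.append(idx)
                break
        else:
            components.append((n, [idx]))   # start a new component
    return components
```

The cone angle is an assumed tuning parameter here; tighter cones yield flatter components at the cost of more texture atlases.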

A perceptual stopping condition for global illumination computations

Proceedings of the 23rd Spring Conference on Computer Graphics - SCCG '07, 2007

The aim of realistic image synthesis is to produce high fidelity images that authentically represent real scenes. As these images are produced for human observers, we may exploit the fact that not everything is perceived when viewing a scene with our eyes. Thus, it is clear that taking advantage of the limited capacity of the human visual system (HVS) can significantly contribute to optimizing rendering software.

A study of incremental update of global illumination algorithms

2007

Global illumination solutions provide a very accurate representation of illumination. However, they are usually costly to calculate. In the common case of a quasi-static scenario, in which most of the scene is static and only a few objects move, most of the illumination can be reused from previous frames, yielding increased performance. This article studies theoretically the performance of global illumination algorithms for the case of interactive recalculation of quasi-static scenes, concentrating on the Density Estimation on the Tangent Plane algorithm, although the study is applicable to other techniques. The results are validated empirically with a test scene. Guidelines are given to choose the best algorithm for each case.
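
The key idea in the quasi-static setting is to reuse cached illumination for everything the moving objects cannot have affected. As a minimal, purely illustrative sketch (the cache layout, `regions`, `bounds()`, and `recompute_illumination()` are all assumed names, not the paper's API), one frame update might look like this:

```python
def update_illumination(cache, scene, dynamic_objects):
    """Hypothetical sketch of incremental recomputation in a quasi-static scene:
    keep cached illumination for receivers untouched by the moving objects and
    recompute only the entries whose region of influence they intersect."""
    dirty = {rid for rid, region in cache.regions.items()
             if any(region.intersects(obj.bounds()) for obj in dynamic_objects)}
    for rid in dirty:
        cache.values[rid] = scene.recompute_illumination(rid)  # the expensive path
    return len(dirty)   # number of receivers actually recomputed this frame
```

The fraction of dirty receivers per frame is exactly the quantity such a theoretical performance study reasons about: when it stays small, the incremental update dominates a full recomputation.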

Perceptually optimized image rendering

We develop a framework for rendering photographic images, taking into account display limitations, so as to optimize perceptual similarity between the rendered image and the original scene. We formulate this as a constrained optimization problem, in which we minimize a measure of perceptual dissimilarity, the Normalized Laplacian Pyramid Distance (NLPD), which mimics the early-stage transformations of the human visual system. When rendering images acquired with higher dynamic range than that of the display, we find that the optimized solution boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods with no manual intervention or parameter settings. We also examine a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). Finally, we show that the method may be used to enhance details and contrast of images degraded by optical scattering (e.g., fog).
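
NLPD decomposes each image into band-pass channels and divisively normalizes each band by a local amplitude estimate before comparing. The sketch below is a deliberately simplified stand-in (full-resolution bands instead of a true downsampled pyramid, and arbitrary constants), meant only to convey the structure of the distance that the rendering is optimized against; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nlp_bands(img, n_levels=4, sigma=1.0, eps=0.01):
    """Simplified normalized Laplacian-style decomposition (full-resolution bands)."""
    bands = []
    current = np.asarray(img, dtype=float)
    for k in range(n_levels):
        blurred = gaussian_filter(current, sigma * 2**k)
        band = current - blurred                              # band-pass detail at this scale
        norm = gaussian_filter(np.abs(band), sigma * 2**k) + eps
        bands.append(band / norm)                             # divisive normalization
        current = blurred
    bands.append(current)                                     # low-pass residual
    return bands

def nlpd_like_distance(img_a, img_b):
    """Root-mean-square difference over normalized bands (a stand-in for NLPD)."""
    return np.sqrt(np.mean([np.mean((a - b) ** 2)
                            for a, b in zip(nlp_bands(img_a), nlp_bands(img_b))]))
```

The rendered image would then be the display-feasible image (within the luminance range, quantization, and other constraints the abstract lists) that minimizes this distance to the original scene.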

Dynamic Algorithm Selection: a New Approach to the Real- Time Rendering of Complex Scenes Problem

2015

This paper presents a novel approach to the problem of real-time rendering of complex scenes. Up to the present date, a huge number of acceleration techniques have been proposed, although most are geared towards a specific kind of scene. Instead of using a single, or a fixed set of, rendering acceleration techniques, we propose the use of several, and to select the best one based on the current viewpoint. Thus, by dynamically adapting the rendering process to the contents of the scene, it is possible to take advantage of all these techniques when they are best suited, rendering scenes that would otherwise be too complex to display at interactive frame rates. We describe a framework capable of achieving this purpose, consisting of a pre-processor and an interactive rendering engine. The framework is geared towards interactive applications where a complex and large scene has to be rendered at interactive frame rates. Finally, results and performance measures taken from our test implementati...
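
The core runtime decision is picking, per viewpoint, which acceleration technique to use. The following is a minimal sketch under assumptions: `cost_model` (presumed to come from the pre-processing stage) returns a hypothetical record with predicted `time_ms` and `quality` for a technique at a given viewpoint; the selection rule shown is one plausible policy, not necessarily the paper's.

```python
def select_technique(viewpoint, techniques, cost_model, frame_budget_ms):
    """Hypothetical sketch: for the current viewpoint, pick the acceleration
    technique with the best predicted quality that still fits the frame budget."""
    predictions = [(t, cost_model(t, viewpoint)) for t in techniques]
    feasible = [(t, p) for t, p in predictions if p.time_ms <= frame_budget_ms]
    if not feasible:
        # Nothing fits the budget: fall back to the cheapest technique available.
        return min(predictions, key=lambda tp: tp[1].time_ms)[0]
    # Among feasible techniques, prefer the one with the highest predicted quality.
    return max(feasible, key=lambda tp: tp[1].quality)[0]
```

Re-evaluating this choice as the camera moves is what lets the renderer switch techniques exactly where each one pays off.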