Perceptually optimized image rendering
Related papers
Perceptual rendering of participating media
ACM Transactions on Applied Perception, 2007
High-fidelity image synthesis is the process of computing images that are perceptually indistinguishable from the real scenes they portray. Such a level of fidelity requires that the physical behavior of materials and light be accurately simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. However, in many applications we also need to take into account how light interacts with media between the surfaces, such as dust, smoke, or fog. The computational requirements for calculating the interaction of light with such participating media are substantial: the process can take many hours, and rendering effort is often spent on parts of the scene that the viewer may never perceive. In this paper, we present a novel perceptual strategy for physically based rendering of participating media. By combining a saliency map with our new extinction map (X map), we can significantly reduce rendering times for inhomogeneous media. The visual quality of the resulting images is validated using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors reported by these metrics are all below 1%, the subjective validation indicates that the degradation in quality is still noticeable for certain scenes. We therefore introduce and validate a novel light map (L map) that accounts for salient features caused by multiple light scattering around light sources.
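The extinction map the abstract refers to can be pictured as a per-pixel transmittance image obtained by marching rays through the medium. The sketch below is a minimal illustration under that reading; the function name, the callable sigma_t, and the fixed step count are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def extinction_map(sigma_t, ray_origins, ray_dirs, t_max, n_steps=64):
        """Per-ray transmittance through a participating medium.

        sigma_t: callable mapping (N, 3) world positions to extinction
        coefficients; ray_origins/ray_dirs: (N, 3) arrays. Returns
        T = exp(-integral of sigma_t along the ray), so low T marks
        optically thick regions.
        """
        dt = t_max / n_steps
        tau = np.zeros(ray_origins.shape[0])
        for i in range(n_steps):
            p = ray_origins + (i + 0.5) * dt * ray_dirs  # midpoint samples
            tau += sigma_t(p) * dt                        # accumulate optical depth
        return np.exp(-tau)

Pixels with near-unit transmittance see little of the medium, so a renderer can safely spend fewer scattering samples there.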
A perceptually based physical error metric for realistic image synthesis
1999
We introduce a new concept for accelerating realistic image synthesis algorithms. At the core of this procedure is a novel physical error metric that correctly predicts the perceptual threshold for detecting artifacts in scene features. Built into this metric is a computational model of the human visual system's loss of sensitivity at high background illumination levels, high spatial frequencies, and high contrast levels (visual masking). An important feature of our model is that it handles luminance-dependent processing and spatially-dependent processing independently. This allows us to precompute the expensive spatially-dependent component, making our model extremely efficient.
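A minimal sketch of the factorization the abstract describes, assuming the metric is a per-pixel threshold formed as the product of a luminance term and a precomputed spatial term; the TVI curve and its constants below are placeholders, not the paper's calibrated model.

    import numpy as np

    def tvi(adaptation_luminance):
        # Hypothetical threshold-vs-intensity curve: detection threshold
        # rises roughly linearly with background luminance (Weber's law)
        # above ~10 cd/m^2 and flattens toward darkness.
        L = np.asarray(adaptation_luminance)
        return np.where(L > 10.0, 0.02 * L, 0.2)

    def perceptual_threshold(adaptation_luminance, spatial_elevation):
        """Detectable-error threshold per pixel.

        Because the luminance term (tvi) and the spatial term (masking
        elevation from frequencies/contrast) are treated independently,
        spatial_elevation can be precomputed once from a cheap estimate
        image and reused while luminance changes per iteration.
        """
        return tvi(adaptation_luminance) * spatial_elevation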
Efficient selective rendering of participating media
2006
Realistic image synthesis is the process of computing photorealistic images that are perceptually and measurably indistinguishable from real-world images. Obtaining high-fidelity rendered images requires that the physical processes of materials and the behavior of light be accurately modelled and simulated. Most computer graphics algorithms assume that light passes freely between surfaces within an environment. However, in many applications, ranging from the evaluation of exit signs in smoke-filled rooms to the design of efficient headlamps for foggy driving, realistic modelling of light propagation and scattering is required. The computational requirements for calculating the interaction of light with such participating media are substantial: the process can take many minutes or even hours, and rendering effort is often spent on parts of the scene that will not be perceived by the viewer. In this paper we present a novel perceptual strategy for physically based rendering of participating media. By combining a saliency map with our new extinction map (X-map), we can significantly reduce rendering times for inhomogeneous media. We also validate the visual quality of the resulting images using two objective difference metrics and a subjective psychophysical experiment. Although the average pixel errors reported by these metrics are all less than 1%, the experiment with human observers indicates that the degradation in quality is still noticeable in certain scenes, contrary to what previous work has suggested.
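One way to read the selective-rendering idea is as a per-pixel ray budget driven jointly by the saliency map and the X-map; the blend below is an illustrative heuristic, not the paper's exact weighting.

    import numpy as np

    def rays_per_pixel(saliency, x_map, base=1, extra=32):
        """Allocate a ray budget from a saliency map and an extinction map.

        saliency in [0, 1]: predicted visual attention per pixel.
        x_map in [0, 1]: transmittance from the extinction map; low values
        mean a dense medium whose in-scattering needs more samples.
        """
        importance = np.clip(saliency * (1.0 - x_map), 0.0, 1.0)
        return base + np.round(extra * importance).astype(int)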
An interactive perceptual rendering pipeline using contrast and spatial masking
Rendering …, 2007
We present a new perceptual rendering pipeline which takes into account visual masking due to contrast and spatial frequency. Our framework predicts inter-object, scene-level masking caused by partial occlusion and shadows. It is designed for interactive applications and runs efficiently on the GPU. This is achieved using a layer-based approach together with an efficient GPU-based computation of threshold maps. We build upon this prediction framework to introduce a perceptually-based level of detail control algorithm. ...
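The level-of-detail control could plausibly reduce to comparing each object's predicted screen-space error against the threshold map over its layer; the sketch below assumes that data layout and is written on the CPU for clarity, whereas the paper's pipeline runs on the GPU.

    def select_lod(levels, threshold):
        """Pick the coarsest level of detail whose predicted screen-space
        error stays below the perceptual threshold for the object's layer.

        levels: list of (lod_id, predicted_error) sorted coarse-to-fine;
        threshold: value read from the threshold map over the object's
        footprint. Both are assumptions about the data layout.
        """
        for lod_id, error in levels:  # coarse to fine
            if error <= threshold:
                return lod_id
        return levels[-1][0]  # fall back to the finest level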
Perception-driven Accelerated Rendering
Computer Graphics Forum
Advances in computer graphics enable us to create digital images of astonishing complexity and realism. However, processing resources are still a limiting factor. Hence, many costly but desirable aspects of realism are often not accounted for, including global illumination, accurate depth of field and motion blur, and spectral effects, especially in real-time rendering. At the same time, there is a strong trend towards more pixels per display due to larger displays, higher pixel densities, or larger fields of view. Further observable trends in current display technology include more bits per pixel (high dynamic range, wider color gamut/fidelity), increasing refresh rates (better motion depiction), and an increasing number of displayed views per pixel (stereo, multi-view, all the way to holographic or light-field displays). These developments pose significant unsolved technical challenges, owing to limited compute power and bandwidth among other factors. Fortunately, the human visual system has certain limitations, which mean that providing the highest possible visual quality is not always necessary. In this report, we present the key research and models that exploit the limitations of perception to tackle visual quality and workload alike. Moreover, we present open problems and promising future research directed at the question of how we can minimize the effort to compute and display only the necessary pixels while still offering the user a full visual experience.
A Progressive Rendering Algorithm Using an Adaptive Perceptually Based Image Metric
Computer Graphics Forum, 2004
In this paper, we propose to solve the global illumination problem through a progressive rendering method relying on adaptive sampling of the image space. The refinement of this sampling scheme is driven by an image metric based on a powerful vision model. A Delaunay triangulation of the sampled points is followed by a classification of the resulting triangles into three classes. By interpolating each triangle according to the class it belongs to, we obtain a high-quality image while computing only a fraction of all the pixels, thus saving computation time.
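A hedged sketch of the triangulate-and-classify step, using scipy's Delaunay triangulation; the two thresholds and the metric callable are assumptions standing in for the paper's vision-model-based image metric.

    import numpy as np
    from scipy.spatial import Delaunay

    def classify_triangles(points, values, metric, t_lo=0.25, t_hi=0.75):
        """Triangulate sparse image samples and sort triangles into three
        classes by a perceptual metric, as the abstract outlines.

        points: (N, 2) array of pixel positions already rendered;
        values: per-sample radiance; metric: callable scoring a
        triangle's visual error. Class 0 triangles are flat enough to
        interpolate directly, class 2 triangles need further sampling.
        """
        tri = Delaunay(points)
        classes = []
        for simplex in tri.simplices:  # indices of the 3 vertices
            score = metric(points[simplex], values[simplex])
            classes.append(0 if score < t_lo else (1 if score < t_hi else 2))
        return tri, np.array(classes)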
Perceptual quality of BRDF approximations: dataset and metrics
2021
Bidirectional Reflectance Distribution Functions (BRDFs) are pivotal to the perceived realism in image synthesis. While measured BRDF datasets are available, reflectance functions are usually approximated by analytical formulas for storage efficiency. These approximations are often obtained by minimizing metrics such as L2 (or weighted quadratic) distances, but such metrics do not usually correlate well with perceptual quality when the BRDF is used in a rendering context, which motivates a perceptual study. The contributions of this paper are threefold. First, we perform a large-scale user study to assess the perceptual quality of 2026 BRDF approximations, resulting in 84138 judgments across 1005 unique participants. We explore this dataset and analyze perceptual scores based on material type and illumination. Second, we assess nine analytical BRDF models in their ability to approximate tabulated BRDFs. Third, we assess several image-based and BRDF-based (Lp, optimal...
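As one concrete instance of the weighted quadratic distances mentioned above, a cosine-weighted L2 metric between tabulated BRDFs might look like the following; the specific weighting choice is an assumption, not necessarily the paper's formulation.

    import numpy as np

    def weighted_l2(brdf_a, brdf_b, cos_theta_in, cos_theta_out):
        """Cosine-weighted L2 distance between two tabulated BRDFs.

        brdf_a/brdf_b: arrays of reflectance values over matching sampled
        direction pairs; cos_theta_in/out: cosines of the incoming and
        outgoing elevation angles. Weighting by the cosines downweights
        grazing angles, which contribute little to rendered images.
        """
        w = cos_theta_in * cos_theta_out
        return np.sqrt(np.mean(w * (brdf_a - brdf_b) ** 2))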
2011
In this paper we present an appearance-preserving level-of-detail (LOD) algorithm that treats attributes as maps on surfaces. Using this approach it is possible to render high-quality images of extremely complex models at high frame rates. It is based on the Hierarchical Dynamic Simplification (HDS) framework for LOD proposed by Luebke and Erikson [LE97]. Our algorithm consists of a pre-process and a runtime phase. In the pre-process, each object of a scene is segmented into so-called components by normal clustering. A component represents a set of nearly coplanar and partially non-adjacent triangles that do not occlude each other from an appropriate view. For each component, textures containing color and normal information are generated with hardware acceleration. Furthermore, a mapping between object space and texture space is determined for each component, which allows dynamic, hardware-accelerated computation of texture coordinates at runtime. Finally, a dynamic multiresolution representation of ...
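Normal clustering, as used in the pre-process, can be sketched as a greedy grouping of triangles whose normals fall within a common cone; the cone angle and the greedy strategy below are illustrative stand-ins for the paper's segmentation.

    import numpy as np

    def cluster_by_normal(face_normals, cone_angle_deg=30.0):
        """Greedy normal clustering into near-coplanar components.

        face_normals: (N, 3) array of unit normals. Each triangle joins
        the first cluster whose representative normal lies within
        cone_angle_deg; otherwise it seeds a new cluster.
        """
        cos_limit = np.cos(np.radians(cone_angle_deg))
        reps, labels = [], np.empty(len(face_normals), dtype=int)
        for i, n in enumerate(face_normals):
            for c, r in enumerate(reps):
                if np.dot(n, r) >= cos_limit:
                    labels[i] = c
                    break
            else:
                labels[i] = len(reps)  # no cluster fits: start a new one
                reps.append(n)
        return labels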
LCIS: a boundary hierarchy for detail-preserving contrast reduction
Annual Conference on Computer Graphics, 1999
High-contrast scenes are difficult to depict on low-contrast displays without loss of important fine details and textures. Skilled artists preserve these details by drawing scene contents in coarse-to-fine order, using a hierarchy of scene boundaries and shadings. We build a similar hierarchy using multiple instances of a new low curvature image simplifier (LCIS), a partial differential equation inspired by anisotropic diffusion. Each LCIS reduces the scene to many smooth regions that are bounded by sharp gradient discontinuities, and a single parameter K chosen for each LCIS controls region size and boundary complexity. With a few chosen K values (K1, K2, K3, ...), LCIS makes a set of progressively simpler images, and image differences form a hierarchy of increasingly important details, boundaries, and large features.
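LCIS itself is a higher-order PDE, but a feel for the role of K can be had from the second-order Perona-Malik diffusion that inspired it; the sketch below is that simpler scheme, not LCIS.

    import numpy as np

    def diffusion_step(img, K, lam=0.2):
        """One step of Perona-Malik anisotropic diffusion.

        K plays the same role as in the abstract: gradients above K are
        treated as boundaries and preserved, smaller ones are smoothed.
        lam <= 0.25 keeps the explicit update stable.
        """
        # forward differences to the four neighbors
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        g = lambda d: 1.0 / (1.0 + (d / K) ** 2)  # conductance: ~0 at sharp edges
        return img + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)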
Image-Based Rendering for Non-Diffuse Synthetic Scenes
Rendering Techniques ’98, 1998
Most current image-based rendering methods operate under the assumption that all of the visible surfaces in the scene are opaque ideal diffuse (Lambertian) reflectors. This paper is concerned with image-based rendering of non-diffuse synthetic scenes. We introduce a new family of image-based scene representations and describe corresponding image-based rendering algorithms that are capable of handling general synthetic scenes containing not only diffuse reflectors, but also specular and glossy objects. Our image-based representation is based on layered depth images. It represents simultaneously and separately both view-independent scene information and view-dependent appearance information. The view-dependent information may be either extracted directly from our data structures or evaluated procedurally using an image-based analogue of ray tracing. We describe image-based rendering algorithms that recombine the two components in a manner that produces a good approximation to the correct image from any viewing position. In addition to extending image-based rendering to non-diffuse synthetic scenes, our paper has an important methodological contribution: it places image-based rendering, light field rendering, and volume graphics in a common framework of discrete raster-based scene representations.
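A layered depth image can be sketched as a pixel grid whose entries each hold a depth-ordered list of surface samples, with view-independent and view-dependent appearance stored separately as the abstract describes; the field names below are illustrative assumptions, not the paper's exact representation.

    from dataclasses import dataclass, field

    @dataclass
    class LDISample:
        depth: float
        color: tuple          # view-independent (diffuse) radiance
        view_dependent: dict  # e.g. glossy/specular parameters, kept separate

    @dataclass
    class LayeredDepthImage:
        """Minimal layered depth image: every pixel keeps all surface
        samples along its ray, not just the nearest one, so the scene
        can be re-rendered from nearby viewpoints."""
        width: int
        height: int
        pixels: list = field(default_factory=list)  # one sample list per pixel

        def __post_init__(self):
            self.pixels = [[] for _ in range(self.width * self.height)]

        def add_sample(self, x, y, sample):
            cell = self.pixels[y * self.width + x]
            cell.append(sample)
            cell.sort(key=lambda s: s.depth)  # keep front-to-back order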