LCIS: a boundary hierarchy for detail-preserving contrast reduction

Edge-Preserving Decompositions for Multi-Scale Tone and Detail Manipulation

ACM Transactions on Graphics, 2008

[Figure caption: Multi-scale tone manipulation. Left: input image (courtesy of Norman Koren, www.normankoren.com). Middle: results of exaggerated detail boosting at three different spatial scales. Right: final result, combining a somewhat milder detail enhancement at all three scales.]

LCIS: a boundary hierarchy for detail-preserving contrast reduction

Proceedings of the 26th annual conference on Computer graphics and interactive techniques - SIGGRAPH '99, 1999

High contrast scenes are difficult to depict on low contrast displays without loss of important fine details and textures. Skilled artists preserve these details by drawing scene contents in coarse-to-fine order using a hierarchy of scene boundaries and shadings. We build a similar hierarchy using multiple instances of a new low curvature image simplifier (LCIS), a partial differential equation inspired by anisotropic diffusion. Each LCIS reduces the scene to many smooth regions that are bounded by sharp gradient discontinuities, and a single parameter K chosen for each LCIS controls region size and boundary complexity. With a few chosen K values (K1, K2, K3, ...), LCIS makes a set of progressively simpler images, and image differences form a hierarchy of increasingly important details, boundaries and large features.
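
As a rough illustration of the decomposition this abstract describes, the sketch below runs classic Perona-Malik diffusion (a simpler relative of LCIS; the paper's actual PDE adds low-curvature terms) at several K values and takes image differences as detail layers. The conductance function, the K values (which assume intensities in [0, 1]), the iteration count, and the periodic borders from np.roll are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

def perona_malik(img, K, n_iter=100, dt=0.2):
    """Anisotropic diffusion: smooths regions whose gradients fall below K
    while leaving sharper boundaries intact. Larger K -> simpler image."""
    u = img.astype(np.float64).copy()
    g = lambda d: np.exp(-(d / K) ** 2)  # conductance: ~0 across strong edges
    for _ in range(n_iter):
        # Differences to the four neighbors (np.roll wraps at the borders,
        # which is acceptable for a sketch).
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        u += dt * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)
    return u

def detail_hierarchy(img, Ks=(0.02, 0.05, 0.1)):
    """Run the simplifier at increasing K; consecutive differences form
    detail layers and the last image is the base: img == base + sum(details)."""
    levels = [img.astype(np.float64)] + [perona_malik(img, K) for K in Ks]
    details = [levels[i] - levels[i + 1] for i in range(len(levels) - 1)]
    return levels[-1], details
```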

Rendering surface details with diffusion curves

ACM Transactions on Graphics, 2009

Diffusion curve images (DCI) provide a powerful tool for efficient 2D image generation, storage and manipulation. A DCI consists of curves with colors defined on either side. By diffusing these colors over the image, the final result includes sharp boundaries along the curves with smoothly shaded regions between them. This paper extends the application of diffusion curves to render high quality surface details on 3D objects. The first extension is a view-dependent warping technique that dynamically reallocates texture space so that object parts that appear large on screen get more texture for increased detail. The second extension is a dynamic feature embedding technique that retains crisp, anti-aliased curve details even in extreme close-ups. The third extension is the application of dynamic feature embedding to displacement mapping and geometry images. Our results show high quality renderings of diffusion curve textures, displacements, and geometry images, all rendered interactively.
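
A minimal sketch of the core diffusion step, assuming a single fixed color per curve pixel and a plain Jacobi solver; real diffusion curve renderers store distinct colors on each side of a curve and use fast multigrid GPU solvers, none of which is modeled here.

```python
import numpy as np

def diffuse_colors(h, w, constraints, n_iter=2000):
    """Diffuse colors from curve pixels over the image by Jacobi-iterating
    the Laplace equation; `constraints` maps (row, col) -> RGB color."""
    img = np.zeros((h, w, 3))
    fixed = np.zeros((h, w), dtype=bool)
    for (r, c), color in constraints.items():
        img[r, c] = color
        fixed[r, c] = True
    for _ in range(n_iter):
        # Each free pixel becomes the average of its four neighbors;
        # constrained pixels stay fixed, producing sharp boundaries there.
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img = np.where(fixed[..., None], img, avg)
    return img

# Toy usage: red along the top row, blue along the bottom, smooth blend between.
cons = {(0, c): (1.0, 0.0, 0.0) for c in range(64)}
cons.update({(63, c): (0.0, 0.0, 1.0) for c in range(64)})
out = diffuse_colors(64, 64, cons)
```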

A Texture-Based Appearance Preserving Level of Detail Algorithm for Real-time Rendering of High Quality Images

2011

In this paper we present an appearance preserving Level of Detail (LOD) algorithm that treats attributes as maps on surfaces. Using this approach it is possible to render high quality images of extremely complex models at high frame rates. It is based on the Hierarchical Dynamic Simplification (HDS) framework for LOD proposed by Luebke and Erikson [LE97]. Our algorithm consists of a pre-process and a runtime phase. In the pre-process, each object of a scene is segmented into so-called components by normal clustering. A component represents a set of nearly coplanar and partially non-adjacent triangles that do not occlude each other for a proper view. For each component, textures containing color and normal information are generated with hardware acceleration. Furthermore, a mapping between object space and texture space is determined for each component. This allows a dynamic, hardware-accelerated computation of texture coordinates at runtime. Finally, a dynamic multiresolution representation of ...
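
The sketch below illustrates only the normal-clustering idea from the pre-process, assuming plain k-means on unit triangle normals; the paper's component construction additionally enforces the non-occlusion condition and generates the hardware-accelerated textures, both omitted here.

```python
import numpy as np

def triangle_normals(verts, faces):
    """Unit normal per triangle; `faces` holds vertex indices, one row each."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def normal_cluster(normals, k=6, n_iter=20, seed=0):
    """k-means on the unit sphere: triangles whose normals point the same
    way (nearly coplanar surfaces) end up in the same component."""
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), k, replace=False)].copy()
    for _ in range(n_iter):
        labels = np.argmax(normals @ centers.T, axis=1)  # max cosine similarity
        for i in range(k):
            members = normals[labels == i]
            if len(members):
                m = members.mean(axis=0)
                centers[i] = m / np.linalg.norm(m)  # re-center on the sphere
    return labels
```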

Pixelated image abstraction

2012

We present an automatic method that can be used to abstract high resolution images into very low resolution outputs with reduced color palettes in the style of pixel art. Our method simultaneously solves for a mapping of features and a reduced palette needed to construct the output image. The results are an approximation to the results generated by pixel artists. We compare our method against the results of a naive process common to image manipulation programs, as well as the hand-crafted work of pixel artists.
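
For context, here is a sketch of the kind of "naive process" the authors compare against, assuming box-filter downsampling followed by k-means palette quantization; the paper's method instead solves jointly for the feature mapping and the palette, which this baseline does not.

```python
import numpy as np

def naive_pixelate(img, out_h, out_w, n_colors=8, n_iter=10, seed=0):
    """Box-filter downsample, then k-means palette quantization."""
    img = img.astype(np.float64)
    h, w, _ = img.shape
    # Average the pixels falling into each output cell (crop any remainder).
    small = img[:h - h % out_h, :w - w % out_w].reshape(
        out_h, h // out_h, out_w, w // out_w, 3).mean(axis=(1, 3))
    pixels = small.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    palette = pixels[rng.choice(len(pixels), n_colors, replace=False)].copy()
    for _ in range(n_iter):
        # Assign every cell to its nearest palette color, then re-center.
        d = np.linalg.norm(pixels[:, None] - palette[None], axis=2)
        labels = d.argmin(axis=1)
        for i in range(n_colors):
            if (labels == i).any():
                palette[i] = pixels[labels == i].mean(axis=0)
    return palette[labels].reshape(out_h, out_w, 3), palette
```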

Detail-Preserving Contrast Reduction for Still Cameras

2006

This paper describes a detail-preserving contrast reduction technique for images with excessive illumination variation. To preserve detail, an image is first separated into its illumination and reflectance components by a partial differential equation. The illumination component is then scaled to globally reduce contrast. This enhances the perception of detail in the dark areas, but at the expense of detail in the bright areas. The proposed approach minimizes this loss using a novel recombination strategy. Additionally, the algorithm preserves colors, avoids halo artifacts, and can be embedded into existing still-camera pipelines.
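
A minimal homomorphic sketch of the separate-scale-recombine pipeline, with a Gaussian filter standing in for the paper's PDE-based separation and plain exponentiation standing in for its recombination strategy; gamma and sigma are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reduce_contrast(luminance, gamma=0.5, sigma=30):
    """Separate L = I * R in the log domain, compress the illumination I,
    and recombine with the detail-carrying reflectance R left intact."""
    log_l = np.log(np.maximum(luminance, 1e-6))
    log_i = gaussian_filter(log_l, sigma)  # smooth illumination estimate
    log_r = log_l - log_i                  # reflectance = detail layer
    # gamma < 1 shrinks the global illumination range only.
    return np.exp(gamma * log_i + log_r)
```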

Image Abstraction Using Anisotropic Diffusion Symmetric Nearest Neighbor Filter

Advances in Multimedia Information Processing – PCM 2014, Lecture Notes in Computer Science vol. 8879, pp. 343–352, 2014

Image abstraction is an increasingly important task in various multimedia applications. It involves the artificial transformation of photorealistic images into cartoon-like images. To simplify image content, the bilateral and Kuwahara filters remain popular choices to date. However, these methods often produce undesirable over-blurring effects and are highly susceptible to the presence of noise. In this paper, we propose an image abstraction technique that balances region smoothing and edge preservation. The coupling of a classic Symmetric Nearest Neighbor (SNN) filter with anisotropic diffusion within our abstraction framework enables effective suppression of local patch artifacts. Our qualitative and quantitative evaluations demonstrate the significant appeal and advantages of our technique in comparison to standard filters in the literature.
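
A grayscale sketch of the Symmetric Nearest Neighbor filter at the heart of this framework: from each symmetric pair of neighbors, the value closer to the center pixel is kept, and the picks are averaged. The coupling with anisotropic diffusion described in the paper is not shown; the radius and reflect-padding are assumptions.

```python
import numpy as np

def snn_filter(img, radius=2):
    """Symmetric Nearest Neighbor filter on a grayscale image."""
    h, w = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode='reflect')
    center = pad[radius:radius + h, radius:radius + w]
    # Enumerate half the window; each offset pairs with its point reflection.
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
               for dx in range(-radius, radius + 1)
               if dy > 0 or (dy == 0 and dx > 0)]
    acc = np.zeros((h, w))
    for dy, dx in offsets:
        p = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        q = pad[radius - dy:radius - dy + h, radius - dx:radius - dx + w]
        # Keep the pair member closer to the center pixel: this averages
        # within the region the center belongs to, not across an edge.
        acc += np.where(np.abs(p - center) <= np.abs(q - center), p, q)
    return acc / len(offsets)
```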

An improved anisotropic diffusion model for detail- and edge-preserving smoothing

Pattern Recognition Letters, 2010

It is important in image restoration to remove noise while preserving meaningful details such as blurred thin edges and low-contrast fine features. Existing edge-preserving smoothing methods may mistake fine details for noise, or vice versa. In this paper, we propose a new edge-preserving smoothing technique based on a modified anisotropic diffusion. The proposed method can simultaneously preserve edges and fine details while filtering out noise in the diffusion process. Classical anisotropic diffusion models consider only the gradient information of a diffused pixel, and cannot preserve detailed features with low gradient. Since fine details generally have larger local gray-level variance than the noisy background, the proposed diffusion model incorporates both local gradient and gray-level variance to preserve edges and fine details while effectively removing noise. Experimental results on a variety of test samples, including shoulder patch images, medical images and artwork images, show that the proposed anisotropic diffusion scheme effectively smooths the noisy background while preserving edges and fine details in the restored image.
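
The sketch below shows one plausible way to couple local gradient and local gray-level variance in the diffusion coefficient, in the spirit of this abstract; the paper's exact coefficient differs, and alpha, the window size, the variance normalization, and the assumed [0, 1] intensity range are all illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detail_aware_diffusion(img, K=0.05, alpha=2.0, n_iter=50, dt=0.2):
    """Diffusion whose conductance shrinks where the gradient OR the local
    gray-level variance is large, so low-gradient fine detail (high local
    variance) is protected while the flatter noisy background is smoothed."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        mean = uniform_filter(u, 5)
        var = np.maximum(uniform_filter(u * u, 5) - mean ** 2, 0.0)
        var_term = var / (var.max() + 1e-12)  # normalize to [0, 1]
        flux = np.zeros_like(u)
        for ax in (0, 1):
            for s in (1, -1):
                d = np.roll(u, s, axis=ax) - u  # neighbor difference
                g = np.exp(-(d / K) ** 2) * np.exp(-alpha * var_term)
                flux += g * d
        u += dt * flux
    return u
```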

Image-Driven Simplification

ACM Transactions on Graphics (ToG), 2000

We introduce the notion of image-driven simplification, a framework that uses images to decide which portions of a model to simplify. This is a departure from approaches that make polygonal simplification decisions based on geometry. As with many methods, we use the edge collapse operator to make incremental changes to a model. Unique to our approach, however, is the use of comparisons between images of the original model against those of a simplified model to determine the cost of an edge collapse. We use common graphics rendering hardware to accelerate the creation of the required images. As expected, this method produces models that are close to the original model according to image differences. Perhaps more surprising, however, is that the method yields models that have high geometric fidelity as well. Our approach also solves the quandary of how to weight the geometric distance versus appearance properties such as normals, color, and texture. All of these trade-offs are balanced by the image metric. Benefits of this approach include high fidelity silhouettes, extreme simplification of hidden portions of a model, attention to shading interpolation effects, and simplification that is sensitive to the content of a texture. In order to better preserve the appearance of textured models, we introduce a novel technique for assigning texture coordinates to the new vertices of the mesh. This method is based on a geometric heuristic that can be integrated with any edge collapse algorithm to produce high quality textured surfaces.
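
A sketch of the image-space collapse cost under simple assumptions: the renderings are stood in for by plain numpy arrays, whereas the paper rasterizes the model before and after a candidate collapse from multiple viewpoints on graphics hardware.

```python
import numpy as np

def collapse_cost(views_before, views_after):
    """Image-space cost of a candidate edge collapse: summed squared pixel
    differences between renderings of the model before and after the
    collapse, over a fixed set of viewpoints."""
    return sum(float(np.sum((a.astype(np.float64) - b.astype(np.float64)) ** 2))
               for a, b in zip(views_before, views_after))

# Toy usage with synthetic 'renderings'; a real pipeline would rasterize the
# mesh from several cameras (the paper accelerates this on graphics hardware).
rng = np.random.default_rng(0)
before = [rng.random((64, 64)) for _ in range(3)]
after = [v + 0.01 * rng.standard_normal((64, 64)) for v in before]
print(collapse_cost(before, after))
```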

Perceptually optimized image rendering

Journal of the Optical Society of America A, 2017

We develop a framework for rendering photographic images, taking into account display limitations, so as to optimize perceptual similarity between the rendered image and the original scene. We formulate this as a constrained optimization problem, in which we minimize a measure of perceptual dissimilarity, the Normalized Laplacian Pyramid Distance (NLPD), which mimics the early-stage transformations of the human visual system. When rendering images acquired with higher dynamic range than that of the display, we find that the optimized solution boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods with no manual intervention or parameter settings. We also examine a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). Finally, we show that the method may be used to enhance details and contrast of images degraded by optical scattering (e.g. fog).
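
A much-simplified sketch of a normalized-Laplacian-pyramid-style distance: Gaussian blur/downsample pyramids with each band divided by its local amplitude, a crude stand-in for the paper's divisive normalization. The filter widths, level count, and eps are illustrative assumptions, and the paper additionally optimizes the rendered image to minimize this distance under display constraints, which is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(img, levels=4):
    """Band-pass detail images plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels - 1):
        low = gaussian_filter(cur, 1.0)
        pyr.append(cur - low)   # detail at this scale
        cur = low[::2, ::2]     # downsample for the next level
    pyr.append(cur)             # low-pass residual
    return pyr

def nlpd_like(img_a, img_b, levels=4, eps=0.1):
    """Compare two images band by band after dividing each band by its
    local amplitude (a crude stand-in for divisive normalization)."""
    total = 0.0
    for a, b in zip(laplacian_pyramid(img_a, levels),
                    laplacian_pyramid(img_b, levels)):
        na = a / (gaussian_filter(np.abs(a), 2.0) + eps)
        nb = b / (gaussian_filter(np.abs(b), 2.0) + eps)
        total += np.sqrt(np.mean((na - nb) ** 2))  # RMS per band
    return total / levels
```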