Local Ambient Occlusion in Direct Volume Rendering

Interactive Volume Rendering with Dynamic Ambient Occlusion and Color Bleeding

Computer Graphics Forum, 2008

We propose a method for rendering volumetric data sets at interactive frame rates while supporting dynamic ambient occlusion as well as an approximation to color bleeding. In contrast to ambient occlusion approaches for polygonal data, techniques for volumetric data sets face additional challenges: changing rendering parameters, such as the transfer function or thresholding, can drastically alter the structure of the data set and thus the light interactions. Therefore, during a preprocessing step that is independent of the rendering parameters, we capture light interactions for all combinations of structures extractable from a volumetric data set. To compute the light interactions between the different structures, we combine this preprocessed information during rendering based on the rendering parameters defined interactively by the user. Thus our method supports interactive exploration of a volumetric data set while still giving the user control over the most important rendering parameters. For instance, if the user alters the transfer function to extract different structures from a volumetric data set, the light interactions between the extracted structures are captured in the rendering while interactive frame rates are maintained. Compared to known local illumination models for volume rendering, our method does not introduce any substantial rendering overhead and can be integrated easily into existing volume rendering applications. In this paper we explain our approach, discuss the implications for interactive volume rendering, and present the achieved results.

A Directional Occlusion Shading Model for Interactive Direct Volume Rendering

Computer Graphics Forum, 2009

Volumetric rendering is widely used to examine 3D scalar fields from scanners and direct numerical simulation datasets. One key aspect of volumetric rendering is the ability to provide shading cues to aid in understanding structure contained in the datasets. While shading models that reproduce natural lighting conditions have been shown to better convey depth information and spatial relationships, they traditionally require considerable (pre-)computation. In this paper, we propose a novel shading model for interactive direct volume rendering that provides perceptual cues similar to those of ambient occlusion, for both solid and transparent surface-like features. An image-space occlusion factor is derived from the radiative transport equation based on a specialized phase function. Our method does not rely on any precomputation and thus allows for interactive exploration of volumetric data sets via on-the-fly editing of the shading model parameters or (multi-dimensional) transfer functions. Unlike ambient occlusion methods, modifications to the volume, such as clipping planes or changes to the transfer function, are incorporated into the resulting occlusion-based shading.

Figure 1: From left to right: a) Visible male data set with occlusion of solid and transparent materials (3.4 FPS, 996 slices); b) CT scan of an engine block where a clipping plane was used to show the exhaust port (13.3 FPS, 679 slices); c) Bonsai data set whose complex features are exposed by our ambient occlusion approximation.

Estimation of Volume Rendering Efficiency with GPU in a Parallel Distributed Environment

Procedia Computer Science, 2013

Visualization methods for medical imagery based on volumetric data constitute a fundamental tool for medical diagnosis, training and pre-surgical planning. Often, large volume sizes and/or the complexity of the required computations present serious obstacles to reaching higher levels of realism and real-time performance. Performance and efficiency are two critical aspects of traditional algorithms based on complex lighting models. To overcome these problems, this paper presents PD-Render intra, a volume rendering algorithm for individual networked nodes in a parallel distributed architecture with a single GPU per node. The implemented algorithm is able to achieve photorealistic rendering as well as a high signal-to-noise ratio at interactive frame rates. Experiments show excellent results in terms of efficiency and performance for rendering medical volumes in real time.

GPU-based volume rendering for medical imagery

We present a method for fast volume rendering using graphics hardware (GPU). To our knowledge, it is the first such implementation on the GPU. Based on the Shear-Warp algorithm, our GPU-based method provides real-time frame rates and outperforms the CPU-based implementation. When the number of slices is not sufficient, we add in-between slices computed by interpolation, which then improves the quality of the rendered images. We have also implemented the ray marching algorithm on the GPU. The results generated by the three algorithms (CPU-based and GPU-based Shear-Warp, GPU-based Ray Marching) for two test models have shown that the ray marching algorithm outperforms the shear-warp methods in terms of speed-up and image quality.
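The abstract above compares shear-warp with ray marching. As a rough illustration of the per-ray work ray marching involves, here is a minimal CPU sketch of front-to-back emission-absorption compositing; the function name and the simple scalar-to-RGBA transfer-function interface are our own illustration, not taken from the paper:

```python
import numpy as np

def march_ray(samples, transfer_function, step_size=1.0):
    """Front-to-back compositing of scalar samples along one ray.

    transfer_function maps a scalar sample to an (r, g, b, alpha) tuple.
    Returns the composited color and accumulated opacity.
    """
    color = np.zeros(3)
    alpha = 0.0
    for s in samples:
        r, g, b, a = transfer_function(s)
        # Opacity correction so results stay consistent across step sizes.
        a = 1.0 - (1.0 - a) ** step_size
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
    return color, alpha
```

A GPU fragment-shader version performs the same loop per pixel, sampling a 3D texture along the view ray instead of iterating over a Python list.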

Progressive fast volume rendering for medical images

Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, 2001

There are various 3D visualization methods such as volume rendering and surface rendering. Volume rendering (VR) is a useful tool to visualize 3D medical images. However, the large amount of computation required makes it difficult for VR to be used in real-time medical applications. To overcome this, we have developed a progressive VR (PVR) method that performs a low-resolution VR pass for fast and intuitive processing and uses the depth information from the low-resolution pass to generate the full-resolution VR image with reduced computation time. The developed algorithm is applicable to real-time VR applications, i.e., the low-resolution VR is performed interactively as the view direction changes, and the full-resolution VR is performed once the view direction is fixed. In this paper its computational complexity and image quality are analyzed, and an extension of its progressive refinement is introduced.
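One way to read the progressive scheme described above: a cheap low-resolution pass records where each ray first hits the volume, and the full-resolution pass reuses those depths as ray starting points. A small sketch under that interpretation (the function names, the fixed upsampling factor, and the depth-scaling convention are our assumptions, not the paper's):

```python
import numpy as np

def first_hit_depth(ray, threshold):
    """Index of the first sample at or above threshold along one ray (-1 if none)."""
    hits = np.nonzero(np.asarray(ray) >= threshold)[0]
    return int(hits[0]) if hits.size else -1

def progressive_start_depths(lowres_depths, factor):
    """Upsample a low-resolution depth map so full-resolution rays can start
    marching near the surface found in the cheap first pass.  Depths are
    scaled by `factor`, assuming the sampling step shrinks accordingly."""
    d = np.asarray(lowres_depths)
    return np.repeat(np.repeat(d, factor, axis=0), factor, axis=1) * factor
```

Rays then march only from the upsampled start depth onward, which is where the claimed reduction in computation time would come from.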

Hardware-Accelerated Dynamic Volume Rendering for Real-Time Surgical Simulation

2004

We developed a direct volume rendering technique that supports low-latency real-time visual feedback in parallel with physical simulation on commodity graphics platforms. In our approach, a fast approximation of the diffuse shading equation is computed on the fly by the graphics pipeline directly from the scalar data. We do this by exploiting the possibilities offered by multi-texturing with the register combiner OpenGL extension, which provides a configurable means to determine per-pixel fragment coloring. The effectiveness of our approach, which supports a full decoupling of simulation and rendering, is demonstrated in a training system for temporal bone surgery.
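The math behind on-the-fly diffuse shading from scalar data is a gradient estimate used as a surface normal plus a Lambertian term. The paper maps this onto register combiners; here is only the underlying computation as a hedged CPU sketch (function names and the central-difference choice are ours):

```python
import numpy as np

def central_gradient(vol, x, y, z):
    """Normalized central-difference gradient, used as the shading normal."""
    g = np.array([
        vol[x + 1, y, z] - vol[x - 1, y, z],
        vol[x, y + 1, z] - vol[x, y - 1, z],
        vol[x, y, z + 1] - vol[x, y, z - 1],
    ], dtype=float) * 0.5
    n = np.linalg.norm(g)
    return g / n if n > 0 else g

def diffuse_shade(vol, x, y, z, light_dir, base_color):
    """Lambertian diffuse term evaluated directly from the scalar volume."""
    n = central_gradient(vol, x, y, z)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    lambert = max(0.0, float(np.dot(n, l)))
    return lambert * np.asarray(base_color, dtype=float)
```

On the GPU the gradient can be formed from neighboring texture fetches and the dot product evaluated per fragment, which is what makes the approximation cheap enough for real-time feedback.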

Real-Time Rendering of Temporal Volumetric Data on a GPU

2011 15th International Conference on Information Visualisation, 2011

Real-time rendering of static volumetric data is generally known to be a memory- and computationally intensive process. With the advance of graphics hardware, especially the GPU, it is now possible to do this on desktop computers. However, with the evolution of real-time CT and MRI technologies, rendering time-varying volumetric data poses two even bigger challenges. The first is how to reduce the data transmission between main memory and graphics memory. The second is how to efficiently take advantage of the time redundancy that exists in time-varying volumetric data. We propose an optimized compression scheme that exploits the time redundancy as well as the space redundancy of time-varying volumetric data. The compressed data is then transmitted to graphics memory and directly rendered by the GPU, significantly reducing the data transfer between main memory and graphics memory. Volume rendering has been studied extensively for many years. Generally, there are five optical models for volume rendering: Absorption Only [7][2], Emission Only [8], Emission-Absorption [3][1], Single Scattering with Shadowing [14][4][11][9], and Multiple Scattering [4]. Single and multiple scattering calculations are computationally expensive and hence not yet suitable for real-time rendering.
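The time-redundancy idea can be illustrated with a simple delta scheme: store the first frame as a keyframe and later frames as sparse voxel differences, so only changed voxels cross the bus. This is our own minimal illustration of the principle, not the paper's actual compression scheme:

```python
import numpy as np

def encode_temporal(frames, threshold=0):
    """Keyframe plus per-frame sparse voxel deltas (lossless when threshold=0)."""
    key = frames[0].copy()
    deltas = []
    prev = key
    for f in frames[1:]:
        diff = f.astype(np.int32) - prev.astype(np.int32)
        idx = np.nonzero(np.abs(diff) > threshold)
        deltas.append((idx, diff[idx]))  # only changed voxels are stored
        prev = f
    return key, deltas

def decode_temporal(key, deltas):
    """Rebuild the frame sequence by applying the deltas in order."""
    frames = [key.copy()]
    cur = key.astype(np.int32)
    for idx, vals in deltas:
        cur = cur.copy()
        cur[idx] += vals
        frames.append(cur.astype(key.dtype))
    return frames
```

When consecutive CT/MRI frames differ in only a small region, the delta lists are far smaller than full volumes, which is the transfer saving the abstract targets.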

Several approaches for improvement of the Direct Volume Rendering in scientific and medical visualization

This paper presents Direct Volume Rendering (DVR) improvement strategies that provide opportunities for scientific and medical visualization not available to the same degree in comparable systems: 1) multi-volume rendering in a single space of up to 3 volumetric datasets defined in different coordinate systems, with sizes of up to 512x512x512 16-bit values; 2) performing the above in real time on a middle-class GPU, e.g. an nVidia GeForce GTS 250 512 MB; 3) a custom bounding mesh for more accurate selection of the desired region in addition to the clipping bounding box; 4) simultaneous use of a number of visualization techniques including shaded Direct Volume Rendering via 1D or 2D transfer functions, multiple semi-transparent discrete iso-surface visualization, MIP, and MIDA. The paper discusses how these new properties affect the implementation of the DVR. In the DVR implementation we use optimization strategies such as early ray termination and empty space skipping. The clipping ability is also used as an empty-space-skipping approach to improve rendering performance. We use random ray start position generation and subsequent frame accumulation to reduce rendering artifacts. Rendering quality can also be improved by on-the-fly tri-cubic filtering during the rendering process. Our framework supports 4 different stereoscopic visualization modes. Finally, we outline the visualization performance in terms of frame rates for different visualization techniques on different graphics cards.
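Early ray termination and empty space skipping, the two classic optimizations named above, can be sketched in one dimension: precomputed per-block maximum opacity lets whole transparent blocks be skipped, and accumulation stops once the ray is nearly opaque. The function name, block size, and opacity-callback interface are our illustrative assumptions:

```python
import numpy as np

def march_with_optimizations(samples, opacity, block=8, eps=0.99):
    """March one ray's samples, skipping empty blocks and stopping early.

    `samples` holds the scalar data along the ray; `opacity` maps an array
    of scalars to per-sample alpha values (a 1-D transfer function).
    Returns the accumulated opacity and the number of samples shaded.
    """
    alphas = opacity(np.asarray(samples, dtype=float))
    acc = 0.0
    steps = 0
    i = 0
    while i < len(alphas):
        chunk = alphas[i:i + block]
        if chunk.max() == 0.0:      # empty space skipping: whole block transparent
            i += block
            continue
        for a in chunk:
            steps += 1
            acc += (1.0 - acc) * a  # front-to-back opacity accumulation
            if acc >= eps:          # early ray termination: ray nearly opaque
                return acc, steps
        i += block
    return acc, steps
```

In a real renderer the per-block maxima come from a precomputed min-max octree or summary volume rather than from inspecting the samples on the fly.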

Volumetric ambient occlusion for volumetric models

The Visual Computer, 2010

This paper presents new algorithms to compute ambient occlusion for volumetric data. Ambient occlusion is used in video games and film animation to mimic the indirect lighting of global illumination. We extend a novel interpretation of ambient occlusion to volumetric models that measures how big a portion of the tangent sphere of a surface belongs to the set of occluded points, and we propose statistically robust estimates for the ambient occlusion value. The data needed by this estimate can be obtained by separable filtering of the voxel array. As ambient occlusion approximates global illumination effects, it can provide decisive clues for interpreting the data. The new algorithms produce smooth shading and can be computed at interactive rates, thus being appropriate for exploring dynamic models.
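The key practical point of the abstract is that the occlusion estimate reduces to separable filtering of the voxel array. As a hedged sketch of that idea (a plain box filter over a binary occupancy volume, not the paper's tangent-sphere estimator):

```python
import numpy as np

def ambient_occlusion(occupancy, radius=2):
    """Approximate occlusion as the occupied fraction of each voxel's
    neighborhood, computed with three 1-D box-filter passes (one per axis)."""
    occ = occupancy.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in range(3):
        occ = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, occ)
    return occ  # 0 = fully open, approaching 1 = heavily occluded
```

Because each pass is a 1-D convolution, the cost grows linearly with the filter radius instead of cubically, which is what makes recomputing the estimate at interactive rates plausible for dynamic models.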

Global illumination models for volume rendering

1994

The increasing demand for realistic images has led to the development of several global illumination models and rendering techniques. Great effort has been taken to extend these illumination models and optimize the rendering techniques to produce more realistic images in less time. Despite this effort, most of these methods are designed for scenes consisting of geometric surface descriptions, and cannot directly render volumetric data. Volumetric data sets can be rendered using volume rendering techniques that, in order to decrease rendering times and therefore increase interactivity, typically employ only a local illumination model. The development of global illumination models and rendering techniques for volumetric data is the focus of this work. These illumination methods can be used to generate realistic images of scenes containing volumetric as well as geometric data. The volumetric global illumination methods can be employed by a visualization system in order to add intuitive...