Interactive spatio-temporal exploration of massive time-varying rectilinear scalar volumes based on a variable bit-rate sparse representation over learned dictionaries

Interactive rendering of large volume data sets

2002

We present a new algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware. The algorithm accepts scalar data sampled on a regular grid as input. The input data is converted into a compressed hierarchical wavelet representation in a preprocessing step. During rendering, the wavelet representation is decompressed on the fly and rendered using hardware texture mapping. The level of detail used for rendering is adapted to the local frequency spectrum of the data and its position relative to the viewer. Using a prototype implementation of the algorithm, we were able to perform an interactive walkthrough of large data sets such as the Visible Human on a single off-the-shelf PC.
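The paper's pipeline builds a compressed multiresolution hierarchy offline and reconstructs levels on demand. As an illustrative sketch only (the paper uses a true hardware-friendly wavelet codec, not this exact scheme), the following builds a Laplacian-pyramid-style hierarchy where small detail coefficients are zeroed, which is where the lossy compression comes from; all function names and the threshold parameter are hypothetical:

```python
import numpy as np

def pyramid_level(vol):
    """One decomposition level: a half-resolution approximation (low-pass)
    plus the detail coefficients the coarse level cannot reproduce."""
    lo = vol.reshape(vol.shape[0] // 2, 2, vol.shape[1] // 2, 2,
                     vol.shape[2] // 2, 2).mean(axis=(1, 3, 5))
    up = np.repeat(np.repeat(np.repeat(lo, 2, 0), 2, 1), 2, 2)
    return lo, vol - up

def build_hierarchy(vol, levels=3, threshold=0.01):
    """Lossy multiresolution pyramid: zeroing small details sparsifies each
    level, making the representation highly compressible."""
    pyramid = []
    for _ in range(levels):
        vol, detail = pyramid_level(vol)
        detail[np.abs(detail) < threshold] = 0.0  # discard weak details
        pyramid.append(detail)
    pyramid.append(vol)  # coarsest approximation last
    return pyramid

def reconstruct(pyramid):
    """Decompress on the fly: start coarse, add details level by level."""
    vol = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        vol = np.repeat(np.repeat(np.repeat(vol, 2, 0), 2, 1), 2, 2) + detail
    return vol
```

A renderer would stop the reconstruction early for bricks far from the viewer, which is the level-of-detail adaptation the abstract describes.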

Large volume visualization of compressed time-dependent datasets on GPU clusters

Parallel Computing, 2005

We describe a system for the texture-based direct volume visualization of large data sets on a PC cluster equipped with GPUs. The data is partitioned into volume bricks in object space, and the intermediate images are combined into a final picture in a sort-last approach. Hierarchical wavelet compression is applied to increase the effective size of volumes that can be handled. An adaptive rendering mechanism takes into account the viewing parameters and the properties of the data set to adjust the texture resolution and number of slices. We discuss the specific issues of this adaptive and hierarchical approach in the context of a distributed-memory architecture and present corresponding solutions. Furthermore, our compositing scheme takes into account the footprints of volume bricks to minimize the costs of framebuffer reads, network communication, and blending. A detailed performance analysis is provided for several network, CPU, and GPU architectures, and the scaling characteristics of the parallel system are discussed. For example, our tests on an 8-node AMD64 cluster with InfiniBand show a rendering speed of 6 frames per second for a 2048×1024×1878 data set on a 1024² viewport.
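Object-space bricking plus a view-dependent compositing order is the core of this design. A minimal sketch, assuming nothing about the paper's actual brick size or compositing network (the function names and the simplified origin-based depth key are my own), of how bricks are cut and ordered back-to-front for sort-last blending:

```python
import numpy as np

def brick_volume(vol, brick=(64, 64, 64)):
    """Partition a volume into object-space bricks, as a cluster renderer
    might distribute them across nodes. Returns (origin, data) pairs."""
    bx, by, bz = brick
    bricks = []
    for x in range(0, vol.shape[0], bx):
        for y in range(0, vol.shape[1], by):
            for z in range(0, vol.shape[2], bz):
                bricks.append(((x, y, z), vol[x:x+bx, y:y+by, z:z+bz]))
    return bricks

def back_to_front(bricks, view_dir):
    """Order bricks for sort-last compositing: farthest along the view
    direction first (a simplification; a real system sorts by brick center
    and clips each brick's screen footprint before blending)."""
    return sorted(bricks,
                  key=lambda b: -np.dot(np.asarray(b[0], float), view_dir))
```

The footprint optimization the abstract mentions would restrict the framebuffer read and network transfer for each brick to its projected bounding rectangle rather than the full viewport.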

An Exploratory Technique for Coherent Visualization of Time-varying Volume Data

Computer Graphics Forum, 2010

The selection of an appropriate global transfer function is essential for visualizing time-varying simulation data. This is especially challenging when the global data range is not known in advance, as is often the case in remote and in-situ visualization settings. Since the data range may vary dramatically as the simulation progresses, volume rendering using local transfer functions may not be coherent for all time steps. We present an exploratory technique that enables coherent classification of time-varying volume data. Unlike previous approaches, which require pre-processing of all time steps, our approach lets the user explore the transfer function space without accessing the original 3D data. This is useful for interactive visualization, and absolutely essential for in-situ visualization, where the entire simulation data range is not known in advance. Our approach generates a compact representation of each time step at rendering time in the form of ray attenuation functions, which are used for subsequent operations on the opacity and color mappings. The presented approach offers interactive exploration of time-varying simulation data that alleviates the cost associated with reloading and caching large data sets.
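The key idea is a compact per-ray proxy generated at render time, so transfer-function edits never touch the original 3D data. As a rough stand-in for the paper's ray attenuation functions (which encode more than this), the sketch below summarizes each ray as a histogram of sampled scalar values and re-evaluates opacity from that proxy alone; the names, bin count, and the order-independent approximation are all my assumptions:

```python
import numpy as np

def ray_proxy(samples, bins=64, vrange=(0.0, 1.0)):
    """Compact per-ray summary built during rendering: a histogram of the
    scalar values sampled along the ray. Returns counts and bin centers."""
    hist, edges = np.histogram(samples, bins=bins, range=vrange)
    return hist, 0.5 * (edges[:-1] + edges[1:])

def reclassify(hist, centers, opacity_tf):
    """Approximate the ray's accumulated opacity under a NEW transfer
    function using only the proxy. Ignoring sample order makes this an
    approximation, as any order-independent summary must be."""
    alpha = opacity_tf(centers)                   # per-bin opacity
    transparency = np.prod((1.0 - alpha) ** hist) # product over all samples
    return 1.0 - transparency
```

With such a proxy, the user can drag opacity widgets interactively even when the simulation data lives on a remote machine or has already been discarded in situ.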

Real-Time Rendering of Temporal Volumetric Data on a GPU

2011 15th International Conference on Information Visualisation, 2011

Real-time rendering of static volumetric data is generally known to be a memory- and computationally intensive process. With advances in graphics hardware, especially the GPU, it is now possible on desktop computers. However, with the evolution of real-time CT and MRI technologies, volumetric rendering poses an even bigger challenge. The first problem is how to reduce data transmission between main memory and graphics memory. The second is how to efficiently exploit the temporal redundancy that exists in time-varying volumetric data. We propose an optimized compression scheme that exploits the temporal as well as spatial redundancy of time-varying volumetric data. The compressed data is then transmitted to graphics memory and rendered directly by the GPU, significantly reducing the data transfer between main memory and graphics memory.
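The temporal-redundancy idea can be made concrete with a simple key-frame-plus-delta scheme: successive time steps of a simulation differ little, so quantized frame differences are mostly zero and compress well. This is an illustrative sketch, not the paper's codec (the quantization depth, delta layout, and use of zlib are my assumptions):

```python
import numpy as np
import zlib

def compress_sequence(frames, quant=256):
    """Store the first frame as a key frame and later frames as quantized
    temporal deltas; coherence between time steps makes deltas near-zero."""
    key = np.round(frames[0] * (quant - 1)).astype(np.int16)
    packets = [zlib.compress(key.tobytes())]
    prev = key
    for f in frames[1:]:
        cur = np.round(f * (quant - 1)).astype(np.int16)
        packets.append(zlib.compress((cur - prev).tobytes()))
        prev = cur
    return packets

def decompress_sequence(packets, shape, quant=256):
    """Invert the scheme: decode the key frame, then accumulate deltas."""
    frames, prev = [], None
    for i, p in enumerate(packets):
        arr = np.frombuffer(zlib.decompress(p), dtype=np.int16).reshape(shape)
        prev = arr if i == 0 else prev + arr
        frames.append(prev / (quant - 1))
    return frames
```

Because only the small delta packets cross the bus each frame, the main-to-graphics-memory traffic the abstract worries about shrinks accordingly.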

Rendering of Feature-rich Dynamically Changing Volumetric Datasets on GPU

Procedia Computer Science, 2014

Interactive photo-realistic representation of dynamic liquid volumes is a challenging task for today's GPUs and state-of-the-art visualization algorithms. Methods of the last two decades consider either static volumetric datasets, applying several optimizations to volume ray casting, or dynamic volumetric datasets with rough approximations to realistic rendering. Nevertheless, accurate real-time visualization of dynamic datasets is crucial in scientific visualization as well as in areas demanding accurate rendering of feature-rich datasets. An accurate and thus realistic visualization of such datasets leads to new challenges: due to restrictions on computational performance, the datasets may be relatively small compared to the screen resolution, so each voxel has to be rendered highly oversampled. With our volumetric datasets based on a real-time lattice Boltzmann fluid simulation creating dynamic cavities and small droplets, existing real-time implementations are not applicable for realistic surface extraction. This work presents a volume tracing algorithm capable of producing multiple refractions that is also robust to small droplets and cavities. Furthermore, we show the advantages of our volume tracing algorithm compared to other implementations.

Interactive rendering of volumetric data sets

Computers & Graphics, 1995

The BELA architecture for interactive rendering of regularly structured volumetric data sets is presented. The proposed architecture is scalable and uses custom processors to achieve high-speed shading, projection, and composition of voxel primitives. A general-purpose image composition network supports the accumulation of both volumetric and geometric elements into the final rendered scene. Data access contentions between processors are eliminated via an enhanced dual object-space and image-space partitioning scheme that does not require replication or redistribution of rendered data. The BELA architecture is intended for rendering large data sets and meets the performance requirements of a full-frame interactive image generation system.

Dynamic acceleration structures for the visualization of time-dependent volume data on the GPU

2015

Volume rendering of extensive 3D data requires much computing power, and doing it efficiently has been a long-standing challenge. A common solution is to build a data structure that rendering can take advantage of; among other purposes, such structures are built for empty-space skipping and adaptive sampling. Empty-space skipping is the main acceleration strategy in this work. An octree and a kd-tree are constructed and analyzed with respect to construction and rendering performance. Time-dependent data often come from numerical simulations and show similarities between successive time instances; these similarities can be exploited to accelerate construction. The method and consequences of this exploitation are discussed. Naive volume rendering is conceptually well suited to parallel computing. On the other hand, both the construction and the traversal of hierarchical data structures do not seem to agree with the parallel nature of GPUs. Still, implementation techniques to bypass GPU-specific...
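Empty-space skipping with a min/max octree works by culling whole subtrees whose value range misses the transfer function's opaque interval. A minimal CPU sketch of the idea (not the thesis's GPU implementation; the class and function names, the leaf size, and the uniform-brick early-out are my own choices):

```python
import numpy as np

class OctreeNode:
    """Min/max octree over a cubic, power-of-two volume. Each node stores
    the value range of its brick so whole subtrees can be culled."""
    def __init__(self, vol, origin=(0, 0, 0)):
        self.origin = origin
        self.size = vol.shape[0]
        self.vmin, self.vmax = float(vol.min()), float(vol.max())
        self.children = []
        if self.size > 4 and self.vmin != self.vmax:  # stop at small/uniform bricks
            h = self.size // 2
            for dx in (0, h):
                for dy in (0, h):
                    for dz in (0, h):
                        sub = vol[dx:dx+h, dy:dy+h, dz:dz+h]
                        o = (origin[0]+dx, origin[1]+dy, origin[2]+dz)
                        self.children.append(OctreeNode(sub, o))

def visible_leaves(node, lo, hi):
    """Collect leaf bricks overlapping the opaque value range [lo, hi];
    everything else is empty space a ray caster can skip."""
    if node.vmax < lo or node.vmin > hi:
        return []  # entire subtree transparent: skip
    if not node.children:
        return [(node.origin, node.size)]
    return [leaf for c in node.children for leaf in visible_leaves(c, lo, hi)]
```

For time-dependent data, the temporal coherence the abstract mentions would let one update only the nodes whose min/max changed between steps instead of rebuilding the tree.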

Compression and visualization of large and animated volume data

2004

An important technique in interactively exploring these data sets is direct volume rendering. To achieve interactive frame rates, general-purpose graphics hardware is used nowadays. This approach has two advantages. First, the central processing unit (CPU) is free for other processing, such as on-the-fly decompression. Second, the graphics processing unit (GPU) is by now a lot faster when it comes to volume rendering. To store volume data on disc, usually either the raw data or a simply compressed version is used.

1.1.4 Structured Grids

Structured grids are usually encountered in physical simulations where both a regular connectivity and adaptation to local details are necessary. While physical simulations mostly use curvilinear grids (see Figure 1.3), i.e. regular grids that have been deformed with a continuous function, structured grids can always be described as regular grids with arbitrary sample point positions. In order to render these grids using graphics hardware, the grid cells have to be traversed in either front-to-back or back-to-front order and processed individually. Since the structure is regular and the data set only consists of hexahedral cells, the sorting can be done very efficiently. The rendering of a single cell can either be done directly or by splitting each hexahedron into 5 or 6 tetrahedra.

Introducing higher-order lighting terms, i.e. scattering, increases the realism of the resulting images but also dramatically increases the rendering costs. Besides the increased realism, there are very few additional details to be discovered, so most rendering schemes will only use a local light source for shading the voxel:

κ_p g(s, n) = g_a(s) + g_d(s) · l_d(n) + g_s(s) · l_s(n)

where n is the gradient-derived normal and g_a, g_d, g_s are the ambient, diffuse, and specular components of the transfer function. The dependency between g_d(s) and n can also be replaced by any other nearly linear mapping, i.e. fake shading or even a very smooth cube map or light map.
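The ambient/diffuse/specular decomposition of local voxel shading described in the excerpt can be sketched directly. This is an illustrative evaluation only (the parameter names, the Lambertian diffuse term, and the simplified specular exponent are my assumptions, not the thesis's exact formulation):

```python
import numpy as np

def shade_voxel(s, normal, light_dir, transfer,
                ambient=0.2, diffuse=0.7, specular=0.1, shininess=16):
    """Local illumination of one voxel: a transfer-function color g(s)
    modulated by ambient, diffuse l_d(n), and specular l_s(n) terms."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    color = np.asarray(transfer(s), float)       # g(s): color from transfer function
    l_d = max(np.dot(n, l), 0.0)                 # Lambertian diffuse term
    l_s = max(np.dot(n, l), 0.0) ** shininess    # simplified specular term
    return color * (ambient + diffuse * l_d) + specular * l_s
```

Since every term depends only on the sample value and a local normal, this shading maps naturally onto per-fragment GPU evaluation, which is exactly why local light sources dominate in practice.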

Interactive visualization of very large datasets using an out-of-core point-based approach

2003

We present an out-of-core, point-based approach for interactive rendering of very large volumetric datasets. Our approach is based on the assumption that the density of voxels with the same function-value in large discretized volumetric scalar fields is high enough to be used to render contour and volume approximations using points to represent the voxels. This approach allows us to visualize isovalue-structures in high-resolution datasets at full resolution and interactive frame rates.
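The point-based premise is that voxels near a given function value are dense enough that splatting them as points approximates the contour. A toy sketch of the selection step (the out-of-core streaming and splatting are omitted; the function name and tolerance parameter are hypothetical):

```python
import numpy as np

def iso_points(vol, isovalue, tol=0.5):
    """Collect voxel coordinates whose scalar value lies within tol of the
    isovalue; drawn as points, they approximate the isosurface without any
    explicit geometry extraction."""
    return np.argwhere(np.abs(vol - isovalue) <= tol)  # (N, 3) coordinates
```

An out-of-core implementation would run this selection per disk-resident block, so only the matching points, not the full volume, ever reach memory.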

VDB: High-Resolution Sparse Volumes with Dynamic Topology

ACM Transactions on Graphics, 2013

We have developed a novel hierarchical data structure for the efficient representation of sparse, time-varying volumetric data discretized on a 3D grid. Our "VDB", so named because it is a Volumetric, Dynamic grid that shares several characteristics with B+trees, exploits spatial coherency of time-varying data to separately and compactly encode data values and grid topology. VDB models a virtually infinite 3D index space that allows for cache-coherent and fast data access into sparse volumes of high resolution. It imposes no topology restrictions on the sparsity of the volumetric data, and it supports fast (average O(1)) random access patterns when the data are inserted, retrieved, or deleted. This is in contrast to most existing sparse volumetric data structures, which assume either static or manifold topology and require specific data access patterns to compensate for slow random access. Since the VDB data structure is fundamentally hierarchical, it also facilitates adaptive grid sampling, and the inherent acceleration structure leads to fast algorithms that are well-suited for simulations. As such, VDB has proven useful for several applications that call for large, sparse, animated volumes, for example, level set dynamics and cloud modeling. In this article, we showcase some of these algorithms and compare VDB with existing, state-of-the-art data structures.
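The flavor of VDB's design, separately encoded topology and values with average O(1) random access over an unbounded index space, can be conveyed with a drastically simplified two-level sketch (the real structure is a deeper B+tree-like hierarchy with bit masks and cached accessors; the class name, tile size, and hash-map top level here are my own simplifications):

```python
class SparseGrid:
    """Two-level sparse grid: a hash map of dense 8x8x8 leaf tiles over a
    virtually unbounded 3D index space. Only non-background regions allocate
    storage; unset voxels read as the background value."""
    LEAF = 8

    def __init__(self, background=0.0):
        self.background = background
        self.tiles = {}  # tile coordinate -> dense 8^3 payload

    def _split(self, i, j, k):
        # Floor division/modulo keep negative indices consistent.
        t = (i // self.LEAF, j // self.LEAF, k // self.LEAF)
        o = (i % self.LEAF, j % self.LEAF, k % self.LEAF)
        return t, o

    def set(self, i, j, k, value):
        t, (x, y, z) = self._split(i, j, k)
        tile = self.tiles.setdefault(t, [[[self.background] * self.LEAF
                                          for _ in range(self.LEAF)]
                                         for _ in range(self.LEAF)])
        tile[x][y][z] = value

    def get(self, i, j, k):
        t, (x, y, z) = self._split(i, j, k)
        tile = self.tiles.get(t)
        return self.background if tile is None else tile[x][y][z]
```

Unlike this sketch, VDB also tracks active-voxel masks per node, which is what enables the fast topology operations and adaptive sampling the abstract highlights.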