Out-of-Core Framework for QEM-based Mesh Simplification

Out-of-Core Simplification of Large Polygonal Models

Proceedings of the 27th annual conference on …, 2000

We present an algorithm for out-of-core simplification of large polygonal datasets that are too complex to fit in main memory. The algorithm extends the vertex clustering scheme of Rossignac and Borrel [13] by using error quadric information for the placement of each cluster's representative vertex, which better preserves fine details and results in a low mean geometric error. The use of quadrics instead of the vertex grading approach in [13] has the additional benefits of requiring less disk space and only a single pass over the model rather than two. The resulting linear-time algorithm allows simplification of datasets of arbitrary complexity. To handle the degenerate quadrics associated with (near-)flat regions and regions of zero Gaussian curvature, we present a robust method for solving the corresponding underconstrained least-squares problem. The algorithm is able to detect these degeneracies and handle them gracefully. Key features of the simplification method include a bounded Hausdorff error, low mean geometric error, high simplification speed (up to 100,000 triangles/second reduction), output (but not input) sensitive memory requirements, no disk space overhead, and a running time that is independent of the order in which vertices and triangles occur in the mesh.
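The placement step can be illustrated with a small sketch. The following Python/NumPy fragment is a minimal rendition of the general idea, not the paper's exact implementation: each triangle contributes an area-weighted plane quadric to its cluster, and the representative vertex minimizes the accumulated quadric, with a truncated-SVD pseudoinverse resolving the degenerate, underconstrained cases. The tolerance `tol` and the centroid fallback are our assumptions.

```python
import numpy as np

def face_quadric(p0, p1, p2):
    """Area-weighted fundamental quadric of a triangle's supporting plane,
    stored as (A, b, c) with error E(x) = x^T A x - 2 b^T x + c."""
    n = np.cross(p1 - p0, p2 - p0)        # normal, length = 2 * area
    area = 0.5 * np.linalg.norm(n)
    if area == 0.0:                        # degenerate triangle: no constraint
        return np.zeros((3, 3)), np.zeros(3), 0.0
    n = n / (2.0 * area)                   # unit normal
    d = -np.dot(n, p0)                     # plane: n . x + d = 0
    return area * np.outer(n, n), -area * d * n, area * d * d

def robust_placement(A, b, fallback, tol=1e-3):
    """Minimize the accumulated quadric by solving A x = b with a truncated
    SVD pseudoinverse; directions with (near-)zero singular values, as in
    flat or zero-Gaussian-curvature clusters, stay at the fallback point."""
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s > tol * max(s[0], 1e-30)      # drop underconstrained directions
    s_inv[keep] = 1.0 / s[keep]
    # x = fallback + A^+ (b - A @ fallback)
    return fallback + Vt.T @ (s_inv * (U.T @ (b - A @ fallback)))
```

Solving relative to a fallback point (here the cluster centroid) means that directions the quadric does not constrain, such as the plane of a flat cluster, simply stay put, which is what makes this style of placement robust on degenerate input.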

A Linear Time Algorithm for High Quality Mesh Simplification

IEEE Sixth International Symposium on Multimedia Software Engineering, 2004

High-resolution 3D range scanning as well as isosurface extraction have introduced densely and uniformly sampled models that are difficult to render at interactive rates. To remove excessive detail and produce meshes of various resolutions for different kinds of applications, the study of fast and high-quality polygonal mesh simplification algorithms has become important. In this paper, we propose a new linear-time algorithm that achieves fast and high-quality mesh simplification. In the new algorithm, we pipeline the cost computation, optimization, and edge collapse, and use a small constant-sized replacement-selection min-heap instead of a large greedy queue to reduce the runtime complexity to linear. Compared to previous work, our new algorithm has at least three advantages: it is runtime efficient, it is memory efficient, and it is capable of generating competitively high-quality output.
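As a rough sketch of the heap component, the generator below implements generic replacement selection, not the paper's exact pipeline; we assume candidates arrive as comparable (cost, edge_id) tuples:

```python
import heapq

def replacement_selection(candidates, heap_size=1024):
    """Emit (cost, edge_id) items in near-ascending cost order using a small
    constant-sized min-heap (classic replacement selection) in place of a
    global priority queue over every edge of the mesh."""
    it = iter(candidates)
    heap = []
    for item in it:                        # fill the fixed-size heap
        heap.append(item)
        if len(heap) == heap_size:
            break
    heapq.heapify(heap)
    deferred = []                          # too cheap for the current run
    for item in it:
        smallest = heapq.heappop(heap)
        yield smallest
        if item >= smallest:
            heapq.heappush(heap, item)     # still fits the current sorted run
        else:
            deferred.append(item)          # would break ordering; defer
    while heap:                            # drain the current run...
        yield heapq.heappop(heap)
    yield from sorted(deferred)            # ...then emit the deferred run
```

Because collapse costs in such a stream tend to be nearly sorted already, the deferred run stays small, and the heap never grows beyond its constant size, which is what keeps the memory footprint flat.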

A comparison of mesh simplification algorithms

1998

In many applications the need for accurate simplification of surface meshes is becoming more and more urgent. This need is driven not only by rendering speed, but also by the fast transmission of 3D models in network-based applications. Many different approaches and algorithms for mesh simplification have been proposed in the last few years. We present a survey and a characterization of the fundamental methods.

A Feature preserved mesh simplification algorithm

Journal of Engineering and Computer Innovations, 2011

Large mesh models pose challenges in rendering, storage, and transmission due to the sheer size of their polygon data. Mesh simplification is one solution for reducing the data size. This paper presents a mesh simplification method for triangle meshes based on feature extraction via curvature estimation. The simplified mesh preserves geometric features well in areas with distinct features: it is coarse in flat regions and fine around creases and corners. The order of simplification operations is controlled by geometric feature sensitivity, which results in a reasonable simplified topology with a smaller data size. The algorithm can substantially decrease file size by aggressively simplifying flat areas while preserving geometric features.
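A common curvature estimate for this kind of feature classification is the angle deficit. The sketch below is the standard discrete Gaussian curvature, not necessarily the paper's exact estimator, and as written it is valid only for interior vertices of a closed mesh (boundary vertices would use pi instead of 2*pi):

```python
import numpy as np

def angle_deficit(vertices, triangles):
    """Discrete Gaussian curvature per vertex: 2*pi minus the sum of
    incident interior angles. Near-zero deficit marks flat regions to
    simplify coarsely; large |deficit| marks creases and corners to keep."""
    deficit = np.full(len(vertices), 2.0 * np.pi)
    for i, j, k in triangles:
        for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
            u = vertices[b] - vertices[a]
            v = vertices[c] - vertices[a]
            cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
            deficit[a] -= np.arccos(np.clip(cos_t, -1.0, 1.0))
    return deficit
```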

Experimental Analysis of QEM Based Mesh Simplification Techniques

In this research study, the effects of mesh simplification on visual quality are examined using the quadric edge collapse decimation method. In this context, we analyze simplifications of various objects by investigating Peak Signal-to-Noise Ratio (PSNR) values, difference images, and compression ratios. Experiments are performed in the MeshLab environment and show that for complex models, the simplification error between the reference and simplified models increases much more than for simpler models. Likewise, higher compression ratios lead to higher simplification error. It can be concluded that the compression ratio affects the error linearly.
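PSNR here is the standard image metric applied to renderings of the two models; a minimal sketch, assuming 8-bit images of identical size rendered from the same viewpoint:

```python
import numpy as np

def psnr(reference, rendered, peak=255.0):
    """Peak Signal-to-Noise Ratio between two renderings (same viewpoint,
    same resolution); higher values mean the simplified model's rendering
    deviates less from the reference."""
    diff = reference.astype(np.float64) - rendered.astype(np.float64)
    mse = np.mean(diff ** 2)               # mean squared difference image
    if mse == 0.0:
        return float("inf")                # identical images
    return 10.0 * np.log10(peak * peak / mse)
```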

Practical quad mesh simplification

Computer Graphics …, 2010

In this paper we present an innovative approach to incremental quad mesh simplification, i.e. the task of producing a low-complexity quad mesh starting from a high-complexity one. The process is based on a novel set of strictly local operations which preserve quad structure. We show how good tessellation quality (e.g. in terms of vertex valencies) can be achieved by pursuing uniform length and canonical proportions of edges and diagonals. The decimation process is interleaved with smoothing in tangent space, which strongly contributes to identifying a suitable sequence of local modification operations. The method naturally extends to the preservation of feature lines (e.g. creases) and to varying (e.g. adaptive) tessellation densities. We also present an original Triangle-to-Quad conversion algorithm that behaves well in terms of geometric complexity and tessellation quality, which we use to obtain the initial quad mesh from a given triangle mesh.
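Tangent-space smoothing can be sketched as Laplacian smoothing with the displacement projected into the tangent plane. The following is a generic illustration, not the paper's exact operator; unit vertex normals and a precomputed adjacency list are assumed:

```python
import numpy as np

def tangent_smooth(vertices, normals, neighbors, step=0.5):
    """One pass of Laplacian smoothing confined to the tangent plane: the
    pull toward the neighbor centroid has its normal component removed, so
    vertices slide along the surface instead of shrinking the shape."""
    out = vertices.copy()
    for i, nbrs in enumerate(neighbors):
        if len(nbrs) == 0:
            continue
        d = vertices[nbrs].mean(axis=0) - vertices[i]
        d -= np.dot(d, normals[i]) * normals[i]   # tangent projection
        out[i] = vertices[i] + step * d
    return out
```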

Completely adaptive simplification of massive meshes

2002

The growing availability of massive models, and the inability of most existing visualization tools to work with them, requires efficient new methods for massive mesh simplification. In this paper, we present a completely adaptive, virtual-memory-based simplification algorithm for large polygonal datasets. The algorithm is an enhancement of RSimp [2], enabling out-of-core simplification without reducing the output quality of the original algorithm.
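The virtual-memory approach can be illustrated in miniature with a memory-mapped vertex array; the file name and on-disk layout below are hypothetical:

```python
import numpy as np

# Hypothetical on-disk layout: vertex positions as a flat float32 file.
# Memory-mapping hands paging over to the OS virtual-memory system, so the
# simplifier can index a vertex array far larger than physical memory;
# only the pages it actually touches are read from disk.
vertices = np.memmap("massive_model_verts.f32", dtype=np.float32,
                     mode="r").reshape(-1, 3)
print(vertices[123456])   # faults in only the page holding this vertex
```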

User-assisted mesh simplification

Proceedings of the 2006 …, 2006

Figure 1: Male model in two-stage user-assisted simplification: (a) original mesh; (b) simplified mesh; (c) after applying the weighting scheme; (d) after local refinement.

Large Mesh Simplification Using Processing Sequences

2003

In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm, using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
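A buffer-based traversal in the spirit of processing sequences might look like the following sketch; the record encoding, with per-triangle finalization flags marking a vertex's last use, is our hypothetical stand-in for the paper's actual format:

```python
def stream_process(records):
    """Buffer-based traversal of a streamed mesh. 'records' interleaves
    ('v', (x, y, z)) vertex records with ('f', (i, j, k), finals) triangle
    records, where finals flags each vertex whose last use this is -- a
    hypothetical encoding of the processing-sequence idea. Only the active
    vertices are ever held in core."""
    active = {}                            # in-core window: index -> position
    count = 0
    for rec in records:
        if rec[0] == "v":
            active[count] = rec[1]         # vertex becomes active
            count += 1
        else:
            _, (i, j, k), finals = rec
            yield active[i], active[j], active[k]   # full geometry in window
            for idx, last_use in zip((i, j, k), finals):
                if last_use:
                    del active[idx]        # finalize: evict from the buffer
```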