Hausdorff Metric Based Vector Quantization of Binary Images
Fast quantization and matching of histogram-based image features
2010
We review the construction of the Compressed Histogram of Gradients (CHoG) image feature descriptor and study the quantization problem that arises in its design. We explain our choice of algorithms for solving it, addressing both complexity and performance aspects. We also study the design of algorithms for decoding and matching compressed descriptors, and offer several techniques for speeding up these operations.
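As a rough illustration of the kind of histogram quantization involved (not the paper's exact construction), the sketch below snaps an m-bin gradient histogram to the nearest point of a lattice of "types" k/n on the probability simplex; the function name and the parameter n are illustrative assumptions.

```python
import numpy as np

def quantize_histogram_to_type(p, n):
    """Snap a probability vector p to the nearest lattice point k/n on the
    simplex, where k has non-negative integer entries summing to n.
    Illustrative sketch only; the actual CHoG quantizer differs in detail."""
    p = np.asarray(p, dtype=float)
    k = np.floor(n * p + 0.5).astype(int)     # round each scaled bin
    excess = int(k.sum()) - n                 # rounding can break the sum-to-n constraint
    if excess != 0:
        err = k - n * p                       # signed rounding error per bin
        # restore the sum by adjusting the bins with the largest rounding error
        order = np.argsort(-err) if excess > 0 else np.argsort(err)
        for i in order[:abs(excess)]:
            k[i] -= 1 if excess > 0 else -1
    return k                                  # transmit k (or its index) instead of p
```

For example, quantize_histogram_to_type([0.5, 0.3, 0.2, 0.0], n=6) returns [3, 2, 1, 0], i.e. the histogram is represented by one of a small, enumerable set of lattice points that can then be entropy coded.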
Using vector quantization for image processing
Proceedings of the IEEE, 1993
Image compression is the process of reducing the number of bits required to represent an image. Vector quantization, the mapping of pixel intensity vectors into binary vectors indexing a limited number of possible reproductions, is a popular image compression algorithm. Compression has traditionally been done with little regard for image processing operations that may precede or follow the compression step. Recent work has used vector quantization both to simplify image processing tasks (such as enhancement, classification, halftoning, and edge detection) and to reduce the computational complexity by performing them simultaneously with the compression. After briefly reviewing the fundamental ideas of vector quantization, we present a survey of vector quantization algorithms that perform image processing.
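For reference, the encode/decode step described here reduces to nearest-codeword search plus table lookup. A minimal sketch, assuming a fixed codebook learned offline (e.g. with the LBG algorithm) and image blocks flattened into rows; the function names are illustrative:

```python
import numpy as np

def vq_encode(blocks, codebook):
    """Map each pixel-intensity vector (a flattened image block) to the index
    of its nearest codeword; the compressed image is just this index stream."""
    # blocks: (N, d) array, codebook: (K, d) array of reproduction vectors
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruct the blocks by simple table lookup of the reproductions."""
    return codebook[indices]
```

The bit rate is log2(K) bits per block, and the decoder needs only the index stream and a copy of the codebook.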
Vector quantization using information theoretic concepts
Natural Computing, 2005
The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms such as the Kohonen Self-Organizing Map (SOM) and the Linde-Buzo-Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that by considering the processing elements as points moving in a potential field, an algorithm as efficient as those mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on minimization of a well-defined cost function. It is also shown how the potential field approach can be linked to information theory through the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements; hence, the algorithm can be seen as a density-matching method.
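To make the "points moving in a potential field" picture concrete, here is a toy sketch in which codewords are attracted to the data and repelled from one another through Gaussian (Parzen) kernels. The cost, kernel width, and update rule are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def potential_field_vq(X, K, sigma=0.3, lr=0.05, iters=200, seed=0):
    """Toy particle-style vector quantizer: each codeword is attracted to the
    data points and repelled from the other codewords via Gaussian kernels.
    Illustrative only; the paper's exact cost and update differ in detail."""
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), K, replace=False)].copy()   # init codewords from data
    for _ in range(iters):
        for k in range(K):
            dx = X - W[k]                                  # attraction toward the data
            attract = (dx * np.exp(-(dx ** 2).sum(1, keepdims=True) / (2 * sigma ** 2))).mean(0)
            dw = W - W[k]                                  # repulsion from other codewords
            repel = (dw * np.exp(-(dw ** 2).sum(1, keepdims=True) / (2 * sigma ** 2))).mean(0)
            W[k] += lr * (attract - repel)
    return W
```

The attraction term plays the role of matching the codeword density to the data density, while the repulsion term keeps the codewords from collapsing onto one another.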
Efficient vector quantization using an n-path binary tree search algorithm
Conference of the International Speech Communication Association, 1999
We propose a new n-path binary tree search algorithm for vector quantization. Our goal is to reduce the complexity (processing time) of the vector quantizer while maintaining the quantization distortion. The algorithm has been applied to a telephone-based isolated digit recognizer based on DHMM and to a continuous speech system based on SCHMM, and we report recognition results for both. We have tested several alternatives for computing the centroids of the higher levels of the tree. In all the experiments we consider the following evaluation parameters: average distortion, same-choice percentage, average distortion for the mistakes, and processing time. Our reference has been the standard quantization (computing the distance to all centroids). In this reference case the distortion was 220.9 and the processing time was 2.1 seconds. With the n-path binary tree search algorithm we obtained a processing time of 0.7 seconds with a similar distortion of 226.4. In the semicontinuous system we obtained a 71% reduction in vector quantization processing time while maintaining the word accuracy.
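The search itself can be sketched as a beam search over a binary tree of centroids that keeps the n best partial paths at each level instead of following only the single closest child. The tree layout below (a dict mapping a node id to its centroid and children) is an assumption made for illustration:

```python
import numpy as np

def n_path_tree_search(x, tree, root, n_paths=2):
    """Beam-style search of a binary codebook tree.
    tree[node] = (centroid, left_child, right_child); leaves have children None."""
    frontier = [root]
    best_leaf, best_dist = None, np.inf
    while frontier:
        scored = []
        for node in frontier:
            centroid, left, right = tree[node]
            d = float(((x - centroid) ** 2).sum())
            if left is None and right is None:      # leaf: candidate codeword
                if d < best_dist:
                    best_leaf, best_dist = node, d
            else:
                scored.append((d, node))
        scored.sort(key=lambda t: t[0])             # expand only the n_paths closest nodes
        frontier = [c for _, node in scored[:n_paths]
                    for c in tree[node][1:] if c is not None]
    return best_leaf, best_dist
```

With n_paths = 1 this degenerates to the usual greedy tree search; larger n trades some speed for a distortion closer to that of the full search over all centroids.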
Multi-level local descriptor quantization for bag-of-visterms image representation
Proceedings of the 6th ACM International Conference on Image and Video Retrieval (CIVR '07), 2007
In the past, quantized local descriptors have been shown to be a good basis for image representations that can be applied to a wide range of tasks. However, current approaches typically consider only one level of quantization to create the final image representation, which restricts the image description to a single level of visual detail. We propose to build image representations from multi-level quantization of local interest point descriptors automatically extracted from the images. This multi-level representation allows fine and coarse local image detail to be described in one framework. To evaluate the performance of our approach we perform scene image classification using a 13-class data set. We show that using information from multiple quantization levels increases the classification performance, which suggests that the different granularity captured by the multi-level quantization produces a more discriminant image representation. Moreover, with a multi-level approach, the time necessary to learn the quantization models can be reduced by learning the different models in parallel.
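A minimal sketch of building such a multi-level representation, assuming the per-level codebooks (e.g. k-means vocabularies of different sizes) have already been learned; the function name and normalization choices are illustrative:

```python
import numpy as np

def multilevel_bov(descriptors, codebooks):
    """Quantize local descriptors against each codebook (one per level of
    visual detail) and concatenate the normalized visual-word histograms."""
    parts = []
    for C in codebooks:                       # C: (K_level, d) cluster centres
        d2 = ((descriptors[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        words = d2.argmin(axis=1)             # visual-word index per descriptor
        hist = np.bincount(words, minlength=len(C)).astype(float)
        parts.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(parts)              # coarse and fine detail in one vector
```

Since each level's codebook is learned independently, the per-level clustering jobs can indeed run in parallel, which is where the training-time reduction mentioned above comes from.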
Fast vector quantization with topology learning
A new vector quantization algorithm is introduced that is capable of learning the topology of the input distribution. The algorithm uses a tree-structured vector quantizer combined with topology learning to achieve fast performance and high accuracy. The approach can be applied to improve the performance of different types of tree-structured vector quantizers. This is illustrated with results for k-d trees and TSVQ on two high-dimensional datasets. The proposed method also allows the construction of topology-preserving graphs with one node per input row. The algorithm can be used for vector quantization, clustering, indexing, and link analysis.
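The topology-learning part can be illustrated with the usual competitive-Hebbian rule: connect the two codewords closest to each input. The sketch below uses brute-force search for brevity, whereas the method described above would obtain these neighbours quickly from the tree-structured quantizer:

```python
import numpy as np

def learn_topology(X, codebook):
    """For each input, add an edge between its nearest and second-nearest
    codewords; the resulting graph approximates the topology of the data."""
    edges = set()
    for x in X:
        d2 = ((codebook - x) ** 2).sum(axis=1)
        first, second = np.argsort(d2)[:2]          # best and second-best units
        edges.add((int(min(first, second)), int(max(first, second))))
    return edges                                     # topology-preserving graph
```

The same edge set can then be reused for the clustering, indexing, and link-analysis applications mentioned in the abstract.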
Object detection with vector quantized binary features
Computer Vision and Pattern Recognition, 1997
This paper presents a new algorithm for detecting objects in images, one of the fundamental tasks of computer vision. The algorithm extends the representational efficiency of eigenimage methods to binary features, which are less sensitive to illumination changes than the gray-level values normally used with eigenimages. Binary features (square subtemplates) are automatically chosen on each training image. Using features ...
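As a small illustration of vector-quantizing binary features of this kind, nearest-codeword assignment under the Hamming distance might look like the following; the packing of features as 0/1 arrays and the function name are assumptions made for illustration, not the paper's exact procedure:

```python
import numpy as np

def quantize_binary_features(features, codebook):
    """Assign each binary feature (0/1 vector, e.g. a flattened square
    subtemplate) to the codeword with the smallest Hamming distance."""
    # features: (N, d) 0/1 array, codebook: (K, d) 0/1 array
    hamming = (features[:, None, :] != codebook[None, :, :]).sum(axis=2)
    return hamming.argmin(axis=1)
```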