Relevance Matrices in LVQ
Related papers
Generalized relevance learning vector quantization
Neural Networks, 2002
We propose a new scheme for enlarging generalized learning vector quantization (GLVQ) with weighting factors for the input dimensions. The factors allow an appropriate scaling of the input dimensions according to their relevance. They are adapted automatically during training according to the specific classification task, whereby training can be interpreted as stochastic gradient descent on an appropriate error function. This method leads to a more powerful classifier and to an adaptive metric with little extra cost compared to standard GLVQ. Moreover, the size of the weighting factors indicates the relevance of the input dimensions, which suggests a scheme for automatically pruning irrelevant input dimensions. The algorithm is verified on artificial data sets and the iris data from the UCI repository. Afterwards, the method is compared to several well-known algorithms which determine the intrinsic data dimension on real-world satellite image data.
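As a hedged illustration of the adaptive metric described in this abstract, the sketch below shows the relevance-weighted squared Euclidean distance together with a simplified, heuristic relevance step; all names (`weighted_sq_dist`, `lam`, `lr`) are ours, and the update shown is a simplification, not the paper's exact gradient of the GLVQ cost function.

```python
import numpy as np

def weighted_sq_dist(x, w, lam):
    """Relevance-weighted squared Euclidean distance d_lam(x, w)."""
    return np.sum(lam * (x - w) ** 2)

def relevance_step(x, w_correct, w_wrong, lam, lr=0.01):
    """One simplified heuristic relevance step (not the exact GRLVQ
    gradient): shrink weights of dimensions that pull x away from the
    correct prototype, then renormalize."""
    lam = lam - lr * ((x - w_correct) ** 2 - (x - w_wrong) ** 2)
    lam = np.clip(lam, 0.0, None)        # relevances stay non-negative
    return lam / (lam.sum() + 1e-12)     # keep sum(lam) = 1
```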
Regularization in matrix relevance learning
IEEE Transactions on Neural Networks, 2010
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can display a tendency towards oversimplification in the course of training. An overly pronounced elimination of dimensions in feature space can have negative effects on the performance and may lead to instabilities in the training. We focus on matrix learning in generalized LVQ (GLVQ). Extending the cost function by an appropriate regularization term prevents the unfavorable behavior and can help to improve the generalization ability. The approach is first tested and illustrated in terms of artificial model data. Furthermore, we apply the scheme to benchmark classification data sets from the UCI Repository of Machine Learning. We demonstrate the usefulness of regularization also in the case of rank-limited relevance matrices, i.e., matrix learning with an implicit, low-dimensional representation of the data.
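For concreteness, a minimal sketch of the adaptive distance and the regularization idea, assuming the common parameterization of the relevance matrix as Lambda = Omega^T Omega and a log-determinant penalty; the function names and the parameter `beta` are our assumptions.

```python
import numpy as np

def matrix_sq_dist(x, w, omega):
    """Adaptive distance d_Lambda(x, w) = (x - w)^T Omega^T Omega (x - w)."""
    diff = omega @ (x - w)
    return diff @ diff

def regularized_cost(cost, omega, beta=0.1):
    """GLVQ cost plus a penalty -beta/2 * ln det(Omega Omega^T) that
    discourages singular (oversimplified) relevance matrices."""
    sign, logdet = np.linalg.slogdet(omega @ omega.T)
    return cost - 0.5 * beta * logdet
```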
Relevance Determination for Learning Vector Quantization using the Fisher Criterion Score
Two new feature relevance determination algorithms are proposed for learning vector quantization. The algorithms exploit the positioning of the prototype vectors in the input feature space to estimate Fisher criterion scores for the input dimensions during training. These scores are used to form online estimates of weighting factors for an adaptive metric that accounts for dimensional relevance with respect to classifier output. The methods offer theoretical advantages over previously proposed LVQ relevance determination techniques based on gradient descent, as well as performance advantages as demonstrated in experiments on various datasets including a visual dataset from a cognitive robotics object affordance learning experiment.
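A minimal sketch of the idea, assuming the textbook per-dimension Fisher criterion (between-class over within-class scatter) estimated from the class-labelled prototype positions; the paper's exact online estimator may differ, and all names here are ours.

```python
import numpy as np

def fisher_scores(prototypes, labels):
    """Per-dimension Fisher criterion score from prototype positions.
    prototypes: (P, n) array; labels: 1-D array of class labels per prototype."""
    classes = np.unique(labels)
    overall = prototypes.mean(axis=0)
    between = np.zeros(prototypes.shape[1])
    within = np.zeros(prototypes.shape[1])
    for c in classes:
        pc = prototypes[labels == c]
        between += len(pc) * (pc.mean(axis=0) - overall) ** 2
        within += ((pc - pc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def metric_weights(scores):
    """Turn Fisher scores into weighting factors for the adaptive metric."""
    return scores / scores.sum()
```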
Energy generalized LVQ with relevance factors
2004 IEEE International Joint Conference on Neural Networks (IEEE Cat. No.04CH37541), 2004
Input feature ranking and selection represent a necessary preprocessing stage in classification, especially when one is required to manage large quantities of data. We introduce a weighted generalized LVQ algorithm, called Energy Generalized Relevance LVQ (EGRLVQ), based on Onicescu's informational energy [1], for supervised classification and feature ranking. EGRLVQ is an incremental learning algorithm: weights and codebook are updated during the learning phase, and the updating can be performed on-line, after processing each learning sample. The resulting weights may be regarded as feature ranks and used for dimensional reduction of the input space (i.e., feature selection). The modification of the LVQ algorithm is minimal: only the distance function has to be replaced, and other weighted distances could also be applied.
I. INTRODUCTION. Kohonen introduced Learning Vector Quantization (LVQ) [2] as a prototype-based supervised clustering algorithm. This algorithm can be used as a simple, universal and efficient adaptive classifier. The idea of LVQ is to approximate optimal Bayesian decision borders between different classes in an n-dimensional feature space with a number of labeled prototypes, named codebook vectors. An example vector x is classified with the label of its closest codebook vector according to a distance function as, for instance, the Euclidean distance. The codebook vectors are trained iteratively, using a set of training data. Various modifications to the basic algorithm were proposed to ensure a faster convergence (OLVQ), a better adaptation of the borders (LVQ2.1, LVQ3), and an adequate initialization of the codebook vectors according to the data. Another LVQ variation similar to DSLVQ is Relevance LVQ (RLVQ), introduced in [7], the weights being called relevances. A different heuristic updating scheme of the feature weights, used for computing a modified distance in LVQ, is described in [8]; there, the weights (relevances) are computed as Ordered Weighted Aggregation (OWA) weights. The above approaches share the following characteristic: they are only heuristically motivated and do not obey a gradient dynamic. In DSLVQ and RLVQ, the updating of the weights is related to simple perceptron learning, and difficulties arise if provided with non-separable data [7].
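The abstract stresses that the only change to LVQ is the distance function, and that the learned weights double as feature ranks. A hedged sketch of that use of the weights for ranking and selection (names ours):

```python
import numpy as np

def rank_features(lam):
    """The learned relevances double as feature ranks: return the
    input dimensions ordered from most to least relevant."""
    return np.argsort(lam)[::-1]

def select_features(X, lam, k):
    """Dimensional reduction of the input space: keep only the
    k most relevant input dimensions of the data matrix X."""
    keep = rank_features(lam)[:k]
    return X[:, keep]
```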
Learning vector quantization classification with local relevance determination for medical data
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2006
In this article we extend the global relevance learning vector quantization approach by local metric adaptation to obtain a locally optimized model for classification. In this sense we make a step in the direction of quadratic discriminant analysis in statistics, where class-wise covariance matrices are used for class-adapted discriminant functions. We demonstrate the performance of the model for a medical application.
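A minimal sketch of the local metric idea, assuming one relevance vector per prototype so that the distance depends on which prototype it is measured to, in analogy to the class-wise covariances of quadratic discriminant analysis; all names are ours.

```python
import numpy as np

def local_sq_dist(x, w, lam_w):
    """Locally weighted squared distance: lam_w is the relevance
    vector attached to this particular prototype w."""
    return np.sum(lam_w * (x - w) ** 2)

def classify(x, prototypes, proto_labels, relevances):
    """Assign x the label of the closest prototype, each prototype
    being compared under its own local metric."""
    d = [local_sq_dist(x, w, lam) for w, lam in zip(prototypes, relevances)]
    return proto_labels[int(np.argmin(d))]
```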
Adaptive feature space transformation in Generalised Matrix Learning Vector Quantization
2010
We propose and investigate a modification of Generalized Matrix Relevance Learning Vector Quantization (GMLVQ). In the novel approach we restrict the linear transformation to the data only, instead of transforming both the prototypes and the data as in the original GMLVQ. The method is implemented using a rectangular transformation matrix in a modified Euclidean distance measure. We analyse the performance of the modified algorithm and compare it with the original GMLVQ. In this paper, the method is outlined and experimental results are discussed in terms of a benchmark classification task.
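A hedged reading of the modified distance: a rectangular matrix maps only the data into the lower-dimensional space in which the prototypes live, so that d(x, w) = ||Omega x - w||^2; the exact form and all names are our assumptions.

```python
import numpy as np

def transformed_sq_dist(x, w, omega):
    """Modified distance: the data vector x (dim n) is mapped by the
    rectangular matrix omega (m x n); the prototype w already lives
    in the m-dimensional transformed space and is not transformed."""
    diff = omega @ x - w
    return diff @ diff
```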
Generalized relevance LVQ with correlation measures for biological data
2005
Generalized Relevance Learning Vector Quantization (GRLVQ) is combined with correlation-based similarity measures. These are derived from the Pearson correlation coefficient in order to replace the adaptive squared Euclidean distance which is typically used for GRLVQ. Patterns can thus be used without further preprocessing and compared in a manner invariant to data shifting and scaling transforms. High accuracies are demonstrated for a reference experiment of handwritten character recognition and good discrimination ability is shown for the detection of systematic differences between gene expression experiments.
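A minimal sketch of a Pearson-based dissimilarity of the kind described here; the specific form 1 - r and the names are our assumptions. Centering and normalizing make the measure invariant to shifting and positive scaling of the patterns, as the abstract requires.

```python
import numpy as np

def pearson_dissimilarity(x, w):
    """1 - Pearson correlation: invariant to adding a constant to,
    or positively scaling, either pattern."""
    xc = x - x.mean()
    wc = w - w.mean()
    r = (xc @ wc) / (np.linalg.norm(xc) * np.linalg.norm(wc) + 1e-12)
    return 1.0 - r
```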
Stationarity of matrix relevance learning vector quantization
2009
We investigate the convergence properties of heuristic matrix relevance updates in Learning Vector Quantization. Under mild assumptions on the training process, stationarity conditions can be worked out which characterize the outcome of training in terms of the relevance matrix. It is shown that the original training schemes single out one specific direction in feature space which depends on the statistical properties of the data relative to the approached prototype configuration. Appropriate regularization terms can be used to favor full rank relevance matrices and to prevent oversimplification effects. The structure of the stationary solution is derived, giving insight into the influence of the regularization parameter.
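Schematically, and under our notational assumptions (Lambda = Omega^T Omega with normalized trace), the abstract's result can be summarized as follows: the unregularized updates drive the relevance matrix towards a rank-one projection onto a single direction, while a log-determinant regularizer keeps all eigenvalues positive.

```latex
% Schematic summary (our notation): without regularization the
% stationary relevance matrix is a rank-one projection
\Lambda^{\ast} = v\, v^{\top}, \qquad \|v\| = 1,
% onto a direction v determined by the statistics of the data
% relative to the approached prototype configuration. Adding a regularizer,
E_{\mathrm{reg}} = E - \frac{\beta}{2}\, \ln \det\!\left(\Omega\, \Omega^{\top}\right), \qquad \beta > 0,
% favors full-rank stationary solutions, with the eigenvalues of
% \Lambda^{\ast} controlled by the regularization parameter \beta.
```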