Fast Laplacian twin support vector machine with active learning for pattern classification
Related papers
Multiclass Least Squares Twin Support Vector Machine for Pattern Classification
International Journal of Database Theory and Application, 2015
This paper proposes a Multiclass Least Squares Twin Support Vector Machine (MLSTSVM) classifier for multi-class classification problems. The formulation of MLSTSVM is obtained by extending the formulation of the recently proposed binary Least Squares Twin Support Vector Machine (LSTSVM) classifier. For an M-class classification problem, the proposed classifier seeks M non-parallel hyperplanes, one for each class, by solving M systems of linear equations. A regularization term is also added to improve the generalization ability. MLSTSVM works well on both linear and non-linear datasets, and it is a relatively simple and fast algorithm compared to other existing approaches. Its performance has been evaluated on twelve benchmark datasets. The experimental results demonstrate the validity of the proposed MLSTSVM classifier compared to typical multi-class classifiers based on the Support Vector Machine and the Twin Support Vector Machine. Statistical comparison of the proposed classifier with existing classifiers is also performed using the Friedman test statistic and the Nemenyi post-hoc technique.
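To make the "M systems of linear equations" concrete, the following is a minimal sketch of the one-vs-rest construction, assuming the standard binary LSTSVM objective with a small ridge term standing in for the regularization mentioned above; the function and parameter names (`lstsvm_plane`, `reg`) are illustrative, not from the paper.

```python
import numpy as np

def lstsvm_plane(A, B, c=1.0, reg=1e-6):
    """One LSTSVM hyperplane: close to class A, pushed unit distance
    from class B under a squared loss. Returns (w, b) for w^T x + b = 0."""
    E = np.hstack([A, np.ones((A.shape[0], 1))])  # [A  e]
    F = np.hstack([B, np.ones((B.shape[0], 1))])  # [B  e]
    # Setting the gradient of (1/2)||E u||^2 + (c/2)||F u + e||^2 to zero
    # yields one linear system; 'reg' is a small ridge for stability.
    H = F.T @ F + (1.0 / c) * (E.T @ E) + reg * np.eye(F.shape[1])
    u = -np.linalg.solve(H, F.T @ np.ones(B.shape[0]))
    return u[:-1], u[-1]

def mlstsvm_fit(X, y, c=1.0):
    """One plane per class (class m vs. the rest): M linear systems."""
    return {m: lstsvm_plane(X[y == m], X[y != m], c) for m in np.unique(y)}

def mlstsvm_predict(planes, X):
    """Assign each point to the class whose plane is nearest."""
    labels = list(planes)
    dists = np.stack([np.abs(X @ w + b) / np.linalg.norm(w)
                      for w, b in (planes[m] for m in labels)])
    return np.array(labels)[np.argmin(dists, axis=0)]
```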
Improvements on ν-Twin Support Vector Machine
Neural networks : the official journal of the International Neural Network Society, 2016
In this paper, we propose two novel binary classifiers, termed "Improvements on ν-Twin Support Vector Machine": Iν-TWSVM and Iν-TWSVM (Fast), that are motivated by the ν-Twin Support Vector Machine (ν-TWSVM). Similar to ν-TWSVM, Iν-TWSVM determines two nonparallel hyperplanes such that each is closer to its respective class and at least ρ distance away from the other class. The significant advantage of Iν-TWSVM over ν-TWSVM is that Iν-TWSVM solves one smaller-sized Quadratic Programming Problem (QPP) and one Unconstrained Minimization Problem (UMP), as compared to solving two related QPPs in ν-TWSVM. Further, Iν-TWSVM (Fast) avoids solving even the smaller-sized QPP by transforming it into a unimodal function, which can be solved using line search methods; as in Iν-TWSVM, the other problem is solved as a UMP. Due to their novel formulation, the proposed classifiers are faster than ν-TWSVM and have comparable generalization ability. Iν-TWSVM also implements the structural risk minimization principle.
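Purely to illustrate how one of the twin problems can collapse into a UMP (this is the LSTSVM-style squared-loss device, offered as an analogy rather than the paper's exact derivation), replacing the hinge slack of the second hyperplane with a squared penalty gives an unconstrained problem solvable in closed form:

\[
\min_{w_2,\,b_2}\;\tfrac{1}{2}\lVert B w_2 + e_2 b_2\rVert^2 \;+\; \tfrac{c}{2}\,\lVert A w_2 + e_1 b_2 - e_1\rVert^2,
\]

where A and B stack the samples of the two classes and e_1, e_2 are vectors of ones; setting the gradient to zero reduces the problem to a single linear system.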
Comprehensive review on twin support vector machines
Annals of Operations Research, 2022
Twin support vector machine (TWSVM) and twin support vector regression (TSVR) are emerging machine learning techniques that offer promising solutions for classification and regression problems, respectively. TWSVM is based on the idea of identifying two nonparallel hyperplanes that assign data points to their respective classes. It requires solving two small-sized quadratic programming problems (QPPs) instead of the single large QPP of the support vector machine (SVM), while TSVR is formulated along the lines of TWSVM and requires solving two SVM-type problems. Although there has been good research progress on these techniques, the literature comparing the different variants of TSVR is limited. This review therefore presents a rigorous analysis of recent research on both TWSVM and TSVR, noting their limitations and advantages. We first introduce the basic theory of the SVM and TWSVM, then survey the various improvements and applications of TWSVM, and finally introduce TSVR and its various enhancements. We conclude by suggesting future research and development prospects.
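For reference, the pair of QPPs referred to throughout this literature takes the following standard TWSVM form, where A and B stack the samples of the two classes, e_1 and e_2 are vectors of ones, and c_1 > 0:

\[
\begin{aligned}
\min_{w_1,\,b_1,\,\xi}\;& \tfrac{1}{2}\lVert A w_1 + e_1 b_1\rVert^2 + c_1\, e_2^{\top}\xi\\
\text{s.t.}\;& -(B w_1 + e_2 b_1) + \xi \ge e_2,\qquad \xi \ge 0,
\end{aligned}
\]

together with a symmetric problem in which the roles of A and B are exchanged; a test point x is then assigned to the class whose hyperplane minimizes \(|w_k^{\top} x + b_k| / \lVert w_k\rVert\).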
An Improvement Of Least Square - Twin Support Vector Machine
Journal of Research and Development on Information and Communication Technology, 2021
In binary classification problems, the two classes of data differ from each other, and the problem becomes more complicated when the number of data points in the clusters of each class also differs. Traditional algorithms such as the Support Vector Machine (SVM), Twin Support Vector Machine (TSVM), and Least Squares Twin Support Vector Machine (LSTSVM) cannot sufficiently exploit information about the number of data points in each cluster, which may affect classification accuracy. In this paper, we propose a new Improved Least Squares Support Vector Machine (called ILS-SVM) for binary classification problems with a class-vs-clusters strategy. Experimental results show that ILS-SVM trains faster than TSVM, and that its accuracy is better than that of LSTSVM and TSVM in most cases.
Angle-based twin parametric-margin support vector machine for pattern classification
Knowledge-Based Systems, 2018
In this paper, a novel angle-based twin parametric-margin support vector machine (ATP-SVM) is proposed, which can efficiently handle heteroscedastic noise. Taking motivation from the twin parametric-margin support vector machine (TPMSVM), ATP-SVM determines two nonparallel parametric-margin hyperplanes such that the angle between their normals is maximized. Unlike TPMSVM, it solves only one modified quadratic programming problem (QPP) with a smaller number of representative samples. Further, it avoids the explicit computation of matrix inverses in the dual and has efficient learning time compared to other single-problem classifiers such as the nonparallel SVM based on one optimization problem (NSVMOOP). The efficacy of ATP-SVM is tested through experiments on a wide range of benchmark UCI datasets. ATP-SVM is extended to multi-category classification using state-of-the-art one-against-all (OAA) and binary tree (BT) based approaches. This work also proposes the application of ATP-SVM to the segmentation of color images.
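The "angle between their normals" can be read off the usual identity

\[
\cos\theta \;=\; \frac{w_1^{\top} w_2}{\lVert w_1\rVert\,\lVert w_2\rVert},
\]

so maximizing θ amounts to driving down the normalized inner product of the two normals, which is the kind of coupling term a single QPP can carry; this is an illustration of the geometric idea, not the paper's exact objective.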
Twin Support Vector Machine: A review from 2007 to 2014
Egyptian Informatics Journal, 2015
Twin Support Vector Machine (TWSVM) is an emerging machine learning method suitable for both classification and regression problems. It utilizes the concept of the Generalized Eigenvalue Proximal Support Vector Machine (GEPSVM) and finds two non-parallel planes, one for each class, by solving a pair of Quadratic Programming Problems. This enhances the computational speed compared to the traditional Support Vector Machine (SVM). TWSVM was initially constructed to solve binary classification problems; researchers later extended it to the multi-class problem domain. TWSVM gives promising empirical results and has many attractive features that enhance its applicability. This paper presents the research development of TWSVM in recent years. The study is divided into two broad categories: variant-based and multi-class-based TWSVM methods. The paper primarily discusses the basic concept of TWSVM and highlights its applications in recent years. A comparative analysis of various research contributions based on TWSVM is also presented. This is helpful for researchers seeking to effectively utilize TWSVM as an emergent research methodology and encourages further work on the performance enhancement of TWSVM.
A twin-hypersphere support vector machine classifier and the fast learning algorithm
This paper formulates a twin-hypersphere support vector machine (THSVM) classifier for binary recognition. Similar to the twin support vector machine (TWSVM) classifier, THSVM determines two hyperspheres by solving two related support vector machine (SVM)-type problems, each of which is smaller than the classical SVM problem, making THSVM more efficient than the classical SVM. In addition, THSVM avoids the matrix inversions that appear in the two dual quadratic programming problems (QPPs) of TWSVM. By exploiting the characteristics of the dual QPPs of THSVM, an efficient Gilbert's algorithm based on the reduced convex hull (RCH), rather than direct optimization of the pair of QPPs, is further presented. Computational results on several synthetic as well as benchmark datasets indicate the significant advantages of the THSVM classifier in computational time and test accuracy.
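As a hedged illustration of the per-class building block, in the spirit of SVDD rather than the paper's exact QPP, each class k receives an enclosing hypersphere with center c_k and radius R_k:

\[
\begin{aligned}
\min_{c_k,\,R_k,\,\xi}\;& R_k^2 + C\sum_{i}\xi_i\\
\text{s.t.}\;& \lVert x_i - c_k\rVert^2 \le R_k^2 + \xi_i,\quad \xi_i \ge 0 \quad (x_i \in \text{class } k),
\end{aligned}
\]

with a test point assigned by comparing the relative margins \(\lVert x - c_k\rVert^2 - R_k^2\) of the two spheres; THSVM additionally couples the two problems so that each sphere stays away from the opposite class.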
MS-TWSVM: Mahalanobis distance-based Structural Twin Support Vector Machine
The distribution of data points within the two classes is structural information that can be inserted into classifiers to improve their generalization performance. Recently, algorithms such as S-TWSVM have used this information to construct two non-parallel hyperplanes, each of which lies as close as possible to one class while being far from the other. It is well known that different classes have different data distributions in real-world problems; thus the covariance matrices of the classes are not the same. In such situations, the Mahalanobis distance is often preferred over the Euclidean distance. In this paper, in addition to applying the idea of S-TWSVM, the classical Euclidean distance is replaced by the Mahalanobis distance, which simultaneously takes the covariance matrices of the two classes into account. With this modification, the orientation information of the two classes can be exploited better than in S-TWSVM. The experiments indicate that our proposed algorithm is often superior to other learning algorithms in terms of generalization performance.
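A minimal numpy sketch of the distance swap the paper builds on, assuming per-class sample covariances with a small ridge added for invertibility (all names are illustrative):

```python
import numpy as np

def mahalanobis_sq(X, mean, cov, reg=1e-6):
    """Squared Mahalanobis distance of each row of X to one class,
    using that class's own covariance (ridge-regularized)."""
    Ci = np.linalg.inv(cov + reg * np.eye(cov.shape[0]))
    D = X - mean
    return np.einsum('ij,jk,ik->i', D, Ci, D)

# Per-class statistics, since the point above is that the two classes
# carry different covariance structure:
# mu_k = X_k.mean(axis=0); S_k = np.cov(X_k, rowvar=False)
```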
IEEE Transactions on Medical Imaging, 2000
Classifier design for a given classification task needs to take into consideration both the complexity of the classifier and the size of the dataset available for training. With limited training data, as is often the case in computer-aided diagnosis of medical images, a classifier with a simple structure (e.g., a linear classifier) is more robust and therefore preferred. We propose a novel two-class classifier, which we call a hybrid linear/nonlinear classifier (HLNLC), that involves two stages: the input features are linearly combined to form a scalar variable in the first stage, and the likelihood ratio of that scalar variable is then used as the decision variable for classification. We first develop the theory of the HLNLC by assuming that the feature data follow normal distributions. We show that the commonly used Fisher's linear discriminant function is generally not the optimal linear function for the first stage of the HLNLC. We formulate an optimization problem to solve for the optimal linear function in the first stage, i.e., the linear function that maximizes the area under the receiver operating characteristic (ROC) curve of the HLNLC. For practical applications, we propose a robust implementation of the HLNLC by making the loose assumption that the two-class feature data arise from a pair of latent (rather than explicit) multivariate normal distributions. The novel hybrid classifier fills a gap between linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) in the sense that both its theoretical performance and its complexity lie between those of LDA and QDA. Simulation studies show that the hybrid linear/nonlinear classifier performs better than LDA without a corresponding increase in classifier complexity. With a finite number of training samples, the HLNLC can perform better than the ideal observer due to its simplicity. Finally, we demonstrate the application of the HLNLC in computer-aided diagnosis of breast lesions in ultrasound images.
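A minimal sketch of the two-stage decision pipeline described above, assuming 1-D normal models for the projected scalar; the projection w would come from the paper's AUC-maximizing optimization (here it is simply an argument, and Fisher's LDA direction would be a plausible, though per the abstract generally suboptimal, starting point):

```python
import numpy as np
from scipy.stats import norm

def hlnlc_decision_variable(X, w, scores0, scores1):
    """Stage 1: linearly combine features into a scalar t = X @ w.
    Stage 2: return the likelihood ratio of t under 1-D normal models
    fitted to each class's training projections (scores0, scores1)."""
    mu0, sd0 = scores0.mean(), scores0.std(ddof=1)
    mu1, sd1 = scores1.mean(), scores1.std(ddof=1)
    t = X @ w
    return norm.pdf(t, mu1, sd1) / norm.pdf(t, mu0, sd0)
```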
IEEE Transactions on Signal Processing, 2000
We develop a novel classifier in a kernel feature space related to the eigenspectrum of the Laplacian data matrix. The classification cost function measures the angle between class mean vectors in the kernel feature space and is derived from an information-theoretic divergence measure using Parzen windowing. The classification rule is expressed in terms of a weighted kernel expansion. The weight associated with a data point is inversely proportional to the probability density at that point, emphasizing the least probable regions. No optimization is needed to determine the weighting scheme, as opposed to the support vector machine. The connection to Parzen windowing also provides a theoretical criterion for kernel size selection, reducing the need for computationally demanding cross-validation. We show that the new classifier performs better than the Parzen window Bayes classifier and, in many cases, comparably to the support vector machine, at a lower computational cost.
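A small sketch of the density-inverse weighting idea (the angle-based cost itself is omitted, and the kernel size h would come from the Parzen-based criterion mentioned above; names are illustrative):

```python
import numpy as np

def parzen_density(X, h):
    """Parzen-window density estimate at each training point
    (Gaussian window; constants dropped, since only ratios matter)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h)).mean(axis=1)

def weighted_kernel_score(X, y, x_new, h):
    """Difference of density-weighted kernel means of the two classes
    at x_new; weights ~ 1/p(x_i) emphasize the least probable regions."""
    w = 1.0 / parzen_density(X, h)
    k = np.exp(-((X - x_new) ** 2).sum(axis=1) / (2.0 * h * h))
    m0 = (w[y == 0] * k[y == 0]).sum() / w[y == 0].sum()
    m1 = (w[y == 1] * k[y == 1]).sum() / w[y == 1].sum()
    return m1 - m0  # positive -> predict class 1
```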