Special issue on support vector machines
Related papers
Support Vector Machines with Applications
Statistical Science, 2006
Support vector machines (SVMs) appeared in the early nineties as optimal margin classifiers in the context of Vapnik's statistical learning theory. Since then SVMs have been successfully applied to real-world data analysis problems, often providing improved results compared with other techniques. The SVMs operate within the framework of regularization theory by minimizing an empirical risk in a well-posed and consistent way. A clear advantage of the support vector approach is that sparse solutions to classification and regression problems are usually obtained: only a few samples are involved in the determination of the classification or regression functions. This fact facilitates the application of SVMs to problems that involve a large amount of data, such as text processing and bioinformatics tasks. This paper is intended as an introduction to SVMs and their applications, emphasizing their key features. In addition, some algorithmic extensions and illustrative real-world applications of SVMs are shown.
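The sparsity the abstract describes, where only a few training samples determine the decision function, can be seen directly in code. A minimal sketch, assuming scikit-learn is available; the dataset and parameters are illustrative, not from the paper:

```python
# Illustrative sketch: SVM sparsity. Only the support vectors
# (a small subset of the training data) define the classifier.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated Gaussian blobs: an easy binary task.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

# With a wide margin, far fewer than 200 samples end up as support vectors.
n_sv = clf.support_vectors_.shape[0]
print(f"{n_sv} of {len(X)} training samples are support vectors")
```

At prediction time only those support vectors enter the decision function, which is what makes SVMs practical on large datasets such as the text and bioinformatics tasks mentioned above.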
A Chock-Full Survey on Support Vector Machines
Support vector machines (SVMs) are classifiers that build decision boundaries from a subset of the training samples called support vectors. SVMs can form both linear and non-linear decision boundaries with good generalization ability, and they are grounded in Statistical Learning Theory (SLT). Training an SVM amounts to solving an optimization problem, and much of the strength of SVMs lies in the optimization techniques used to solve it. SVMs have become an active research area and a powerful tool for many machine learning tasks. The associated optimization problems include not only convex programs but also non-convex ones such as semi-infinite programming, bi-level programming, and integer programming. The goal of this paper is to review SVMs thoroughly from the optimization point of view; examining the many facets of SVM optimization problems makes it easier to understand and apply this popular data mining technique. The survey has two themes. The first covers SVM models and algorithms for standard classification from the optimization point of view; the main reason for constructing several different optimization problems is to improve generalization and reduce overfitting. The second covers enhancements intended to make SVMs more accurate and to build faster, easier-to-understand learning machines. Since research in SVMs and research in optimization have become increasingly coupled, new challenges in constructing optimization models with SVMs are systematically explored.
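The optimization view of SVM training can be illustrated with the primal problem: minimize (1/2)||w||² + C·Σ hinge losses. A minimal subgradient-descent sketch in NumPy, with toy data and hyperparameters chosen purely for illustration:

```python
# Illustrative sketch: the SVM primal as a convex optimization problem,
# minimize (1/2)||w||^2 + C * sum(max(0, 1 - y_i (w.x_i + b))),
# solved by plain subgradient descent.
import numpy as np

rng = np.random.default_rng(0)
# Nearly separable toy data with labels in {-1, +1}.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b, C, lr = np.zeros(2), 0.0, 1.0, 0.01
for _ in range(2000):
    margins = y * (X @ w + b)
    viol = margins < 1                              # margin violators
    grad_w = w - C * (y[viol, None] * X[viol]).sum(0)
    grad_b = -C * y[viol].sum()
    w -= lr * grad_w
    b -= lr * grad_b

acc = ((X @ w + b > 0) == (y > 0)).mean()
print(f"training accuracy: {acc:.2f}")
```

Production solvers use the dual quadratic program or specialized decomposition methods rather than this naive descent, but the objective being minimized is the same.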
Support Vector Machine Optimal Kernel Selection
IRJET, 2022
The support vector machine (SVM) can match or outperform many other learning algorithms in accuracy and other performance metrics, owing to the high-dimensional projection of the data it uses for classification. Nevertheless, SVM performance depends strongly on the choice of the kernel function that performs this projection. This paper discusses the working of the SVM and its dependency on the kernel function, and explains the main types of kernels. The focus is on choosing the optimal kernel for three datasets that differ in number of features and classes, in order to determine the best kernel choice for each. For performance measures we use accuracy, kappa, specificity, and sensitivity; the study statistically examines and compares each kernel type against these metrics.
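A kernel comparison of the kind the paper describes can be sketched in a few lines. This assumes scikit-learn; the dataset, metric (cross-validated accuracy rather than the paper's full metric set), and kernel list are illustrative:

```python
# Illustrative sketch: comparing the standard SVC kernels on one dataset
# by 5-fold cross-validated accuracy.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

scores = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    # Scaling first: most kernels are sensitive to feature magnitudes.
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    scores[kernel] = cross_val_score(clf, X, y, cv=5).mean()

for kernel, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{kernel:8s} accuracy = {acc:.3f}")
```

Which kernel wins depends on the data, which is exactly the paper's point: the comparison has to be rerun per dataset.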
Model selection for support vector machine classification
Neurocomputing, 2003
We address the problem of model selection for Support Vector Machine (SVM) classification. For a fixed functional form of the kernel, model selection amounts to tuning kernel parameters and the slack penalty coefficient C. We begin by reviewing a recently developed probabilistic framework for SVM classification. An extension to the case of SVMs with quadratic slack penalties is given, and a simple
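The tuning problem the abstract names, selecting kernel parameters and the slack penalty C, is most commonly handled by cross-validated grid search (not the paper's probabilistic method). A minimal sketch, assuming scikit-learn; dataset and grid values are illustrative:

```python
# Illustrative sketch: model selection for an RBF-kernel SVM by grid
# search over C (slack penalty) and gamma (kernel width).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])
param_grid = {
    "svm__C": [0.1, 1, 10, 100],        # slack penalty coefficient C
    "svm__gamma": [1e-3, 1e-2, 1e-1],   # RBF kernel parameter
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print(f"best CV accuracy: {search.best_score_:.3f}")
```

The cost of this exhaustive search (here 12 settings x 5 folds = 60 fits) is the motivation for the probabilistic and speed-up approaches surveyed in this issue.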
Wiley Interdisciplinary Reviews: Computational Statistics, 2009
Support vector machines (SVMs) are a family of machine learning methods, originally introduced for the problem of classification and later generalized to various other situations. They are based on principles of statistical learning theory and convex optimization, and are currently used in various domains of application, including bioinformatics, text categorization, and computer vision. © 2009 John Wiley & Sons, Inc. WIREs Comp Stat 2009, 1:283-289
Optimal feature selection for support vector machines
Pattern Recognition, 2010
Selecting relevant features for support vector machine (SVM) classifiers is important for a variety of reasons such as generalization performance, computational efficiency, and feature interpretability. Traditional SVM approaches to feature selection typically extract features and learn SVM parameters independently. Independently performing these two steps might result in a loss of information related to the classification process. This paper proposes a convex energy-based framework to jointly perform feature selection and SVM parameter learning for linear and non-linear kernels. Experiments on various databases show significant reduction of features used while maintaining classification performance.
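One standard way to couple feature selection with SVM training, rather than performing the two steps independently, is an L1 penalty on a linear SVM, which drives irrelevant weights to exactly zero. This is a common embedded-selection technique, not necessarily the paper's convex energy-based framework; the sketch assumes scikit-learn and uses illustrative data:

```python
# Illustrative sketch: embedded feature selection with an L1-penalized
# linear SVM. Irrelevant feature weights are driven to exactly zero
# while the classifier is trained, so selection and learning happen jointly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# 20 features, of which only 5 are informative; the rest are noise.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_redundant=0, random_state=0)

clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)

# Features whose weight survived the L1 penalty.
selected = np.flatnonzero(np.abs(clf.coef_.ravel()) > 1e-6)
print(f"kept {selected.size} of {X.shape[1]} features:", selected.tolist())
```

Smaller C means stronger regularization and fewer surviving features, so C doubles as a knob trading classification performance against sparsity of the feature set.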
Introduction to Support Vector Machines and Kernel Methods
We explain the support vector machine algorithm, and its extension the kernel method, for machine learning with small datasets. We also briefly discuss the Vapnik-Chervonenkis theory, which forms the theoretical foundation of machine learning. This review is based on lectures given by the second author.
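The kernel method the abstract refers to rests on computing inner products in a feature space without ever constructing that space. A minimal NumPy sketch of an RBF Gram matrix, with illustrative data, that checks the properties a valid kernel matrix must have:

```python
# Illustrative sketch: the kernel trick. K[i, j] = k(x_i, x_j) gives
# feature-space inner products without building the feature space.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
K = rbf_kernel(X, gamma=0.5)

# A valid RBF Gram matrix is symmetric, has ones on the diagonal,
# and is positive semidefinite (Mercer's condition).
assert np.allclose(K, K.T)
assert np.allclose(np.diag(K), 1.0)
assert np.linalg.eigvalsh(K).min() > -1e-10
print("K is a valid RBF Gram matrix of shape", K.shape)
```

Any algorithm written purely in terms of inner products, SVMs included, can swap in such a matrix and thereby operate in a (possibly infinite-dimensional) feature space.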
Speeding-up Model Selection for Support Vector Machines
The Florida AI Research Society Conference, 2005
One big difficulty in the practical use of support vector machines is the selection of a suitable kernel function and an appropriate parameter setting for a given application. There is no general rule for this selection, and practitioners must estimate the machine's performance through a costly multi-trial iteration of training and testing phases. In this paper, we describe a
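One common way to cheapen the multi-trial search the abstract describes (not the paper's own method) is randomized search, which samples hyperparameter settings instead of trying every combination. A sketch assuming scikit-learn and SciPy, with illustrative data and ranges:

```python
# Illustrative sketch: randomized hyperparameter search for an RBF SVM.
# Sampling n_iter settings is much cheaper than an exhaustive grid.
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

param_dist = {
    "C": loguniform(1e-2, 1e3),       # slack penalty, log-scale sampling
    "gamma": loguniform(1e-5, 1e-1),  # RBF width, log-scale sampling
}
# 10 sampled settings x 3 folds = 30 fits, regardless of grid resolution.
search = RandomizedSearchCV(SVC(kernel="rbf"), param_dist, n_iter=10,
                            cv=3, random_state=0)
search.fit(X, y)
print(f"best CV accuracy: {search.best_score_:.3f}")
```

Log-scale sampling matters here: good values of C and gamma typically span several orders of magnitude, so uniform sampling would waste most trials in one decade.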