Optimising Multiple Kernels for SVM by Genetic Programming
Related papers
The genetic evolution of kernels for support vector machine classifiers
2004
The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optimum settings for a particular problem. This paper proposes a classification technique, which we call the Genetic Kernel SVM (GK SVM), that uses Genetic Programming to evolve a kernel for an SVM classifier.
Applied Intelligence
Support Vector Machines (SVMs) deliver state-of-the-art performance in real-world applications and are now established as one of the standard tools for machine learning and data mining. A key problem of these methods is how to choose an optimal kernel and how to optimise its parameters. Real-world applications have also emphasised the need to consider a combination of kernels (a multiple kernel) in order to boost classification accuracy by adapting the kernel to the characteristics of heterogeneous data. This combination can be linear or non-linear, weighted or unweighted. Several approaches have already been proposed to find a linear weighted kernel combination and to optimise its parameters together with the SVM parameters, but no approach has tried to optimise a non-linear weighted combination. Therefore, our goal is to automatically generate and adapt a kernel combination (linear or non-linear, weighted or unweighted, according to the data) and to optimise both the kernel parameters and the SVM parameters by evolutionary means in a unified framework. We will denote our combination as a kernel of kernels (KoK). Numerical experiments show that an SVM using the evolutionary kernel of kernels (eKoK) we propose performs better than well-known classic kernels with optimised parameters, as well as a state-of-the-art convex linear kernel combination and an evolutionary linear kernel combination. These results emphasise that an SVM may require a non-linear weighted combination of kernels.
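For a concrete picture of what a "kernel of kernels" means in practice, here is a minimal sketch, assuming scikit-learn's SVC with a callable kernel; the base kernels, weights and combination rule are illustrative placeholders, not the evolved eKoK expressions from the paper.

```python
# Minimal sketch of a non-linear weighted "kernel of kernels" (KoK), assuming
# scikit-learn; the sub-kernels, weights and combination rule are illustrative.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def kernel_of_kernels(X, Y, w1=0.6, w2=0.4, gamma=0.1, degree=2):
    """Non-linear weighted combination of two base kernels.

    Sums of PSD kernels and element-wise products of PSD kernels are PSD,
    so the combined Gram matrix is still a valid kernel.
    """
    k_rbf = rbf_kernel(X, Y, gamma=gamma)
    k_poly = polynomial_kernel(X, Y, degree=degree)
    return w1 * k_rbf + w2 * (k_rbf * k_poly)   # weighted sum plus a product term

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel=kernel_of_kernels, C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())
```

In an evolutionary setting, the structure of `kernel_of_kernels` (which base kernels appear and how they are combined) and the weights would be evolved rather than fixed as above.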
The genetic kernel support vector machine: Description and evaluation
2005
Abstract The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optimum settings for a particular problem. This paper proposes a classification technique, which we call the Genetic Kernel SVM (GK SVM), that uses Genetic Programming to evolve a kernel for a SVM classifier.
Tuning and evolution of support vector kernels
Evolutionary Intelligence, 2012
Kernel-based methods like Support Vector Machines (SVM) have been established as powerful techniques in machine learning. The idea of SVM is to perform a mapping φ from the input space to a higher-dimensional feature space using a kernel function k, so that a linear learning algorithm can be employed. However, the burden of choosing the appropriate kernel function is usually left to the user. It can easily be shown that the accuracy of the learned model depends strongly on the chosen kernel function and its parameters, especially for complex tasks. In order to obtain a good classification or regression model, an appropriate kernel function must be used.
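As a small illustration of how strongly the chosen kernel function and its parameters affect accuracy, the following sketch (assuming scikit-learn; the dataset and parameter values are arbitrary choices, not taken from the paper) cross-validates the same SVM under several kernels.

```python
# Minimal sketch: the same SVM, different kernels and parameters,
# compared by cross-validated accuracy. Assumes scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

for kernel, params in [("linear", {}),
                       ("poly", {"degree": 3}),
                       ("rbf", {"gamma": 0.01}),
                       ("rbf", {"gamma": 10.0})]:
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=1.0, **params))
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel} {params}: {score:.3f}")
```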
Adapting multiple kernel parameters for support vector machines using genetic algorithms
2005
Kernel parameterization is a key design step in the application of support vector machines (SVM) to supervised learning problems. A grid search with a cross-validation criterion is often conducted to choose the kernel parameters, but it is computationally infeasible for a large number of them. Here we describe a genetic algorithm (GA) as a method for tuning kernels with multiple parameters for classification tasks, with application to the weighted radial basis function (RBF) kernel. In this type of kernel the number of parameters equals the dimension of the input patterns, which is usually high for biological datasets. We show preliminary experimental results where adapted weighted RBF kernels for SVM achieve classification performance over 98% on human serum proteomic profile data. Further improvements to this method may lead to the discovery of relevant biomarkers in biomedical applications.
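A minimal sketch of the weighted RBF idea, k(x, z) = exp(-Σ_i w_i (x_i − z_i)²), with one weight per input dimension and a toy genetic algorithm tuning the weights; scikit-learn and NumPy are assumed, and the population size, operators and synthetic dataset are illustrative, not the configuration used in the paper.

```python
# Toy GA tuning a weighted RBF kernel with one weight per feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=150, n_features=8, random_state=0)

def weighted_rbf(weights):
    def kernel(A, B):
        # Squared differences scaled per feature, summed over dimensions.
        diff = A[:, None, :] - B[None, :, :]
        return np.exp(-np.einsum("ijk,k->ij", diff ** 2, weights))
    return kernel

def fitness(weights):
    clf = SVC(kernel=weighted_rbf(weights), C=1.0)
    return cross_val_score(clf, X, y, cv=3).mean()

# Toy GA: truncation selection, blend crossover, multiplicative Gaussian mutation.
pop = rng.uniform(0.01, 1.0, size=(20, X.shape[1]))
for generation in range(10):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the best half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        child = 0.5 * (a + b) * np.exp(rng.normal(0, 0.2, size=a.shape))
        children.append(np.clip(child, 1e-3, 10.0))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best CV accuracy:", fitness(best))
```

The learned weights also act as a soft feature ranking: dimensions whose weights shrink towards zero contribute little to the kernel, which is what motivates the biomarker-discovery outlook in the abstract.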
Genetic Programming for Kernel-Based Learning with Co-evolving Subsets Selection
Lecture Notes in Computer Science, 2006
Support Vector Machines (SVMs) are well-established Machine Learning (ML) algorithms. They rely on the fact that i) linear learning can be formalized as a well-posed optimization problem; ii) nonlinear learning can be brought into linear learning thanks to the kernel trick and the mapping of the initial search space onto a high-dimensional feature space. The kernel is designed by the ML expert and it governs the efficiency of the SVM approach. In this paper, a new approach for the automatic design of kernels by Genetic Programming, called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a well-founded fitness function inspired by the margin criterion and a co-evolution framework ensuring the computational scalability of the approach. Empirical validation on standard ML benchmarks demonstrates that EKM is competitive with state-of-the-art SVMs with tuned hyper-parameters.
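The following sketch illustrates the flavour of a margin-inspired fitness evaluated on a training subset (EKM's co-evolution selects such subsets; here plain random subsampling stands in for it). It assumes scikit-learn, and the margin proxy 1/‖w‖ recovered from the dual coefficients is an illustrative choice, not the exact criterion of the paper.

```python
# Margin-like fitness of a candidate kernel, estimated on a random subset
# for scalability. Illustrative sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

def margin_fitness(kernel_fn, subset_size=200):
    # EKM co-evolves informative subsets; here we simply subsample at random.
    idx = rng.choice(len(X), size=subset_size, replace=False)
    Xs, ys = X[idx], y[idx]
    clf = SVC(kernel=kernel_fn, C=1.0).fit(Xs, ys)
    sv = Xs[clf.support_]                       # support vectors via their indices
    K_sv = kernel_fn(sv, sv)
    w_norm_sq = clf.dual_coef_ @ K_sv @ clf.dual_coef_.T
    return 1.0 / np.sqrt(w_norm_sq.item())      # larger margin -> fitter kernel

candidate = lambda A, B: rbf_kernel(A, B, gamma=0.1)
print("margin-based fitness:", margin_fitness(candidate))
```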
Genetically Constructed Kernels for Support Vector Machines
2006
Data mining for customer relationship management involves the task of binary classification, e.g. to distinguish between customers who are likely to respond to direct mail and those who are not. The support vector machine (SVM) is a powerful learning technique for this kind of problem. To obtain good classification results, the selection of an appropriate kernel function is crucial for SVM. Recently, the evolutionary construction of kernels by means of meta-heuristics has been proposed to automate model selection. In this paper we consider genetic algorithms (GA) to generate SVM kernels in a data-driven manner and investigate the potential of such hybrid algorithms with regard to classification accuracy, the generalisation ability of the resulting classifier, and computational efficiency. We contribute to the literature by: (1) extending current approaches for evolutionarily constructed kernels; (2) investigating their adequacy in a real-world business scenario; (3) considering runtime issues together with measures of classification effectiveness in a common framework.
An evolutionary approach to automatic kernel construction
2006
Kernel-based learning presents a unified approach to machine learning problems such as classification and regression. The selection of a kernel and associated parameters is a critical step in the application of any kernel-based method to a problem. This paper presents a data-driven evolutionary approach for constructing kernels, named KTree. An application of KTree to the Support Vector Machine (SVM) classifier is described.
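To make the idea of a tree-constructed kernel concrete, here is a minimal sketch in which a kernel expression tree (leaves are base kernels, internal nodes combine them) is evaluated into a Gram matrix and handed to scikit-learn's SVC; the node set and the example tree are illustrative, not the operators of KTree itself.

```python
# Minimal sketch of a tree-represented kernel evaluated into a Gram matrix.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evaluate(node, A, B):
    """Recursively evaluate a kernel expression tree on two sample sets."""
    op = node[0]
    if op == "rbf":
        return rbf_kernel(A, B, gamma=node[1])
    if op == "poly":
        return polynomial_kernel(A, B, degree=node[1])
    if op == "linear":
        return linear_kernel(A, B)
    if op == "add":                      # sum of PSD kernels is PSD
        return evaluate(node[1], A, B) + evaluate(node[2], A, B)
    if op == "mul":                      # element-wise product of PSD kernels is PSD
        return evaluate(node[1], A, B) * evaluate(node[2], A, B)
    raise ValueError(f"unknown node {op!r}")

# Example tree: (rbf(gamma=0.5) + linear) * poly(degree=2)
tree = ("mul", ("add", ("rbf", 0.5), ("linear",)), ("poly", 2))

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = SVC(kernel=lambda A, B: evaluate(tree, A, B), C=1.0)
print(cross_val_score(clf, X, y, cv=5).mean())
```

In an evolutionary search such trees would be generated, mutated and recombined, with a fitness such as cross-validated accuracy selecting the surviving kernel structures.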
Concerning the potential of evolutionary support vector machines
2007 IEEE Congress on Evolutionary Computation, 2007
In the present paper, we put forward a novel hybridization between support vector machines and evolutionary algorithms. Evolutionary support vector machines consider the classification task as in support vector machines but use an evolutionary algorithm to solve the optimization problem of determining the decision function. They can explicitly acquire the coefficients of the separating hyperplane, which is often not possible within the classical technique. More importantly, evolutionary support vector machines obtain the coefficients directly from the evolutionary algorithm and can refer to them at any point during a run. In addition, they do not require positive (semi-)definiteness of kernels in nonlinear learning. The concept can furthermore be extended to handle large amounts of data, a problem that frequently occurs in, e.g., spam mail detection, one of our test cases; an adapted chunking technique is used in that setting. In addition to two different representations, a crowding variant of the evolutionary algorithm is tested in order to investigate whether the performance of the algorithm is maintained; its global search capabilities would be important for the prospected co-evolution of non-standard kernels. Evolutionary support vector machines are validated on four real-world classification tasks; the results obtained show the promise of this new approach.
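A minimal sketch of the core idea of evolving the hyperplane coefficients (w, b) directly with an evolutionary loop instead of solving the SVM optimisation problem; the hinge-loss-plus-margin fitness and the (μ + λ) strategy are illustrative choices, not the exact setup of the paper.

```python
# Evolve (w, b) of a linear separator directly with a simple evolution strategy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
X = StandardScaler().fit_transform(X)
y_pm = 2 * y - 1                                   # labels in {-1, +1}
rng = np.random.default_rng(0)

def fitness(wb, C=1.0):
    w, b = wb[:-1], wb[-1]
    margins = y_pm * (X @ w + b)
    hinge = np.maximum(0.0, 1.0 - margins).sum()   # soft-margin violations
    return 0.5 * w @ w + C * hinge                 # lower is better

# (mu + lambda) evolution strategy with Gaussian mutation.
mu, lam, dim = 10, 40, X.shape[1] + 1
pop = rng.normal(0, 1, size=(mu, dim))
for generation in range(100):
    children = pop[rng.integers(mu, size=lam)] + rng.normal(0, 0.1, size=(lam, dim))
    union = np.vstack([pop, children])
    pop = union[np.argsort([fitness(ind) for ind in union])[:mu]]

best = pop[0]
accuracy = ((X @ best[:-1] + best[-1]) * y_pm > 0).mean()
print("training accuracy of evolved hyperplane:", accuracy)
```

Because the evolutionary loop only needs a fitness value, the same scheme keeps working if the linear decision function is replaced by one built from an arbitrary, not necessarily positive semi-definite, kernel, which is the property the abstract highlights.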
An evolutionary method for constructing complex SVM kernels
D. Simian, F. Stoica, "An Evolutionary Method for Constructing Complex SVM Kernels", in Recent Advances in Mathematics and Computers in Biology and Chemistry: Proceedings of the 10th International Conference on Mathematics and Computers in Biology and Chemistry (MCBC'09), Prague, Czech Republic, WSEAS Press, 2009, pp. 172-178, ISBN 978-960-474-062-8, ISSN 1790-5125