Bootstrap-inspired techniques in computational intelligence

New Applications of Ensembles of Classifiers

Ensembles of classifiers based on Kernel density estimators

Construction of High-accuracy Ensemble of Classifiers

Some proposals for combining ensemble classifiers

A New Method for Constructing Classifier Ensembles

A Combination of Methods for Building Ensembles of Classifiers

Abstract: In this paper we present an extensive study of different methods for building ensembles of classifiers. We examine variants of ensemble methods that are based on perturbing the feature set, and we illustrate their power by applying them to a number of different problems. We find that the best-performing ensemble is obtained by combining a random subspace approach with a cluster-based input-decimated ensemble and the principal direction oracle.
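
The feature-perturbation idea behind the random subspace component is simple enough to sketch. Below is a minimal illustration in Python, assuming scikit-learn-style decision trees as base learners; the function names, the 50% feature fraction, and the tree base learner are illustrative choices, not details taken from the paper (the cluster-based input decimation and principal direction oracle components are not reproduced here).

```python
import numpy as np
from collections import Counter
from sklearn.tree import DecisionTreeClassifier

def fit_random_subspace(X, y, n_estimators=25, feat_frac=0.5, seed=0):
    """Random subspace method (Ho, 1998): each base classifier is
    trained on a random subset of the features, not a resampled row set."""
    rng = np.random.default_rng(seed)
    k = max(1, int(feat_frac * X.shape[1]))
    ensemble = []
    for _ in range(n_estimators):
        # sample feature indices without replacement for this member
        feats = rng.choice(X.shape[1], size=k, replace=False)
        clf = DecisionTreeClassifier(random_state=0).fit(X[:, feats], y)
        ensemble.append((feats, clf))
    return ensemble

def predict_random_subspace(ensemble, X):
    """Combine the per-subspace predictions by plurality vote."""
    votes = np.stack([clf.predict(X[:, feats]) for feats, clf in ensemble])
    return np.array([Counter(votes[:, i]).most_common(1)[0][0]
                     for i in range(X.shape[0])])
```

Because every member sees a different projection of the data, the ensemble gains diversity even when the training rows are identical, which is the point of feature perturbation as opposed to row resampling.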

An Empirical Study of Ensemble Techniques (Bagging, Boosting and Stacking)

2019

Ensemble methods are popular strategies for improving the predictive ability of a machine learning model. An ensemble consists of a set of individually trained base learners/models whose predictions are combined when classifying new cases. Previous research has shown that an ensemble is on average more accurate than a single base model. Bagging, Boosting and Stacking are popular ensemble techniques, which we study in this paper. We evaluated these ensembles on 9 data sets. From our results we observed the following. First, an ensemble is always more accurate than a single base model. Secondly, Boosting ensembles are on average better than Bagging, while Stacking (meta-learning) is on average more accurate than both Boosting and Bagging. Further experiments also show that the gain in predictive power of any ensemble may sometimes be small, or may even decrease, depending on the data set.
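
As a concrete illustration of the three techniques compared in the study, the following sketch trains Bagging, Boosting (AdaBoost), and Stacking ensembles side by side, assuming scikit-learn; the breast-cancer data set, base learners, and five-fold cross-validation are illustrative stand-ins, not the nine data sets or the exact setup used in the paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    # Bagging: bootstrap-resampled copies of one base learner, majority vote
    "bagging": BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                                 random_state=0),
    # Boosting: learners fitted sequentially, reweighting misclassified cases
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    # Stacking: a meta-learner is trained on the base learners' predictions
    "stacking": StackingClassifier(
        estimators=[("tree", DecisionTreeClassifier(random_state=0)),
                    ("knn", KNeighborsClassifier())],
        final_estimator=LogisticRegression(max_iter=1000)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:12s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```

On any single data set the ranking of the three techniques can differ from the average ranking reported above, which is exactly the caveat the abstract closes with.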

On the Performance of Ensembles of Classifiers Based on Kernel Density Estimation

Proceedings of the International Conference on Computer, Communication and Control Technologies, 2003

A combination of classification rules (classifiers) is known as an ensemble, and in general it is more accurate than the individual classifiers used to build it. Two popular methods for constructing an ensemble are Bagging (Breiman, 1996) and Boosting (Freund and Schapire, 1996). Both methods rely on resampling techniques to obtain different training sets for each of the classifiers. Previous work has shown that Bagging as well as Boosting is very effective for unstable classifiers. In this paper we present experimental results of applying both combining techniques to classifiers in which the class-conditional density is estimated using kernel density estimators. The effect of sequential forward selection on the performance of the ensemble is also considered.
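
The following is a minimal sketch of the kind of classifier studied here: class-conditional densities estimated by KDE, combined through Bayes' rule, then bagged over bootstrap resamples. It assumes scikit-learn's KernelDensity; the Gaussian kernel, the fixed bandwidth, and integer class labels are illustrative assumptions, and the sequential forward selection step is omitted.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class KDEClassifier:
    """Bayes classifier whose class-conditional densities come from KDE."""
    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # one density estimate per class, plus the empirical class prior
        self.kdes_ = [KernelDensity(bandwidth=self.bandwidth).fit(X[y == c])
                      for c in self.classes_]
        self.log_priors_ = [np.log((y == c).mean()) for c in self.classes_]
        return self

    def predict(self, X):
        # argmax over log p(x | c) + log p(c); score_samples is log-density
        scores = np.stack([kde.score_samples(X) + lp
                           for kde, lp in zip(self.kdes_, self.log_priors_)])
        return self.classes_[np.argmax(scores, axis=0)]

def bagged_kde_predict(X_train, y_train, X_test, n_estimators=15,
                       bandwidth=1.0, seed=0):
    """Bagging: fit one KDE classifier per bootstrap sample, then vote.
    Assumes integer class labels (needed by np.bincount below)."""
    rng = np.random.default_rng(seed)
    n = len(y_train)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of the rows
        clf = KDEClassifier(bandwidth).fit(X_train[idx], y_train[idx])
        votes.append(clf.predict(X_test))
    votes = np.stack(votes)
    # plurality vote across the ensemble for each test point
    return np.array([np.bincount(col).argmax() for col in votes.T])
```

A KDE classifier with a small bandwidth is quite sensitive to the training sample, which is the instability that makes resampling-based combining attractive for it.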

Increasing the Accuracy of Predictive Algorithms: A Review of Ensembles of Classifiers

The deterministic subspace method for constructing classifier ensembles