MultiBoosting: A Technique for Combining Boosting and Wagging
References
Ali, K., Brunk, C., & Pazzani, M. (1994). On learning multiple descriptions of a concept. In Proceedings of Tools with Artificial Intelligence (pp. 476–483). New Orleans, LA.
Bauer, E. & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36, 105–139.
Blake, C., Keogh, E., & Merz, C. J. (1999). UCI repository of machine learning databases. [Machine-readable data repository]. University of California, Department of Information and Computer Science, Irvine, CA.
Breiman, L. (1996a). Bagging predictors. Machine Learning, 24, 123–140.
Breiman, L. (1996b). Bias, variance, and arcing classifiers. Technical Report 460. Berkeley, CA: Department of Statistics, University of California.
Breiman, L. (1997). Arcing the edge. Technical Report 486. Berkeley, CA: Department of Statistics, University of California.
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and Regression Trees. Belmont, CA: Wadsworth International.
Dietterich, T. G. (1998). Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10(7), 1895–1923.
Freund, Y. & Schapire, R. E. (1995). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55, 119–139.
Freund, Y. & Schapire, R. E. (1996). Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning (pp. 148–156). Bari, Italy: Morgan Kaufmann.
Friedman, J. H. (1997). On bias, variance, 0/1-loss, and the curse-of-dimensionality. Data Mining and Knowledge Discovery, 1, 55–77.
Friedman, J., Hastie, T., & Tibshirani, R. (2000). Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2), 337–407.
Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. Neural Computation, 4, 1–48.
Kohavi, R. & Wolpert, D. (1996). Bias plus variance decomposition for zero-one loss functions. In Proceedings of the Thirteenth International Conference on Machine Learning (pp. 275–283). Bari, Italy: Morgan Kaufmann.
Kong, E. B. & Dietterich, T. G. (1995). Error-correcting output coding corrects bias and variance. In Proceedings of the Twelfth International Conference on Machine Learning (pp. 313–321). Tahoe City, CA: Morgan Kaufmann.
Krogh, A. & Vedelsby, J. (1995). Neural network ensembles, cross validation, and active learning. In G. Tesauro, D. Touretzky, & T. Leen (Eds.), Advances in Neural Information Processing Systems (Vol. 7). Cambridge, MA: MIT Press.
Nock, R. & Gascuel, O. (1995). On learning decision committees. In Proceedings of the Twelfth International Conference on Machine Learning (pp. 413–420). Tahoe City, CA: Morgan Kaufmann.
Oliver, J. J. & Hand, D. J. (1995). On pruning and averaging decision trees. In Proceedings of the Twelfth International Conference on Machine Learning (pp. 430–437). Tahoe City, CA: Morgan Kaufmann.
Quinlan, J. R. (1996). Bagging, boosting, and C4.5. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (pp. 725–730). AAAI/MIT Press.
Rao, R. B., Gordon, D., & Spears, W. (1995). For every generalization action is there really an equal and opposite reaction? Analysis of the conservation law for generalization performance. In Proceedings of the Twelfth International Conference on Machine Learning (pp. 471–479). Tahoe City, CA: Morgan Kaufmann.
Salzberg, S. L. (1997). On comparing classifiers: Pitfalls to avoid and a recommended approach. Data Mining and Knowledge Discovery, 1, 317–327.
Schaffer, C. (1994). A conservation law for generalization performance. In Proceedings of the 1994 International Conference on Machine Learning. Morgan Kaufmann.
Schapire, R. E., Freund, Y., Bartlett, P., & Lee, W. S. (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26, 1651–1686.
Wolpert, D. H. (1992). Stacked generalization. Neural Networks, 5, 241–259.
Wolpert, D. H. (1995). Off-training set error and a priori distinctions between learning algorithms. Technical Report SFI TR 95-01-003. Santa Fe, NM: The Santa Fe Institute.
Zheng, Z. & Webb, G. I. (1998). Multiple boosting: A combination of boosting and bagging. In Proceedings of the 1998 International Conference on Parallel and Distributed Processing Techniques and Applications (pp. 1133–1140). CSREA Press.