Research of assembling optimized classification algorithm by neural network based on Ordinary Least Squares (OLS)
Abstract
This paper establishes a new optimized classification algorithm assembled from neural networks based on Ordinary Least Squares (OLS). When recognizing complex high-dimensional data with neural networks, network design is challenging, and a single network model can hardly achieve satisfactory recognition accuracy. First, feature dimension reduction is performed so that the network is easier to design. An Elman neural network based on PCA serves as sub-classifier I; its recognition precision is relatively high, but its convergence rate is unsatisfactory. An RBF neural network based on factor analysis serves as sub-classifier II; it converges quickly, but its recognition precision is relatively low. To compensate for these deficiencies, the two sub-classifiers are combined by ensemble learning, with the optimal weight of each sub-classifier determined by the OLS principle, yielding the assembled optimized classification algorithm; this also compensates, to some extent, for the information lost through dimensionality reduction. Finally, the model is validated by case analysis.
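The OLS weighting step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function and variable names are assumptions, and a toy binary-label setup with two synthetic sub-classifier output vectors stands in for the PCA–Elman and factor-analysis–RBF sub-classifiers of the paper.

```python
import numpy as np

def ols_ensemble_weights(f1, f2, y):
    """Solve min_w ||F w - y||^2 for the stacked sub-classifier outputs F = [f1, f2]."""
    F = np.column_stack([f1, f2])
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

def ensemble_predict(f1, f2, w, threshold=0.5):
    """Combine the two sub-classifier outputs with the OLS weights, then threshold."""
    scores = w[0] * f1 + w[1] * f2
    return (scores >= threshold).astype(int)

# Toy example: sub-classifier I is more precise (smaller output noise),
# sub-classifier II is coarser; OLS picks weights that best fit the labels.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)
f1 = y + 0.1 * rng.standard_normal(200)   # stand-in for the PCA + Elman output
f2 = y + 0.4 * rng.standard_normal(200)   # stand-in for the FA + RBF output
w = ols_ensemble_weights(f1, f2, y)
pred = ensemble_predict(f1, f2, w)
accuracy = (pred == y).mean()
```

In this synthetic setup the OLS solution assigns the larger weight to the less noisy sub-classifier, which is the intuition behind weighting the high-precision Elman network and the fast RBF network by how well each reproduces the training labels.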
Acknowledgments
This work is supported by the Basic Research Program (Natural Science Foundation) of Jiangsu Province of China (No. BK2009093), the National Natural Science Foundation of China (Nos. 60975039 and 41074003), and the Opening Foundation of the Key Laboratory of Intelligent Information Processing of the Chinese Academy of Sciences (No. IIP2010-1).
Author information
Authors and Affiliations
- School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, 221116, People’s Republic of China
Xinzheng Xu, Shifei Ding, Weikuan Jia & Gang Ma - Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100080, China
Shifei Ding - Geomatics College, Shandong University of Science and Technology, Qingdao, 266510, China
Fengxiang Jin
Authors
- Xinzheng Xu
- Shifei Ding
- Weikuan Jia
- Gang Ma
- Fengxiang Jin
Corresponding author
Correspondence to Shifei Ding.
About this article
Cite this article
Xu, X., Ding, S., Jia, W. et al. Research of assembling optimized classification algorithm by neural network based on Ordinary Least Squares (OLS). Neural Comput Appl 22, 187–193 (2013). https://doi.org/10.1007/s00521-011-0694-3
- Received: 17 April 2011
- Accepted: 04 July 2011
- Published: 22 July 2011
- Issue date: January 2013
- DOI: https://doi.org/10.1007/s00521-011-0694-3