LEARN++: an incremental learning algorithm for multilayer perceptron networks
Related papers
Learn++: An incremental learning algorithm for supervised neural networks
IEEE Transactions on Systems, Man, and Cybernetics, Part C: …, 2002
We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes an ensemble of classifiers, generating multiple hypotheses from training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.
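The ensemble loop described in this abstract (resample the data according to a distribution, train a weak MLP, weight its vote by log(1/β), then re-weight the examples the ensemble still misclassifies) can be outlined roughly as below. This is a minimal sketch assuming scikit-learn MLPs as base learners; the simplified distribution update, names, and constants are illustrative, not the authors' implementation.

```python
# Sketch of a Learn++-style incremental ensemble loop (illustrative only).
import numpy as np
from sklearn.neural_network import MLPClassifier

def learnpp_increment(X, y, ensemble, betas, n_hypotheses=5, rng=None):
    """Add n_hypotheses classifiers trained on one new data batch (X, y)."""
    rng = np.random.default_rng(0) if rng is None else rng
    D = np.full(len(X), 1.0 / len(X))                 # sampling distribution over examples
    for _ in range(n_hypotheses):
        idx = rng.choice(len(X), size=len(X), p=D)    # resample according to D
        clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)
        clf.fit(X[idx], y[idx])
        err = np.sum(D * (clf.predict(X) != y))       # distribution-weighted error
        if err > 0.5:                                 # discard hypotheses weaker than chance
            continue
        beta = max(err, 1e-3) / (1.0 - err)           # avoid a zero voting/update factor
        ensemble.append(clf)
        betas.append(beta)
        # emphasize examples the current ensemble still gets wrong
        votes = weighted_majority_vote(ensemble, betas, X)
        D = np.where(votes == y, D * beta, D)
        D = D / D.sum()
    return ensemble, betas

def weighted_majority_vote(ensemble, betas, X):
    classes = np.unique(np.concatenate([c.classes_ for c in ensemble]))
    scores = np.zeros((len(X), len(classes)))
    for clf, beta in zip(ensemble, betas):
        w = np.log(1.0 / beta)                        # log(1/beta) voting weight
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            scores[:, j] += w * (pred == c)
    return classes[np.argmax(scores, axis=1)]
```

Successive data batches would be passed to `learnpp_increment` with the same `ensemble` and `betas` lists, so classifiers from earlier sessions are retained and never retrained.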
Implementation of Incremental Learning in Artificial Neural Networks
Nowadays, the use of artificial neural networks (ANN), in particular the Multilayer Perceptron (MLP), is very popular for tasks such as pattern recognition, data mining, and process automation. However, these models still have weaknesses compared with human capabilities. A characteristic of human memory is the ability to learn new concepts without forgetting what was learned in the past, which has been a disadvantage in the field of artificial neural networks. How can we add new knowledge to the network without forgetting what has already been learned, and without repeating the exhaustive ANN training process? In exhaustive training, a complete training set is used, with all objects of all classes. In this work, we present a novel incremental learning algorithm for the MLP. New knowledge is incorporated into the target network without exhaustive retraining: objects of a new class, which were not included in the training of the source network, provide this knowledge. The algorithm consists of taking the final weights from the source network, correcting them with Support Vector Machine tools, and transferring the corrected weights to a target network. This target network is then trained with a training set that has been preprocessed beforehand. The resulting efficiency of the target network is comparable to that of an exhaustively trained network.
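A minimal sketch of the weight-transfer step this abstract describes is given below: hidden-layer weights from the trained source MLP are copied into a target MLP whose output layer gains units for the new class. The SVM-based weight correction is abstracted away here, and the dictionary-of-arrays representation is an assumption for illustration only.

```python
# Sketch of transferring source-network weights to a target network that
# must also recognise a new class (illustrative, not the paper's method).
import numpy as np

def transfer_weights(source, n_new_classes=1, rng=None):
    """source: dict with 'W_hidden', 'b_hidden', 'W_out', 'b_out' (numpy arrays)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return {
        "W_hidden": source["W_hidden"].copy(),     # reuse learned feature detectors
        "b_hidden": source["b_hidden"].copy(),
        # extend the output layer: old rows are kept, new-class rows start small
        "W_out": np.vstack([
            source["W_out"],
            0.01 * rng.standard_normal((n_new_classes, source["W_out"].shape[1])),
        ]),
        "b_out": np.concatenate([source["b_out"], np.zeros(n_new_classes)]),
    }
```

The target network built this way starts from the source network's knowledge and only needs fine-tuning on the new-class data rather than a full retraining pass.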
Learn++.MT: A New Approach to Incremental Learning
2004
Learn++, an ensemble-of-classifiers algorithm, was recently introduced that is capable of incrementally learning new information from datasets that become available consecutively, even if the new data introduce additional classes that were not formerly seen. The algorithm does not require access to previously used datasets, yet it is capable of largely retaining the previously acquired knowledge. However, Learn++ suffers from an inherent "out-voting" problem when asked to learn new classes, which causes it to generate an unnecessarily large number of classifiers. This paper proposes a modified version of the algorithm, called Learn++.MT, that not only reduces the number of classifiers generated but also improves performance. The out-voting problem, the new algorithm, and its promising results on two benchmark datasets as well as on one real-world application are presented.
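The out-voting problem can be shown with a toy calculation, sketched below: many old classifiers that have never seen the new class collectively out-vote the few new ones. The discount rule used to fix it here (ignoring a classifier's vote against a class it was never trained on) is only a simplified illustration of the Learn++.MT idea, and the numbers are made up.

```python
# Toy illustration of "out-voting" and a Learn++.MT-style vote adjustment.
import numpy as np

vote_weights        = np.array([1.0] * 8 + [1.2] * 2)   # 8 old classifiers, 2 new ones
predicts_new_class  = np.array([False] * 8 + [True] * 2)
saw_new_class       = np.array([False] * 8 + [True] * 2)

# Plain weighted majority vote on an instance of the new class: the old
# classifiers, which cannot predict that class, out-vote the new ones.
plain = vote_weights[predicts_new_class].sum() - vote_weights[~predicts_new_class].sum()
print("plain vote favours new class:", plain > 0)        # False

# Sketch of the fix: a classifier never trained on the class does not get to
# vote against it, so old classifiers cannot veto the new class.
adjusted = np.where(~saw_new_class & ~predicts_new_class, 0.0, vote_weights)
mt = adjusted[predicts_new_class].sum() - adjusted[~predicts_new_class].sum()
print("adjusted vote favours new class:", mt > 0)         # True
```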
Neural Networks, IEEE Transactions …, 2003
The response of a multilayered perceptron (MLP) network on points that are far away from the boundary of its training data is generally not reliable. Ideally, a network should not respond to data points that lie far away from the boundary of its training data. We propose a new training scheme for MLPs as classifiers which ensures this. Our training scheme involves training a subnet for each class present in the training data. Each subnet can decide whether a data point belongs to a certain class or not. Training each subnet requires data from the class which the subnet represents, along with some points outside the boundary of that class. For this purpose we propose an easy but approximate method to generate points outside the boundary of a pattern class. The trained subnets are then merged to solve the multiclass classification problem. We show through simulations that an MLP trained by our method does not respond to points that lie outside the boundary of its training sample. Our network also handles overlapping classes better. In addition, this scheme enables incremental training of an MLP, i.e., the MLP can learn new knowledge without forgetting the old knowledge.
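A minimal sketch of this one-subnet-per-class scheme, assuming scikit-learn MLPs, is shown below. The way "outside the boundary" points are generated (pushing samples away from the class mean) is a stand-in for the paper's approximate method, which is not reproduced here; labels are assumed to be integers, with -1 marking "no response".

```python
# Sketch of per-class subnets that refuse to respond far from the training data.
import numpy as np
from sklearn.neural_network import MLPClassifier

def make_outside_points(X_class, scale=2.5, rng=None):
    """Crude stand-in for generating points outside a class boundary."""
    rng = np.random.default_rng(0) if rng is None else rng
    mean = X_class.mean(axis=0)
    return mean + scale * (X_class - mean) + 0.1 * rng.standard_normal(X_class.shape)

def train_subnets(X, y):
    subnets = {}
    for c in np.unique(y):                       # one binary subnet per class
        X_in = X[y == c]
        X_out = make_outside_points(X_in)
        X_bin = np.vstack([X_in, X_out])
        y_bin = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000)
        net.fit(X_bin, y_bin)
        subnets[c] = net
    return subnets

def classify(subnets, X, threshold=0.5):
    """Return the winning class per point, or -1 where no subnet responds."""
    classes = list(subnets)
    probs = np.column_stack([subnets[c].predict_proba(X)[:, 1] for c in classes])
    labels = np.array(classes)[probs.argmax(axis=1)]
    return np.where(probs.max(axis=1) >= threshold, labels, -1)
```

Because each class has its own subnet, adding a new class only requires training one more subnet and merging it in, which is where the incremental-training property comes from.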
An Incremental Neural Network Construction Algorithm for Training Multilayer Perceptrons
2003
The problem of determining the architecture of a multilayer perceptron, together with the disadvantages of the standard backpropagation algorithm, has directed research towards algorithms that determine not only the weights but also the structure of the network necessary for learning the data. We propose a Constructive Algorithm with Multiple Operators using Statistical Test (MOST) for determining the architecture. The networks constructed by MOST can have multiple hidden layers with multiple hidden units in each layer. The algorithm uses node removal, node addition, and layer addition, and determines the number of nodes in each layer by heuristics. It applies a statistical test to compare different architectures. The results are promising and near optimal.
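The architecture-comparison step can be sketched as below: two candidate MLP architectures are compared via a paired test on per-fold validation scores, and the grown network is kept only if the difference is significant. The specific test, scoring, and threshold are assumptions for illustration and are not necessarily those used by MOST.

```python
# Hedged sketch of comparing two candidate architectures with a statistical test.
from scipy.stats import ttest_rel
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def prefer_candidate(X, y, current_layers, candidate_layers, alpha=0.05, cv=5):
    """Accept the grown architecture only if it is significantly better."""
    cur = cross_val_score(
        MLPClassifier(hidden_layer_sizes=current_layers, max_iter=500), X, y, cv=cv)
    cand = cross_val_score(
        MLPClassifier(hidden_layer_sizes=candidate_layers, max_iter=500), X, y, cv=cv)
    _, p = ttest_rel(cand, cur)                  # paired test over the same folds
    return cand.mean() > cur.mean() and p < alpha
```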
Incremental backpropagation learning networks
IEEE Transactions on Neural Networks, 1996
How to learn new knowledge without forgetting old knowledge is a key issue in designing an incremental-learning neural network. In this paper, we present a new incremental learning method for pattern recognition, called the "incremental backpropagation learning network," which employs bounded weight modification and structural adaptation learning rules and applies initial knowledge to constrain the learning process. The viability of this approach is demonstrated for classification problems including the iris and the promoter domains.
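The core "bounded weight modification" idea can be sketched in a few lines: during an incremental session, each weight may drift only a limited distance from the value it had when the session started, which protects previously acquired knowledge. The bound and update rule below are illustrative, not the paper's exact formulation.

```python
# Sketch of a bounded weight update for incremental learning.
import numpy as np

def bounded_update(W, W_anchor, grad, lr=0.01, bound=0.1):
    """One gradient step on W, clipped to stay within `bound` of W_anchor,
    the weights frozen at the start of the incremental session."""
    W_new = W - lr * grad
    return np.clip(W_new, W_anchor - bound, W_anchor + bound)
```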
A Constructive Incremental Learning Algorithm for Binary Classification Tasks
2006
This paper presents i-AA1*, a constructive, incremental learning algorithm for a special class of weightless, self-organizing networks. In i-AA1*, learning consists of adapting the nodes' functions and the network's overall topology as each new training pattern is presented. Provided the training data is consistent, computational complexity is low and prior factual knowledge may be used to "prime" the network and improve its predictive accuracy.
A constructive algorithm for unsupervised learning with incremental neural network
Journal of Applied Research and Technology, 2015
Artificial neural networks (ANN) have wide applications such as data processing and classification. However, compared with other classification methods, an ANN needs enormous memory space and training time to build a model, which can make it infeasible in practical applications. In this paper, we integrate ideas from human learning mechanisms with existing ANN models. We propose an incremental neural network construction framework for unsupervised learning. In this framework, a neural network is incrementally constructed from subnets corresponding to individual instances. First, a subnet maps the relation between inputs and outputs for an observed instance. Then, when multiple subnets are combined, the resulting network retains the ability to generate the same outputs for the same inputs. This makes the learning process unsupervised and inherent to the framework. In our experiment, Reuters-21578 was used as the dataset to show the effectiveness of the proposed method on text classification. The experimental results showed that our method can effectively classify texts, with a best F1-measure of 92.5%. They also showed that the learning algorithm enhances accuracy effectively and efficiently. The framework also scales well with network size: both training and testing times showed a constant trend, which supports the feasibility of the method for practical use.
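A very loose sketch of the instance-driven construction idea is given below: each new instance contributes a small "subnet" (here reduced to a stored prototype), and the combined model reproduces the outputs it has already learned for the same inputs. This is only meant to illustrate incremental construction; it is not the paper's actual architecture.

```python
# Sketch of incremental, instance-by-instance model construction.
import numpy as np

class IncrementalNet:
    def __init__(self):
        self.prototypes, self.outputs = [], []

    def add_instance(self, x, y):
        """One 'subnet' per observed instance; nothing already learned is retrained."""
        self.prototypes.append(np.asarray(x, dtype=float))
        self.outputs.append(y)

    def predict(self, x):
        """Combined model: answer with the output of the closest stored subnet."""
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - p) for p in self.prototypes]
        return self.outputs[int(np.argmin(dists))]
```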
Overview of some incremental learning algorithms
Fuzzy Systems Conference, …, 2007
Incremental learning (IL) plays a key role in many real-world applications where data arrives over time. It is mainly concerned with learning models in an ever-changing environment. In this paper, we review some of the incremental learning algorithms and evaluate them within the same experimental settings in order to provide as objective a comparative study as possible. These algorithms include fuzzy ARTMAP, nearest generalized exemplar, growing neural gas, the generalized fuzzy min-max neural network, and IL based on function decomposition (ILFD).
An incremental neural network with a reduced architecture
2012
This paper proposes a technique, called Evolving Probabilistic Neural Network (ePNN), that presents many interesting features, including incremental learning, an evolving architecture, the capacity to learn continually throughout its existence, and the requirement that each training sample be used only once in the training phase, without reprocessing. A series of experiments was performed on data sets in the public domain; the results indicate that ePNN is superior or equal to the other incremental neural networks evaluated in this paper. These results also demonstrate the advantage of the small ePNN architecture and show that its architecture is more stable than those of the other incremental neural networks evaluated. ePNN thus appears to be a promising alternative for a quick learning system and a fast classifier with a low computational cost.