Learn++: an incremental learning algorithm for supervised neural networks. Polikar, R., Udpa, L., Udpa, S. S., & Honavar, V. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 31(4):497–508, November 2001. doi:10.1109/5326.983933

Abstract: We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time it does not forget previously acquired knowledge. Learn++ utilizes an ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.
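As a rough illustration of the procedure the abstract describes (sample training data according to a distribution, train a weak neural network on the sample, combine all hypotheses by weighted majority voting, then reweight instances the ensemble already classifies correctly), here is a minimal Python sketch built on scikit-learn's MLPClassifier as the base learner. The names learnpp_fit and weighted_majority_vote, the network size, and the number of rounds T are illustrative assumptions, not the authors' reference implementation, and some of the paper's safeguards (such as discarding a composite hypothesis whose error exceeds 1/2) are simplified away.

import numpy as np
from sklearn.neural_network import MLPClassifier

def weighted_majority_vote(hypotheses, log_weights, X, classes):
    # Each hypothesis casts a vote of weight log(1/beta) for its predicted class.
    classes = np.asarray(classes)
    votes = np.zeros((len(X), len(classes)))
    for h, lw in zip(hypotheses, log_weights):
        pred = h.predict(X)
        for j, c in enumerate(classes):
            votes[pred == c, j] += lw
    return classes[np.argmax(votes, axis=1)]

def learnpp_fit(batches, classes, T=5, seed=0):
    # Grow an ensemble over a sequence of (X, y) batches without revisiting
    # earlier batches: knowledge persists in the stored hypotheses and weights.
    rng = np.random.default_rng(seed)
    hypotheses, log_weights = [], []
    for X, y in batches:                # one incremental learning session per batch
        n = len(y)
        w = np.full(n, 1.0 / n)         # instance weights, uniform at first
        for _ in range(T):
            dist = w / w.sum()          # normalize weights into a sampling distribution
            idx = rng.choice(n, size=n, replace=True, p=dist)
            h = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                              random_state=int(rng.integers(1 << 31)))
            h.fit(X[idx], y[idx])       # train a weak hypothesis on the drawn sample
            eps = dist[h.predict(X) != y].sum()  # weighted error of the new hypothesis
            if eps > 0.5:               # too weak: discard and resample
                continue
            beta = eps / (1.0 - eps)
            hypotheses.append(h)
            log_weights.append(np.log(1.0 / max(beta, 1e-12)))
            # Composite hypothesis of everything retained so far, and its error.
            H = weighted_majority_vote(hypotheses, log_weights, X, classes)
            E = dist[H != y].sum()
            B = max(E, 1e-12) / max(1.0 - E, 1e-12)  # composite normalized error
            w[H == y] *= B              # down-weight what the ensemble already gets right
    return hypotheses, log_weights

New data are then classified with weighted_majority_vote(hypotheses, log_weights, X_new, classes); because each incremental session only appends hypotheses and their voting weights, earlier batches never need to be revisited, which is the property the abstract emphasizes.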
@article{polikar_learn_2001,
title = {Learn++: an incremental learning algorithm for supervised neural networks},
volume = {31},
issn = {1558-2442},
shorttitle = {Learn++},
doi = {10.1109/5326.983933},
abstract = {We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time it does not forget previously acquired knowledge. Learn++ utilizes an ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.},
number = {4},
journal = {IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)},
author = {Polikar, R. and Udpa, L. and Udpa, S. S. and Honavar, V.},
month = nov,
year = {2001},
keywords = {Classification algorithms, Costs, Knowledge acquisition, Multilayer perceptrons, Neural networks, Pattern recognition, Stability, Training data, Upper bound, Voting},
pages = {497--508},
}
{"_id":"impS4CkjFTKkrKjvn","bibbaseid":"polikar-upda-upda-honavar-learnanincrementallearningalgorithmforsupervisedneuralnetworks-2001","author_short":["Polikar, R.","Upda, L.","Upda, S.","Honavar, V."],"bibdata":{"bibtype":"article","type":"article","title":"Learn++: an incremental learning algorithm for supervised neural networks","volume":"31","issn":"1558-2442","shorttitle":"Learn++","doi":"10.1109/5326.983933","abstract":"We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.","number":"4","journal":"IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)","author":[{"propositions":[],"lastnames":["Polikar"],"firstnames":["R."],"suffixes":[]},{"propositions":[],"lastnames":["Upda"],"firstnames":["L."],"suffixes":[]},{"propositions":[],"lastnames":["Upda"],"firstnames":["S.S."],"suffixes":[]},{"propositions":[],"lastnames":["Honavar"],"firstnames":["V."],"suffixes":[]}],"month":"November","year":"2001","note":"Conference Name: IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)","keywords":"Classification algorithms, Costs, Knowledge acquisition, Multilayer perceptrons, Neural networks, Pattern recognition, Stability, Training data, Upper bound, Voting","pages":"497–508","bibtex":"@article{polikar_learn_2001,\n\ttitle = {Learn++: an incremental learning algorithm for supervised neural networks},\n\tvolume = {31},\n\tissn = {1558-2442},\n\tshorttitle = {Learn++},\n\tdoi = {10.1109/5326.983933},\n\tabstract = {We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. 
A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.},\n\tnumber = {4},\n\tjournal = {IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)},\n\tauthor = {Polikar, R. and Upda, L. and Upda, S.S. and Honavar, V.},\n\tmonth = nov,\n\tyear = {2001},\n\tnote = {Conference Name: IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews)},\n\tkeywords = {Classification algorithms, Costs, Knowledge acquisition, Multilayer perceptrons, Neural networks, Pattern recognition, Stability, Training data, Upper bound, Voting},\n\tpages = {497--508},\n}\n\n\n\n","author_short":["Polikar, R.","Upda, L.","Upda, S.","Honavar, V."],"key":"polikar_learn_2001","id":"polikar_learn_2001","bibbaseid":"polikar-upda-upda-honavar-learnanincrementallearningalgorithmforsupervisedneuralnetworks-2001","role":"author","urls":{},"keyword":["Classification algorithms","Costs","Knowledge acquisition","Multilayer perceptrons","Neural networks","Pattern recognition","Stability","Training data","Upper bound","Voting"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/mh_lenguyen","dataSources":["iwKepCrWBps7ojhDx"],"keywords":["classification algorithms","costs","knowledge acquisition","multilayer perceptrons","neural networks","pattern recognition","stability","training data","upper bound","voting"],"search_terms":["learn","incremental","learning","algorithm","supervised","neural","networks","polikar","upda","upda","honavar"],"title":"Learn++: an incremental learning algorithm for supervised neural networks","year":2001}