Incremental on-line learning: A review and comparison of state of the art algorithms. Losing, V., Hammer, B., & Wersing, H. Neurocomputing, 275:1261–1274, January 2018.
Recently, incremental and on-line learning have gained increasing attention, especially in the context of big data and learning from data streams, which conflict with the traditional assumption of complete data availability. Even though a variety of different methods are available, it often remains unclear which of them is suitable for a specific task and how they perform in comparison to each other. We analyze the key properties of eight popular incremental methods representing different algorithm classes and evaluate them with regard to their on-line classification error as well as their behavior in the limit. Further, we discuss the often neglected issue of hyperparameter optimization specifically for each method and test how robustly it can be done based on a small set of examples. Our extensive evaluation on data sets with different characteristics gives an overview of the performance with respect to accuracy, convergence speed, and model complexity, facilitating the choice of the best method for a given application.
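
The on-line classification error evaluated in the paper is commonly measured with an interleaved test-then-train (prequential) protocol: each incoming example is first used to test the current model and only then to update it. Below is a minimal sketch of that protocol, assuming scikit-learn's SGDClassifier as a stand-in incremental learner; the function name, stream format, and hyperparameters are illustrative assumptions, not taken from the paper.

import numpy as np
from sklearn.linear_model import SGDClassifier

def prequential_error(stream, classes):
    # Interleaved test-then-train: predict on each example before learning from it.
    model = SGDClassifier(random_state=0)
    mistakes, tested = 0, 0
    for i, (x, y) in enumerate(stream):
        x = np.asarray(x, dtype=float).reshape(1, -1)
        if i > 0:  # the model is untrained before the first example, so skip testing it
            mistakes += int(model.predict(x)[0] != y)
            tested += 1
        model.partial_fit(x, [y], classes=classes)  # update on the same example afterwards
    return mistakes / max(tested, 1)

# Toy usage: a two-class Gaussian stream.
rng = np.random.default_rng(0)
stream = [(rng.normal(loc=3 * y, size=5), int(y)) for y in rng.integers(0, 2, size=1000)]
print(prequential_error(stream, classes=[0, 1]))  # running on-line error over the stream

In a setup like the paper's, the hyperparameters of each method would be tuned beforehand on a small prefix of the stream, matching the authors' discussion of hyperparameter optimization from a small set of examples.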
@article{losing_incremental_2018,
	title = {Incremental on-line learning: {A} review and comparison of state of the art algorithms},
	volume = {275},
	issn = {0925-2312},
	shorttitle = {Incremental on-line learning},
	url = {http://www.sciencedirect.com/science/article/pii/S0925231217315928},
	doi = {10.1016/j.neucom.2017.06.084},
	abstract = {Recently, incremental and on-line learning have gained increasing attention, especially in the context of big data and learning from data streams, which conflict with the traditional assumption of complete data availability. Even though a variety of different methods are available, it often remains unclear which of them is suitable for a specific task and how they perform in comparison to each other. We analyze the key properties of eight popular incremental methods representing different algorithm classes and evaluate them with regard to their on-line classification error as well as their behavior in the limit. Further, we discuss the often neglected issue of hyperparameter optimization specifically for each method and test how robustly it can be done based on a small set of examples. Our extensive evaluation on data sets with different characteristics gives an overview of the performance with respect to accuracy, convergence speed, and model complexity, facilitating the choice of the best method for a given application.},
	language = {en},
	urldate = {2020-03-17},
	journal = {Neurocomputing},
	author = {Losing, Viktor and Hammer, Barbara and Wersing, Heiko},
	month = jan,
	year = {2018},
	keywords = {Data streams, Hyperparameter optimization, Incremental learning, Model selection, On-line learning},
	pages = {1261--1274},
}
