Haug, J., Tramountani, E., & Kasneci, G. (2022). Standardized Evaluation of Machine Learning Methods for Evolving Data Streams. arXiv:2204.13625 [cs, stat], April 2022. Paper: http://arxiv.org/abs/2204.13625
Due to the unspecified and dynamic nature of data streams, online machine learning requires powerful and flexible solutions. However, evaluating online machine learning methods under realistic conditions is difficult. Existing work therefore often draws on different heuristics and simulations that do not necessarily produce meaningful and reliable results. Indeed, in the absence of common evaluation standards, it often remains unclear how online learning methods will perform in practice or in comparison to similar work. In this paper, we propose a comprehensive set of properties for high-quality machine learning in evolving data streams. In particular, we discuss sensible performance measures and evaluation strategies for online predictive modelling, online feature selection and concept drift detection. As one of the first works, we also look at the interpretability of online learning methods. The proposed evaluation standards are provided in a new Python framework called float. Float is completely modular and allows the simultaneous integration of common libraries, such as scikit-multiflow or river, with custom code. Float is open-sourced and can be accessed at https://github.com/haugjo/float. In this sense, we hope that our work will contribute to more standardized, reliable and realistic testing and comparison of online machine learning methods.
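For readers who want a concrete picture of the evaluation setting the abstract refers to: online methods are commonly assessed prequentially (test-then-train), i.e. each arriving sample is first used to score the current model and only then to update it. The sketch below illustrates this scheme with river, one of the libraries the abstract says float can integrate. It is a minimal, assumption-laden example (river's Phishing toy dataset, a logistic regression pipeline) and does not use float's own API.

# Minimal prequential (test-then-train) evaluation sketch using river.
# Illustrates the evaluation scheme discussed in the abstract; this is
# NOT float's API. Dataset and model choices are illustrative only.
from river import datasets, linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LogisticRegression()
metric = metrics.Accuracy()

for x, y in datasets.Phishing():
    y_pred = model.predict_one(x)  # test on the incoming sample first
    metric.update(y, y_pred)      # score before the model sees the label
    model.learn_one(x, y)         # then train on that same sample

print(metric)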
@article{haug_standardized_2022,
	title = {Standardized {Evaluation} of {Machine} {Learning} {Methods} for {Evolving} {Data} {Streams}},
	url = {http://arxiv.org/abs/2204.13625},
	abstract = {Due to the unspecified and dynamic nature of data streams, online machine learning requires powerful and flexible solutions. However, evaluating online machine learning methods under realistic conditions is difficult. Existing work therefore often draws on different heuristics and simulations that do not necessarily produce meaningful and reliable results. Indeed, in the absence of common evaluation standards, it often remains unclear how online learning methods will perform in practice or in comparison to similar work. In this paper, we propose a comprehensive set of properties for high-quality machine learning in evolving data streams. In particular, we discuss sensible performance measures and evaluation strategies for online predictive modelling, online feature selection and concept drift detection. As one of the first works, we also look at the interpretability of online learning methods. The proposed evaluation standards are provided in a new Python framework called float. Float is completely modular and allows the simultaneous integration of common libraries, such as scikit-multiflow or river, with custom code. Float is open-sourced and can be accessed at https://github.com/haugjo/float. In this sense, we hope that our work will contribute to more standardized, reliable and realistic testing and comparison of online machine learning methods.},
	urldate = {2022-05-03},
	journal = {arXiv:2204.13625 [cs, stat]},
	author = {Haug, Johannes and Tramountani, Effi and Kasneci, Gjergji},
	month = apr,
	year = {2022},
	note = {arXiv: 2204.13625},
	keywords = {Computer Science - Machine Learning, Statistics - Machine Learning},
}
