Improving Accountability in Recommender Systems Research Through Reproducibility. Bellogín, A. & Said, A. January 2021. arXiv:2102.00482 [cs.IR]
@unpublished{bellogin_improving_2021,
	title = {Improving {Accountability} in {Recommender} {Systems} {Research} {Through} {Reproducibility}},
	url = {http://arxiv.org/abs/2102.00482},
	abstract = {Reproducibility is a key requirement for scientific progress. It allows
the reproduction of the works of others, and, as a consequence, to fully
trust the reported claims and results. In this work, we argue that, by
facilitating reproducibility of recommender systems experimentation, we
indirectly address the issues of accountability and transparency in
recommender systems research from the perspectives of practitioners,
designers, and engineers aiming to assess the capabilities of published
research works. These issues have become increasingly prevalent in recent
literature. Reasons for this include societal movements around intelligent
systems and artificial intelligence striving towards fair and objective
use of human behavioral data (as in Machine Learning, Information
Retrieval, or Human-Computer Interaction). Society has grown to expect
explanations and transparency standards regarding the underlying
algorithms making automated decisions for and around us. This work surveys
existing definitions of these concepts, and proposes a coherent
terminology for recommender systems research, with the goal to connect
reproducibility to accountability. We achieve this by introducing several
guidelines and steps that lead to reproducible and, hence, accountable
experimental workflows and research. We additionally analyze several
instantiations of recommender system implementations available in the
literature, and discuss the extent to which they fit in the introduced
framework. With this work, we aim to shed light on this important problem,
and facilitate progress in the field by increasing the accountability of
research.},
	author = {Bellogín, Alejandro and Said, Alan},
	month = jan,
	year = {2021},
	note = {arXiv:2102.00482 [cs.IR]},
}
