Themis-ml: A Fairness-aware Machine Learning Interface for End-to-end Discrimination Discovery and Mitigation. Bantilan, N. arXiv:1710.06921 [cs], October 2017.
@article{bantilan_themis-ml:_2017,
	title = {Themis-ml: {A} {Fairness}-aware {Machine} {Learning} {Interface} for {End}-to-end {Discrimination} {Discovery} and {Mitigation}},
	shorttitle = {Themis-ml},
	url = {http://arxiv.org/abs/1710.06921},
	abstract = {As more industries integrate machine learning into socially sensitive decision processes like hiring, loan-approval, and parole-granting, we are at risk of perpetuating historical and contemporary socioeconomic disparities. This is a critical problem because on the one hand, organizations who use but do not understand the discriminatory potential of such systems will facilitate the widening of social disparities under the assumption that algorithms are categorically objective. On the other hand, the responsible use of machine learning can help us measure, understand, and mitigate the implicit historical biases in socially sensitive data by expressing implicit decision-making mental models in terms of explicit statistical models. In this paper we specify, implement, and evaluate a "fairness-aware" machine learning interface called themis-ml, which is intended for use by individual data scientists and engineers, academic research teams, or larger product teams who use machine learning in production systems.},
	urldate = {2018-01-06},
	journal = {arXiv:1710.06921 [cs]},
	author = {Bantilan, Niels},
	month = oct,
	year = {2017},
	note = {arXiv: 1710.06921},
	keywords = {Computer Science - Computers and Society, utah-fairness-group},
	annote = {Comment: Presented at the Data For Good Exchange 2017},
}
