Race and Gender. Gebru, T. In Dubber, M. D., Pasquale, F., & Das, S., editors, The Oxford Handbook of Ethics of AI. Oxford University Press, July 2020.
This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.
@incollection{gebru_race_2020,
	title = {Race and {Gender}},
	isbn = {978-0-19-006739-7},
	url = {https://doi.org/10.1093/oxfordhb/9780190067397.013.16},
	abstract = {This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.},
	urldate = {2023-04-10},
	booktitle = {The {Oxford} {Handbook} of {Ethics} of {AI}},
	publisher = {Oxford University Press},
	author = {Gebru, Timnit},
	editor = {Dubber, Markus D. and Pasquale, Frank and Das, Sunit},
	month = jul,
	year = {2020},
	doi = {10.1093/oxfordhb/9780190067397.013.16},
}