Generative Discrimination: What Happens When Generative AI Exhibits Bias, and What Can Be Done About It. Hacker, P., Mittelstadt, B., Zuiderveen Borgesius, F., & Wachter, S. June 2024. arXiv:2407.10329 [cs]
As generative Artificial Intelligence (genAI) technologies proliferate across sectors, they offer significant benefits but also risk exacerbating discrimination. This chapter explores how genAI intersects with non-discrimination laws, identifying shortcomings and suggesting improvements. It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups, which may not be overtly discriminatory in individual cases but have cumulative discriminatory effects. For example, genAI systems may predominantly depict white men when asked for images of people in important jobs. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like unbalanced content, harmful stereotypes or misclassification. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws, including the AI Act, to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances.
@misc{hackerGenerativeDiscriminationWhat2024,
	title = {Generative {Discrimination}: {What} {Happens} {When} {Generative} {AI} {Exhibits} {Bias}, and {What} {Can} {Be} {Done} {About} {It}},
	shorttitle = {Generative {Discrimination}},
	url = {http://arxiv.org/abs/2407.10329},
	doi = {10.48550/arXiv.2407.10329},
	abstract = {As generative Artificial Intelligence (genAI) technologies proliferate across sectors, they offer significant benefits but also risk exacerbating discrimination. This chapter explores how genAI intersects with non-discrimination laws, identifying shortcomings and suggesting improvements. It highlights two main types of discriminatory outputs: (i) demeaning and abusive content and (ii) subtler biases due to inadequate representation of protected groups, which may not be overtly discriminatory in individual cases but have cumulative discriminatory effects. For example, genAI systems may predominantly depict white men when asked for images of people in important jobs. This chapter examines these issues, categorizing problematic outputs into three legal categories: discriminatory content; harassment; and legally hard cases like unbalanced content, harmful stereotypes or misclassification. It argues for holding genAI providers and deployers liable for discriminatory outputs and highlights the inadequacy of traditional legal frameworks to address genAI-specific issues. The chapter suggests updating EU laws, including the AI Act, to mitigate biases in training and input data, mandating testing and auditing, and evolving legislation to enforce standards for bias mitigation and inclusivity as technology advances.},
	urldate = {2024-07-28},
	publisher = {arXiv},
	author = {Hacker, Philipp and Mittelstadt, Brent and {Zuiderveen Borgesius}, Frederik and Wachter, Sandra},
	month = jun,
	year = {2024},
	note = {arXiv:2407.10329 [cs]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computers and Society},
	annote = {Comment: forthcoming in: Philipp Hacker, Andreas Engel, Sarah Hammer and Brent Mittelstadt (eds.), Oxford Handbook on the Foundations and Regulation of Generative AI (Oxford University Press, 2024)},
}
