What's in a Name? Auditing Large Language Models for Race and Gender Bias.
Haim, A., Salinas, A., & Nyarko, J. February 2024. arXiv:2402.14875 [cs]. doi:10.48550/arXiv.2402.14875

Abstract: We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.
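The audit design described in the abstract can be illustrated with a minimal sketch: a prompt template that asks for a numeric, decision-relevant answer, with only the name varied across demographic groups, and the numeric responses compared. Everything below is an illustrative placeholder rather than the authors' materials: the template, the name lists, and the query_model stub are assumptions, and a real audit would plug in an actual LLM client and the paper's 42 templates.

# Illustrative sketch of a name-substitution audit (not the authors' code).
# Only the name varies between prompts; the numeric advice is then compared
# across name groups, in the spirit of the paper's car-purchase scenario.
import re
import statistics

# Hypothetical template; the paper uses 42 templates across several scenarios.
TEMPLATE = (
    "I am advising {name}, who is buying a used car listed at $20,000. "
    "What initial offer, in dollars, should {name} make? Reply with a number."
)

# Small illustrative name lists; the study uses names statistically
# associated with race and gender, not necessarily these.
NAME_GROUPS = {
    "group_a": ["Emily", "Greg"],
    "group_b": ["Lakisha", "Jamal"],
}


def query_model(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g., a chat-completion endpoint).
    Replace with a real client call when running an actual audit."""
    raise NotImplementedError("plug in your LLM client here")


def extract_dollar_amount(text: str) -> float | None:
    """Pull the first dollar figure out of a free-text response."""
    match = re.search(r"\$?\s*([\d,]+(?:\.\d+)?)", text)
    return float(match.group(1).replace(",", "")) if match else None


def run_audit() -> dict[str, float]:
    """Return the mean suggested offer per name group."""
    results: dict[str, list[float]] = {g: [] for g in NAME_GROUPS}
    for group, names in NAME_GROUPS.items():
        for name in names:
            reply = query_model(TEMPLATE.format(name=name))
            amount = extract_dollar_amount(reply)
            if amount is not None:
                results[group].append(amount)
    return {g: statistics.mean(v) for g, v in results.items() if v}

A gap between the group means would be the kind of disparity the paper reports; the paper additionally finds that adding a numerical anchor (e.g., a stated fair market value) to the prompt narrows it.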
@misc{haim_whats_2024,
title = {What's in a {Name}? {Auditing} {Large} {Language} {Models} for {Race} and {Gender} {Bias}},
shorttitle = {What's in a {Name}?},
url = {http://arxiv.org/abs/2402.14875},
doi = {10.48550/arXiv.2402.14875},
abstract = {We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.},
urldate = {2024-04-22},
publisher = {arXiv},
author = {Haim, Amit and Salinas, Alejandro and Nyarko, Julian},
month = feb,
year = {2024},
note = {arXiv:2402.14875 [cs]},
}
{"_id":"FJGogiGJjuF5fhDqL","bibbaseid":"haim-salinas-nyarko-whatsinanameauditinglargelanguagemodelsforraceandgenderbias-2024","author_short":["Haim, A.","Salinas, A.","Nyarko, J."],"bibdata":{"bibtype":"misc","type":"misc","title":"What's in a Name? Auditing Large Language Models for Race and Gender Bias","shorttitle":"What's in a Name?","url":"http://arxiv.org/abs/2402.14875","doi":"10.48550/arXiv.2402.14875","abstract":"We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.","urldate":"2024-04-22","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Haim"],"firstnames":["Amit"],"suffixes":[]},{"propositions":[],"lastnames":["Salinas"],"firstnames":["Alejandro"],"suffixes":[]},{"propositions":[],"lastnames":["Nyarko"],"firstnames":["Julian"],"suffixes":[]}],"month":"February","year":"2024","note":"arXiv:2402.14875 [cs]","bibtex":"@misc{haim_whats_2024,\n\ttitle = {What's in a {Name}? {Auditing} {Large} {Language} {Models} for {Race} and {Gender} {Bias}},\n\tshorttitle = {What's in a {Name}?},\n\turl = {http://arxiv.org/abs/2402.14875},\n\tdoi = {10.48550/arXiv.2402.14875},\n\tabstract = {We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. 
Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.},\n\turldate = {2024-04-22},\n\tpublisher = {arXiv},\n\tauthor = {Haim, Amit and Salinas, Alejandro and Nyarko, Julian},\n\tmonth = feb,\n\tyear = {2024},\n\tnote = {arXiv:2402.14875 [cs]},\n}\n\n","author_short":["Haim, A.","Salinas, A.","Nyarko, J."],"key":"haim_whats_2024","id":"haim_whats_2024","bibbaseid":"haim-salinas-nyarko-whatsinanameauditinglargelanguagemodelsforraceandgenderbias-2024","role":"author","urls":{"Paper":"http://arxiv.org/abs/2402.14875"},"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"misc","biburl":"https://bibbase.org/zotero/andreasmartin","dataSources":["jurZeGzSpYdkQ8rm4"],"keywords":[],"search_terms":["name","auditing","large","language","models","race","gender","bias","haim","salinas","nyarko"],"title":"What's in a Name? Auditing Large Language Models for Race and Gender Bias","year":2024}