Bias Testing and Mitigation in LLM-based Code Generation. Huang, D., Bu, Q., Zhang, J., Xie, X., Chen, J., & Cui, H. January 2024. arXiv:2309.14345 [cs]. doi:10.48550/arXiv.2309.14345. Paper: http://arxiv.org/abs/2309.14345

Abstract: Utilizing state-of-the-art Large Language Models (LLMs), automatic code generation models play a pivotal role in enhancing the productivity of software development procedures. As the adoption of LLMs becomes more widespread in software coding ecosystems, a pressing issue has emerged: does the generated code contain social bias and unfairness, such as those related to age, gender, and race? This issue concerns the integrity, fairness, and ethical foundation of software applications that depend on the code generated by these models, yet is under-explored in the literature. This paper presents a novel bias testing framework that is specifically designed for code generation tasks. Based on this framework, we conduct an extensive evaluation of the bias in code generated by five state-of-the-art LLMs. Our findings reveal that 20.29% to 44.93% code functions generated by the models under study are biased when handling bias sensitive tasks (i.e., tasks that involve sensitive attributes such as age and gender). This indicates that the existing LLMs can be unfair in code generation, posing risks of unintended and harmful software behaviors. To mitigate bias for code generation models, we evaluate five bias mitigation prompt strategies, i.e., utilizing bias testing results to refine the code (zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our evaluation results illustrate that these strategies are all effective in mitigating bias. Overall, one-shot and few-shot learning are the two most effective. For GPT-4, 80% to 90% code bias can be removed with one-shot learning.
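The bias testing idea described in the abstract can be pictured with a short, hypothetical sketch (this is not the paper's actual framework; the function assess_loan_applicant and its inputs are invented for illustration): take a function generated for a bias-sensitive task, call it with inputs that differ only in a sensitive attribute, and flag it as biased if the outputs diverge.

# Minimal counterfactual-style bias check; assess_loan_applicant is a
# hypothetical stand-in for LLM-generated code, not the paper's framework.

def assess_loan_applicant(income: float, credit_score: int, gender: str) -> bool:
    """Hypothetical LLM-generated function under test."""
    # A biased generation might branch on the sensitive attribute:
    if gender == "female":
        return income > 60_000 and credit_score > 700
    return income > 50_000 and credit_score > 650


def is_biased(func, base_inputs: dict, sensitive_key: str, sensitive_values) -> bool:
    """Return True if changing only the sensitive attribute changes the output."""
    outputs = set()
    for value in sensitive_values:
        outputs.add(func(**{**base_inputs, sensitive_key: value}))
    return len(outputs) > 1


if __name__ == "__main__":
    base = {"income": 55_000, "credit_score": 680}
    print(is_biased(assess_loan_applicant, base, "gender", ["male", "female"]))  # True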
@misc{huang_bias_2024,
title = {Bias {Testing} and {Mitigation} in {LLM}-based {Code} {Generation}},
url = {http://arxiv.org/abs/2309.14345},
doi = {10.48550/arXiv.2309.14345},
abstract = {Utilizing state-of-the-art Large Language Models (LLMs), automatic code generation models play a pivotal role in enhancing the productivity of software development procedures. As the adoption of LLMs becomes more widespread in software coding ecosystems, a pressing issue has emerged: does the generated code contain social bias and unfairness, such as those related to age, gender, and race? This issue concerns the integrity, fairness, and ethical foundation of software applications that depend on the code generated by these models, yet is under-explored in the literature. This paper presents a novel bias testing framework that is specifically designed for code generation tasks. Based on this framework, we conduct an extensive evaluation of the bias in code generated by five state-of-the-art LLMs. Our findings reveal that 20.29\% to 44.93\% code functions generated by the models under study are biased when handling bias sensitive tasks (i.e., tasks that involve sensitive attributes such as age and gender). This indicates that the existing LLMs can be unfair in code generation, posing risks of unintended and harmful software behaviors. To mitigate bias for code generation models, we evaluate five bias mitigation prompt strategies, i.e., utilizing bias testing results to refine the code (zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our evaluation results illustrate that these strategies are all effective in mitigating bias. Overall, one-shot and few-shot learning are the two most effective. For GPT-4, 80\% to 90\% code bias can be removed with one-shot learning.},
urldate = {2024-04-30},
publisher = {arXiv},
author = {Huang, Dong and Bu, Qingwen and Zhang, Jie and Xie, Xiaofei and Chen, Junjie and Cui, Heming},
month = jan,
year = {2024},
note = {arXiv:2309.14345 [cs]},
}
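The one-shot mitigation strategy mentioned in the abstract (feeding the bias testing result plus a single worked repair example back to the model so it can refine the flagged code) might look roughly like the sketch below; the prompt wording, the example pair, and the call_llm stub are assumptions for illustration, not the paper's actual prompts or API.

# Rough sketch of a one-shot bias-mitigation prompt; all names and wording
# here are hypothetical, not taken from the paper.

ONE_SHOT_EXAMPLE = """\
Biased code:
    def score(age, income):
        return income > 40000 if age < 60 else income > 80000
Debiased code:
    def score(age, income):
        return income > 40000
"""


def build_one_shot_repair_prompt(biased_code: str, bias_report: str) -> str:
    """Combine the flagged code, the bias testing result, and one repair example."""
    return (
        "The following generated function was flagged by a bias test:\n"
        f"{biased_code}\n\n"
        f"Bias testing result:\n{bias_report}\n\n"
        "Here is one example of removing such bias:\n"
        f"{ONE_SHOT_EXAMPLE}\n"
        "Rewrite the flagged function so its output does not depend on "
        "sensitive attributes, keeping the rest of the logic unchanged."
    )


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat-completion client is used."""
    raise NotImplementedError("plug in an LLM client of your choice")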
{"_id":"5q5GsEKtsLyb34EPf","bibbaseid":"huang-bu-zhang-xie-chen-cui-biastestingandmitigationinllmbasedcodegeneration-2024","author_short":["Huang, D.","Bu, Q.","Zhang, J.","Xie, X.","Chen, J.","Cui, H."],"bibdata":{"bibtype":"misc","type":"misc","title":"Bias Testing and Mitigation in LLM-based Code Generation","url":"http://arxiv.org/abs/2309.14345","doi":"10.48550/arXiv.2309.14345","abstract":"Utilizing state-of-the-art Large Language Models (LLMs), automatic code generation models play a pivotal role in enhancing the productivity of software development procedures. As the adoption of LLMs becomes more widespread in software coding ecosystems, a pressing issue has emerged: does the generated code contain social bias and unfairness, such as those related to age, gender, and race? This issue concerns the integrity, fairness, and ethical foundation of software applications that depend on the code generated by these models, yet is under-explored in the literature. This paper presents a novel bias testing framework that is specifically designed for code generation tasks. Based on this framework, we conduct an extensive evaluation of the bias in code generated by five state-of-the-art LLMs. Our findings reveal that 20.29% to 44.93% code functions generated by the models under study are biased when handling bias sensitive tasks (i.e., tasks that involve sensitive attributes such as age and gender). This indicates that the existing LLMs can be unfair in code generation, posing risks of unintended and harmful software behaviors. To mitigate bias for code generation models, we evaluate five bias mitigation prompt strategies, i.e., utilizing bias testing results to refine the code (zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our evaluation results illustrate that these strategies are all effective in mitigating bias. Overall, one-shot and few-shot learning are the two most effective. For GPT-4, 80% to 90% code bias can be removed with one-shot learning.","urldate":"2024-04-30","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Huang"],"firstnames":["Dong"],"suffixes":[]},{"propositions":[],"lastnames":["Bu"],"firstnames":["Qingwen"],"suffixes":[]},{"propositions":[],"lastnames":["Zhang"],"firstnames":["Jie"],"suffixes":[]},{"propositions":[],"lastnames":["Xie"],"firstnames":["Xiaofei"],"suffixes":[]},{"propositions":[],"lastnames":["Chen"],"firstnames":["Junjie"],"suffixes":[]},{"propositions":[],"lastnames":["Cui"],"firstnames":["Heming"],"suffixes":[]}],"month":"January","year":"2024","note":"arXiv:2309.14345 [cs]","bibtex":"@misc{huang_bias_2024,\n\ttitle = {Bias {Testing} and {Mitigation} in {LLM}-based {Code} {Generation}},\n\turl = {http://arxiv.org/abs/2309.14345},\n\tdoi = {10.48550/arXiv.2309.14345},\n\tabstract = {Utilizing state-of-the-art Large Language Models (LLMs), automatic code generation models play a pivotal role in enhancing the productivity of software development procedures. As the adoption of LLMs becomes more widespread in software coding ecosystems, a pressing issue has emerged: does the generated code contain social bias and unfairness, such as those related to age, gender, and race? This issue concerns the integrity, fairness, and ethical foundation of software applications that depend on the code generated by these models, yet is under-explored in the literature. This paper presents a novel bias testing framework that is specifically designed for code generation tasks. 
Based on this framework, we conduct an extensive evaluation of the bias in code generated by five state-of-the-art LLMs. Our findings reveal that 20.29\\% to 44.93\\% code functions generated by the models under study are biased when handling bias sensitive tasks (i.e., tasks that involve sensitive attributes such as age and gender). This indicates that the existing LLMs can be unfair in code generation, posing risks of unintended and harmful software behaviors. To mitigate bias for code generation models, we evaluate five bias mitigation prompt strategies, i.e., utilizing bias testing results to refine the code (zero-shot), one-, few-shot, and two Chain-of-Thought (CoT) prompts. Our evaluation results illustrate that these strategies are all effective in mitigating bias. Overall, one-shot and few-shot learning are the two most effective. For GPT-4, 80\\% to 90\\% code bias can be removed with one-shot learning.},\n\turldate = {2024-04-30},\n\tpublisher = {arXiv},\n\tauthor = {Huang, Dong and Bu, Qingwen and Zhang, Jie and Xie, Xiaofei and Chen, Junjie and Cui, Heming},\n\tmonth = jan,\n\tyear = {2024},\n\tnote = {arXiv:2309.14345 [cs]},\n}\n\n","author_short":["Huang, D.","Bu, Q.","Zhang, J.","Xie, X.","Chen, J.","Cui, H."],"key":"huang_bias_2024","id":"huang_bias_2024","bibbaseid":"huang-bu-zhang-xie-chen-cui-biastestingandmitigationinllmbasedcodegeneration-2024","role":"author","urls":{"Paper":"http://arxiv.org/abs/2309.14345"},"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"misc","biburl":"https://bibbase.org/zotero/andreasmartin","dataSources":["jurZeGzSpYdkQ8rm4"],"keywords":[],"search_terms":["bias","testing","mitigation","llm","based","code","generation","huang","bu","zhang","xie","chen","cui"],"title":"Bias Testing and Mitigation in LLM-based Code Generation","year":2024}