Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants. Sandoval, G., Pearce, H., Nys, T., Karri, R., Garg, S., & Dolan-Gavitt, B. February, 2023.
Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
@misc{sandoval_lost_2023,
	title = {Lost at {C}: {A} {User} {Study} on the {Security} {Implications} of {Large} {Language} {Model} {Code} {Assistants}},
	shorttitle = {Lost at {C}},
	url = {http://arxiv.org/abs/2208.09727},
	abstract = {Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10\% more than the control, indicating the use of LLMs does not introduce new security risks.},
	urldate = {2023-03-09},
	publisher = {arXiv},
	author = {Sandoval, Gustavo and Pearce, Hammond and Nys, Teo and Karri, Ramesh and Garg, Siddharth and Dolan-Gavitt, Brendan},
	month = feb,
	year = {2023},
	doi = {10.48550/arXiv.2208.09727},
	keywords = {Computer Science - Cryptography and Security},
}