ChatGPT and Software Testing Education: Promises & Perils. Jalil, S., Rafi, S., LaToza, T. D., Moran, K., & Lam, W. arXiv.org, March 2023.
Over the past decade, predictive language modeling for code has proven to be a valuable tool for enabling new forms of automation for developers. More recently, we have seen the advent of general purpose "large language models", based on neural transformer architectures, that have been trained on massive datasets of human written text spanning code and natural language. However, despite the demonstrated representational power of such models, interacting with them has historically been constrained to specific task settings, limiting their general applicability. Many of these limitations were recently overcome with the introduction of ChatGPT, a language model created by OpenAI and trained to operate as a conversational agent, enabling it to answer questions and respond to a wide variety of commands from end users. The introduction of models, such as ChatGPT, has already spurred fervent discussion from educators, ranging from fear that students could use these AI tools to circumvent learning, to excitement about the new types of learning opportunities that they might unlock. However, given the nascent nature of these tools, we currently lack fundamental knowledge related to how well they perform in different educational settings, and the potential promise (or danger) that they might pose to traditional forms of instruction. As such, in this paper, we examine how well ChatGPT performs when tasked with answering common questions in a popular software testing curriculum. Our findings indicate that ChatGPT can provide correct or partially correct answers in 55.6% of cases, provide correct or partially correct explanations of answers in 53.0% of cases, and that prompting the tool in a shared question context leads to a marginally higher rate of correct responses. Based on these findings, we discuss the potential promises and perils related to the use of ChatGPT by students and instructors.
@article{jalil_chatgpt_2023,
	title = {{ChatGPT} and {Software} {Testing} {Education}: {Promises} \& {Perils}},
	url = {https://www.proquest.com/working-papers/chatgpt-software-testing-education-promises-amp/docview/2774362357/se-2},
	abstract = {Over the past decade, predictive language modeling for code has proven to be a valuable tool for enabling new forms of automation for developers. More recently, we have seen the advent of general purpose "large language models", based on neural transformer architectures, that have been trained on massive datasets of human written text spanning code and natural language. However, despite the demonstrated representational power of such models, interacting with them has historically been constrained to specific task settings, limiting their general applicability. Many of these limitations were recently overcome with the introduction of ChatGPT, a language model created by OpenAI and trained to operate as a conversational agent, enabling it to answer questions and respond to a wide variety of commands from end users. The introduction of models, such as ChatGPT, has already spurred fervent discussion from educators, ranging from fear that students could use these AI tools to circumvent learning, to excitement about the new types of learning opportunities that they might unlock. However, given the nascent nature of these tools, we currently lack fundamental knowledge related to how well they perform in different educational settings, and the potential promise (or danger) that they might pose to traditional forms of instruction. As such, in this paper, we examine how well ChatGPT performs when tasked with answering common questions in a popular software testing curriculum. Our findings indicate that ChatGPT can provide correct or partially correct answers in 55.6\% of cases, provide correct or partially correct explanations of answers in 53.0\% of cases, and that prompting the tool in a shared question context leads to a marginally higher rate of correct responses. Based on these findings, we discuss the potential promises and perils related to the use of ChatGPT by students and instructors.},
	language = {English},
	journal = {arXiv.org},
	author = {Jalil, Sajed and Rafi, Suzzana and LaToza, Thomas D. and Moran, Kevin and Lam, Wing},
	month = mar,
	year = {2023},
	keywords = {Chatbots, Learning, Questions, Education, Human-Computer Interaction, Natural language processing, Software Engineering, Software testing, Students},
}
