Better by you, better than me, chatgpt3 as writing assistance in students essays. Basic, Z., Banovac, A., Kruzic, I., & Jerkovic, I. arXiv.org, February 2023. Ithaca: Cornell University Library, arXiv.org.
Aim: To compare students' essay writing performance with or without employing ChatGPT-3 as a writing assistant tool. Materials and methods: Eighteen students participated in the study (nine in the control group and nine in the experimental group that used ChatGPT-3). We scored essay elements with grades (A-D) and corresponding numerical values (4-1). We compared essay scores to students' GPAs, writing time, authenticity, and content similarity. Results: The average grade was C for both groups: 2.39 (SD=0.71) for the control group and 2.00 (SD=0.73) for the experimental group. None of the predictors affected essay scores: group (P=0.184), writing duration (P=0.669), module (P=0.388), and GPA (P=0.532). Text unauthenticity was slightly higher in the experimental group (11.87%, SD=13.45% vs. 9.96%, SD=9.81%), but similarity among essays was generally low in the overall sample (Jaccard similarity index ranging from 0 to 0.054). In the experimental group, the AI classifier recognized more potential AI-generated texts. Conclusions: This study found no evidence that using ChatGPT-3 as a writing tool improves essay quality, since the control group outperformed the experimental group on most parameters.
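For reference, the Jaccard similarity index cited in the abstract is the standard set-overlap measure; a minimal statement of the definition, assuming each essay is reduced to a set of tokens (the abstract does not specify the unit of comparison):

J(A, B) = \frac{|A \cap B|}{|A \cup B|}

where A and B are the token sets of two essays, so the reported maximum of 0.054 means that at most about 5% of the combined tokens were shared between any pair of essays.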
@article{basic_better_2023,
	title = {Better by you, better than me, chatgpt3 as writing assistance in students essays},
	url = {https://www.proquest.com/working-papers/better-you-than-me-chatgpt3-as-writing-assistance/docview/2775126519/se-2},
	abstract = {Aim: To compare students' essay writing performance with or without employing ChatGPT-3 as a writing assistant tool. Materials and methods: Eighteen students participated in the study (nine in the control group and nine in the experimental group that used ChatGPT-3). We scored essay elements with grades (A-D) and corresponding numerical values (4-1). We compared essay scores to students' GPAs, writing time, authenticity, and content similarity. Results: The average grade was C for both groups: 2.39 (SD=0.71) for the control group and 2.00 (SD=0.73) for the experimental group. None of the predictors affected essay scores: group (P=0.184), writing duration (P=0.669), module (P=0.388), and GPA (P=0.532). Text unauthenticity was slightly higher in the experimental group (11.87\%, SD=13.45\% vs. 9.96\%, SD=9.81\%), but similarity among essays was generally low in the overall sample (Jaccard similarity index ranging from 0 to 0.054). In the experimental group, the AI classifier recognized more potential AI-generated texts. Conclusions: This study found no evidence that using ChatGPT-3 as a writing tool improves essay quality, since the control group outperformed the experimental group on most parameters.},
	language = {English},
	journal = {arXiv.org},
	author = {Basic, Zeljana and Banovac, Ana and Kruzic, Ivana and Jerkovic, Ivan},
	month = feb,
	year = {2023},
	note = {Place: Ithaca
Publisher: Cornell University Library, arXiv.org},
	keywords = {Writing, Chatbots, Artificial Intelligence, Business And Economics--Banking And Finance, Computers and Society, Human-Computer Interaction, Students, Essays, Similarity},
}
