Benoit, James RA. ChatGPT for Clinical Vignette Generation, Revision, and Evaluation. medRxiv, February 2023. Cold Spring Harbor: Cold Spring Harbor Laboratory Press.
@article{james_ra_benoit_chatgpt_2023,
	title = {{ChatGPT} for {Clinical} {Vignette} {Generation}, {Revision}, and {Evaluation}},
	url = {https://www.proquest.com/working-papers/chatgpt-clinical-vignette-generation-revision/docview/2774352658/se-2},
	doi = {10.1101/2023.02.04.23285478},
	abstract = {Objective To determine the capabilities of ChatGPT for rapidly generating, rewriting, and evaluating (via diagnostic and triage accuracy) sets of clinical vignettes. Design We explored the capabilities of ChatGPT for generating and rewriting vignettes. First, we gave it natural language prompts to generate 10 new sets of 10 vignettes, each set for a different common childhood illness. Next, we had it generate 10 sets of 10 vignettes given a set of symptoms from which to draw. We then had it rewrite 15 existing pediatric vignettes at different levels of health literacy. Fourth, we asked it to generate 10 vignettes written as a parent, and rewrite these vignettes as a physician, then at a grade 8 reading level, before rewriting them from the original parent's perspective. Finally, we evaluated ChatGPT for diagnosis and triage for 45 clinical vignettes previously used for evaluating symptom checkers. Setting and participants ChatGPT, a publicly available, free chatbot. Main outcome measures Our main outcomes for de novo vignette generation were whether ChatGPT followed vignette creation instructions consistently, correctly, and listed reasonable symptoms for the disease being described. For generating vignettes from pre-existing symptom sets, we examined whether the symptom sets were used without introducing extra symptoms. Our main outcome for rewriting existing standardized vignettes to match patient demographics, and rewriting vignettes between styles, was whether symptoms were dropped or added outside the original vignette. Finally, our main outcomes examining diagnostic and triage accuracy on 45 standardized patient vignettes were whether the correct diagnosis was listed first, and if the correct triage recommendation was made. Results ChatGPT was able to quickly produce varied contexts and symptom profiles when writing vignettes based on an illness name, but overused some core disease symptoms. 
It was able to use given symptom lists as the basis for vignettes consistently, adding one additional (though appropriate) symptom from outside the list for one disease. Pediatric vignettes rewritten at different levels of health literacy showed more complex symptoms being dropped when writing at low health literacy in 87.5\% of cases. While writing at high health literacy, it added a diagnosis to 80\% of vignettes (91.7\% correctly diagnosed). Symptoms were retained in 90\% of cases when rewriting vignettes between viewpoints. When presented with 45 vignettes, ChatGPT identified illnesses with 75.6\% (95\% CI, 62.6\% to 88.5\%) first-pass diagnostic accuracy and 57.8\% (95\% CI, 42.9\% to 72.7\%) triage accuracy. Its use does require monitoring and has caveats, which we discuss. Conclusions ChatGPT was capable, with caveats and appropriate review, of generating, rewriting, and evaluating clinical vignettes.},
	language = {English},
	journal = {medRxiv},
	author = {Benoit, James RA},
	month = feb,
	year = {2023},
	note = {Place: Cold Spring Harbor; Publisher: Cold Spring Harbor Laboratory Press},
	keywords = {Children, Medical Sciences, Patients, Accuracy, Diagnosis, Health education, Health literacy, Pediatrics},
}
