Large Language Model-based Role-Playing for Personalized Medical Jargon Extraction. Lim, J. H., Kwon, S., Yao, Z., Lalor, J. P., & Yu, H. August, 2024. arXiv:2408.05555 [cs]
Previous studies reveal that Electronic Health Records (EHRs), which have been widely adopted in the U.S. to allow patients to access their personal medical information, are not highly readable for patients due to the prevalence of medical jargon. Tailoring medical notes to individual comprehension by identifying the jargon that is difficult for each person will enhance the utility of generative models. We present the first quantitative analysis measuring the impact of role-playing in LLMs on medical term extraction. By comparing against the results of Mechanical Turk workers over 20 sentences, our study demonstrates that LLM role-playing improves F1 scores in 95% of cases across 14 different socio-demographic backgrounds. Furthermore, applying role-playing with in-context learning outperformed the previous state-of-the-art models. Our research showed that ChatGPT can improve on traditional medical term extraction systems by utilizing role-play to deliver personalized patient education, a potential that previous models had not achieved.
@misc{lim_large_2024,
	title = {Large {Language} {Model}-based {Role}-{Playing} for {Personalized} {Medical} {Jargon} {Extraction}},
	url = {http://arxiv.org/abs/2408.05555},
	abstract = {Previous studies reveal that Electronic Health Records (EHR), which have been widely adopted in the U.S. to allow patients to access their personal medical information, do not have high readability to patients due to the prevalence of medical jargon. Tailoring medical notes to individual comprehension by identifying jargon that is difficult for each person will enhance the utility of generative models. We present the first quantitative analysis to measure the impact of role-playing in LLM in medical term extraction. By comparing the results of Mechanical Turk workers over 20 sentences, our study demonstrates that LLM role-playing improves F1 scores in 95\% of cases across 14 different socio-demographic backgrounds. Furthermore, applying role-playing with in-context learning outperformed the previous state-of-the-art models. Our research showed that ChatGPT can improve traditional medical term extraction systems by utilizing role-play to deliver personalized patient education, a potential that previous models had not achieved.},
	urldate = {2024-09-03},
	publisher = {arXiv},
	author = {Lim, Jung Hoon and Kwon, Sunjae and Yao, Zonghai and Lalor, John P. and Yu, Hong},
	month = aug,
	year = {2024},
	note = {arXiv:2408.05555 [cs]},
	keywords = {Computer Science - Computation and Language},
}
