Putting ChatGPT's Medical Advice to the (Turing) Test. Nov, O., Singh, N., & Mann, D. M. medRxiv, January 2023. Cold Spring Harbor: Cold Spring Harbor Laboratory Press.
Importance: Chatbots could play a role in answering patient questions, but patients' ability to distinguish between provider and chatbot responses, and patients' trust in chatbots' functions, are not well established. Objective: To assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication. Design: Survey conducted in January 2023. Participants: A US-representative sample of 430 study participants aged 18 and above was recruited on Prolific, a crowdsourcing platform for academic studies. 426 participants completed the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. 53.2% of the respondents analyzed were women; their average age was 47.1. Exposure(s): Ten representative non-administrative patient-provider interactions were extracted from the EHR. Patients' questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient question was followed by either a provider-generated or a ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and financially incentivized, to correctly identify the source of each response. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a 1-5 Likert scale. Main Outcome(s) and Measure(s): Main outcome: proportion of responses correctly classified as provider- vs. chatbot-generated. Secondary outcomes: average and standard deviation of responses to the trust questions. Results: Correct classification of responses ranged from 49.0% to 85.7% across questions. On average, chatbot responses were correctly identified 65.5% of the time, and provider responses were correctly identified 65.1% of the time. On average, patients' trust in chatbots' functions was weakly positive (mean Likert score: 3.4), with trust decreasing as the health-related complexity of the task increased. Conclusions and Relevance: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in healthcare. Keywords: AI in Medicine; ChatGPT; Generative AI; Healthcare AI; Turing Test
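
The main and secondary outcomes described above are simple summary statistics. As a rough, hypothetical sketch (not the authors' analysis code), the Python snippet below shows how the main outcome (proportion of responses correctly classified, per question and per source) and the secondary outcome (mean and standard deviation of the 1-5 trust ratings) could be computed with pandas; all column names and toy values are assumptions made purely for illustration.

# Hypothetical sketch of the outcome computations described in the abstract.
# Column names and toy data are assumptions, not taken from the paper.
import pandas as pd

# One row per participant guess: which source actually wrote the response vs. what the participant guessed.
responses = pd.DataFrame({
    "question_id":    [1, 1, 2, 2],
    "true_source":    ["provider", "chatbot", "provider", "chatbot"],
    "guessed_source": ["provider", "provider", "provider", "chatbot"],
})

# Main outcome: proportion correctly classified, per question and per true source.
responses["correct"] = responses["true_source"] == responses["guessed_source"]
accuracy_per_question = responses.groupby("question_id")["correct"].mean()
accuracy_per_source = responses.groupby("true_source")["correct"].mean()

# Secondary outcome: mean and standard deviation of 1-5 Likert trust ratings per chatbot function.
trust = pd.DataFrame({
    "trust_item":  ["logistical question", "logistical question", "diagnosis", "diagnosis"],
    "trust_score": [4, 5, 2, 3],
})
trust_summary = trust.groupby("trust_item")["trust_score"].agg(["mean", "std"])

print(accuracy_per_question, accuracy_per_source, trust_summary, sep="\n")

With a full response-level dataset, the same groupby calls would yield the per-question range and the per-source averages reported in the Results.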
@article{nov_putting_2023,
	title = {Putting {ChatGPT}'s {Medical} {Advice} to the ({Turing}) {Test}},
	url = {https://www.proquest.com/working-papers/putting-chatgpts-medical-advice-turing-test/docview/2768841735/se-2},
	doi = {10.1101/2023.01.23.23284735},
	abstract = {Importance: Chatbots could play a role in answering patient questions, but patients' ability to distinguish between provider and chatbot responses, and patients' trust in chatbots' functions are not well established. Objective: To assess the feasibility of using ChatGPT or a similar AI-based chatbot for patient-provider communication. Design: Survey in January 2023 Participants: A US representative sample of 430 study participants aged 18 and above was recruited on Prolific, a crowdsourcing platform for academic studies. 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. 53.2\% of respondents analyzed were women; their average age was 47.1. Exposure(s): Ten representative non-administrative patient-provider interactions were extracted from the EHR. Patients' questions were placed in ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider's response. In the survey, each patient's question was followed by a provider- or ChatGPT-generated response. Participants were informed that five responses were provider-generated and five were chatbot-generated. Participants were asked, and incentivized financially, to correctly identify the response source. Participants were also asked about their trust in chatbots' functions in patient-provider communication, using a Likert scale of 1-5. Main Outcome(s) and Measure(s): Main outcome: Proportion of responses correctly classified as provider- vs chatbot-generated. Secondary outcomes: Average and standard deviation of responses to trust questions. Results: The correct classification of responses ranged between 49.0\% to 85.7\% for different questions. On average, chatbot responses were correctly identified 65.5\% of the time, and provider responses were correctly distinguished 65.1\% of the time. On average, responses toward patients' trust in chatbots' functions were weakly positive (mean Likert score: 3.4), with lower trust as the health-related complexity of the task in questions increased. Conclusions and Relevance: ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in healthcare. Keywords: AI in Medicine; ChatGPT; Generative AI; Healthcare AI; Turing Test},
	language = {English},
	journal = {medRxiv},
	author = {Nov, Oded and Singh, Nina and Mann, Devin M.},
	month = jan,
	year = {2023},
	note = {Place: Cold Spring Harbor; Publisher: Cold Spring Harbor Laboratory Press},
	keywords = {Chatbots, Health care, Medical Sciences, Patients, Business And Economics--Banking And Finance, Questions, Human-Computer Interaction, Surveys, Computation, Logic},
}
