The Earth is Flat because...: Investigating LLMs' Belief towards Misinformation via Persuasive Conversation. Xu, R., Lin, B. S., Yang, S., Zhang, T., Shi, W., Zhang, T., Fang, Z., Xu, W., & Qiu, H. December 2023. arXiv:2312.09085 [cs]
Large Language Models (LLMs) encapsulate vast amounts of knowledge but still remain vulnerable to external misinformation. Existing research mainly studied this susceptibility behavior in a single-turn setting. However, belief can change during a multi-turn conversation, especially a persuasive one. Therefore, in this study, we delve into LLMs' susceptibility to persuasive conversations, particularly on factual questions that they can answer correctly. We first curate the Farm (i.e., Fact to Misinform) dataset, which contains factual questions paired with systematically generated persuasive misinformation. Then, we develop a testing framework to track LLMs' belief changes in a persuasive dialogue. Through extensive experiments, we find that LLMs' correct beliefs on factual knowledge can be easily manipulated by various persuasive strategies.
@misc{xu_earth_2023,
	title = {The {Earth} is {Flat} because...: {Investigating} {LLMs}' {Belief} towards {Misinformation} via {Persuasive} {Conversation}},
	shorttitle = {The {Earth} is {Flat} because...},
	url = {http://arxiv.org/abs/2312.09085},
	doi = {10.48550/arXiv.2312.09085},
	abstract = {Large Language Models (LLMs) encapsulate vast amounts of knowledge but still remain vulnerable to external misinformation. Existing research mainly studied this susceptibility behavior in a single-turn setting. However, belief can change during a multi-turn conversation, especially a persuasive one. Therefore, in this study, we delve into LLMs' susceptibility to persuasive conversations, particularly on factual questions that they can answer correctly. We first curate the Farm (i.e., Fact to Misinform) dataset, which contains factual questions paired with systematically generated persuasive misinformation. Then, we develop a testing framework to track LLMs' belief changes in a persuasive dialogue. Through extensive experiments, we find that LLMs' correct beliefs on factual knowledge can be easily manipulated by various persuasive strategies.},
	urldate = {2024-01-02},
	publisher = {arXiv},
	author = {Xu, Rongwu and Lin, Brian S. and Yang, Shujian and Zhang, Tianqi and Shi, Weiyan and Zhang, Tianwei and Fang, Zhixuan and Xu, Wei and Qiu, Han},
	month = dec,
	year = {2023},
	note = {arXiv:2312.09085 [cs]},
	keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Computers and Society, Computer Science - Cryptography and Security, belief, belief changes, dataset, knowledge, large language model, misinformation, persuasion, persuasive conversation, persuasive dialogue, testing framework},
}