How Susceptible are Large Language Models to Ideological Manipulation? Chen, K., He, Z., Yan, J., Shi, T., & Lerman, K. In Al-Onaizan, Y., Bansal, M., & Chen, Y., editors, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024, Miami, FL, USA, November 12-16, 2024, pages 17140–17161, 2024. Association for Computational Linguistics.
@inproceedings{DBLP:conf/emnlp/ChenHYSL24,
  author       = {Kai Chen and
                  Zihao He and
                  Jun Yan and
                  Taiwei Shi and
                  Kristina Lerman},
  editor       = {Yaser Al{-}Onaizan and
                  Mohit Bansal and
                  Yun{-}Nung Chen},
  title        = {How Susceptible are Large Language Models to Ideological Manipulation?},
  booktitle    = {Proceedings of the 2024 Conference on Empirical Methods in Natural
                  Language Processing, {EMNLP} 2024, Miami, FL, USA, November 12-16,
                  2024},
  pages        = {17140--17161},
  publisher    = {Association for Computational Linguistics},
  year         = {2024},
  url          = {https://aclanthology.org/2024.emnlp-main.952},
  timestamp    = {Thu, 14 Nov 2024 00:00:00 +0100},
  biburl       = {https://dblp.org/rec/conf/emnlp/ChenHYSL24.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}
