Prevalence Overshadows Concerns? Understanding Chinese Users' Privacy Awareness and Expectations Towards LLM-based Healthcare Consultation. Liu, Z., Hu, L., Zhou, T., Tang, Y., & Cai, Z. In 2025 IEEE Symposium on Security and Privacy (SP), pages 92–92, Los Alamitos, CA, USA, May, 2025. IEEE Computer Society.
Abstract: Large Language Models (LLMs) are increasingly gaining traction in the healthcare sector, yet expanding the threat of sensitive health information being easily exposed and accessed without authorization. These privacy risks escalate in regions like China, where privacy awareness is notably limited. While some efforts have been devoted to user surveys on LLMs in healthcare, users' perceptions of privacy remain unexplored. To fill this gap, this paper contributes the first user study (n=846) in China on privacy awareness and expectations in LLM-based healthcare consultations. Specifically, a healthcare chatbot is deployed to investigate users' awareness in practice. Information flows grounded in contextual integrity are then employed to measure users' privacy expectations. Our findings suggest that the prevalence of LLMs amplifies health privacy risks by raising users' curiosity and willingness to use such services, thus overshadowing privacy concerns. 77.3% of participants are inclined to use such services, and 72.9% indicate they would adopt the generated advice. Interestingly, a paradoxical “illusion” emerges where users' knowledge and concerns about privacy contradict their privacy expectations, leading to greater health privacy exposure. Our extensive discussion offers insights for future LLM-based healthcare privacy investigations and protection technology development.
@inproceedings{liu_prevalence_2025,
address = {Los Alamitos, CA, USA},
title = {Prevalence {Overshadows} {Concerns}? {Understanding} {Chinese} {Users}' {Privacy} {Awareness} and {Expectations} {Towards} {LLM}-based {Healthcare} {Consultation}},
url = {https://doi.ieeecomputersociety.org/10.1109/SP61157.2025.00092},
doi = {10.1109/SP61157.2025.00092},
abstract = {Large Language Models (LLMs) are increasingly gaining traction in the healthcare sector, yet expanding the threat of sensitive health information being easily exposed and accessed without authorization. These privacy risks escalate in regions like China, where privacy awareness is notably limited. While some efforts have been devoted to user surveys on LLMs in healthcare, users' perceptions of privacy remain unexplored. To fill this gap, this paper contributes the first user study (n=846) in China on privacy awareness and expectations in LLM-based healthcare consultations. Specifically, a healthcare chatbot is deployed to investigate users' awareness in practice. Information flows grounded in contextual integrity are then employed to measure users' privacy expectations. Our findings suggest that the prevalence of LLMs amplifies health privacy risks by raising users' curiosity and willingness to use such services, thus overshadowing privacy concerns. 77.3\% of participants are inclined to use such services, and 72.9\% indicate they would adopt the generated advice. Interestingly, a paradoxical “illusion” emerges where users' knowledge and concerns about privacy contradict their privacy expectations, leading to greater health privacy exposure. Our extensive discussion offers insights for future LLM-based healthcare privacy investigations and protection technology development.},
booktitle = {2025 {IEEE} {Symposium} on {Security} and {Privacy} ({SP})},
publisher = {IEEE Computer Society},
author = {Liu, Zhihuang and Hu, Ling and Zhou, Tongqing and Tang, Yonghao and Cai, Zhiping},
month = may,
year = {2025},
keywords = {contextual integrity, healthcare, large language models, privacy, user study},
pages = {92--92},
}