Membership Inference Attack Susceptibility of Clinical Language Models. Jagannatha, A., Rawat, B. P. S., & Yu, H. CoRR, 2021. arXiv:2104.08305. Paper: https://arxiv.org/abs/2104.08305
Abstract: Deep Neural Network (DNN) models have been shown to have high empirical privacy leakages. Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks. In this work, we investigate the risks of training-data leakage through white-box or black-box access to CLMs. We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT-2. We show that membership inference attacks on CLMs lead to non-trivial privacy leakages of up to 7%. Our results show that smaller models have lower empirical privacy leakages than larger ones, and masked LMs have lower leakages than auto-regressive LMs. We further show that differentially private CLMs can have improved model utility on the clinical domain while ensuring low empirical privacy leakage. Lastly, we also study the effects of group-level membership inference and disease rarity on CLM privacy leakages.
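For context, a common baseline for membership inference against language models is a loss-threshold attack: records to which the model assigns unusually low loss are guessed to be training members. The sketch below illustrates that baseline, assuming the public "gpt2" checkpoint from Hugging Face Transformers as a stand-in for a clinical LM and a hypothetical calibrated threshold; it is a generic illustration, not the authors' exact attack.

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    # Assumption: the public "gpt2" checkpoint stands in for a trained clinical LM.
    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    @torch.no_grad()
    def sequence_loss(text: str) -> float:
        # Mean per-token negative log-likelihood the model assigns to the text.
        enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        return model(**enc, labels=enc["input_ids"]).loss.item()

    def guess_member(text: str, threshold: float) -> bool:
        # Lower loss (less model "surprise") is taken as evidence of training-set
        # membership; the threshold would be calibrated on held-out data in practice.
        return sequence_loss(text) < threshold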
@article{jagannatha_membership_2021,
  title = {Membership Inference Attack Susceptibility of Clinical Language Models},
  author = {Jagannatha, Abhyuday and Rawat, Bhanu Pratap Singh and Yu, Hong},
  journal = {CoRR},
  volume = {abs/2104.08305},
  year = {2021},
  url = {https://arxiv.org/abs/2104.08305},
  abstract = {Deep Neural Network (DNN) models have been shown to have high empirical privacy leakages. Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks. In this work, we investigate the risks of training-data leakage through white-box or black-box access to CLMs. We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT2. We show that membership inference attacks on CLMs lead to non-trivial privacy leakages of up to 7\%. Our results show that smaller models have lower empirical privacy leakages than larger ones, and masked LMs have lower leakages than auto-regressive LMs. We further show that differentially private CLMs can have improved model utility on clinical domain while ensuring low empirical privacy leakage. Lastly, we also study the effects of group-level membership inference and disease rarity on CLM privacy leakages.},
  note = {arXiv: 2104.08305},
}
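The abstract's final finding, that differentially private CLMs can retain utility while keeping empirical leakage low, is typically realized with DP-SGD: per-sample gradient clipping plus Gaussian noise. Below is a minimal sketch using the Opacus library; the tiny linear model, synthetic data, and hyperparameters are illustrative assumptions, not the paper's configuration.

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from opacus import PrivacyEngine

    # Assumption: a toy linear classifier and random data stand in for a real
    # clinical LM and its training corpus.
    model = torch.nn.Sequential(torch.nn.Linear(128, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    data = TensorDataset(torch.randn(256, 128), torch.randint(0, 2, (256,)))
    loader = DataLoader(data, batch_size=32)

    privacy_engine = PrivacyEngine()
    model, optimizer, loader = privacy_engine.make_private(
        module=model,
        optimizer=optimizer,
        data_loader=loader,
        noise_multiplier=1.0,  # scale of Gaussian noise added to clipped gradients
        max_grad_norm=1.0,     # per-sample gradient clipping bound
    )

    criterion = torch.nn.CrossEntropyLoss()
    for x, y in loader:
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()

    # Privacy budget spent so far, at an illustrative delta.
    print("epsilon spent:", privacy_engine.get_epsilon(delta=1e-5))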
{"_id":"CAdafFbhJx4n2MLWQ","bibbaseid":"jagannatha-rawat-yu-membershipinferenceattacksusceptibilityofclinicallanguagemodels-2021","author_short":["Jagannatha, A.","Rawat, B. P. S.","Yu, H."],"bibdata":{"bibtype":"article","type":"article","title":"Membership Inference Attack Susceptibility of Clinical Language Models","volume":"abs/2104.08305","url":"https://arxiv.org/abs/2104.08305","abstract":"Deep Neural Network (DNN) models have been shown to have high empirical privacy leakages. Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks. In this work, we investigate the risks of training-data leakage through white-box or black-box access to CLMs. We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT2. We show that membership inference attacks on CLMs lead to non-trivial privacy leakages of up to 7%. Our results show that smaller models have lower empirical privacy leakages than larger ones, and masked LMs have lower leakages than auto-regressive LMs. We further show that differentially private CLMs can have improved model utility on clinical domain while ensuring low empirical privacy leakage. Lastly, we also study the effects of group-level membership inference and disease rarity on CLM privacy leakages.","journal":"CoRR","author":[{"propositions":[],"lastnames":["Jagannatha"],"firstnames":["Abhyuday"],"suffixes":[]},{"propositions":[],"lastnames":["Rawat"],"firstnames":["Bhanu","Pratap","Singh"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Hong"],"suffixes":[]}],"year":"2021","note":"arXiv: 2104.08305","bibtex":"@article{jagannatha_membership_2021,\n\ttitle = {Membership {Inference} {Attack} {Susceptibility} of {Clinical} {Language} {Models}},\n\tvolume = {abs/2104.08305},\n\turl = {https://arxiv.org/abs/2104.08305},\n\tabstract = {Deep Neural Network (DNN) models have been shown to have high empirical privacy leakages. Clinical language models (CLMs) trained on clinical data have been used to improve performance in biomedical natural language processing tasks. In this work, we investigate the risks of training-data leakage through white-box or black-box access to CLMs. We design and employ membership inference attacks to estimate the empirical privacy leaks for model architectures like BERT and GPT2. We show that membership inference attacks on CLMs lead to non-trivial privacy leakages of up to 7\\%. Our results show that smaller models have lower empirical privacy leakages than larger ones, and masked LMs have lower leakages than auto-regressive LMs. We further show that differentially private CLMs can have improved model utility on clinical domain while ensuring low empirical privacy leakage. Lastly, we also study the effects of group-level membership inference and disease rarity on CLM privacy leakages.},\n\tjournal = {CoRR},\n\tauthor = {Jagannatha, Abhyuday and Rawat, Bhanu Pratap Singh and Yu, Hong},\n\tyear = {2021},\n\tnote = {arXiv: 2104.08305},\n}\n\n","author_short":["Jagannatha, A.","Rawat, B. P. 
S.","Yu, H."],"key":"jagannatha_membership_2021","id":"jagannatha_membership_2021","bibbaseid":"jagannatha-rawat-yu-membershipinferenceattacksusceptibilityofclinicallanguagemodels-2021","role":"author","urls":{"Paper":"https://arxiv.org/abs/2104.08305"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"article","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":[],"search_terms":["membership","inference","attack","susceptibility","clinical","language","models","jagannatha","rawat","yu"],"title":"Membership Inference Attack Susceptibility of Clinical Language Models","year":2021}