Adversarial attacks against medical deep learning systems. Finlayson, S. G., Chung, H. W., Kohane, I. S., & Beam, A. L. arXiv preprint arXiv:1804.05296, 2018.

Abstract: The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we demonstrate that adversarial examples are capable of manipulating deep learning systems across three clinical domains. For each of our representative medical deep learning classifiers, both white and black box attacks were highly successful. Our models are representative of the current state of the art in medical computer vision and, in some cases, directly reflect architectures already seeing deployment in real world clinical settings. In addition to the technical contribution of our paper, we synthesize a large body of knowledge about the healthcare system to argue that medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud and provide concrete examples of how and why such attacks could be realistically carried out. We urge practitioners to be aware of current vulnerabilities when deploying deep learning systems in clinical settings, and encourage the machine learning community to further investigate the domain-specific characteristics of medical learning systems.
@article{finlayson2018adversarial,
title={Adversarial attacks against medical deep learning systems},
author={Finlayson, Samuel G and Chung, Hyung Won and Kohane, Isaac S and Beam, Andrew L},
journal={arXiv preprint arXiv:1804.05296},
url_Paper={https://www.dropbox.com/s/vt378etc6bpujuh/finlayson_adversarial_arxiv_2018.pdf?dl=1},
abstract={The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we demonstrate that adversarial examples are capable of manipulating deep learning systems across three clinical domains. For each of our representative medical deep learning classifiers, both white and black box attacks were highly successful. Our models are representative of the current state of the art in medical computer vision and, in some cases, directly reflect architectures already seeing deployment in real world clinical settings. In addition to the technical contribution of our paper, we synthesize a large body of knowledge about the healthcare system to argue that medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud and provide concrete examples of how and why such attacks could be realistically carried out. We urge practitioners to be aware of current vulnerabilities when deploying deep learning systems in clinical settings, and encourage the machine learning community to further investigate the domain-specific characteristics of medical learning systems.},
keywords={Deep Learning, Adversarial Attacks, Healthcare},
year={2018}
}
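
For context, the white-box attacks the abstract refers to assume gradient access to the target model. Below is a minimal sketch of one standard white-box attack, the fast gradient sign method (FGSM), assuming a generic PyTorch image classifier; it is an illustrative example of the attack family, not the paper's actual implementation or models.

# Illustrative FGSM sketch (assumes a PyTorch classifier; not the paper's code).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (white-box FGSM).

    Assumes `model` maps a batch of images to class logits, `image` has a
    leading batch dimension with values in [0, 1], and `label` holds the
    true class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, label)
    model.zero_grad()
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

Black-box attacks, also evaluated in the paper, do not assume gradient access; they typically rely on transferring perturbations crafted against a substitute model or on query-based estimation of the target's behavior.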
{"_id":"3kLouKQke427ByCQT","bibbaseid":"finlayson-chung-kohane-beam-adversarialattacksagainstmedicaldeeplearningsystems-2018","authorIDs":[],"author_short":["Finlayson, S. G","Chung, H. W.","Kohane, I. S","Beam, A. L"],"bibdata":{"bibtype":"article","type":"article","title":"Adversarial attacks against medical deep learning systems","author":[{"propositions":[],"lastnames":["Finlayson"],"firstnames":["Samuel","G"],"suffixes":[]},{"propositions":[],"lastnames":["Chung"],"firstnames":["Hyung","Won"],"suffixes":[]},{"propositions":[],"lastnames":["Kohane"],"firstnames":["Isaac","S"],"suffixes":[]},{"propositions":[],"lastnames":["Beam"],"firstnames":["Andrew","L"],"suffixes":[]}],"journal":"arXiv preprint arXiv:1804.05296","url_paper":"https://www.dropbox.com/s/vt378etc6bpujuh/finlayson_adversarial_arxiv_2018.pdf?dl=1","abstract":"The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we demonstrate that adversarial examples are capable of manip- ulating deep learning systems across three clinical domains. For each of our representative medical deep learning classifiers, both white and black box attacks were highly successful. Our models are representative of the current state of the art in medical computer vision and, in some cases, directly reflect architectures already see- ing deployment in real world clinical settings. In addition to the technical contribution of our paper, we synthesize a large body of knowledge about the healthcare system to argue that medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. To this end, we outline the healthcare economy and the incentives it creates for fraud and provide concrete examples of how and why such attacks could be realistically carried out. We urge practitioners to be aware of current vulnerabilities when deploying deep learning systems in clinical settings, and encourage the machine learning community to further investigate the domain-specific characteristics of medical learning systems.","keywords":"Deep Learning, Adversarial Attacks, Healthcare","year":"2018","bibtex":"@article{finlayson2018adversarial,\n title={Adversarial attacks against medical deep learning systems},\n author={Finlayson, Samuel G and Chung, Hyung Won and Kohane, Isaac S and Beam, Andrew L},\n journal={arXiv preprint arXiv:1804.05296},\n url_Paper={https://www.dropbox.com/s/vt378etc6bpujuh/finlayson_adversarial_arxiv_2018.pdf?dl=1},\n abstract={The discovery of adversarial examples has raised concerns about the practical deployment of deep learning systems. In this paper, we demonstrate that adversarial examples are capable of manip- ulating deep learning systems across three clinical domains. For each of our representative medical deep learning classifiers, both white and black box attacks were highly successful. Our models are representative of the current state of the art in medical computer vision and, in some cases, directly reflect architectures already see- ing deployment in real world clinical settings. In addition to the technical contribution of our paper, we synthesize a large body of knowledge about the healthcare system to argue that medicine may be uniquely susceptible to adversarial attacks, both in terms of monetary incentives and technical vulnerability. 
To this end, we outline the healthcare economy and the incentives it creates for fraud and provide concrete examples of how and why such attacks could be realistically carried out. We urge practitioners to be aware of current vulnerabilities when deploying deep learning systems in clinical settings, and encourage the machine learning community to further investigate the domain-specific characteristics of medical learning systems.},\n keywords={Deep Learning, Adversarial Attacks, Healthcare},\n year={2018}\n}\n\n","author_short":["Finlayson, S. G","Chung, H. W.","Kohane, I. S","Beam, A. L"],"key":"finlayson2018adversarial","id":"finlayson2018adversarial","bibbaseid":"finlayson-chung-kohane-beam-adversarialattacksagainstmedicaldeeplearningsystems-2018","role":"author","urls":{" paper":"https://www.dropbox.com/s/vt378etc6bpujuh/finlayson_adversarial_arxiv_2018.pdf?dl=1"},"keyword":["Deep Learning","Adversarial Attacks","Healthcare"],"metadata":{"authorlinks":{}},"downloads":0,"html":""},"bibtype":"article","biburl":"https://www.dropbox.com/s/0k6pa735xx3gr9i/citations.txt?dl=1","creationDate":"2019-07-24T12:46:58.183Z","downloads":0,"keywords":["deep learning","adversarial attacks","healthcare"],"search_terms":["adversarial","attacks","against","medical","deep","learning","systems","finlayson","chung","kohane","beam"],"title":"Adversarial attacks against medical deep learning systems","year":2018,"dataSources":["2LQLmS62hSLmYBK2a","R7XXNLtmcExJiuMoA"]}