Toward Certified Robustness Against Real-World Distribution Shifts. Wu, H., Tagomori, T., Robey, A., Yang, F., Matni, N., Pappas, G., Hassani, H., Păsăreanu, C., & Barrett, C. In McDaniel, P. & Papernot, N., editors, Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML), pages 537–553, Raleigh, NC, February 2023. IEEE.
Abstract: We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by considering a neural-symbolic verification framework in which generative models are trained to learn perturbations from data and specifications are defined with respect to the output of these learned models. A pervasive challenge arising from this setting is that although S-shaped activations (e.g., sigmoid, tanh) are common in the last layer of deep generative models, existing verifiers cannot tightly approximate S-shaped activations. To address this challenge, we propose a general meta-algorithm for handling S-shaped activations which leverages classical notions of counter-example-guided abstraction refinement. The key idea is to "lazily" refine the abstraction of S-shaped functions to exclude spurious counter-examples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state-space small. For networks with sigmoid activations, we show that our technique outperforms state-of-the-art verifiers on certifying robustness against both canonical adversarial perturbations and numerous real-world distribution shifts. Furthermore, experiments on the MNIST and CIFAR-10 datasets show that distribution-shift-aware algorithms have significantly higher certified robustness against distribution shifts.
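The counter-example-guided abstraction refinement (CEGAR) idea described in the abstract can be illustrated with a minimal, self-contained sketch. This is not the paper's algorithm or implementation: it certifies a toy one-neuron property v * sigmoid(z) <= c over an input interval, using sound linear relaxations of the sigmoid (a chord-slope line shifted by the exact extrema of sigmoid(z) - lam*z) and lazily splitting the interval wherever a spurious abstract counter-example is found. The function names and the one-dimensional setting are illustrative assumptions, not taken from the paper.

import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sound_linear_bounds(a, b):
    # Sound linear bounds for sigmoid on [a, b] (a < b):
    #   lam*z + mu_lo <= sigmoid(z) <= lam*z + mu_hi.
    # lam is the chord slope (always in (0, 0.25] for sigmoid); the
    # offsets come from the extrema of sigmoid(z) - lam*z, which lie at
    # the endpoints or where sigmoid'(z) = lam, solvable in closed form
    # via sigmoid'(z) = s*(1 - s) with s = sigmoid(z).
    lam = (sigmoid(b) - sigmoid(a)) / (b - a)
    cands = [a, b]
    r = math.sqrt(max(1.0 - 4.0 * lam, 0.0))
    for s in ((1.0 + r) / 2.0, (1.0 - r) / 2.0):
        if 0.0 < s < 1.0:
            z = math.log(s / (1.0 - s))  # point where sigmoid'(z) == lam
            if a < z < b:
                cands.append(z)
    gaps = [sigmoid(z) - lam * z for z in cands]
    return lam, min(gaps), max(gaps)

def verify_cegar(v, c, z_lo, z_hi, max_iters=100):
    # Certify  v * sigmoid(z) <= c  for all z in [z_lo, z_hi], v > 0.
    # CEGAR loop: find the abstract worst case under the current
    # piecewise-linear abstraction, check it concretely, and split the
    # offending piece when the counter-example turns out to be spurious.
    pieces = [(z_lo, z_hi)]
    for _ in range(max_iters):
        best_val, best_z, best_idx = -math.inf, None, None
        for i, (a, b) in enumerate(pieces):
            lam, _, mu_hi = sound_linear_bounds(a, b)
            for z in (a, b):  # linear bound => maximum at an endpoint
                val = v * (lam * z + mu_hi)
                if val > best_val:
                    best_val, best_z, best_idx = val, z, i
        if best_val <= c:
            return "verified"
        if v * sigmoid(best_z) > c:  # concrete check of the abstract witness
            return f"falsified at z = {best_z:.4f}"
        # Spurious counter-example: lazily refine by splitting the piece.
        a, b = pieces.pop(best_idx)
        mid = 0.5 * (a + b)
        pieces += [(a, mid), (mid, b)]
    return "unknown (iteration budget exhausted)"

# 2*sigmoid(3) ~= 1.905, so c = 1.90 is violated while c = 1.92 holds.
print(verify_cegar(v=2.0, c=1.90, z_lo=-4.0, z_hi=3.0))  # falsified
print(verify_cegar(v=2.0, c=1.92, z_lo=-4.0, z_hi=3.0))  # verified

The refinement is "lazy" in the abstract's sense: a piece is split only when it actually produces a spurious witness, so the abstraction, and hence the verifier's state space, stays small while each iteration makes guaranteed progress.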
@inproceedings{WTR+23,
url = "http://theory.stanford.edu/~barrett/pubs/WTR+23.pdf",
author = "Haoze Wu and Teruhiro Tagomori and Alexander Robey and Fengjun Yang and Nikolai Matni and George Pappas and Hamed Hassani and Corina P{\u{a}}s{\u{a}}reanu and Clark Barrett",
title = "Toward Certified Robustness Against Real-World Distribution Shifts",
booktitle = "Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)",
publisher = "IEEE",
editor = "Patrick McDaniel and Nicolas Papernot",
month = feb,
pages = "537--553",
doi = "10.1109/SaTML54575.2023.00042",
year = 2023,
note = "Raleigh, NC",
category = "Conference Publications",
abstract = "We consider the problem of certifying the robustness of deep
neural networks against real-world distribution shifts. To do
so, we bridge the gap between hand-crafted specifications and
realistic deployment settings by considering a
neural-symbolic verification framework in which generative
models are trained to learn perturbations from data and
specifications are defined with respect to the output of
these learned models. A pervasive challenge arising from this
setting is that although S-shaped activations (e.g., sigmoid,
tanh) are common in the last layer of deep generative models,
existing verifiers cannot tightly approximate S-shaped
activations. To address this challenge, we propose a general
meta-algorithm for handling S-shaped activations which
leverages classical notions of counter-example-guided
abstraction refinement. The key idea is to ``lazily'' refine
the abstraction of S-shaped functions to exclude spurious
counter-examples found in the previous abstraction, thus
guaranteeing progress in the verification process while
keeping the state-space small. For networks with sigmoid
activations, we show that our technique outperforms
state-of-the-art verifiers on certifying robustness against
both canonical adversarial perturbations and numerous
real-world distribution shifts. Furthermore, experiments on
the MNIST and CIFAR-10 datasets show that
distribution-shift-aware algorithms have significantly higher
certified robustness against distribution shifts.",
}
{"_id":"fyHqiz7gSBQ9f8E8m","bibbaseid":"wu-tagomori-robey-yang-matni-pappas-hassani-psreanu-etal-towardcertifiedrobustnessagainstrealworlddistributionshifts-2023","author_short":["Wu, H.","Tagomori, T.","Robey, A.","Yang, F.","Matni, N.","Pappas, G.","Hassani, H.","Păsăreanu, C.","Barrett, C."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","url":"http://theory.stanford.edu/~barrett/pubs/WTR+23.pdf","author":[{"firstnames":["Haoze"],"propositions":[],"lastnames":["Wu"],"suffixes":[]},{"firstnames":["Teruhiro"],"propositions":[],"lastnames":["Tagomori"],"suffixes":[]},{"firstnames":["Alexander"],"propositions":[],"lastnames":["Robey"],"suffixes":[]},{"firstnames":["Fengjun"],"propositions":[],"lastnames":["Yang"],"suffixes":[]},{"firstnames":["Nikolai"],"propositions":[],"lastnames":["Matni"],"suffixes":[]},{"firstnames":["George"],"propositions":[],"lastnames":["Pappas"],"suffixes":[]},{"firstnames":["Hamed"],"propositions":[],"lastnames":["Hassani"],"suffixes":[]},{"firstnames":["Corina"],"propositions":[],"lastnames":["Păsăreanu"],"suffixes":[]},{"firstnames":["Clark"],"propositions":[],"lastnames":["Barrett"],"suffixes":[]}],"title":"Toward Certified Robustness Against Real-World Distribution Shifts","booktitle":"Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)","publisher":"IEEE","editor":[{"firstnames":["Patrick"],"propositions":[],"lastnames":["McDaniel"],"suffixes":[]},{"firstnames":["Nicolas"],"propositions":[],"lastnames":["Papernot"],"suffixes":[]}],"month":"February","pages":"537–553","doi":"10.1109/SaTML54575.2023.00042","year":"2023","note":"Raleigh, NC","category":"Conference Publications","abstract":"We consider the problem of certifying the robustness of deep neural networks against real-world distribution shifts. To do so, we bridge the gap between hand-crafted specifications and realistic deployment settings by considering a neural-symbolic verification framework in which generative models are trained to learn perturbations from data and specifications are defined with respect to the output of these learned models. A pervasive challenge arising from this setting is that although S-shaped activations (e.g., sigmoid, tanh) are common in the last layer of deep generative models, existing verifiers cannot tightly approximate S-shaped activations. To address this challenge, we propose a general meta-algorithm for handling S-shaped activations which leverages classical notions of counter-example-guided abstraction refinement. The key idea is to ``lazily'' refine the abstraction of S-shaped functions to exclude spurious counter-examples found in the previous abstraction, thus guaranteeing progress in the verification process while keeping the state-space small. For networks with sigmoid activations, we show that our technique outperforms state-of-the-art verifiers on certifying robustness against both canonical adversarial perturbations and numerous real-world distribution shifts. 
Furthermore, experiments on the MNIST and CIFAR-10 datasets show that distribution-shift-aware algorithms have significantly higher certified robustness against distribution shifts.","bibtex":"@inproceedings{WTR+23,\n url = \"http://theory.stanford.edu/~barrett/pubs/WTR+23.pdf\",\n author = \"Haoze Wu and Teruhiro Tagomori and Alexander Robey and Fengjun Yang and Nikolai Matni and George Pappas and Hamed Hassani and Corina P{\\u{a}}s{\\u{a}}reanu and Clark Barrett\",\n title = \"Toward Certified Robustness Against Real-World Distribution Shifts\",\n booktitle = \"Proceedings of the 2023 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)\",\n publisher = \"IEEE\",\n editor = \"Patrick McDaniel and Nicolas Papernot\",\n month = feb,\n pages = \"537--553\",\n doi = \"10.1109/SaTML54575.2023.00042\",\n year = 2023,\n note = \"Raleigh, NC\",\n category = \"Conference Publications\",\n abstract = \"We consider the problem of certifying the robustness of deep\n neural networks against real-world distribution shifts. To do\n so, we bridge the gap between hand-crafted specifications and\n realistic deployment settings by considering a\n neural-symbolic verification framework in which generative\n models are trained to learn perturbations from data and\n specifications are defined with respect to the output of\n these learned models. A pervasive challenge arising from this\n setting is that although S-shaped activations (e.g., sigmoid,\n tanh) are common in the last layer of deep generative models,\n existing verifiers cannot tightly approximate S-shaped\n activations. To address this challenge, we propose a general\n meta-algorithm for handling S-shaped activations which\n leverages classical notions of counter-example-guided\n abstraction refinement. The key idea is to ``lazily'' refine\n the abstraction of S-shaped functions to exclude spurious\n counter-examples found in the previous abstraction, thus\n guaranteeing progress in the verification process while\n keeping the state-space small. For networks with sigmoid\n activations, we show that our technique outperforms\n state-of-the-art verifiers on certifying robustness against\n both canonical adversarial perturbations and numerous\n real-world distribution shifts. Furthermore, experiments on\n the MNIST and CIFAR-10 datasets show that\n distribution-shift-aware algorithms have significantly higher\n certified robustness against distribution shifts.\",\n}\n\n","author_short":["Wu, H.","Tagomori, T.","Robey, A.","Yang, F.","Matni, N.","Pappas, G.","Hassani, H.","Păsăreanu, C.","Barrett, C."],"editor_short":["McDaniel, P.","Papernot, N."],"key":"WTR+23","id":"WTR+23","bibbaseid":"wu-tagomori-robey-yang-matni-pappas-hassani-psreanu-etal-towardcertifiedrobustnessagainstrealworlddistributionshifts-2023","role":"author","urls":{"Paper":"http://theory.stanford.edu/~barrett/pubs/WTR+23.pdf"},"metadata":{"authorlinks":{}},"downloads":14},"bibtype":"inproceedings","biburl":"http://aisafety.stanford.edu/bib/all-pubs.bib","dataSources":["gAsQhyq6KFagsX5yX","Q5m4eREZKA5kKSYST"],"keywords":[],"search_terms":["toward","certified","robustness","against","real","world","distribution","shifts","wu","tagomori","robey","yang","matni","pappas","hassani","păsăreanu","barrett"],"title":"Toward Certified Robustness Against Real-World Distribution Shifts","year":2023,"downloads":14}