Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. Athalye, A., Carlini, N., & Wagner, D. 35th International Conference on Machine Learning, ICML 2018, 1:436-448, 2018.
We identify obfuscated gradients, a kind of gradient masking, as a phenomenon that leads to a false sense of security in defenses against adversarial examples. While defenses that cause obfuscated gradients appear to defeat iterative optimization-based attacks, we find defenses relying on this effect can be circumvented. We describe characteristic behaviors of defenses exhibiting the effect, and for each of the three types of obfuscated gradients we discover, we develop attack techniques to overcome it. In a case study, examining non-certified white-box-secure defenses at ICLR 2018, we find obfuscated gradients are a common occurrence, with 7 of 9 defenses relying on obfuscated gradients. Our new attacks successfully circumvent 6 completely, and 1 partially, in the original threat model each paper considers.
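One attack technique this paper introduces for defenses with "shattered" gradients is BPDA (Backward Pass Differentiable Approximation): run the non-differentiable preprocessing in the forward pass, but replace it with a differentiable approximation (often the identity) in the backward pass. The sketch below is a minimal toy illustration, not the paper's code; the quantization defense, the linear classifier, and all names are hypothetical.

```python
import numpy as np

# Hypothetical toy setup: a non-differentiable preprocessing defense g
# followed by a differentiable linear classifier f.
rng = np.random.default_rng(0)
w = rng.normal(size=8)            # weights of the toy classifier f
x = rng.normal(size=8)            # input we attack

def g(x):
    """Shattered-gradient defense: quantize the input to a 0.25 grid.
    Its true gradient is zero almost everywhere, stalling gradient attacks."""
    return np.round(x * 4) / 4

def f(z):
    """Differentiable classifier: score = w . z."""
    return w @ z

def bpda_gradient(x):
    """BPDA: keep g in the forward pass, but approximate it by the identity
    in the backward pass, so d/dx f(g(x)) ~ grad_z f(z) at z = g(x), i.e. w."""
    return w

# One sign-gradient ascent step on the score using the BPDA gradient:
step = 0.3
x_adv = x + step * np.sign(bpda_gradient(x))
print(f(g(x)), f(g(x_adv)))       # the score still moves despite g's zero gradient
```

The key point is that the attack differentiates through an approximation of the defense while always evaluating the true defended model in the forward pass, which is why the zero-gradient preprocessing no longer blocks iterative optimization.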
