Adversarial Examples Improve Image Recognition. Xie, C., Tan, M., Gong, B., Wang, J., Yuille, A., & Le, Q. V.
Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an opposite perspective: adversarial examples can be used to improve image recognition models if harnessed in the right manner. We propose AdvProp, an enhanced adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to our method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger. For instance, by applying AdvProp to the latest EfficientNet-B7 [28] on ImageNet, we achieve significant improvements on ImageNet (+0.7%), ImageNet-C (+6.5%), ImageNet-A (+7.0%), Stylized-ImageNet (+4.8%). With an enhanced EfficientNet-B8, our method achieves the state-of-the-art 85.5% ImageNet top-1 accuracy without extra data. This result even surpasses the best model in [20] which is trained with 3.5B Instagram images (∼3000× more than ImageNet) and ∼9.4× more parameters. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.
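
The abstract's key mechanism is the split normalization: clean and adversarial images pass through the same network weights but keep separate batch-norm statistics, and the adversarial loss is added as an auxiliary training signal. Below is a minimal sketch of that idea, assuming a PyTorch-style model; DualBatchNorm2d, TinyConvNet, advprop_step, and the use_aux flag are hypothetical names for illustration, not the authors' released implementation (which lives at the TensorFlow TPU URL above), and the attacker here is a single FGSM step rather than the stronger PGD attacker the paper uses.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBatchNorm2d(nn.Module):
    """Batch norm with a main branch for clean images and an auxiliary
    branch for adversarial images, so the two input distributions keep
    separate normalization statistics (hypothetical sketch)."""

    def __init__(self, num_features):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)  # statistics of clean examples
        self.bn_adv = nn.BatchNorm2d(num_features)    # statistics of adversarial examples

    def forward(self, x, use_aux=False):
        return self.bn_adv(x) if use_aux else self.bn_clean(x)


class TinyConvNet(nn.Module):
    """A toy backbone, only to show how the dual BN is threaded through."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.bn = DualBatchNorm2d(16)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x, use_aux=False):
        x = F.relu(self.bn(self.conv(x), use_aux=use_aux))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)


def advprop_step(model, images, labels, optimizer, eps=4 / 255):
    """One training step in the spirit of the abstract: clean loss through
    the main BN, adversarial loss through the auxiliary BN, gradients summed."""
    model.train()

    # Craft adversarial examples with one FGSM step routed through the aux BN.
    images_adv = images.clone().detach().requires_grad_(True)
    loss_attack = F.cross_entropy(model(images_adv, use_aux=True), labels)
    grad, = torch.autograd.grad(loss_attack, images_adv)
    images_adv = (images_adv + eps * grad.sign()).clamp(0, 1).detach()

    # Joint update on clean and adversarial batches.
    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(images, use_aux=False), labels)
    loss_adv = F.cross_entropy(model(images_adv, use_aux=True), labels)
    (loss_clean + loss_adv).backward()
    optimizer.step()
    return loss_clean.item(), loss_adv.item()


if __name__ == "__main__":
    model = TinyConvNet()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 3, 32, 32)          # dummy image batch
    y = torch.randint(0, 10, (8,))        # dummy labels
    print(advprop_step(model, x, y, optimizer))

At test time only the main (clean) batch-norm statistics would be used, which is why the auxiliary branch can absorb the adversarial distribution without hurting clean accuracy.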
@article{xie_adversarial_nodate,
	title = {Adversarial {Examples} {Improve} {Image} {Recognition}},
	abstract = {Adversarial examples are commonly viewed as a threat to ConvNets. Here we present an opposite perspective: adversarial examples can be used to improve image recognition models if harnessed in the right manner. We propose AdvProp, an enhanced adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to our method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. We show that AdvProp improves a wide range of models on various image recognition tasks and performs better when the models are bigger. For instance, by applying AdvProp to the latest EfficientNet-B7 [28] on ImageNet, we achieve significant improvements on ImageNet (+0.7\%), ImageNet-C (+6.5\%), ImageNet-A (+7.0\%), Stylized-ImageNet (+4.8\%). With an enhanced EfficientNet-B8, our method achieves the state-of-the-art 85.5\% ImageNet top-1 accuracy without extra data. This result even surpasses the best model in [20] which is trained with 3.5B Instagram images (∼3000× more than ImageNet) and ∼9.4× more parameters. Models are available at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet.},
	language = {en},
	author = {Xie, Cihang and Tan, Mingxing and Gong, Boqing and Wang, Jiang and Yuille, Alan and Le, Quoc V.},
	pages = {10},
}
