Towards Robustness Certification Against Universal Perturbations. Zeng*, Y., Shi*, Z., Jin, M., Kang, F., Lyu, L., Hsieh, C.-J., & Jia, R. In International Conference on Learning Representations (ICLR), 2023.
In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to its worst-case perturbation for a given neural network. However, these sample-wise bounds become loose under the UP threat model, as they overlook the important constraint that the perturbation must be shared across all samples. We propose a method that combines linear relaxation-based perturbation analysis with Mixed Integer Linear Programming to establish the first robustness certification method for UPs. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Beyond an extensive evaluation of the proposed certification, we further show how it enables efficient comparison of robustness across models, efficient comparison of efficacy across defenses against universal adversarial attacks, and accurate detection of backdoor target classes.
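The population-level guarantee described above lifts certification results computed on a random batch to the whole data distribution. As a minimal illustrative sketch only (assuming a simple one-sided Hoeffding bound; the paper's actual framework and bounds may be tighter and differ in form), the idea can be expressed as follows, where `num_certified` and `batch_size` are hypothetical names:

```python
import math

def population_certified_accuracy_lower_bound(num_certified, batch_size, delta=0.01):
    """One-sided Hoeffding-style lower confidence bound.

    Given that `num_certified` out of `batch_size` i.i.d. samples were
    certified robust against the universal perturbation, return a value L
    such that, with probability at least 1 - delta over the random batch,
    the population-level certified accuracy is at least L.
    """
    empirical = num_certified / batch_size
    # Hoeffding slack: solving exp(-2 * n * t^2) = delta for t.
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * batch_size))
    return max(0.0, empirical - slack)

if __name__ == "__main__":
    # Example: 930 of 1000 sampled inputs certified, 99% confidence.
    print(population_certified_accuracy_lower_bound(930, 1000, delta=0.01))
```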
@inproceedings{2023_4C_AUP,
  title={Towards Robustness Certification Against Universal Perturbations},
  author={Yi Zeng* and Zhouxing Shi* and Ming Jin and Feiyang Kang and Lingjuan Lyu and Cho-Jui Hsieh and Ruoxi Jia},
  booktitle={International Conference on Learning Representations (ICLR)},
  year={2023},
  url_openreview={https://openreview.net/forum?id=7GEvPKxjtt},
  keywords={Machine Learning},
  abstract={In this paper, we investigate the problem of certifying neural network robustness against universal perturbations (UPs), which have been widely used in universal adversarial attacks and backdoor attacks. Existing robustness certification methods aim to provide robustness guarantees for each sample with respect to its worst-case perturbation for a given neural network. However, these sample-wise bounds become loose under the UP threat model, as they overlook the important constraint that the perturbation must be shared across all samples. We propose a method that combines linear relaxation-based perturbation analysis with Mixed Integer Linear Programming to establish the first robustness certification method for UPs. In addition, we develop a theoretical framework for computing error bounds on the entire population using the certification results from a randomly sampled batch. Beyond an extensive evaluation of the proposed certification, we further show how it enables efficient comparison of robustness across models, efficient comparison of efficacy across defenses against universal adversarial attacks, and accurate detection of backdoor target classes.},
}
