Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization.
Xu, X.; Zhang, J. Y.; Ma, E.; Son, H. H.; Koyejo, S.; and Li, B.
In Chaudhuri, K.; Jegelka, S.; Song, L.; Szepesvari, C.; Niu, G.; and Sabato, S., editor(s), Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 24770–24802, 17–23 Jul 2022. PMLR.

@InProceedings{pmlr-v162-xu22n,
  title     = {Adversarially Robust Models may not Transfer Better: Sufficient Conditions for Domain Transferability from the View of Regularization},
  author    = {Xu, Xiaojun and Zhang, Jacky Y and Ma, Evelyn and Son, Hyun Ho and Koyejo, Sanmi and Li, Bo},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {24770--24802},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/xu22n/xu22n.pdf},
  url       = {https://proceedings.mlr.press/v162/xu22n.html},
  abstract  = {Machine learning (ML) robustness and domain generalization are fundamentally correlated: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, there is a lack of theoretical understanding of their fundamental connections. In this paper, we explore the relationship between regularization and domain transferability considering different factors such as norm regularization and data augmentations (DA). We propose a general theoretical framework proving that factors involving the model function class regularization are sufficient conditions for relative domain transferability. Our analysis implies that “robustness” is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as the function class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples where robustness and generalization are negatively correlated on different datasets.}
}

Abstract: Machine learning (ML) robustness and domain generalization are fundamentally correlated: they essentially concern data distribution shifts under adversarial and natural settings, respectively. On one hand, recent studies show that more robust (adversarially trained) models are more generalizable. On the other hand, there is a lack of theoretical understanding of their fundamental connections. In this paper, we explore the relationship between regularization and domain transferability considering different factors such as norm regularization and data augmentations (DA). We propose a general theoretical framework proving that factors involving the model function class regularization are sufficient conditions for relative domain transferability. Our analysis implies that “robustness” is neither necessary nor sufficient for transferability; rather, regularization is a more fundamental perspective for understanding domain transferability. We then discuss popular DA protocols (including adversarial training) and show when they can be viewed as the function class regularization under certain conditions and therefore improve generalization. We conduct extensive experiments to verify our theoretical findings and show several counterexamples where robustness and generalization are negatively correlated on different datasets.

Efficient Neural Network Analysis with Sum-of-Infeasibilities.
Wu, H.; Zeljić, A.; Katz, G.; and Barrett, C.
In Fisman, D.; and Rosu, G., editor(s), International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), volume 13243 of Lecture Notes in Computer Science, pages 143–163, April 2022. Springer.

@inproceedings{WZK+22,
  author    = {Haoze Wu and Aleksandar Zelji{\'c} and Guy Katz and Clark Barrett},
  editor    = {Dana Fisman and Grigore Rosu},
  title     = {Efficient Neural Network Analysis with Sum-of-Infeasibilities},
  booktitle = {International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)},
  series    = {Lecture Notes in Computer Science},
  volume    = {13243},
  pages     = {143--163},
  publisher = {Springer},
  month     = apr,
  year      = {2022},
  doi       = {10.1007/978-3-030-99524-9_24},
  url       = {http://www.cs.stanford.edu/~barrett/pubs/WZK+22.pdf}
}
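
A minimal usage sketch, assuming the two BibTeX records above are saved in a file named refs.bib (the filename and the citing sentence are illustrative, not taken from either paper):

\documentclass{article}
\begin{document}
% Cite each entry by its BibTeX key (illustrative sentence).
Regularization as a lens on domain transferability is studied
in~\cite{pmlr-v162-xu22n}; a sum-of-infeasibilities objective for
neural network analysis is proposed in~\cite{WZK+22}.
\bibliographystyle{plain}
\bibliography{refs} % refs.bib contains the two records above
\end{document}

Compile with pdflatex, then bibtex, then pdflatex twice to resolve the citations. Note that the original record abbreviated the booktitle as the macro "tacas"; it is expanded to the full conference name above so the entry compiles standalone without a matching @string definition.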