The GAN is dead; long live the GAN! A Modern GAN Baseline. Huang, Y., Gokaslan, A., Kuleshov, V., & Tompkin, J. January, 2025. arXiv:2501.05441 [cs]
Abstract: There is a widely-spread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a bag of ad-hoc tricks. We analyze our loss mathematically and prove that it admits local convergence guarantees, unlike most existing relativistic losses. Second, our new loss allows us to discard all ad-hoc tricks and replace outdated backbones used in common GANs with modern architectures. Using StyleGAN2 as an example, we present a roadmap of simplification and modernization that results in a new minimalist baseline – R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ, ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against state-of-the-art GANs and diffusion models.
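The "regularized relativistic GAN loss" the abstract refers to is the relativistic pairing loss (RpGAN) combined with zero-centered gradient penalties on both real data (R1) and generated data (R2). Below is a minimal NumPy sketch of that recipe; the softplus choice and the gamma weighting follow the paper's description, but the function names and the per-sample gradient inputs are illustrative, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return np.logaddexp(0.0, x)

def rpgan_discriminator_loss(real_scores, fake_scores):
    """Relativistic pairing discriminator loss:
    mean of softplus(D(fake) - D(real)) over paired critic scores,
    so D is trained on the *difference* between real and fake scores."""
    return softplus(fake_scores - real_scores).mean()

def rpgan_generator_loss(real_scores, fake_scores):
    # The generator minimizes the mirrored objective.
    return softplus(real_scores - fake_scores).mean()

def r1_r2_penalty(grad_real, grad_fake, gamma=1.0):
    """Zero-centered gradient penalties: R1 on real samples, R2 on fakes.
    `grad_real` / `grad_fake` hold per-sample gradients of D w.r.t. its
    input (shape: batch x features); both penalties push them toward 0."""
    r1 = (grad_real ** 2).sum(axis=1).mean()
    r2 = (grad_fake ** 2).sum(axis=1).mean()
    return gamma / 2.0 * (r1 + r2)
```

When the discriminator separates the pairs well (real score far above fake), the loss vanishes; when the scores are tied it sits at log 2, which is what makes the relativistic pairing well-behaved compared to the saturating non-relativistic losses it replaces.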
@misc{huang_gan_2025,
title = {The {GAN} is dead; long live the {GAN}! {A} {Modern} {GAN} {Baseline}},
url = {http://arxiv.org/abs/2501.05441},
doi = {10.48550/arXiv.2501.05441},
abstract = {There is a widely-spread claim that GANs are difficult to train, and GAN architectures in the literature are littered with empirical tricks. We provide evidence against this claim and build a modern GAN baseline in a more principled manner. First, we derive a well-behaved regularized relativistic GAN loss that addresses issues of mode dropping and non-convergence that were previously tackled via a bag of ad-hoc tricks. We analyze our loss mathematically and prove that it admits local convergence guarantees, unlike most existing relativistic losses. Second, our new loss allows us to discard all ad-hoc tricks and replace outdated backbones used in common GANs with modern architectures. Using StyleGAN2 as an example, we present a roadmap of simplification and modernization that results in a new minimalist baseline -- R3GAN. Despite being simple, our approach surpasses StyleGAN2 on FFHQ, ImageNet, CIFAR, and Stacked MNIST datasets, and compares favorably against state-of-the-art GANs and diffusion models.},
urldate = {2025-01-15},
publisher = {arXiv},
author = {Huang, Yiwen and Gokaslan, Aaron and Kuleshov, Volodymyr and Tompkin, James},
month = jan,
year = {2025},
note = {arXiv:2501.05441 [cs]},
keywords = {\#ICML{\textgreater}24, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, ❤️},
}