Which training methods for GANs do actually converge? Mescheder, L., Geiger, A., & Nowozin, S. 35th International Conference on Machine Learning, ICML 2018, 8:5589-5626, 2018.
Abstract: Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
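The counterexample and the regularization result summarized in the abstract can be illustrated on a one-dimensional toy problem of the kind the paper describes: data concentrated at a single point x = 0, a generator that produces the single point theta, and a linear discriminator D(x) = psi * x. The sketch below is not the authors' code; the choice of objective f, the step size, the number of steps, and the penalty weight gamma are illustrative assumptions. It runs simultaneous gradient updates with and without a zero-centered gradient penalty on the discriminator and reports the final distance to the equilibrium (theta, psi) = (0, 0).

import numpy as np

def f_prime(t):
    # Derivative of f(t) = -log(1 + exp(-t)), i.e. sigmoid(-t), computed stably.
    if t >= 0:
        z = np.exp(-t)
        return z / (1.0 + z)
    return 1.0 / (1.0 + np.exp(t))

def simulate(gamma, steps=2000, lr=0.1, theta0=1.0, psi0=1.0):
    # Simultaneous gradient updates on L(theta, psi) = f(psi * theta) + f(0):
    # the generator parameter theta descends, the discriminator parameter psi ascends.
    # gamma weights the zero-centered penalty (gamma / 2) * psi**2, i.e. the squared
    # discriminator gradient at the single real data point x = 0.
    theta, psi = theta0, psi0
    for _ in range(steps):
        g = f_prime(psi * theta)
        grad_theta = -g * psi               # generator descent direction
        grad_psi = g * theta - gamma * psi  # discriminator ascent direction minus penalty gradient
        theta, psi = theta + lr * grad_theta, psi + lr * grad_psi
    return np.hypot(theta, psi)             # distance to the equilibrium (0, 0)

print("unregularized:        ", simulate(gamma=0.0))
print("zero-centered penalty:", simulate(gamma=1.0))

Under these assumptions one would expect the unregularized run to drift away from the equilibrium while the penalized run settles toward it, in line with the convergence results described above.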
@inproceedings{mescheder-geiger-nowozin-whichtrainingmethodsforgansdoactuallyconverge-2018,
  title = {Which training methods for GANs do actually converge?},
  author = {Mescheder, Lars and Geiger, Andreas and Nowozin, Sebastian},
  booktitle = {35th International Conference on Machine Learning, ICML 2018},
  year = {2018},
  volume = {8},
  pages = {5589-5626},
  url = {https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/449ece46-68e4-6575-0bb3-7da04e53dc5a/mescheder18a.pdf.pdf},
  abstract = {Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.}
}
{"_id":"nxmvNoDKPdWL2FeMf","bibbaseid":"mescheder-geiger-nowozin-whichtrainingmethodsforgansdoactuallyconverge-2018","authorIDs":[],"author_short":["Mescheder, L.","Geiger, A.","Nowozin, S."],"bibdata":{"title":"Which training methods for GANs do actually converge?","type":"article","year":"2018","pages":"5589-5626","volume":"8","id":"54368713-bad8-3e80-89ca-9c73461e833a","created":"2022-09-08T17:25:32.203Z","file_attached":"true","profile_id":"ad172e55-c0e8-3aa4-8465-09fac4d5f5c8","group_id":"1ff583c0-be37-34fa-9c04-73c69437d354","last_modified":"2022-09-21T09:29:25.406Z","read":false,"starred":false,"authored":false,"confirmed":"true","hidden":false,"folder_uuids":"b6d75013-efe2-4ddc-b3db-65496bd4db9f,103fae48-b63f-495b-9265-9049d2927097","private_publication":false,"abstract":"GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss reg- ularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero- centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high- resolution generative image models for a variety of datasets with little hyperparameter tuning.","bibtype":"article","author":"Mescheder, Lars and Geiger, Andreas and Nowozin, Sebastian","journal":"35th International Conference on Machine Learning, ICML 2018","bibtex":"@article{\n title = {Which training methods for GANs do actually converge?},\n type = {article},\n year = {2018},\n pages = {5589-5626},\n volume = {8},\n id = {54368713-bad8-3e80-89ca-9c73461e833a},\n created = {2022-09-08T17:25:32.203Z},\n file_attached = {true},\n profile_id = {ad172e55-c0e8-3aa4-8465-09fac4d5f5c8},\n group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},\n last_modified = {2022-09-21T09:29:25.406Z},\n read = {false},\n starred = {false},\n authored = {false},\n confirmed = {true},\n hidden = {false},\n folder_uuids = {b6d75013-efe2-4ddc-b3db-65496bd4db9f,103fae48-b63f-495b-9265-9049d2927097},\n private_publication = {false},\n abstract = {GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss reg- ularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero- centered gradient penalties converges. 
On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high- resolution generative image models for a variety of datasets with little hyperparameter tuning.},\n bibtype = {article},\n author = {Mescheder, Lars and Geiger, Andreas and Nowozin, Sebastian},\n journal = {35th International Conference on Machine Learning, ICML 2018}\n}","author_short":["Mescheder, L.","Geiger, A.","Nowozin, S."],"urls":{"Paper":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c/file/449ece46-68e4-6575-0bb3-7da04e53dc5a/mescheder18a.pdf.pdf"},"biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","bibbaseid":"mescheder-geiger-nowozin-whichtrainingmethodsforgansdoactuallyconverge-2018","role":"author","metadata":{"authorlinks":{}},"downloads":0},"bibtype":"article","biburl":"https://bibbase.org/service/mendeley/bfbbf840-4c42-3914-a463-19024f50b30c","creationDate":"2020-01-27T02:13:33.889Z","downloads":0,"keywords":[],"search_terms":["training","methods","gans","actually","converge","mescheder","geiger","nowozin"],"title":"Which training methods for GANs do actually converge?","year":2018,"dataSources":["hEoKh4ygEAWbAZ5iy","ya2CyA73rpZseyrZ8","2252seNhipfTmjEBQ"]}