Assisting the Adversary to Improve GAN Training. Munk, A., Harvey, W., & Wood, F. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8, July 2021. doi: 10.1109/IJCNN52387.2021.9533449. ArXiv: https://arxiv.org/abs/2010.01274. Paper: https://ieeexplore.ieee.org/document/9533449

Abstract: Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.
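The mechanism the abstract describes, penalizing the generator with the norm of the gradients used to train the discriminator, can be sketched in a few lines. The following is a minimal PyTorch illustration only, assuming a standard non-saturating GAN objective; the names (advas_generator_loss, lambda_advas, G, D) are illustrative, and the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def advas_generator_loss(G, D, real, z, lambda_advas=1.0):
    # Illustrative sketch of an AdvAs-style generator loss (names assumed).
    fake = G(z)
    d_real, d_fake = D(real), D(fake)
    # Non-saturating generator objective (one common choice of GAN objective).
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # Discriminator objective evaluated at the current parameters.
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    # Gradients used to train the discriminator; create_graph=True makes the
    # squared-norm penalty differentiable w.r.t. the generator through `fake`.
    grads = torch.autograd.grad(d_loss, list(D.parameters()), create_graph=True)
    penalty = sum(g.pow(2).sum() for g in grads)
    return g_loss + lambda_advas * penalty

The intuition: at a point where the discriminator is optimal for the current generator, its training gradients vanish, so minimizing this penalty encourages the generator to move toward points where the finitely trained discriminator is already near its optimum, which is exactly the theory-practice mismatch the abstract identifies.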
@InProceedings{9533449,
author={Munk, Andreas and Harvey, William and Wood, Frank},
booktitle={2021 International Joint Conference on Neural Networks (IJCNN)},
title={Assisting the Adversary to Improve GAN Training},
year={2021},
pages={1--8},
abstract={Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.},
doi={10.1109/IJCNN52387.2021.9533449},
ISSN={2161-4407},
month={July},
url_ArXiv = {https://arxiv.org/abs/2010.01274},
url_Paper = {https://ieeexplore.ieee.org/document/9533449},
support = {D3M,ETALUMIS}
}
{"_id":"9DaMeGfPdtJkpvRpR","bibbaseid":"munk-harvey-wood-assistingtheadversarytoimprovegantraining-2021","author_short":["Munk, A.","Harvey, W.","Wood, F."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"propositions":[],"lastnames":["Munk"],"firstnames":["Andreas"],"suffixes":[]},{"propositions":[],"lastnames":["Harvey"],"firstnames":["William"],"suffixes":[]},{"propositions":[],"lastnames":["Wood"],"firstnames":["Frank"],"suffixes":[]}],"booktitle":"2021 International Joint Conference on Neural Networks (IJCNN)","title":"Assisting the Adversary to Improve GAN Training","year":"2021","pages":"1-8","abstract":"Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.","doi":"10.1109/IJCNN52387.2021.9533449","issn":"2161-4407","month":"July","url_arxiv":"https://arxiv.org/abs/2010.01274","url_paper":"https://ieeexplore.ieee.org/document/9533449","support":"D3M,ETALUMIS","bibtex":"@InProceedings{9533449, \n\tauthor={Munk, Andreas and Harvey, William and Wood, Frank}, \n\tbooktitle={2021 International Joint Conference on Neural Networks (IJCNN)}, \n\ttitle={Assisting the Adversary to Improve GAN Training}, \n\tyear={2021},\n\tpages={1-8}, \n\tabstract={Some of the most popular methods for improving the stability and performance of GANs involve constraining or regularizing the discriminator. In this paper we consider a largely overlooked regularization technique which we refer to as the Adversary's Assistant (AdvAs). We motivate this using a different perspective to that of prior work. Specifically, we consider a common mismatch between theoretical analysis and practice: analysis often assumes that the discriminator reaches its optimum on each iteration. In practice, this is essentially never true, often leading to poor gradient estimates for the generator. To address this, AdvAs is a penalty imposed on the generator based on the norm of the gradients used to train the discriminator. This encourages the generator to move towards points where the discriminator is optimal. We demonstrate the effect of applying AdvAs to several GAN objectives, datasets and network architectures. 
The results indicate a reduction in the mismatch between theory and practice and that AdvAs can lead to improvement of GAN training, as measured by FID scores.}, \n\tdoi={10.1109/IJCNN52387.2021.9533449}, \n\tISSN={2161-4407}, \n\tmonth={July},\n\turl_ArXiv = {https://arxiv.org/abs/2010.01274},\n\turl_Paper = {https://ieeexplore.ieee.org/document/9533449},\n\tsupport = {D3M,ETALUMIS}\n}\n\n","author_short":["Munk, A.","Harvey, W.","Wood, F."],"key":"9533449","id":"9533449","bibbaseid":"munk-harvey-wood-assistingtheadversarytoimprovegantraining-2021","role":"author","urls":{" arxiv":"https://arxiv.org/abs/2010.01274"," paper":"https://ieeexplore.ieee.org/document/9533449"},"metadata":{"authorlinks":{}},"downloads":4},"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/plai-group/bibliography/master/group_publications.bib","dataSources":["BKH7YtW7K7WNMA3cj","7avRLRrz2ifJGMKcD","wyN5DxtoT6AQuiXnm"],"keywords":[],"search_terms":["assisting","adversary","improve","gan","training","munk","harvey","wood"],"title":"Assisting the Adversary to Improve GAN Training","year":2021,"downloads":4}