Dual Discriminator Generative Adversarial Nets. Nguyen, T. D., Le, T., Vu, H., & Phung, D. arXiv:1709.03831 [cs, stat], September 2017.
@article{nguyen_dual_2017,
title = {Dual {Discriminator} {Generative} {Adversarial} {Nets}},
url = {http://arxiv.org/abs/1709.03831},
	abstract = {We propose in this paper a novel approach to tackle the problem of mode collapse encountered in generative adversarial networks (GANs). Our idea is intuitive but proven to be very effective, especially in addressing some key limitations of GANs. In essence, it combines the Kullback-Leibler (KL) and reverse KL divergences into a unified objective function, thus exploiting the complementary statistical properties of these divergences to effectively diversify the estimated density in capturing multiple modes. We term our method dual discriminator generative adversarial nets (D2GAN), which, unlike GAN, has two discriminators; together with a generator, it is again analogous to a minimax game, wherein one discriminator rewards high scores for samples from the data distribution whilst the other, conversely, favors data from the generator, and the generator produces data to fool both discriminators. We develop theoretical analysis to show that, given the optimal discriminators, optimizing the generator of D2GAN reduces to minimizing both the KL and reverse KL divergences between the data distribution and the distribution induced by the generator, hence effectively avoiding the mode collapse problem. We conduct extensive experiments on synthetic and real-world large-scale datasets (MNIST, CIFAR-10, STL-10, ImageNet), where we have made our best effort to compare D2GAN with the latest state-of-the-art GAN variants in comprehensive qualitative and quantitative evaluations. The experimental results demonstrate the competitive and superior performance of our approach in generating good-quality, diverse samples over baselines, and the capability of our method to scale up to the ImageNet database.},
	urldate = {2018-01-12},
journal = {arXiv:1709.03831 [cs, stat]},
author = {Nguyen, Tu Dinh and Le, Trung and Vu, Hung and Phung, Dinh},
month = sep,
year = {2017},
note = {arXiv: 1709.03831},
keywords = {Computer Science - Learning, Statistics - Machine Learning}
}
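The abstract above describes the D2GAN objective in prose: one discriminator gives a log reward to real samples and a linear penalty to generated ones, the other does the reverse, and the generator tries to fool both. A minimal numerical sketch of that objective follows — not the authors' code; the function name and the weighting coefficients `alpha` and `beta` (which the paper uses to balance the KL and reverse-KL terms, but which the abstract does not spell out) are illustrative:

```python
import numpy as np

def d2gan_objective(d1_real, d1_fake, d2_real, d2_fake, alpha=1.0, beta=1.0):
    """Value of the two-discriminator minimax objective sketched in the abstract.

    d1_real, d1_fake, d2_real, d2_fake are arrays of positive scores
    D1(x), D1(G(z)), D2(x), D2(G(z)).  D1 rewards real data with a log
    score and penalizes generated data linearly; D2 does the opposite,
    so the generator must satisfy both discriminators at once.
    The discriminators maximize this value; the generator minimizes it.
    """
    return (alpha * np.mean(np.log(d1_real)) - np.mean(d1_fake)
            - np.mean(d2_real) + beta * np.mean(np.log(d2_fake)))
```

Per the abstract, at the optimal discriminators the generator's part of this objective reduces to a combination of KL(P_data || P_G) and the reverse KL(P_G || P_data), which is what discourages mode collapse relative to a single-discriminator GAN.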