

In this article, we introduce a new mode for training Generative Adversarial Networks (GANs). Rather than minimizing the distance between the evidence distribution $\tilde{p}(x)$ and the generative distribution $q(x)$, we minimize the distance between $\tilde{p}(x_r)q(x_f)$ and $\tilde{p}(x_f)q(x_r)$. This adversarial pattern can be interpreted as a Turing test in GANs. It allows us to use information from real samples while training the generator and accelerates the whole training procedure. We even find that by just proportionally increasing the sizes of the discriminator and generator, it succeeds at 256x256 resolution without careful hyperparameter tuning.
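The contrast drawn in the abstract can be written schematically as follows (a sketch reconstructed from the abstract alone; $D$ denotes an unspecified distance or divergence, $x_r$ a real sample, and $x_f$ a generated sample):

```latex
% Standard GAN training: match the generative distribution to the data distribution
\min_{q} \; D\bigl( \tilde{p}(x) \,\big\|\, q(x) \bigr)

% Proposed "Turing test" mode: compare the two joint distributions
% over (real, fake) sample pairs, with the roles of p and q swapped
\min_{q} \; D\bigl( \tilde{p}(x_r)\, q(x_f) \,\big\|\, \tilde{p}(x_f)\, q(x_r) \bigr)
```

In the second objective the discriminator effectively judges pairs of samples, which is why the paper frames it as a Turing test; real samples thereby enter the generator's training signal directly.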

@article{su_training_2018,
  title    = {Training {Generative} {Adversarial} {Networks} {Via} {Turing} {Test}},
  url      = {http://arxiv.org/abs/1810.10948},
  abstract = {In this article, we introduce a new mode for training Generative Adversarial Networks (GANs). Rather than minimizing the distance between the evidence distribution $\tilde{p}(x)$ and the generative distribution $q(x)$, we minimize the distance between $\tilde{p}(x_r)q(x_f)$ and $\tilde{p}(x_f)q(x_r)$. This adversarial pattern can be interpreted as a Turing test in GANs. It allows us to use information from real samples while training the generator and accelerates the whole training procedure. We even find that by just proportionally increasing the sizes of the discriminator and generator, it succeeds at 256x256 resolution without careful hyperparameter tuning.},
  author   = {Su, Jianlin},
  month    = oct,
  year     = {2018},
  note     = {\_eprint: 1810.10948},
  keywords = {\#nosource},
}
