Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling. Bartler, A., Wiewel, F., Mauch, L., & Yang, B. In *2019 27th European Signal Processing Conference (EUSIPCO)*, pages 1-5, Sep., 2019.

The Variational Autoencoder (VAE) is a popular generative latent variable model that is often used for representation learning. Standard VAEs assume continuous-valued latent variables and are trained by maximizing the evidence lower bound (ELBO). Conventional methods obtain a differentiable estimate of the ELBO via reparametrized sampling and optimize it with Stochastic Gradient Descent (SGD). However, this approach fails for VAEs with discrete-valued latent variables, for which no reparametrization exists. In this paper, we propose a simple method to train VAEs with binary or categorically valued latent representations. To this end, we use a differentiable estimator of the ELBO based on importance sampling. In experiments, we verify the approach and train two different VAE architectures with Bernoulli and categorically distributed latent representations on two benchmark datasets.
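The core idea the abstract describes can be illustrated in miniature: an expectation over a discrete latent, E_q(z|x)[f(z)], is rewritten via importance sampling as E_p0(z)[q(z|x)/p0(z) · f(z)] with a fixed proposal p0. Since the samples no longer depend on the posterior parameters, the estimator is differentiable in them through the weights. Below is a minimal NumPy sketch of this estimator for a 3-dimensional Bernoulli latent; the posterior probabilities, integrand, and proposal are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical factorized Bernoulli posterior q(z|x) and integrand f(z)
theta = np.array([0.2, 0.7, 0.5])        # q(z_d = 1 | x)
w = np.array([1.0, -2.0, 0.5])           # f(z) = w . z, an arbitrary test function

# Exact expectation E_q[f(z)] by enumerating all 2^3 binary configurations
configs = np.array(list(product([0, 1], repeat=3)))              # shape (8, 3)
q_probs = np.prod(theta**configs * (1 - theta)**(1 - configs), axis=1)
exact = float(q_probs @ (configs @ w))

# Importance-sampled estimate: draw z from a FIXED uniform proposal p0,
# then reweight each sample by q(z|x) / p0(z).  Because sampling does not
# depend on theta, gradients w.r.t. theta flow through the weights alone.
K = 200_000
z = rng.integers(0, 2, size=(K, 3))
p0 = 0.5**3                                                      # uniform proposal prob.
weights = np.prod(theta**z * (1 - theta)**(1 - z), axis=1) / p0
estimate = float(np.mean(weights * (z @ w)))

print(exact, estimate)   # the two values should agree closely for large K
```

In the paper's setting, f(z) would be the decoder log-likelihood and the weights would be computed from the encoder output, so that SGD can update the encoder through this differentiable estimate.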

@InProceedings{8902811,
  author    = {A. Bartler and F. Wiewel and L. Mauch and B. Yang},
  booktitle = {2019 27th European Signal Processing Conference (EUSIPCO)},
  title     = {Training Variational Autoencoders with Discrete Latent Variables Using Importance Sampling},
  year      = {2019},
  pages     = {1-5},
  keywords  = {variational autoencoder;discrete latent variables;importance sampling},
  doi       = {10.23919/EUSIPCO.2019.8902811},
  issn      = {2076-1465},
  month     = {Sep.},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2019/proceedings/papers/1570531485.pdf},
}

