Nonparametric Variational Auto-encoders for Hierarchical Representation Learning. Goyal, P., Hu, Z., Liang, X., Wang, C., & Xing, E. arXiv:1703.07027 [cs, stat], August 2017.
The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs uses a rather simple prior over the latent variables, such as a standard normal distribution, thereby restricting their application to relatively simple phenomena. In this work, we propose hierarchical nonparametric variational autoencoders, which combine tree-structured Bayesian nonparametric priors with VAEs to enable infinite flexibility of the latent representation space. Both the neural parameters and the Bayesian priors are learned jointly using tailored variational inference. The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus and infers accurate representations of data instances. We apply our model to video representation learning. Our method is able to discover highly interpretable activity hierarchies and obtains improved clustering accuracy and generalization capacity based on the learned rich representations.
@article{goyal_nonparametric_2017,
	title = {Nonparametric {Variational} {Auto}-encoders for {Hierarchical} {Representation} {Learning}},
	url = {http://arxiv.org/abs/1703.07027},
	abstract = {The recently developed variational autoencoders (VAEs) have proved to be an effective confluence of the rich representational power of neural networks with Bayesian methods. However, most work on VAEs uses a rather simple prior over the latent variables, such as a standard normal distribution, thereby restricting their application to relatively simple phenomena. In this work, we propose hierarchical nonparametric variational autoencoders, which combine tree-structured Bayesian nonparametric priors with VAEs to enable infinite flexibility of the latent representation space. Both the neural parameters and the Bayesian priors are learned jointly using tailored variational inference. The resulting model induces a hierarchical structure of latent semantic concepts underlying the data corpus and infers accurate representations of data instances. We apply our model to video representation learning. Our method is able to discover highly interpretable activity hierarchies and obtains improved clustering accuracy and generalization capacity based on the learned rich representations.},
	language = {en},
	urldate = {2022-01-19},
	journal = {arXiv:1703.07027 [cs, stat]},
	author = {Goyal, Prasoon and Hu, Zhiting and Liang, Xiaodan and Wang, Chenyu and Xing, Eric},
	month = aug,
	year = {2017},
	note = {arXiv: 1703.07027},
	keywords = {Computer Science - Machine Learning, Statistics - Machine Learning},
}
