Winner-Take-All Autoencoders. Makhzani, A. & Frey, B. Abstract: In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders, which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder, which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.
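The two sparsity constraints described in the abstract can be illustrated with a short sketch. The NumPy code below is not the authors' implementation; the function names, tensor shapes, and the 5% lifetime-sparsity rate are assumptions chosen for the example. Lifetime sparsity keeps, for each hidden unit, only the largest activations across the mini-batch; spatial sparsity keeps only the single largest activation within each feature map.

import numpy as np

def lifetime_sparsity(h, rate=0.05):
    # Fully-connected WTA (sketch): for each hidden unit (column), keep only
    # the top `rate` fraction of activations across the mini-batch (rows)
    # and zero out the rest.  h has shape (batch_size, num_hidden).
    k = max(1, int(round(rate * h.shape[0])))
    thresh = np.sort(h, axis=0)[-k, :]        # k-th largest value per unit
    return np.where(h >= thresh, h, 0.0)

def spatial_sparsity(h):
    # Convolutional WTA (sketch): within each feature map of each sample,
    # keep only the single largest activation.  h has shape (batch, channels, H, W).
    flat = h.reshape(h.shape[0], h.shape[1], -1)
    winners = flat.argmax(axis=2)              # winning position per feature map
    mask = np.zeros_like(flat)
    np.put_along_axis(mask, winners[..., None], 1.0, axis=2)
    return (flat * mask).reshape(h.shape)

# Example on random "activations".
fc_h = np.random.rand(128, 256)                # mini-batch of 128, 256 hidden units
conv_h = np.random.rand(128, 64, 16, 16)       # 64 feature maps of 16x16
print(np.count_nonzero(lifetime_sparsity(fc_h)))    # roughly rate * batch per unit
print(np.count_nonzero(spatial_sparsity(conv_h)))   # exactly 128 * 64 winners

In the paper's framing, these winner-take-all operations act as activation functions of the encoder, and the decoder is trained to reconstruct the input from the surviving activations.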
@article{makhzani_winner-take-all_nodate,
title = {Winner-{Take}-{All} {Autoencoders}},
abstract = {In this paper, we propose a winner-take-all method for learning hierarchical sparse representations in an unsupervised fashion. We first introduce fully-connected winner-take-all autoencoders which use mini-batch statistics to directly enforce a lifetime sparsity in the activations of the hidden units. We then propose the convolutional winner-take-all autoencoder which combines the benefits of convolutional architectures and autoencoders for learning shift-invariant sparse representations. We describe a way to train convolutional autoencoders layer by layer, where in addition to lifetime sparsity, a spatial sparsity within each feature map is achieved using winner-take-all activation functions. We will show that winner-take-all autoencoders can be used to learn deep sparse representations from the MNIST, CIFAR-10, ImageNet, Street View House Numbers and Toronto Face datasets, and achieve competitive classification performance.},
language = {en},
author = {Makhzani, Alireza and Frey, Brendan},
pages = {11}
}