Sparse Autoencoders Using Non-smooth Regularization. Amini, S. & Ghaemmaghami, S. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2000-2004, Sep. 2018.

Abstract: The autoencoder, at the heart of a deep learning structure, plays an important role in extracting an abstract representation of a set of input training patterns. An abstract representation contains informative features that can represent a large set of data patterns in an optimal way for certain applications. It has been shown that sparse regularization of the outputs of the hidden units (codes) in an autoencoder enhances the quality of the codes, which leads to higher learning performance in applications such as classification. Almost all methods that aim for code sparsity in an autoencoder use a smooth approximation of the l1 norm, the best convex approximation of the pseudo l0 norm. In this paper, we incorporate sparsity into the autoencoder training optimization process using the non-smooth convex l1 norm and propose an efficient algorithm to train the structure. Non-smooth l1 regularization has proven effective at imposing sparsity in various applications, including feature selection via the lasso and sparse representation via basis pursuit. Our experimental results on three benchmark datasets show the superiority of this term over previously proposed ones for training a sparse autoencoder. As a byproduct, the proposed method can also be used to apply other types of non-smooth regularizers to the autoencoder training problem.
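As an illustration of the regularization scheme the abstract describes, the sketch below trains a single-hidden-layer autoencoder with the exact (non-smooth) l1 norm penalizing the hidden codes, rather than a smooth surrogate such as the KL-divergence sparsity penalty. This is a minimal PyTorch sketch, not the authors' algorithm: it relies on autograd's subgradient of |.|, and the dimensions, penalty weight, and learning rate are assumed values chosen for the example.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, in_dim=784, code_dim=196):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.decoder = nn.Linear(code_dim, in_dim)

    def forward(self, x):
        code = self.encoder(x)          # hidden codes to be sparsified
        return self.decoder(code), code

model = SparseAutoencoder()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 1e-3  # weight of the l1 code penalty (assumed value)

def training_step(x):
    recon, code = model(x)
    # Reconstruction error plus the exact non-smooth l1 norm of the codes;
    # plain subgradient descent stands in for the paper's training algorithm.
    loss = nn.functional.mse_loss(recon, x) + lam * code.abs().sum(dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x = torch.rand(64, 784)  # random stand-in for a real benchmark dataset
for _ in range(3):
    print(training_step(x))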
@InProceedings{8553217,
  author = {S. Amini and S. Ghaemmaghami},
booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
title = {Sparse Autoencoders Using Non-smooth Regularization},
year = {2018},
pages = {2000-2004},
  abstract = {The autoencoder, at the heart of a deep learning structure, plays an important role in extracting an abstract representation of a set of input training patterns. An abstract representation contains informative features that can represent a large set of data patterns in an optimal way for certain applications. It has been shown that sparse regularization of the outputs of the hidden units (codes) in an autoencoder enhances the quality of the codes, which leads to higher learning performance in applications such as classification. Almost all methods that aim for code sparsity in an autoencoder use a smooth approximation of the l1 norm, the best convex approximation of the pseudo l0 norm. In this paper, we incorporate sparsity into the autoencoder training optimization process using the non-smooth convex l1 norm and propose an efficient algorithm to train the structure. Non-smooth l1 regularization has proven effective at imposing sparsity in various applications, including feature selection via the lasso and sparse representation via basis pursuit. Our experimental results on three benchmark datasets show the superiority of this term over previously proposed ones for training a sparse autoencoder. As a byproduct, the proposed method can also be used to apply other types of non-smooth regularizers to the autoencoder training problem.},
keywords = {approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;Training;Decoding;Encoding;Cost function;Gradient methods;Europe},
doi = {10.23919/EUSIPCO.2018.8553217},
issn = {2076-1465},
month = {Sep.},
url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433458.pdf},
}
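The standard tool for handling a non-smooth l1 term exactly (as in the lasso and basis pursuit, which the abstract cites as precedents) is the proximal operator of the l1 norm, i.e., elementwise soft-thresholding. The snippet below shows only this generic operator for context; it is not claimed to be the update rule of the paper's algorithm.

import torch

def soft_threshold(z: torch.Tensor, tau: float) -> torch.Tensor:
    # prox of tau * ||.||_1: shrink each entry toward zero by tau.
    return torch.sign(z) * torch.clamp(z.abs() - tau, min=0.0)

print(soft_threshold(torch.tensor([-2.0, -0.5, 0.3, 1.5]), 1.0))
# -> tensor([-1.0000,  0.0000,  0.0000,  0.5000])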
{"_id":"N8WNSvdkcGXBiLLFy","bibbaseid":"amini-ghaernmaghami-sparseautoencodersusingnonsmoothregularization-2018","authorIDs":[],"author_short":["Amini, S.","Ghaernmaghami, S."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","author":[{"firstnames":["S."],"propositions":[],"lastnames":["Amini"],"suffixes":[]},{"firstnames":["S."],"propositions":[],"lastnames":["Ghaernmaghami"],"suffixes":[]}],"booktitle":"2018 26th European Signal Processing Conference (EUSIPCO)","title":"Sparse Autoencoders Using Non-smooth Regularization","year":"2018","pages":"2000-2004","abstract":"Autoencoder, at the heart of a deep learning structure, plays an important role in extracting abstract representation of a set of input training patterns. Abstract representation contains informative features to demonstrate a large set of data patterns in an optimal way in certain applications. It is shown that through sparse regularization of outputs of the hidden units (codes) in an autoencoder, the quality of codes can be enhanced that leads to a higher learning performance in applications like classification. Almost all methods trying to achieve code sparsity in an autoencoder use a smooth approximation of l1 norm, as the best convex approximation of pseudo l0 norm. In this paper, we incorporate sparsity to autoencoder training optimization process using non-smooth convex l1 norm and propose an efficient algorithm to train the structure. The non-smooth l1 regularization have shown its efficiency in imposing sparsity in various applications including feature selection via lasso and sparse representation using basis pursuit. Our experimental results on three benchmark datasets show superiority of this term in training a sparse autoencoder over previously proposed ones. As a byproduct of the proposed method, it can also be used to apply different types of non-smooth regularizers to autoencoder training problem.","keywords":"approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;Training;Decoding;Encoding;Cost function;Gradient methods;Europe","doi":"10.23919/EUSIPCO.2018.8553217","issn":"2076-1465","month":"Sep.","url":"https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433458.pdf","bibtex":"@InProceedings{8553217,\n author = {S. Amini and S. Ghaernmaghami},\n booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n title = {Sparse Autoencoders Using Non-smooth Regularization},\n year = {2018},\n pages = {2000-2004},\n abstract = {Autoencoder, at the heart of a deep learning structure, plays an important role in extracting abstract representation of a set of input training patterns. Abstract representation contains informative features to demonstrate a large set of data patterns in an optimal way in certain applications. It is shown that through sparse regularization of outputs of the hidden units (codes) in an autoencoder, the quality of codes can be enhanced that leads to a higher learning performance in applications like classification. 
Almost all methods trying to achieve code sparsity in an autoencoder use a smooth approximation of l1 norm, as the best convex approximation of pseudo l0 norm. In this paper, we incorporate sparsity to autoencoder training optimization process using non-smooth convex l1 norm and propose an efficient algorithm to train the structure. The non-smooth l1 regularization have shown its efficiency in imposing sparsity in various applications including feature selection via lasso and sparse representation using basis pursuit. Our experimental results on three benchmark datasets show superiority of this term in training a sparse autoencoder over previously proposed ones. As a byproduct of the proposed method, it can also be used to apply different types of non-smooth regularizers to autoencoder training problem.},\n keywords = {approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;Training;Decoding;Encoding;Cost function;Gradient methods;Europe},\n doi = {10.23919/EUSIPCO.2018.8553217},\n issn = {2076-1465},\n month = {Sep.},\n url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433458.pdf},\n}\n\n","author_short":["Amini, S.","Ghaernmaghami, S."],"key":"8553217","id":"8553217","bibbaseid":"amini-ghaernmaghami-sparseautoencodersusingnonsmoothregularization-2018","role":"author","urls":{"Paper":"https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433458.pdf"},"keyword":["approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;Training;Decoding;Encoding;Cost function;Gradient methods;Europe"],"metadata":{"authorlinks":{}},"downloads":0},"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/Roznn/EUSIPCO/main/eusipco2018url.bib","creationDate":"2021-02-13T15:38:40.328Z","downloads":0,"keywords":["approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;training;decoding;encoding;cost function;gradient methods;europe"],"search_terms":["sparse","autoencoders","using","non","smooth","regularization","amini","ghaernmaghami"],"title":"Sparse Autoencoders Using Non-smooth Regularization","year":2018,"dataSources":["yiZioZximP7hphDpY","iuBeKSmaES2fHcEE9"]}