A new algorithm for training sparse autoencoders. Shamsabadi, A. S., Babaie-Zadeh, M., Seyyedsalehi, S. Z., Rabiee, H. R., & Jutten, C. In *2017 25th European Signal Processing Conference (EUSIPCO)*, pages 2141-2145, Aug, 2017.


Data representation plays an important role in the performance of machine learning algorithms. Since raw data usually lack the desired quality, many efforts have been made to provide more desirable representations of data. Among many different approaches, sparse data representation has gained popularity in recent years. In this paper, we propose a new sparse autoencoder obtained by imposing the square of the smoothed L0 norm of the data representation on the hidden layer of a regular autoencoder. The squared smoothed L0 norm increases the tendency of each data representation to be "individually" sparse. Moreover, with the proposed sparse autoencoder, once the model parameters are learned, the sparse representation of any new data point is obtained by a simple matrix-vector multiplication, without solving any optimization problem. When applied to the MNIST, CIFAR-10, and OPTDIGITS datasets, the results show that the proposed model guarantees a sparse representation for each input, which leads to better classification results.
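The penalty described in the abstract can be sketched numerically. The sketch below assumes the common Gaussian smoothing of the L0 norm used in the authors' earlier SL0 work, ‖h‖₀ ≈ m − Σᵢ exp(−hᵢ²/(2σ²)), and then squares it as the abstract describes; the function names, the choice of σ, and this exact formulation are illustrative assumptions, not the paper's verbatim definitions.

```python
import numpy as np

def smoothed_l0(h, sigma=0.1):
    """Smooth approximation of the L0 norm of vector h.

    Assumed Gaussian smoothing: ||h||_0 ~ m - sum_i exp(-h_i^2 / (2 sigma^2)),
    where m is the length of h and sigma controls the sharpness.
    """
    return h.size - np.sum(np.exp(-h**2 / (2.0 * sigma**2)))

def sparsity_penalty(h, sigma=0.1):
    """Squared smoothed-L0 penalty, as imposed on the hidden layer."""
    return smoothed_l0(h, sigma) ** 2

# A dense vector incurs a much larger penalty than a sparse vector,
# which is the pressure that drives each representation toward sparsity.
dense = np.full(100, 0.1)          # every entry small but nonzero
sparse = np.zeros(100)
sparse[0] = 1.0                    # one large entry, rest exactly zero
assert sparsity_penalty(sparse) < sparsity_penalty(dense)
```

Because the penalty only shapes the encoder weights during training, encoding a new sample afterwards is just the autoencoder's forward pass (a matrix-vector product plus nonlinearity), consistent with the abstract's claim that no per-sample optimization is needed.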

@InProceedings{8081588,
  author    = {A. S. Shamsabadi and M. Babaie-Zadeh and S. Z. Seyyedsalehi and H. R. Rabiee and C. Jutten},
  booktitle = {2017 25th European Signal Processing Conference (EUSIPCO)},
  title     = {A new algorithm for training sparse autoencoders},
  year      = {2017},
  pages     = {2141-2145},
  abstract  = {Data representation plays an important role in performance of machine learning algorithms. Since data usually lacks the desired quality, many efforts have been made to provide a more desirable representation of data. Among many different approaches, sparse data representation has gained popularity in recent years. In this paper, we propose a new sparse autoencoder by imposing the power two of smoothed L0 norm of data representation on the hidden layer of regular autoencoder. The square of smoothed L0 norm increases the tendency that each data representation is {"}individually{"} sparse. Moreover, by using the proposed sparse autoencoder, once the model parameters are learned, the sparse representation of any new data is obtained simply by a matrix-vector multiplication without performing any optimization. When applied to the MNIST, CIFAR-10, and OPTDIGITS datasets, the results show that the proposed model guarantees a sparse representation for each input data which leads to better classification results.},
  keywords  = {data structures;learning (artificial intelligence);matrix multiplication;optimisation;pattern classification;vectors;machine learning algorithms;sparse data representation;sparse autoencoder;matrix-vector multiplication;optimization;Feature extraction;Optimization;Signal processing algorithms;Training;Sparse matrices;Decoding;Encoding},
  doi       = {10.23919/EUSIPCO.2017.8081588},
  issn      = {2076-1465},
  month     = {Aug},
  url       = {https://www.eurasip.org/proceedings/eusipco/eusipco2017/papers/1570347253.pdf},
}

