Sparse Convolutional Neural Networks. Liu, B., Wang, M., Foroosh, H., Tappen, M., & Penksy, M. In 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 806–814, Boston, MA, USA, June 2015. IEEE.
Abstract: Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of parameters, with a drop of accuracy that is less than 1% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models. Our CPU implementation demonstrates much higher efficiency than the off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.
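The abstract's two core ideas, zeroing out most weights and then exploiting the resulting sparsity with a purpose-built sparse multiply, can be illustrated with a minimal sketch. This is a simplification under stated assumptions: the paper derives sparsity via a sparse decomposition over inter- and intra-channel redundancy followed by fine-tuning, not the simple magnitude pruning shown here, and its CPU kernel is far more optimized than this plain CSR (compressed sparse row) multiply. All function names below are illustrative, not from the paper.

```python
# Sketch only: magnitude pruning to a target sparsity, then CSR sparse-dense
# multiplication that skips zero entries. Pure-Python stand-in for the paper's
# sparse decomposition + optimized CPU kernel.

def prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries until ~`sparsity` of them are zero."""
    flat = sorted(abs(w) for row in weights for w in row)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else 0.0
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in weights]

def to_csr(dense):
    """Convert a dense matrix (list of lists) into CSR arrays (values, col_idx, row_ptr)."""
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                values.append(v)
                col_idx.append(j)
        row_ptr.append(len(values))  # end of this row's entries in `values`
    return values, col_idx, row_ptr

def csr_matmul(values, col_idx, row_ptr, dense_b):
    """Multiply a CSR matrix by a dense matrix, touching only nonzero entries."""
    n_cols = len(dense_b[0])
    out = []
    for i in range(len(row_ptr) - 1):
        acc = [0.0] * n_cols
        for k in range(row_ptr[i], row_ptr[i + 1]):
            v, j = values[k], col_idx[k]
            for c in range(n_cols):
                acc[c] += v * dense_b[j][c]
        out.append(acc)
    return out
```

At 90% sparsity the inner loop visits roughly one tenth of the entries a dense multiply would, which is the source of the speedup the abstract claims; the paper's contribution is making that win materialize on real CPUs, where naive sparse formats often lose to dense BLAS.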
@inproceedings{baoyuan_liu_sparse_2015,
address = {Boston, MA, USA},
title = {Sparse {Convolutional} {Neural} {Networks}},
isbn = {978-1-4673-6964-0},
url = {http://ieeexplore.ieee.org/document/7298681/},
doi = {10.1109/CVPR.2015.7298681},
abstract = {Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90\% of parameters, with a drop of accuracy that is less than 1\% on the ILSVRC2012 dataset. We also propose an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models. Our CPU implementation demonstrates much higher efficiency than the off-the-shelf sparse matrix libraries, with a significant speedup realized over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.},
language = {en},
urldate = {2023-08-04},
booktitle = {2015 {IEEE} {Conference} on {Computer} {Vision} and {Pattern} {Recognition} ({CVPR})},
publisher = {IEEE},
author = {Liu, Baoyuan and Wang, Min and Foroosh, Hassan and Tappen, Marshall and Penksy, Marianna},
month = jun,
year = {2015},
keywords = {\#CNN, \#CVPR{\textgreater}15, \#Sparse, /unread, ⭐⭐⭐⭐⭐},
pages = {806--814},
}