Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. Han, S., Mao, H., & Dally, W. J. In 4th International Conference on Learning Representations (ICLR 2016), Conference Track Proceedings, 2016. International Conference on Learning Representations, ICLR.

Abstract: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three-stage pipeline: pruning, trained quantization and Huffman coding. These stages work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing; finally, we apply Huffman coding. After the first two steps we retrain the network to fine-tune the remaining connections and the quantized centroids. Pruning reduces the number of connections by 9× to 13×; quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240 MB to 6.9 MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49×, from 552 MB to 11.3 MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, the compressed network has a 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.
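The pipeline described in the abstract lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration (using NumPy and scikit-learn, not the authors' released code) of the three stages applied to a single toy weight matrix: magnitude pruning, k-means weight sharing with a 32-entry (5-bit) codebook, and Huffman coding of the codebook indices. The helper names, the 90% pruning fraction, and the toy layer size are assumptions made for the demo; the paper's retraining and centroid fine-tuning steps are omitted, and sparse-index/codebook overhead is ignored.

# A minimal, illustrative sketch of the three deep-compression stages on one
# weight matrix. NOT the authors' implementation: helper names, the 90% pruning
# fraction, and the toy 256x256 layer are demo assumptions; retraining and
# centroid fine-tuning from the paper are omitted.
import heapq
from collections import Counter

import numpy as np
from sklearn.cluster import KMeans

NUM_CLUSTERS = 32      # a 32-entry codebook gives 5-bit indices (the paper's 32 -> 5 bits)
PRUNE_FRACTION = 0.9   # drop the ~90% smallest-magnitude connections (illustrative choice)


def prune_by_magnitude(w, frac=PRUNE_FRACTION):
    """Stage 1 (pruning): zero out the smallest-magnitude connections."""
    threshold = np.quantile(np.abs(w), frac)
    mask = np.abs(w) >= threshold
    return w * mask, mask


def quantize_shared(w, mask, k=NUM_CLUSTERS):
    """Stage 2 (trained quantization / weight sharing): cluster surviving weights
    into k shared centroids; each connection then stores only a cluster index."""
    surviving = w[mask].reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(surviving)
    codes = km.predict(surviving)            # per-connection codebook index
    centroids = km.cluster_centers_.ravel()  # shared values (fine-tuned by retraining in the paper)
    return codes, centroids


def huffman_codes(symbol_counts):
    """Stage 3 (Huffman coding): build a prefix code over the cluster indices,
    so frequent centroids get shorter bit strings."""
    heap = [[count, [sym, ""]] for sym, count in symbol_counts.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {sym: code for sym, code in heap[0][1:]}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)  # toy "layer"

    w_pruned, mask = prune_by_magnitude(w)
    codes, centroids = quantize_shared(w_pruned, mask)
    table = huffman_codes(Counter(codes.tolist()))

    dense_bits = w.size * 32                                 # original fp32 storage
    coded_bits = sum(len(table[c]) for c in codes.tolist())  # Huffman-coded indices
    # Sparse-index and codebook overhead are ignored, so this is only a rough
    # per-layer estimate, not the paper's reported whole-model ratios.
    print(f"surviving connections: {int(mask.sum())} / {w.size}")
    print(f"rough weight-storage ratio: {dense_bits / coded_bits:.0f}x")

The 32-entry codebook mirrors the abstract's 32-to-5-bit reduction: each surviving connection stores a 5-bit index (shorter still after Huffman coding) into a table of shared centroids instead of a 32-bit float.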
@inproceedings{han2016deepcompression,
title = {Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding},
type = {inproceedings},
year = {2016},
websites = {https://arxiv.org/abs/1510.00149v5},
month = {10},
publisher = {International Conference on Learning Representations, ICLR},
day = {1},
id = {856f549a-859a-3631-97c4-5ad4eccd2e79},
created = {2021-06-14T08:22:14.256Z},
accessed = {2021-06-14},
file_attached = {true},
profile_id = {48fc0258-023d-3602-860e-824092d62c56},
group_id = {1ff583c0-be37-34fa-9c04-73c69437d354},
last_modified = {2021-06-14T08:31:45.800Z},
read = {false},
starred = {false},
authored = {false},
confirmed = {false},
hidden = {false},
folder_uuids = {c9e2a751-ce83-45dd-9c0e-bdac57df3cf4,cf9189f6-f354-4337-8aaf-a5f12cbf8660},
private_publication = {false},
abstract = {Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce “deep compression”, a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35× to 49× without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9× to 13×; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35×, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49× from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3× to 4× layerwise speedup and 3× to 7× better energy efficiency.},
bibtype = {inproceedings},
author = {Han, Song and Mao, Huizi and Dally, William J.},
booktitle = {4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings}
}