Training and Inference with Integers in Deep Neural Networks. Wu, S., Li, G., Chen, F., & Shi, L. February, 2018. 267 citations (Semantic Scholar/arXiv) [2022-08-18] arXiv:1802.04680 [cs]
Abstract: Research on deep neural networks with discrete parameters and their deployment in embedded systems has been an active and promising topic. Although previous works have successfully reduced precision in inference, transferring both training and inference to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform a pure discrete dataflow on fixed-point devices, we further replace batch normalization with a constant scaling layer and simplify other components that are arduous to implement with integers. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.
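The abstract's core idea, constraining values to low-bitwidth integers via shifting and linear quantization, can be sketched as follows. This is a minimal illustrative reading of k-bit linear quantization onto a grid with step 2^(1-k), not the authors' exact implementation; the function name and the example values are assumptions for illustration.

```python
import numpy as np

def quantize(x, k):
    """Snap x to a k-bit fixed-point grid inside (-1, 1).

    sigma = 2**(1 - k) is the smallest positive step; values are rounded
    to multiples of sigma and saturated to the representable range.
    """
    sigma = 2.0 ** (1 - k)
    q = sigma * np.round(x / sigma)            # round to the k-bit grid
    return np.clip(q, -1 + sigma, 1 - sigma)   # saturate out-of-range values

# Example: at k = 2 the step is 0.5 and the range is [-0.5, 0.5]
w = np.array([-0.9, -0.3, 0.1, 0.7])
print(quantize(w, 2))  # -0.9 saturates to -0.5; 0.1 rounds to 0.0
```

With a grid like this, weights, activations, gradients, and errors can all be kept in a small fixed set of values between layers, which is what makes an integer-only dataflow possible.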
@misc{wu_training_2018,
title = {Training and {Inference} with {Integers} in {Deep} {Neural} {Networks}},
url = {http://arxiv.org/abs/1802.04680},
abstract = {Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as "WAGE" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.},
urldate = {2022-08-07},
publisher = {arXiv},
author = {Wu, Shuang and Li, Guoqi and Chen, Feng and Shi, Luping},
month = feb,
year = {2018},
note = {267 citations (Semantic Scholar/arXiv) [2022-08-18]
arXiv:1802.04680 [cs]},
keywords = {Computer Science - Machine Learning},
}
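The record above also mentions replacing batch normalization with a constant scaling layer. A minimal sketch of that idea: instead of normalizing by batch statistics at run time, divide pre-activations by a constant fixed at initialization. The fan-in-based rule below is a hypothetical stand-in, not the paper's exact formula.

```python
import numpy as np

def constant_scale(x, fan_in):
    """Scale pre-activations by a constant instead of batch statistics.

    alpha is fixed once from the layer's fan-in, so no running means,
    variances, or per-batch divisions are needed at inference time.
    The sqrt(fan_in) rule here is illustrative only.
    """
    alpha = max(np.sqrt(fan_in), 1.0)
    return x / alpha

# A layer with fan-in 16 is scaled by a constant 1/4
y = constant_scale(np.ones(4), fan_in=16)
```

Because alpha never changes, it can be folded into a fixed shift on integer hardware, which is why this substitution matters for a pure fixed-point dataflow.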
{"_id":"FPQ2cTDntWZG6gQ3W","bibbaseid":"wu-li-chen-shi-trainingandinferencewithintegersindeepneuralnetworks-2018","author_short":["Wu, S.","Li, G.","Chen, F.","Shi, L."],"bibdata":{"bibtype":"misc","type":"misc","title":"Training and Inference with Integers in Deep Neural Networks","url":"http://arxiv.org/abs/1802.04680","abstract":"Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as \"WAGE\" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. 
Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.","urldate":"2022-08-07","publisher":"arXiv","author":[{"propositions":[],"lastnames":["Wu"],"firstnames":["Shuang"],"suffixes":[]},{"propositions":[],"lastnames":["Li"],"firstnames":["Guoqi"],"suffixes":[]},{"propositions":[],"lastnames":["Chen"],"firstnames":["Feng"],"suffixes":[]},{"propositions":[],"lastnames":["Shi"],"firstnames":["Luping"],"suffixes":[]}],"month":"February","year":"2018","note":"267 citations (Semantic Scholar/arXiv) [2022-08-18] arXiv:1802.04680 [cs]","keywords":"Computer Science - Machine Learning","bibtex":"@misc{wu_training_2018,\n\ttitle = {Training and {Inference} with {Integers} in {Deep} {Neural} {Networks}},\n\turl = {http://arxiv.org/abs/1802.04680},\n\tabstract = {Researches on deep neural networks with discrete parameters and their deployment in embedded systems have been active and promising topics. Although previous works have successfully reduced precision in inference, transferring both training and inference processes to low-bitwidth integers has not been demonstrated simultaneously. In this work, we develop a new method termed as \"WAGE\" to discretize both training and inference, where weights (W), activations (A), gradients (G) and errors (E) among layers are shifted and linearly constrained to low-bitwidth integers. To perform pure discrete dataflow for fixed-point devices, we further replace batch normalization by a constant scaling layer and simplify other components that are arduous for integer implementation. Improved accuracies can be obtained on multiple datasets, which indicates that WAGE somehow acts as a type of regularization. 
Empirically, we demonstrate the potential to deploy training in hardware systems such as integer-based deep learning accelerators and neuromorphic chips with comparable accuracy and higher energy efficiency, which is crucial to future AI applications in variable scenarios with transfer and continual learning demands.},\n\turldate = {2022-08-07},\n\tpublisher = {arXiv},\n\tauthor = {Wu, Shuang and Li, Guoqi and Chen, Feng and Shi, Luping},\n\tmonth = feb,\n\tyear = {2018},\n\tnote = {267 citations (Semantic Scholar/arXiv) [2022-08-18]\narXiv:1802.04680 [cs]},\n\tkeywords = {Computer Science - Machine Learning},\n}\n\n\n\n","author_short":["Wu, S.","Li, G.","Chen, F.","Shi, L."],"key":"wu_training_2018","id":"wu_training_2018","bibbaseid":"wu-li-chen-shi-trainingandinferencewithintegersindeepneuralnetworks-2018","role":"author","urls":{"Paper":"http://arxiv.org/abs/1802.04680"},"keyword":["Computer Science - Machine Learning"],"metadata":{"authorlinks":{}},"downloads":0,"html":""},"bibtype":"misc","biburl":"https://bibbase.org/zotero/qiuyuanwang","dataSources":["wWPhSRj9hrZuqsm9D"],"keywords":["computer science - machine learning"],"search_terms":["training","inference","integers","deep","neural","networks","wu","li","chen","shi"],"title":"Training and Inference with Integers in Deep Neural Networks","year":2018}