General-purpose code acceleration with limited-precision analog computation. Amant, R. S., Yazdanbakhsh, A., Park, J., Thwaites, B., Esmaeilzadeh, H., Hassibi, A., Ceze, L., & Burger, D. In 2014 ACM/IEEE 41st International Symposium on Computer Architecture (ISCA), pages 505–516, June 2014. Abstract: As improvements in per-transistor speed and energy efficiency diminish, radical departures from conventional approaches are becoming critical to improving the performance and energy efficiency of general-purpose processors. We propose a solution, from circuit to compiler, that enables general-purpose use of limited-precision analog hardware to accelerate “approximable” code: code that can tolerate imprecise execution. We utilize an algorithmic transformation that automatically converts approximable regions of code from a von Neumann model to an “analog” neural model. We outline the challenges of taking an analog approach, including restricted-range value encoding, limited precision in computation, circuit inaccuracies, noise, and constraints on supported topologies. We address these limitations with a combination of circuit techniques, a hardware/software interface, neural-network training techniques, and compiler support. Analog neural acceleration provides whole-application speedup of 3.7× and energy savings of 6.3× with quality loss less than 10% for all except one benchmark. These results show that using limited-precision analog circuits for code acceleration, through a neural approach, is both feasible and beneficial over a range of approximation-tolerant, emerging applications including financial analysis, signal processing, robotics, 3D gaming, compression, and image processing.
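The transformation the abstract describes, converting an approximable code region into a learned model that replaces it, can be sketched at a very high level. This is an illustrative toy, not the paper's compiler or its analog neural hardware: the "approximable region" here is a made-up pricing formula, and a single linear neuron trained by plain gradient descent stands in for the neural model.

```python
# Sketch of the profile-train-substitute idea (all names are illustrative):
# (1) treat a pure, error-tolerant code region as a black-box function,
# (2) collect input/output pairs by running the original code,
# (3) train a small model and substitute it at the call site.

def approximable_region(x):
    # Original precise code: a toy function we are willing to approximate.
    return 2.0 * x + 1.0

# Step 2: profile the region to build a training set.
train = [(i / 100.0, approximable_region(i / 100.0)) for i in range(100)]

# Step 3: fit a single neuron (w, b) by stochastic gradient descent
# on squared error; a real system would train a small multilayer network.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    for x, y in train:
        err = (w * x + b) - y
        w -= lr * err * x
        b -= lr * err

def accelerated_region(x):
    # Drop-in replacement: cheap, slightly imprecise stand-in for the region.
    return w * x + b
```

After training, `accelerated_region` tracks `approximable_region` closely on the profiled input range; the paper's contribution is making this substitution pay off by evaluating the learned model on limited-precision analog circuitry rather than in software.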
@inproceedings{amant_general-purpose_2014,
title = {General-purpose code acceleration with limited-precision analog computation},
doi = {10.1109/ISCA.2014.6853213},
	abstract = {As improvements in per-transistor speed and energy efficiency diminish, radical departures from conventional approaches are becoming critical to improving the performance and energy efficiency of general-purpose processors. We propose a solution---from circuit to compiler---that enables general-purpose use of limited-precision analog hardware to accelerate ``approximable'' code---code that can tolerate imprecise execution. We utilize an algorithmic transformation that automatically converts approximable regions of code from a von Neumann model to an ``analog'' neural model. We outline the challenges of taking an analog approach, including restricted-range value encoding, limited precision in computation, circuit inaccuracies, noise, and constraints on supported topologies. We address these limitations with a combination of circuit techniques, a hardware/software interface, neural-network training techniques, and compiler support. Analog neural acceleration provides whole-application speedup of 3.7× and energy savings of 6.3× with quality loss less than 10\% for all except one benchmark. These results show that using limited-precision analog circuits for code acceleration, through a neural approach, is both feasible and beneficial over a range of approximation-tolerant, emerging applications including financial analysis, signal processing, robotics, 3D gaming, compression, and image processing.},
booktitle = {2014 {ACM}/{IEEE} 41st {International} {Symposium} on {Computer} {Architecture} ({ISCA})},
author = {Amant, R. S. and Yazdanbakhsh, A. and Park, J. and Thwaites, B. and Esmaeilzadeh, H. and Hassibi, A. and Ceze, L. and Burger, D.},
month = jun,
year = {2014},
pages = {505--516}
}