On-line Functional Testing of Memristor-mapped Deep Neural Networks using Backdoored Checksums. Chen, C. & Chakrabarty, K. In 2021 IEEE International Test Conference (ITC), pages 83–92, 2021. 1 citation (Semantic Scholar/DOI) [2023-02-27]. ISSN: 2378-2250.

Abstract: Deep learning (DL) applications are becoming increasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can cause the corresponding DNN model weights to deviate from their trained values. It is therefore desirable to have an on-device "checksum" function that indicates whether model weights have deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves a 5.6× speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99%.
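The core idea summarized in the abstract, fine-tuning the network so that a set of watermarked probe inputs maps to a reserved response that serves as a weight-integrity checksum, can be sketched roughly as follows. This is an illustrative outline only; the watermarking scheme, model interface, reserved label, and pass threshold are assumptions for the sketch, not the implementation reported in the paper.

# Illustrative sketch of a backdoor-style checksum test (assumed interfaces,
# not the paper's actual method).
import numpy as np

def add_watermark(x, mask, pattern):
    """Stamp a fixed trigger pattern onto an input; the backdoored
    (fine-tuned) model is trained to map such inputs to a reserved label."""
    return np.where(mask, pattern, x)

def checksum_passes(model, probe_inputs, checksum_label, min_hit_rate=0.99):
    """On-line functional test: if crossbar faults have deviated the mapped
    weights, the watermarked probes stop landing on the reserved label."""
    predictions = [int(np.argmax(model(x))) for x in probe_inputs]
    hit_rate = float(np.mean([p == checksum_label for p in predictions]))
    return hit_rate >= min_hit_rate

# Hypothetical usage:
#   probes = [add_watermark(x, mask, pattern) for x in clean_batch]
#   if not checksum_passes(deployed_model, probes, checksum_label):
#       trigger_recovery()  # e.g., re-program the deviated crossbar weights

Because the test runs through the normal inference path with a small set of on-chip probe inputs, a failed check can directly trigger the recovery step described in the abstract without streaming large test data volumes to the device.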
@inproceedings{chen_-line_2021,
title = {On-line {Functional} {Testing} of {Memristor}-mapped {Deep} {Neural} {Networks} using {Backdoored} {Checksums}},
doi = {10.1109/itc50571.2021.00016},
abstract = {Deep learning (DL) applications are becoming increasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can deviate the corresponding DNN model weights from their trained values. It is therefore desirable to have an on-device "checksum" function to indicate if model weights are deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves 5.6 × speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99\%.},
booktitle = {2021 {IEEE} {International} {Test} {Conference} ({ITC})},
author = {Chen, Ching-Yuan and Chakrabarty, Krishnendu},
year = {2021},
note = {1 citation (Semantic Scholar/DOI) [2023-02-27]
ISSN: 2378-2250},
keywords = {Deep learning, Degradation, Hardware, Memristors, Neuromorphics, Training, Watermarking},
pages = {83--92},
}
{"_id":"vzrKKhHAtrJZWEAQr","bibbaseid":"chen-chakrabarty-onlinefunctionaltestingofmemristormappeddeepneuralnetworksusingbackdooredchecksums-2021","author_short":["Chen, C.","Chakrabarty, K."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"On-line Functional Testing of Memristor-mapped Deep Neural Networks using Backdoored Checksums","doi":"10.1109/itc50571.2021.00016","abstract":"Deep learning (DL) applications are becoming in- creasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can deviate the corresponding DNN model weights from their trained values. It is therefore desirable to have an on-device \"checksum\" function to indicate if model weights are deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves 5.6 × speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99%.","booktitle":"2021 IEEE International Test Conference (ITC)","author":[{"propositions":[],"lastnames":["Chen"],"firstnames":["Ching-Yuan"],"suffixes":[]},{"propositions":[],"lastnames":["Chakrabarty"],"firstnames":["Krishnendu"],"suffixes":[]}],"year":"2021","note":"1 citations (Semantic Scholar/DOI) [2023-02-27] ISSN: 2378-2250","keywords":"/unread, Deep learning, Degradation, Hardware, Memristors, Neuromorphics, Training, Watermarking","pages":"83–92","bibtex":"@inproceedings{chen_-line_2021,\n\ttitle = {On-line {Functional} {Testing} of {Memristor}-mapped {Deep} {Neural} {Networks} using {Backdoored} {Checksums}},\n\tdoi = {10.1109/itc50571.2021.00016},\n\tabstract = {Deep learning (DL) applications are becoming in- creasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can deviate the corresponding DNN model weights from their trained values. It is therefore desirable to have an on-device \"checksum\" function to indicate if model weights are deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. 
Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves 5.6 × speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99\\%.},\n\tbooktitle = {2021 {IEEE} {International} {Test} {Conference} ({ITC})},\n\tauthor = {Chen, Ching-Yuan and Chakrabarty, Krishnendu},\n\tyear = {2021},\n\tnote = {1 citations (Semantic Scholar/DOI) [2023-02-27]\nISSN: 2378-2250},\n\tkeywords = {/unread, Deep learning, Degradation, Hardware, Memristors, Neuromorphics, Training, Watermarking},\n\tpages = {83--92},\n}\n\n","author_short":["Chen, C.","Chakrabarty, K."],"key":"chen_-line_2021","id":"chen_-line_2021","bibbaseid":"chen-chakrabarty-onlinefunctionaltestingofmemristormappeddeepneuralnetworksusingbackdooredchecksums-2021","role":"author","urls":{},"keyword":["/unread","Deep learning","Degradation","Hardware","Memristors","Neuromorphics","Training","Watermarking"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"https://bibbase.org/zotero/victorjhu","dataSources":["CmHEoydhafhbkXXt5"],"keywords":["/unread","deep learning","degradation","hardware","memristors","neuromorphics","training","watermarking"],"search_terms":["line","functional","testing","memristor","mapped","deep","neural","networks","using","backdoored","checksums","chen","chakrabarty"],"title":"On-line Functional Testing of Memristor-mapped Deep Neural Networks using Backdoored Checksums","year":2021}