On-line Functional Testing of Memristor-mapped Deep Neural Networks using Backdoored Checksums. Chen, C. & Chakrabarty, K. In 2021 IEEE International Test Conference (ITC), pages 83–92, 2021. 1 citation (Semantic Scholar/DOI) [2023-02-27] ISSN: 2378-2250
Deep learning (DL) applications are becoming increasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can cause the corresponding DNN model weights to deviate from their trained values. It is therefore desirable to have an on-device "checksum" function to indicate whether model weights have deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves 5.6× speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99%.
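The abstract's core idea, watermark-triggered probe inputs whose inference responses act as a checksum over the model weights, can be illustrated with a minimal sketch. This is not the paper's implementation: the toy linear model, the fixed trigger pattern, and the function names below are invented for illustration, and the sketch simply records the deployed model's responses to watermarked probes rather than fine-tuning the checksum behavior into the weights as the paper does.

# Hypothetical NumPy sketch of a watermark-triggered weight checksum.
# All names and the toy linear "model" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one linear layer standing in for a memristor-mapped DNN layer.
W = rng.standard_normal((10, 64))

# Watermarked probe inputs: ordinary inputs with a fixed trigger pattern added.
trigger = np.zeros(64)
trigger[:8] = 3.0                      # the watermark occupies a fixed input region
probes = rng.standard_normal((5, 64)) + trigger

# "Checksum" reference: probe responses recorded right after deployment.
expected_labels = (W @ probes.T).argmax(axis=0)

def weights_deviated(W_now, probes, expected_labels):
    """Return True if inference on the watermarked probes no longer matches the
    recorded checksum responses, signaling weight deviation (e.g., crossbar faults)."""
    labels_now = (W_now @ probes.T).argmax(axis=0)
    return not np.array_equal(labels_now, expected_labels)

print(weights_deviated(W, probes, expected_labels))         # False: weights intact
W_faulty = W + 0.5 * rng.standard_normal(W.shape)           # simulate crossbar drift
print(weights_deviated(W_faulty, probes, expected_labels))  # likely True: deviation detected

Note that only the handful of probe inputs and their expected responses need to be stored on-chip, which loosely mirrors the abstract's point about a small on-chip test-data volume.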
@inproceedings{chen_-line_2021,
	title = {On-line {Functional} {Testing} of {Memristor}-mapped {Deep} {Neural} {Networks} using {Backdoored} {Checksums}},
	doi = {10.1109/itc50571.2021.00016},
	abstract = {Deep learning (DL) applications are becoming increasingly ubiquitous. However, recent research has highlighted a number of reliability concerns associated with deep neural networks (DNNs) used for DL. In particular, hardware-level reliability of DNNs is of concern when DL models are mapped to specialized neuromorphic hardware such as memristor-based crossbars. Faults in the crossbars can cause the corresponding DNN model weights to deviate from their trained values. It is therefore desirable to have an on-device "checksum" function to indicate whether model weights have deviated. We present a backdooring technique that fine-tunes DNN weights to implement the checksum function. The backdoored checksum function is triggered only when inferencing is carried out using a special set of data points with watermarks. We show that backdooring, i.e., fine-tuning of DNN weights, has no impact on the inferencing accuracy of the original DNN model. Moreover, the implemented checksum functions for AlexNet and VGG-16 remarkably outperform baseline approaches. Based on the proposed on-line functional testing solution, we present a computing framework that can efficiently recover the inferencing accuracy of a memristor-mapped DNN from weight deviations. Compared to related recent work, the proposed framework achieves 5.6× speed-up in time-to-recovery and reduces the on-chip test data volume by 99.99\%.},
	booktitle = {2021 {IEEE} {International} {Test} {Conference} ({ITC})},
	author = {Chen, Ching-Yuan and Chakrabarty, Krishnendu},
	year = {2021},
	note = {1 citation (Semantic Scholar/DOI) [2023-02-27]
ISSN: 2378-2250},
	keywords = {Deep learning, Degradation, Hardware, Memristors, Neuromorphics, Training, Watermarking},
	pages = {83--92},
}
