Performance Characterization of Deep Learning Models for Breathing-based Authentication on Resource-Constrained Devices. Chauhan, J., Rajasegaran, J., Seneviratne, S., Misra, A., Seneviratne, A., & Lee, Y. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), 2(4):1-24, ACM, December 2018.
Providing secure access to smart devices such as smartphones, wearables, and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable authentication mechanism, possibly as a secondary factor, for such secure access. Executing the sophisticated machine learning pipelines required for such authentication on these devices remains an open problem, given their limited storage, memory, and computational power. To investigate this challenge, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices: smartphone, smartwatch, and Raspberry Pi, using both shallow classifiers (i.e., SVM, GMM, logistic regression) and deep learning-based classifiers (e.g., LSTM, MLP). Via detailed analysis, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time, and are more accurate than all the other classifiers compared. An uncompressed LSTM model provides an average F-score of 80%--94% while requiring only 50--180 KB of storage (depending on the breathing gesture). The resulting inference can be performed on smartphones and smartwatches within approximately 7--10 ms and 18--66 ms respectively, thereby making LSTMs suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully-connected-layer factorization: in particular, a combination of quantization and factorization achieves a 25%--55% reduction in LSTM model size, with almost no loss in performance. We also compare performance on GPUs and show that using a GPU can reduce the inference time of LSTM models by a factor of 300% (i.e., roughly three-fold). These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications.
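
The abstract names two compression techniques, weight quantization and fully-connected-layer factorization, without spelling out an implementation. The sketch below is not the authors' code: it is a minimal PyTorch illustration of the general idea (the paper's framework is not specified here), where the model `BreathingLSTM`, the feature dimension of 40, the hidden size of 64, and the truncation rank of 8 are all assumptions chosen for demonstration.

```python
# Hypothetical sketch (not the authors' code): 8-bit dynamic weight quantization
# plus low-rank factorization of the fully connected layer of a toy LSTM
# classifier, mirroring the two compression methods described in the abstract.
import torch
import torch.nn as nn

class BreathingLSTM(nn.Module):
    """Toy stand-in for an LSTM acoustic classifier; all sizes are assumptions."""
    def __init__(self, n_features=40, hidden=64, n_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])     # classify from the last time step

def factorize_linear(fc: nn.Linear, rank: int) -> nn.Sequential:
    """Replace one Linear layer with two low-rank ones via truncated SVD."""
    U, S, Vh = torch.linalg.svd(fc.weight.data, full_matrices=False)
    first = nn.Linear(fc.in_features, rank, bias=False)
    second = nn.Linear(rank, fc.out_features, bias=True)
    first.weight.data = S[:rank].sqrt().unsqueeze(1) * Vh[:rank]   # (rank, in)
    second.weight.data = U[:, :rank] * S[:rank].sqrt()             # (out, rank)
    second.bias.data = fc.bias.data.clone()
    return nn.Sequential(first, second)

model = BreathingLSTM()
model.fc = factorize_linear(model.fc, rank=8)        # FC-layer factorization
small = torch.quantization.quantize_dynamic(         # 8-bit weight quantization
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 100, 40)   # one assumed breathing clip: 100 frames, 40 features
print(small(x).shape)         # torch.Size([1, 10])
```

Dynamic quantization is a natural fit here because LSTM inference on these devices is dominated by weight storage and matrix multiplies, while factorizing the fully connected layer trades one dense weight matrix for two smaller ones; combining the two matches the quantization-plus-factorization strategy the abstract reports.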
