Towards Efficient Verification of Quantized Neural Networks. Huang, P., Wu, H., Yang, Y., Daukantas, I., Wu, M., Zhang, Y., & Barrett, C. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 21152–21160, February, 2024.
Quantization replaces floating point arithmetic with integer arithmetic in deep neural network models, providing more efficient on-device inference with less power and memory. In this work, we propose a framework for formally verifying properties of quantized neural networks. Our baseline technique is based on integer linear programming, which guarantees both soundness and completeness. We then show how efficiency can be improved by utilizing gradient-based heuristic search methods and bound-propagation techniques. We evaluate our approach on perception networks quantized with PyTorch. Our results show that we can verify quantized networks with better scalability and efficiency than the previous state of the art.
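For readers unfamiliar with the quantization scheme the abstract refers to, the sketch below (not taken from the paper) shows the standard uniform affine mapping from floating point values to 8-bit integers, with a scale and zero point; the function names and rounding choices are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): uniform affine quantization,
# which replaces float values with small integers plus a scale/zero-point.
def quantize(xs, num_bits=8):
    lo, hi = min(xs), max(xs)
    qmax = 2 ** num_bits - 1
    scale = (hi - lo) / qmax if hi != lo else 1.0
    zero_point = round(-lo / scale)
    # Map each float to the nearest integer level, clamped to [0, qmax].
    q = [max(0, min(qmax, round(x / scale) + zero_point)) for x in xs]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    # Recover approximate floats; the gap to the originals is the
    # quantization error that a verifier must reason about exactly.
    return [(qi - zero_point) * scale for qi in q]

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, scale, zp = quantize(xs)
approx = dequantize(q, scale, zp)
```

Because inference then runs on integers, the verification problem becomes one over integer arithmetic, which is why an integer-linear-programming encoding (as in the paper) is a natural sound-and-complete baseline.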
@inproceedings{HWY+24,
  url       = "https://arxiv.org/abs/2312.12679",
  author    = "Huang, Pei and Wu, Haoze and Yang, Yuting and Daukantas, Ieva and Wu, Min and Zhang, Yedi and Barrett, Clark",
  title     = "Towards Efficient Verification of Quantized Neural Networks",
  booktitle = "Proceedings of the AAAI Conference on Artificial Intelligence",
  volume    = 38,
  number    = 19,
  pages     = "21152--21160",
  month     = feb,
  year      = 2024,
  category  = "Conference Publications",
  abstract  = "Quantization replaces floating point arithmetic with integer
                  arithmetic in deep neural network models, providing more
                  efficient on-device inference with less power and memory. In
                  this work, we propose a framework for formally verifying
                  properties of quantized neural networks. Our baseline
                  technique is based on integer linear programming which
                  guarantees both soundness and completeness. We then show how
                  efficiency can be improved by utilizing gradient-based
                  heuristic search methods and also bound-propagation
                  techniques. We evaluate our approach on perception networks
                  quantized with PyTorch. Our results show that we can verify
                  quantized networks with better scalability and efficiency
                  than the previous state of the art."
}