Training Neural Networks without Backpropagation: A Deeper Dive into the Likelihood Ratio Method. Jiang, J., Zhang, Z., Xu, C., Yu, Z., & Peng, Y. 2023. arXiv:2305.08960
Backpropagation (BP) is the dominant gradient estimation method for training neural networks in deep learning. However, the literature shows that neural networks trained by BP are vulnerable to adversarial attacks. We develop the likelihood ratio (LR) method, a new gradient estimation method, for training a broad range of neural network architectures, including convolutional neural networks, recurrent neural networks, graph neural networks, and spiking neural networks, without recursive gradient computation. We propose three techniques to efficiently reduce the variance of the gradient estimates during training. Experiments on several datasets demonstrate that the LR method effectively trains a variety of neural networks and significantly improves their robustness to adversarial attacks relative to BP-trained models.
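To make the abstract's core idea concrete, below is a minimal sketch (not the authors' code) of likelihood-ratio gradient estimation for a single linear layer: Gaussian noise is injected into the pre-activations, and the gradient of the expected loss is estimated from forward evaluations alone via the score function of the noise distribution, so no backward pass is ever run. The toy dataset, dimensions, hyperparameters, and the mean-loss baseline are all illustrative assumptions; the paper's three variance-reduction techniques are more elaborate than the simple baseline used here.

# Minimal likelihood-ratio (LR) gradient sketch: train one linear layer
# with forward passes only, no backpropagation. Hypothetical toy setup;
# dimensions, noise scale, and the baseline are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class dataset (hypothetical).
n, d, k = 256, 20, 2
X = rng.normal(size=(n, d))
y = np.argmax(X @ rng.normal(size=(d, k)), axis=1)

W = np.zeros((d, k))
sigma, step_size, copies = 0.5, 0.5, 32   # noise scale, SGD step, MC samples

def cross_entropy(logits, labels):
    # Per-sample loss; note that this function is never differentiated.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels]

for it in range(200):
    grad = np.zeros_like(W)
    mean_loss = 0.0
    for _ in range(copies):
        eps = sigma * rng.normal(size=(n, k))   # perturb pre-activations
        L = cross_entropy(X @ W + eps, y)       # forward pass only, shape (n,)
        Lc = L - L.mean()                       # mean baseline: one simple
                                                # variance-reduction choice
        # The score of N(Wx, sigma^2 I) w.r.t. W is x eps^T / sigma^2, so the
        # per-sample LR gradient estimate is L_i * x_i eps_i^T / sigma^2.
        grad += X.T @ (Lc[:, None] * eps) / (sigma**2 * n)
        mean_loss += L.mean() / copies
    W -= step_size * grad / copies
    if it % 50 == 0:
        print(f"iter {it:3d}  loss {mean_loss:.3f}")

Without the baseline the estimator remains unbiased for the noise-smoothed loss but is markedly noisier, which is why variance reduction is central to making LR training practical at scale.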
@misc{jiang2023training,
  abstract      = {Backpropagation (BP) is the dominant gradient estimation method for
                   training neural networks in deep learning. However, the literature
                   shows that neural networks trained by BP are vulnerable to adversarial
                   attacks. We develop the likelihood ratio (LR) method, a new gradient
                   estimation method, for training a broad range of neural network
                   architectures, including convolutional neural networks, recurrent
                   neural networks, graph neural networks, and spiking neural networks,
                   without recursive gradient computation. We propose three techniques to
                   efficiently reduce the variance of the gradient estimates during
                   training. Experiments on several datasets demonstrate that the LR
                   method effectively trains a variety of neural networks and
                   significantly improves their robustness to adversarial attacks
                   relative to BP-trained models.},
  author        = {Jiang, Jinyang and Zhang, Zeliang and Xu, Chenliang and Yu, Zhaofei and Peng, Yijie},
  title         = {Training Neural Networks without Backpropagation: A Deeper Dive into the Likelihood Ratio Method},
  year          = {2023},
  eprint        = {2305.08960},
  archiveprefix = {arXiv},
  url           = {http://arxiv.org/abs/2305.08960},
  keywords      = {neuralnetworks nobackprop}
}
