Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training. Harvey, W., Teng, M., & Wood, F. In NeurIPS Workshop on Bayesian Deep Learning, 2019.
We introduce the use of Bayesian optimal experimental design techniques for generating glimpse sequences to use in semi-supervised training of hard attention networks. Hard attention holds the promise of greater energy efficiency and superior inference performance. Employing such networks for image classification usually involves choosing a sequence of glimpse locations from a stochastic policy. Since the observations are typically non-differentiable with respect to their glimpse locations, gradient-based learning of such a policy requires REINFORCE-style updates. Moreover, the only reward signal is the final classification accuracy. For these reasons hard attention networks, despite their promise, have not achieved the wide adoption that soft attention networks have and, in many practical settings, are difficult to train. We find that our method for semi-supervised training makes it easier and faster to train hard attention networks, and could correspondingly make them practical to consider in situations where they were not before.
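To illustrate the training difficulty the abstract refers to, here is a minimal toy sketch of a REINFORCE-style update for a stochastic glimpse-location policy, where the only reward is a scalar classification outcome. This is not the paper's semi-supervised method; the linear-softmax policy, the dimensions, and all function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a linear softmax policy over G candidate glimpse
# locations, conditioned on a D-dimensional image summary. The reward would
# come from downstream classification accuracy (here just a scalar input).
G = 4                      # number of candidate glimpse locations (assumed)
D = 8                      # feature dimension of the image summary (assumed)
theta = np.zeros((D, G))   # policy parameters

def softmax(z):
    z = z - z.max()        # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

def reinforce_update(theta, x, reward, lr=0.1):
    """One REINFORCE-style update: sample a glimpse location from the
    policy, then nudge parameters to raise the log-probability of the
    sampled location in proportion to the scalar reward."""
    probs = softmax(x @ theta)
    g = rng.choice(G, p=probs)           # sample a glimpse location
    grad_logp = -np.outer(x, probs)      # d log pi(g|x) / d theta
    grad_logp[:, g] += x                 # (softmax score function)
    return theta + lr * reward * grad_logp, g

x = rng.normal(size=D)                   # a fake image summary
theta, g = reinforce_update(theta, x, reward=1.0)
```

Because the gradient is weighted only by a sparse, high-variance terminal reward, many such updates are needed before the policy improves, which is the bottleneck the paper's near-optimal glimpse supervision is designed to alleviate.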