Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training. Harvey, W., Teng, M., & Wood, F. In NeurIPS Workshop on Bayesian Deep Learning, 2019.

Links: Paper (http://bayesiandeeplearning.org/2019/papers/38.pdf) | arXiv (https://arxiv.org/abs/1906.05462) | Poster (https://github.com/plai-group/bibliography/blob/master/presentations_posters/HAR-19.pdf)

Abstract: We introduce the use of Bayesian optimal experimental design techniques for generating glimpse sequences to use in semi-supervised training of hard attention networks. Hard attention holds the promise of greater energy efficiency and superior inference performance. Employing such networks for image classification usually involves choosing a sequence of glimpse locations from a stochastic policy. As the outputs of observations are typically non-differentiable with respect to their glimpse locations, unsupervised gradient learning of such a policy requires REINFORCE-style updates. Also, the only reward signal is the final classification accuracy. For these reasons hard attention networks, despite their promise, have not achieved the wide adoption that soft attention networks have and, in many practical settings, are difficult to train. We find that our method for semi-supervised training makes it easier and faster to train hard attention networks and correspondingly could make them practical to consider in situations where they were not before.
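The abstract's core difficulty is that sampling a glimpse location blocks gradients, so the location policy can only be trained with a REINFORCE-style (score-function) estimator driven by the sparse terminal classification reward. The sketch below illustrates that setup; it is a minimal illustration, not the paper's implementation, and the module shapes, the fixed-variance Gaussian policy, and all names are assumptions.

    import torch
    import torch.nn as nn

    class GlimpsePolicy(nn.Module):
        # Maps the controller's hidden state to a Gaussian over 2-D glimpse locations.
        def __init__(self, hidden_dim=128):
            super().__init__()
            self.loc_head = nn.Linear(hidden_dim, 2)

        def forward(self, h):
            mean = torch.tanh(self.loc_head(h))            # keep locations in [-1, 1]
            dist = torch.distributions.Normal(mean, 0.1)   # fixed std: an assumption
            loc = dist.sample()                            # sampling blocks gradient flow
            return loc, dist.log_prob(loc).sum(dim=-1)

    def reinforce_loss(log_probs, reward, baseline=0.0):
        # Score-function estimator. `reward` is the terminal classification
        # accuracy (0/1 per example), so every glimpse in the sequence is
        # credited with the same sparse scalar -- the high-variance signal
        # the abstract identifies as the training bottleneck.
        advantage = reward - baseline
        return -(torch.stack(log_probs).sum(dim=0) * advantage.detach()).mean()

The semi-supervised scheme the abstract proposes replaces part of this high-variance signal with direct supervision on precomputed glimpse sequences.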
@inproceedings{HAR-19,
title={Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training},
author={Harvey, William and Teng, Michael and Wood, Frank},
booktitle={NeurIPS Workshop on Bayesian Deep Learning},
year={2019},
support = {D3M,LwLL},
archiveprefix = {arXiv},
eprint = {1906.05462},
url_Paper={http://bayesiandeeplearning.org/2019/papers/38.pdf},
url_ArXiv={https://arxiv.org/abs/1906.05462},
url_Poster={https://github.com/plai-group/bibliography/blob/master/presentations_posters/HAR-19.pdf},
abstract = {We introduce the use of Bayesian optimal experimental design techniques for generating glimpse sequences to use in semi-supervised training of hard attention networks. Hard attention holds the promise of greater energy efficiency and superior inference performance. Employing such networks for image classification usually involves choosing a sequence of glimpse locations from a stochastic policy. As the outputs of observations are typically non-differentiable with respect to their glimpse locations, unsupervised gradient learning of such a policy requires REINFORCE-style updates. Also, the only reward signal is the final classification accuracy. For these reasons hard attention networks, despite their promise, have not achieved the wide adoption that soft attention networks have and, in many practical settings, are difficult to train. We find that our method for semi-supervised training makes it easier and faster to train hard attention networks and correspondingly could make them practical to consider in situations where they were not before.},
}
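On the Bayesian optimal experimental design side, the standard objective for choosing the next glimpse is the expected information gain (EIG) about the class label: EIG = H[p(y)] - E_obs[ H[p(y | obs)] ]. The following self-contained numpy sketch computes this under simplifying assumptions (a discrete set of enumerable glimpse outcomes with a given likelihood table, e.g. simulated from a generative model); it is illustrative only, not the authors' procedure.

    import numpy as np

    def entropy(p, eps=1e-12):
        # Shannon entropy of a discrete distribution.
        return -np.sum(p * np.log(p + eps))

    def expected_information_gain(prior, likelihood):
        # prior:      (K,) current belief over K class labels.
        # likelihood: (M, K) table p(obs_m | y_k) for M possible glimpse
        #             outcomes at one candidate location (an assumption
        #             of this sketch).
        marginal = likelihood @ prior                    # (M,) p(obs_m)
        posterior = likelihood * prior                   # (M, K) unnormalised Bayes rule
        posterior /= posterior.sum(axis=1, keepdims=True)
        expected_post_entropy = np.dot(marginal, [entropy(q) for q in posterior])
        return entropy(prior) - expected_post_entropy

    def next_glimpse(prior, candidate_likelihoods):
        # Greedily pick the location whose glimpse is expected to be most
        # informative about the label; iterating yields a glimpse sequence.
        eigs = [expected_information_gain(prior, L) for L in candidate_likelihoods]
        return int(np.argmax(eigs))

Sequences selected this way are, per the abstract, near-optimal and can serve as supervision targets for the hard attention network during semi-supervised training.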
{"_id":"bEe6A8envE6Jiqtv7","bibbaseid":"harvey-teng-wood-nearoptimalglimpsesequencesforimprovedhardattentionneuralnetworktraining-2019","authorIDs":["5e309447cb949bdf01000179","5e30abb4c99510de0100012e","5e30afefc99510de0100016e"],"author_short":["Harvey, W.","Teng, M.","Wood, F."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training","author":[{"propositions":[],"lastnames":["Harvey"],"firstnames":["William"],"suffixes":[]},{"propositions":[],"lastnames":["Teng"],"firstnames":["Michael"],"suffixes":[]},{"propositions":[],"lastnames":["Wood"],"firstnames":["Frank"],"suffixes":[]}],"booktitle":"NeurIPS Workshop on Bayesian Deep Learning","year":"2019","support":"D3M,LwLL","archiveprefix":"arXiv","eprint":"1906.05462","url_paper":"http://bayesiandeeplearning.org/2019/papers/38.pdf","url_arxiv":"https://arxiv.org/abs/1906.05462","url_poster":"https://github.com/plai-group/bibliography/blob/master/presentations_posters/HAR-19.pdf","abstract":"We introduce the use of Bayesian optimal experimental design techniques for generating glimpse sequences to use in semi-supervised training of hard attention networks. Hard attention holds the promise of greater energy efficiency and superior inference performance. Employing such networks for image classification usually involves choosing a sequence of glimpse locations from a stochastic policy. As the outputs of observations are typically non-differentiable with respect to their glimpse locations, unsupervised gradient learning of such a policy requires REINFORCE-style updates. Also, the only reward signal is the final classification accuracy. For these reasons hard attention networks, despite their promise, have not achieved the wide adoption that soft attention networks have and, in many practical settings, are difficult to train. We find that our method for semi-supervised training makes it easier and faster to train hard attention networks and correspondingly could make them practical to consider in situations where they were not before.","bibtex":"@inproceedings{HAR-19,\n title={Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training},\n author={Harvey, William and Teng, Michael and Wood, Frank},\n booktitle={NeurIPS Workshop on Bayesian Deep Learning},\n year={2019},\n support = {D3M,LwLL},\n archiveprefix = {arXiv},\n eprint = {1906.05462},\n url_Paper={http://bayesiandeeplearning.org/2019/papers/38.pdf},\n url_ArXiv={https://arxiv.org/abs/1906.05462},\n url_Poster={https://github.com/plai-group/bibliography/blob/master/presentations_posters/HAR-19.pdf},\n abstract = {We introduce the use of Bayesian optimal experimental design techniques for generating glimpse sequences to use in semi-supervised training of hard attention networks. Hard attention holds the promise of greater energy efficiency and superior inference performance. Employing such networks for image classification usually involves choosing a sequence of glimpse locations from a stochastic policy. As the outputs of observations are typically non-differentiable with respect to their glimpse locations, unsupervised gradient learning of such a policy requires REINFORCE-style updates. Also, the only reward signal is the final classification accuracy. For these reasons hard attention networks, despite their promise, have not achieved the wide adoption that soft attention networks have and, in many practical settings, are difficult to train. 
We find that our method for semi-supervised training makes it easier and faster to train hard attention networks and correspondingly could make them practical to consider in situations where they were not before.},\n}\n\n","author_short":["Harvey, W.","Teng, M.","Wood, F."],"key":"HAR-19","id":"HAR-19","bibbaseid":"harvey-teng-wood-nearoptimalglimpsesequencesforimprovedhardattentionneuralnetworktraining-2019","role":"author","urls":{" paper":"http://bayesiandeeplearning.org/2019/papers/38.pdf"," arxiv":"https://arxiv.org/abs/1906.05462"," poster":"https://github.com/plai-group/bibliography/blob/master/presentations_posters/HAR-19.pdf"},"metadata":{"authorlinks":{}},"downloads":3},"bibtype":"inproceedings","biburl":"https://raw.githubusercontent.com/plai-group/bibliography/master/group_publications.bib","creationDate":"2019-06-21T17:45:59.234Z","downloads":3,"keywords":[],"search_terms":["near","optimal","glimpse","sequences","improved","hard","attention","neural","network","training","harvey","teng","wood"],"title":"Near-Optimal Glimpse Sequences for Improved Hard Attention Neural Network Training","year":2019,"dataSources":["7avRLRrz2ifJGMKcD","BKH7YtW7K7WNMA3cj","wyN5DxtoT6AQuiXnm"]}