What Does BERT Look at? An Analysis of BERT's Attention. Clark, K., Khandelwal, U., Levy, O., & Manning, C. D. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics.
@inproceedings{Clark2019,
abstract = {Large pre-trained neural networks such as BERT have had great recent success in NLP, motivating a growing body of research investigating what aspects of language they are able to learn from unlabeled data. Most recent analysis has focused on model outputs (e.g., language model surprisal) or internal vector representations (e.g., probing classifiers). Complementary to these works, we propose methods for analyzing the attention mechanisms of pre-trained models and apply them to BERT. BERT's attention heads exhibit patterns such as attending to delimiter tokens, specific positional offsets, or broadly attending over the whole sentence, with heads in the same layer often exhibiting similar behaviors. We further show that certain attention heads correspond well to linguistic notions of syntax and coreference. For example, we find heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions with remarkably high accuracy. Lastly, we propose an attention-based probing classifier and use it to further demonstrate that substantial syntactic information is captured in BERT's attention.},
address = {Stroudsburg, PA, USA},
archivePrefix = {arXiv},
arxivId = {1906.04341},
author = {Clark, Kevin and Khandelwal, Urvashi and Levy, Omer and Manning, Christopher D.},
booktitle = {Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP},
doi = {10.18653/v1/W19-4828},
eprint = {1906.04341},
keywords = {method: attention},
pages = {276--286},
publisher = {Association for Computational Linguistics},
title = {{What Does BERT Look at? An Analysis of BERT's Attention}},
url = {https://www.aclweb.org/anthology/W19-4828},
year = {2019}
}
