Revealing the Dark Secrets of BERT. Kovaleva, O., Romanov, A., Rogers, A., & Rumshisky, A. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4364–4373, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics.
BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by BERT's individual heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
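The two operations the abstract refers to, inspecting per-head self-attention maps and disabling selected heads, can be sketched with the Hugging Face transformers library. This is not the authors' code; the sentence, layer, and head indices below are arbitrary placeholders, not the heads reported in the paper.

import torch
from transformers import BertModel, BertTokenizer

# Load a pretrained BERT and request per-head attention maps.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions is a tuple with one tensor per layer, each of shape
# (batch, num_heads, seq_len, seq_len); these maps are the raw material
# for the kind of per-head pattern analysis described in the abstract.
layer0 = outputs.attentions[0]
head_pattern = layer0[0, 3]  # attention map of layer 0, head 3 (arbitrary choice)

# Disabling heads: prune_heads removes the listed heads from the given layers,
# analogous in spirit to the head-disabling experiments mentioned above.
model.prune_heads({0: [3], 11: [0, 1]})  # {layer_index: [head_indices]}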
@inproceedings{Kovaleva2019,
abstract = {BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to its success. In the current work, we focus on the interpretation of self-attention, which is one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose the methodology and carry out a qualitative and quantitative analysis of the information encoded by the individual BERT's heads. Our findings suggest that there is a limited set of attention patterns that are repeated across different heads, indicating the overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.},
address = {Stroudsburg, PA, USA},
archivePrefix = {arXiv},
arxivId = {1908.08593},
author = {Kovaleva, Olga and Romanov, Alexey and Rogers, Anna and Rumshisky, Anna},
booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
doi = {10.18653/v1/D19-1445},
eprint = {1908.08593},
keywords = {method: attention,method: pruning},
pages = {4364--4373},
publisher = {Association for Computational Linguistics},
title = {{Revealing the Dark Secrets of BERT}},
url = {https://www.aclweb.org/anthology/D19-1445},
year = {2019}
}
