What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models. Dalvi, F., Durrani, N., Sajjad, H., Belinkov, Y., Bau, A., & Glass, J. In Association for the Advancement of Artificial Intelligence (AAAI), December 2019.
Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. Previous work largely focused on what these models learn at the representation level. We break this analysis down further and study individual dimensions (neurons) in the vector representation learned by end-to-end neural models in NLP tasks. We propose two methods: Linguistic Correlation Analysis, based on a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons w.r.t. the model itself. We evaluate the effectiveness of our techniques by ablating the identified neurons and reevaluating the network's performance for two tasks: neural machine translation (NMT) and neural language modeling (NLM). We further present a comprehensive analysis of neurons with the aim to address the following questions: i) how localized or distributed are different linguistic properties in the models? ii) are certain neurons exclusive to some properties and not others? iii) is the information more or less distributed in NMT vs. NLM? and iv) how important are the neurons identified through the linguistic correlation method to the overall task? Our code is publicly available as part of the NeuroX toolkit (Dalvi et al. 2019).
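The ablation evaluation described in the abstract (zero out the identified neurons, then re-measure task performance) can be sketched as follows. This is a minimal illustrative example with toy data, not the paper's NeuroX implementation; the function name `ablate_neurons` and the activation shapes are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden-state activations: 100 tokens, 16 neurons
# (hypothetical data; real activations would come from an NMT/NLM model).
activations = rng.normal(size=(100, 16))

def ablate_neurons(activations, neuron_ids):
    """Zero out the given neuron (dimension) indices in a
    [num_tokens, hidden_dim] activation matrix, leaving the rest intact."""
    ablated = activations.copy()
    ablated[:, neuron_ids] = 0.0
    return ablated

# Ablate three (hypothetical) salient neurons and inspect the result.
ablated = ablate_neurons(activations, [0, 3, 7])
```

In the paper's setup, performance on the downstream task would then be re-evaluated with the ablated representations; a large drop suggests the removed neurons carried task-relevant information.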
@inproceedings{Dalvi2019,
abstract = {Despite the remarkable evolution of deep neural networks in natural language processing (NLP), their interpretability remains a challenge. Previous work largely focused on what these models learn at the representation level. We break this analysis down further and study individual dimensions (neurons) in the vector representation learned by end-to-end neural models in NLP tasks. We propose two methods: Linguistic Correlation Analysis, based on a supervised method to extract the most relevant neurons with respect to an extrinsic task, and Cross-model Correlation Analysis, an unsupervised method to extract salient neurons w.r.t. the model itself. We evaluate the effectiveness of our techniques by ablating the identified neurons and reevaluating the network's performance for two tasks: neural machine translation (NMT) and neural language modeling (NLM). We further present a comprehensive analysis of neurons with the aim to address the following questions: i) how localized or distributed are different linguistic properties in the models? ii) are certain neurons exclusive to some properties and not others? iii) is the information more or less distributed in NMT vs. NLM? and iv) how important are the neurons identified through the linguistic correlation method to the overall task? Our code is publicly available as part of the NeuroX toolkit (Dalvi et al. 2019).},
archivePrefix = {arXiv},
arxivId = {1812.09355},
author = {Dalvi, Fahim and Durrani, Nadir and Sajjad, Hassan and Belinkov, Yonatan and Bau, Anthony and Glass, James},
booktitle = {Association for the Advancement of Artificial Intelligence (AAAI)},
eprint = {1812.09355},
keywords = {method: cross-model correlation analysis,method: individual neurons,method: linguistic correlation analysis},
month = {dec},
title = {{What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models}},
url = {http://arxiv.org/abs/1812.09355},
year = {2019}
}
