Linguistic Knowledge and Transferability of Contextual Representations. Liu, N. F., Gardner, M., Belinkov, Y., Peters, M. E., & Smith, N. A. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics.
Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.
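A minimal sketch of the probing setup the abstract describes (a linear "diagnostic classifier" trained on top of frozen contextual representations), assuming Python with scikit-learn; the embeddings and labels below are synthetic stand-ins, whereas the paper extracts per-token vectors from a frozen pretrained contextualizer (e.g., an ELMo or BERT layer) and uses gold labels from a probing task such as part-of-speech tagging:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical frozen contextual representations: 2000 tokens x 768 dimensions,
# with 10 probing-task classes. In practice X comes from a single layer of a
# frozen pretrained model and is never fine-tuned.
X = rng.normal(size=(2000, 768))
y = rng.integers(0, 10, size=2000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The probe itself is just a linear (softmax) classifier over the token vectors.
probe = LogisticRegression(max_iter=1000)
probe.fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))

The point of the design is that only the probe's parameters are fit, so its accuracy is read as a measure of how much task-relevant linguistic information the frozen representations already encode; repeating this per layer gives the layer-wise analysis the abstract mentions.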
@inproceedings{Liu2019,
abstract = {Contextual word representations derived from large-scale neural language models are successful across a diverse set of NLP tasks, suggesting that they encode useful and transferable features of language. To shed light on the linguistic knowledge they capture, we study the representations produced by several recent pretrained contextualizers (variants of ELMo, the OpenAI transformer language model, and BERT) with a suite of seventeen diverse probing tasks. We find that linear models trained on top of frozen contextual representations are competitive with state-of-the-art task-specific models in many cases, but fail on tasks requiring fine-grained linguistic knowledge (e.g., conjunct identification). To investigate the transferability of contextual word representations, we quantify differences in the transferability of individual layers within contextualizers, especially between recurrent neural networks (RNNs) and transformers. For instance, higher layers of RNNs are more task-specific, while transformer layers do not exhibit the same monotonic trend. In addition, to better understand what makes contextual word representations transferable, we compare language model pretraining with eleven supervised pretraining tasks. For any given task, pretraining on a closely related task yields better performance than language model pretraining (which is better on average) when the pretraining dataset is fixed. However, language model pretraining on more data gives the best results.},
address = {Stroudsburg, PA, USA},
archivePrefix = {arXiv},
arxivId = {1903.08855},
author = {Liu, Nelson F. and Gardner, Matt and Belinkov, Yonatan and Peters, Matthew E. and Smith, Noah A.},
booktitle = {Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
doi = {10.18653/v1/N19-1112},
eprint = {1903.08855},
keywords = {method: diagnostic classifier,method: layer-wise analysis,method: model comparison,phenomenon: various},
pages = {1073--1094},
publisher = {Association for Computational Linguistics},
title = {{Linguistic Knowledge and Transferability of Contextual Representations}},
url = {http://aclweb.org/anthology/N19-1112},
year = {2019}
}
