  2020 (5)
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. In International Conference on Learning Representations, 2020.
Are Pre-trained Language Models Aware of Phrases? Simple but Strong Baselines for Grammar Induction. Kim, T.; Choi, J.; Lee, S.; and Edmiston, D. In International Conference on Learning Representations, 2020.
Measuring Compositional Generalization: A Comprehensive Method on Realistic Data. Keysers, D.; Schärli, N.; Scales, N.; Buisman, H.; Furrer, D.; Kashubin, S.; Momchev, N.; Sinopalnikov, D.; Stafiniak, L.; Tihon, T.; Tsarkov, D.; Wang, X.; van Zee, M.; and Bousquet, O. In International Conference on Learning Representations, 2020.
Linguistic generalization and compositionality in modern artificial neural networks. Baroni, M. Philosophical Transactions of the Royal Society B: Biological Sciences, 375(1791). 2020.
Evaluating Commonsense in Pre-trained Language Models. Zhou, X.; Zhang, Y.; Cui, L.; and Huang, D. In Association for the Advancement of Artificial Intelligence (AAAI), 2020.
  2019 (55)
Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned. Voita, E.; Talbot, D.; Moiseev, F.; Sennrich, R.; and Titov, I. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797–5808, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
What Does BERT Learn about the Structure of Language?. Jawahar, G.; Sagot, B.; and Seddah, D. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3651–3657, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Revealing the Dark Secrets of BERT. Kovaleva, O.; Romanov, A.; Rogers, A.; and Rumshisky, A. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4364–4373, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
The emergence of number and syntax units in LSTM language models. Lakretz, Y.; Kruszewski, G.; Desbordes, T.; Hupkes, D.; Dehaene, S.; and Baroni, M. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 11–20, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; and Liu, P. J. 1–53. October 2019.
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; and Zettlemoyer, L. October 2019.
Studying the Inductive Biases of RNNs with Synthetic Variations of Natural Languages. Ravfogel, S.; Goldberg, Y.; and Linzen, T. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532–3542, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
HuggingFace's Transformers: State-of-the-art Natural Language Processing. Wolf, T.; Debut, L.; Sanh, V.; Chaumond, J.; Delangue, C.; Moi, A.; Cistac, P.; Rault, T.; Louf, R.; Funtowicz, M.; and Brew, J. October 2019.
XLNet: Generalized Autoregressive Pretraining for Language Understanding. Yang, Z.; Dai, Z.; Yang, Y.; Carbonell, J.; Salakhutdinov, R.; and Le, Q. V. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019.
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models. Dalvi, F.; Durrani, N.; Sajjad, H.; Belinkov, Y.; Bau, A.; and Glass, J. In Association for the Advancement of Artificial Intelligence (AAAI), December 2019.
DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing @ NeurIPS 2019, 2019.
Cross-lingual Language Model Pretraining. Lample, G.; and Conneau, A. In 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019.
Unsupervised Cross-lingual Representation Learning at Scale. Conneau, A.; Khandelwal, K.; Goyal, N.; Chaudhary, V.; Wenzek, G.; Guzmán, F.; Grave, E.; Ott, M.; Zettlemoyer, L.; and Stoyanov, V. November 2019.
RoBERTa: A Robustly Optimized BERT Pretraining Approach. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019.
Is Attention Interpretable?. Serrano, S.; and Smith, N. A. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931–2951, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Grammatical Analysis of Pretrained Sentence Encoders with Acceptability Judgments. Warstadt, A.; and Bowman, S. R. 2019.
What Does BERT Look at? An Analysis of BERT's Attention. Clark, K.; Khandelwal, U.; Levy, O.; and Manning, C. D. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276–286, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Language Models are Unsupervised Multitask Learners. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; and Sutskever, I. 2019.
Attention is not not Explanation. Wiegreffe, S.; and Pinter, Y. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11–20, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Attention is not Explanation. Jain, S.; and Wallace, B. C. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543–3556, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Open Sesame: Getting inside BERT's Linguistic Knowledge. Lin, Y.; Tan, Y. C.; and Frank, R. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
A Structural Probe for Finding Syntax in Word Representations. Hewitt, J.; and Manning, C. D. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4129–4138, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
On Measuring Social Biases in Sentence Encoders. May, C.; Wang, A.; Bordia, S.; Bowman, S. R.; and Rudinger, R. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 622–628, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Neural language models as psycholinguistic subjects: Representations of syntactic state. Futrell, R.; Wilcox, E.; Morita, T.; Qian, P.; Ballesteros, M.; and Levy, R. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 32–42, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. McCoy, T.; Pavlick, E.; and Linzen, T. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Blackbox Meets Blackbox: Representational Similarity & Stability Analysis of Neural Language Models and Brains. Abnar, S.; Beinborn, L.; Choenni, R.; and Zuidema, W. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 191–203, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
The compositionality of neural networks: integrating symbolism and connectionism. Hupkes, D.; Dankers, V.; Mul, M.; and Bruni, E. 1–40. 2019.
Correlating Neural and Symbolic Representations of Language. Chrupała, G.; and Alishahi, A. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2952–2962, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Tabula Nearly Rasa: Probing the Linguistic Knowledge of Character-level Neural Language Models Trained on Unsegmented Text. Hahn, M.; and Baroni, M. Transactions of the Association for Computational Linguistics, 7: 467–484. 2019.
Probing Natural Language Inference Models through Semantic Fragments. Richardson, K.; Hu, H.; Moss, L. S.; and Sabharwal, A. In Association for the Advancement of Artificial Intelligence (AAAI), 2019.
Hierarchical Representation in Neural Language Models: Suppression and Recovery of Expectations. Wilcox, E.; Levy, R.; and Futrell, R. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 181–190, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Analysis Methods in Neural Language Processing: A Survey. Belinkov, Y.; and Glass, J. Transactions of the Association for Computational Linguistics, 7: 49–72. 2019.
Can Neural Networks Understand Monotonicity Reasoning?. Yanaka, H.; Mineshima, K.; Bekki, D.; Inui, K.; Sekine, S.; Abzianidze, L.; and Bos, J. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 31–40, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Visualizing and Measuring the Geometry of BERT. Coenen, A.; Yuan, A.; Kim, B.; Pearce, A.; Viégas, F.; and Wattenberg, M. In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), 2019.
Structural Supervision Improves Learning of Non-Local Grammatical Dependencies. Wilcox, E.; Qian, P.; Futrell, R.; Ballesteros, M.; and Levy, R. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 3302–3312, 2019.
Designing and Interpreting Probes with Control Tasks. Hewitt, J.; and Liang, P. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Ettinger, A. 2019.
Assessing Incrementality in Sequence-to-Sequence Models. Ulmer, D.; Hupkes, D.; and Bruni, E. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 209–217, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Neural Network Acceptability Judgments. Warstadt, A.; Singh, A.; and Bowman, S. R. Transactions of the Association for Computational Linguistics. 2019.
What do you learn from context? Probing for sentence structure in contextualized word representations. Tenney, I.; Xia, P.; Chen, B.; Wang, A.; Poliak, A.; McCoy, R. T.; Kim, N.; Van Durme, B.; Bowman, S. R.; Das, D.; and Pavlick, E. In International Conference on Learning Representations (ICLR 2019), pages 1–17, 2019.
RNNs Implicitly Implement Tensor Product Representations. McCoy, R. T.; Linzen, T.; Dunbar, E.; and Smolensky, P. In International Conference on Learning Representations (ICLR), 2019.
BERT Rediscovers the Classical NLP Pipeline. Tenney, I.; Das, D.; and Pavlick, E. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4593–4601, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
The Bottom-up Evolution of Representations in the Transformer: A Study with Machine Translation and Language Modeling Objectives. Voita, E.; Sennrich, R.; and Titov, I. In Empirical Methods in Natural Language Processing (EMNLP), 2019.
Linguistic Knowledge and Transferability of Contextual Representations. Liu, N. F.; Gardner, M.; Belinkov, Y.; Peters, M. E.; and Smith, N. A. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073–1094, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
What can linguistics and deep learning contribute to each other? Response to Pater. Linzen, T. Language, 95(1). 2019.
Measuring Compositionality in Representation Learning. Andreas, J. In International Conference on Learning Representations, 2019.
A case for deep learning in semantics: Response to Pater. Potts, C. Language. 2019.
Probing Neural Network Comprehension of Natural Language Arguments. Niven, T.; and Kao, H. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4658–4664, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
Assessing BERT's Syntactic Abilities. Goldberg, Y. 2–5. 2019.
Representation of Constituents in Neural Language Models: Coordination Phrase as a Case Study. An, A.; Qian, P.; Wilcox, E.; and Levy, R. In Empirical Methods in Natural Language Processing (EMNLP), 2019.
Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment. Jumelet, J.; Zuidema, W.; and Hupkes, D. In Proceedings of the Conference on Computational Natural Language Learning (CoNLL), 2019.
Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. Warstadt, A.; Cao, Y.; Grosu, I.; Peng, W.; Blix, H.; Nie, Y.; Alsop, A.; Bordia, S.; Liu, H.; Parrish, A.; Wang, S.; Phang, J.; Mohananey, A.; Htut, P. M.; Jeretič, P.; and Bowman, S. R. In Empirical Methods in Natural Language Processing (EMNLP), 2019.
Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Pater, J. Language, 95(1). 2019.
How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. Ethayarajh, K. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55–65, Stroudsburg, PA, USA, 2019. Association for Computational Linguistics
  2018 (20)
Distinct patterns of syntactic agreement errors in recurrent networks and humans. Linzen, T.; and Leonard, B. In Proceedings of the 40th Annual Conference of the Cognitive Science Society, 2018.
Hypothesis Only Baselines in Natural Language Inference. Poliak, A.; Naradowsky, J.; Haldar, A.; Rudinger, R.; and Van Durme, B. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 180–191, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Can Neural Networks Understand Logical Entailment?. Evans, R.; Saxton, D.; Amos, D.; Kohli, P.; and Grefenstette, E. In International Conference on Learning Representations, 2018.
Stress Test Evaluation for Natural Language Inference. Naik, A.; Ravichander, A.; Sadeh, N.; Rose, C.; and Neubig, G. In Proceedings of the 27th International Conference on Computational Linguistics (COLING), pages 2340–2353, 2018.
Deep contextualized word representations. Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), February 2018.
What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties. Conneau, A.; Kruszewski, G.; Lample, G.; Barrault, L.; and Baroni, M. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 2126–2136, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Generalization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks. Lake, B.; and Baroni, M. In International Conference on Machine Learning (ICML 2018), 2018.
Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information. Giulianelli, M.; Harding, J.; Mohnert, F.; Hupkes, D.; and Zuidema, W. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP, pages 240–248, 2018.
Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and Prince (1988) and the Past Tense Debate. Kirov, C.; and Cotterell, R. Transactions of the Association for Computational Linguistics, 6: 651–665. 2018.
Dissecting Contextual Word Embeddings: Architecture and Representation. Peters, M.; Neumann, M.; Zettlemoyer, L.; and Yih, W. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1499–1509, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Annotation Artifacts in Natural Language Inference Data. Gururangan, S.; Swayamdipta, S.; Levy, O.; Schwartz, R.; Bowman, S.; and Smith, N. A. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107–112, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Evaluating the Ability of LSTMs to Learn Context-Free Grammars. Sennhauser, L.; and Berwick, R. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 115–124, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Visualisation and 'Diagnostic Classifiers' Reveal how Recurrent and Recursive Neural Networks Process Hierarchical Structure. Hupkes, D.; Veldhoen, S.; and Zuidema, W. Journal of Artificial Intelligence Research, 61: 907–926. 2018.
Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. Jumelet, J.; and Hupkes, D. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222–231, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Targeted Syntactic Evaluation of Language Models. Marvin, R.; and Linzen, T. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis. Zhang, K.; and Bowman, S. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 359–361, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Deep RNNs Encode Soft Hierarchical Syntax. Blevins, T.; Levy, O.; and Zettlemoyer, L. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 14–19, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Lexicosyntactic Inference in Neural Models. White, A. S.; Rudinger, R.; Rawlins, K.; and Van Durme, B. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4717–4724, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Colorless Green Recurrent Networks Dream Hierarchically. Gulordava, K.; Bojanowski, P.; Grave, E.; Linzen, T.; and Baroni, M. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
Probing sentence embeddings for structure-dependent tense. Bacon, G.; and Regier, T. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 334–336, Stroudsburg, PA, USA, 2018. Association for Computational Linguistics
  2017 (2)
Understanding intermediate layers using linear classifier probes. Alain, G.; and Bengio, Y. 2017.
Fine-grained Analysis of Sentence Embeddings Using Auxiliary Prediction Tasks. Adi, Y.; Kermany, E.; Belinkov, Y.; Lavi, O.; and Goldberg, Y. In International Conference on Learning Representations, 2017.
  2016 (4)
Assessing the Ability of LSTMs to Learn Syntax-Sensitive Dependencies. Linzen, T.; Dupoux, E.; and Goldberg, Y. Transactions of the Association for Computational Linguistics, 4: 521–535. 2016.
Probing for semantic evidence of composition by means of simple classification tasks. Ettinger, A.; Elgohary, A.; and Resnik, P. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139, Stroudsburg, PA, USA, 2016. Association for Computational Linguistics
Exploring the Limits of Language Modeling. Jozefowicz, R.; Vinyals, O.; Schuster, M.; Shazeer, N.; and Wu, Y. 2016.
Diagnostic classifiers: Revealing how neural networks process hierarchical structure. Veldhoen, S.; Hupkes, D.; and Zuidema, W. CEUR Workshop Proceedings, 1773. 2016.
  2015 (1)
Visualizing and Understanding Recurrent Networks. Karpathy, A.; Johnson, J.; and Fei-Fei, L. 1–12. 2015.