Neural Network Acceptability Judgments. Warstadt, A., Singh, A., & Bowman, S. R. Transactions of the Association for Computational Linguistics, 2019.
Abstract: In this work, we explore the ability of artificial neural networks to judge the grammatical acceptability of a sentence. Machine learning research of this kind is well placed to answer important open questions about the role of prior linguistic bias in language acquisition by providing a test for the Poverty of the Stimulus Argument. In service of this goal, we introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical by expert linguists. We train several recurrent neural networks to do binary acceptability classification. These models set a baseline for the task. Error-analysis testing the models on specific grammatical phenomena reveals that they learn some systematic grammatical generalizations like subject-verb-object word order without any grammatical supervision. We find that neural sequence models show promise on the acceptability classification task. However, human-like performance across a wide range of grammatical constructions remains far off.
@article{Warstadt2018,
abstract = {In this work, we explore the ability of artificial neural networks to judge the grammatical acceptability of a sentence. Machine learning research of this kind is well placed to answer important open questions about the role of prior linguistic bias in language acquisition by providing a test for the Poverty of the Stimulus Argument. In service of this goal, we introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical by expert linguists. We train several recurrent neural networks to do binary acceptability classification. These models set a baseline for the task. Error-analysis testing the models on specific grammatical phenomena reveals that they learn some systematic grammatical generalizations like subject-verb-object word order without any grammatical supervision. We find that neural sequence models show promise on the acceptability classification task. However, human-like performance across a wide range of grammatical constructions remains far off.},
archivePrefix = {arXiv},
arxivId = {1805.12471},
author = {Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R.},
eprint = {1805.12471},
journal = {Transactions of the Association for Computational Linguistics},
keywords = {dataset,method: acceptability judgment,method: model comparison,method: new data},
title = {{Neural Network Acceptability Judgments}},
url = {http://arxiv.org/abs/1805.12471},
year = {2019}
}
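The CoLA task described in the abstract is a binary classification of sentences as acceptable or unacceptable. As a minimal sketch (not the authors' recurrent models), the example below loads the CoLA split distributed through the Hugging Face `datasets` GLUE benchmark and fits a simple bag-of-words logistic-regression baseline, scoring with Matthews correlation, the metric standardly reported for CoLA; dataset field names (`sentence`, `label`) follow that GLUE distribution.

```python
# Minimal CoLA baseline sketch: bag-of-words + logistic regression,
# evaluated on the validation split with Matthews correlation (MCC).
from datasets import load_dataset
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import matthews_corrcoef

cola = load_dataset("glue", "cola")            # splits: train / validation / test
train, dev = cola["train"], cola["validation"]

# Unigram + bigram counts as features.
vectorizer = CountVectorizer(ngram_range=(1, 2), min_df=2)
X_train = vectorizer.fit_transform(train["sentence"])
X_dev = vectorizer.transform(dev["sentence"])

# Label convention in this distribution: 1 = acceptable, 0 = unacceptable.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, train["label"])

preds = clf.predict(X_dev)
print("dev MCC:", matthews_corrcoef(dev["label"], preds))
```

This is only meant to show the data format and the evaluation metric; the paper's baselines are recurrent networks trained on the same labels.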