The Self-Contained Negation Test Set. Kletz, D., Amsili, P., & Candito, M. In Belinkov, Y., Hao, S., Jumelet, J., Kim, N., McCarthy, A., & Mohebbi, H., editors, Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 212–221, Singapore, December 2023. Association for Computational Linguistics.
PDF: ../Docs/papers/kletz23_bbnlp.pdf
Poster: ../Docs/posters/kletz23_bbnlp.pdf
ACL: https://aclanthology.org/2023.blackboxnlp-1.16
DOI: 10.18653/v1/2023.blackboxnlp-1.16
Abstract: Several methodologies have recently been proposed to evaluate the ability of Pretrained Language Models (PLMs) to interpret negation. In this article, we build on Gubelmann and Handschuh (2022), which studies the modification of PLMs' predictions as a function of the polarity of inputs, in English. Crucially, this test uses "self-contained" inputs ending with a masked position: depending on the polarity of a verb in the input, a particular token is either semantically ruled out or allowed at the masked position. By replicating Gubelmann and Handschuh (2022) experiments, we have uncovered flaws that weaken the conclusions that can be drawn from this test. We thus propose an improved version, the Self-Contained Neg Test, which is more controlled, more systematic, and entirely based on examples forming minimal pairs varying only in the presence or absence of verbal negation in English. When applying our test to the roberta and bert base and large models, we show that only roberta-large shows trends that match the expectations, while bert-base is mostly insensitive to negation. For all the tested models though, in a significant number of test instances the top-1 prediction remains the token that is semantically forbidden by the context, which shows how much room for improvement remains for a proper treatment of the negation phenomenon.
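
To make the paradigm concrete: the test presents a masked language model with minimal pairs that differ only in verbal negation and end in a masked position, then checks how the prediction at that position shifts. The sketch below is not the authors' code and does not use their test items; it is a minimal probe, assuming the Hugging Face transformers API, that scores an invented minimal pair with roberta-large (one of the models evaluated), reporting both the probability of a target token that the negated context should rule out and the unconstrained top-1 prediction, the quantity behind the paper's final observation.

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Sketch of a polarity probe in the spirit of the test described above.
# The minimal pair and the target word are invented for illustration;
# they are not items from the Self-Contained Neg Test.
model_name = "roberta-large"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

pair = {
    "positive": "A robin is a <mask>.",      # "bird" is semantically licensed
    "negative": "A robin is not a <mask>.",  # "bird" is semantically ruled out
}
# " bird" (with its leading space) is a single token in RoBERTa's byte-level BPE.
target_id = tok.convert_tokens_to_ids(tok.tokenize(" bird"))[0]

for polarity, sentence in pair.items():
    enc = tok(sentence, return_tensors="pt")
    # Locate the masked position and take the model's distribution over it.
    mask_pos = (enc["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**enc).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    top_id = int(probs.argmax())
    print(f"{polarity:>8}: P(' bird') = {probs[target_id].item():.4f}   "
          f"top-1 = {tok.decode([top_id])!r} ({probs[top_id].item():.4f})")

Under the expectations the abstract lays out, the ruled-out token's probability should drop sharply in the negated variant and it should not survive as the top-1 prediction; the paper's point is that, for all four models tested, it often does.
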
@InProceedings{kletz23.bbnlp,
title = "The Self-Contained Negation Test Set",
author = "Kletz, David and Amsili, Pascal and Candito, Marie",
editor = "Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and
Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein",
booktitle = "Proceedings of the 6th BlackboxNLP Workshop: Analyzing and
Interpreting Neural Networks for NLP",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
pages = "212--221",
abstract = "Several methodologies have recently been proposed to
evaluate the ability of Pretrained Language Models (PLMs)
to interpret negation. In this article, we build on
Gubelmann and Handschuh (2022), which studies the
modification of PLMs{'} predictions as a function of the
polarity of inputs, in English. Crucially, this test uses
{``}self-contained{''} inputs ending with a masked
position: depending on the polarity of a verb in the input,
a particular token is either semantically ruled out or
allowed at the masked position. By replicating Gubelmann
and Handschuh (2022) experiments, we have uncovered flaws
that weaken the conclusions that can be drawn from this
test. We thus propose an improved version, the
Self-Contained Neg Test, which is more controlled, more
systematic, and entirely based on examples forming minimal
pairs varying only in the presence or absence of verbal
negation in English. When applying our test to the roberta
and bert base and large models, we show that only
roberta-large shows trends that match the expectations,
while bert-base is mostly insensitive to negation. For all
the tested models though, in a significant number of test
instances the top-1 prediction remains the token that is
semantically forbidden by the context, which shows how much
room for improvement remains for a proper treatment of the
negation phenomenon.",
url_pdf = "../Docs/papers/kletz23_bbnlp.pdf",
url_poster = "../Docs/posters/kletz23_bbnlp.pdf",
url_acl = "https://aclanthology.org/2023.blackboxnlp-1.16",
doi = {10.18653/v1/2023.blackboxnlp-1.16}
}
{"_id":"pRo43fRpboQy8z5ou","bibbaseid":"kletz-amsili-candito-theselfcontainednegationtestset-2023","author_short":["Kletz, D.","Amsili, P.","Candito, M."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"The Self-Contained Negation Test Set","author":[{"propositions":[],"lastnames":["Kletz"],"firstnames":["David"],"suffixes":[]},{"propositions":[],"lastnames":["Amsili"],"firstnames":["Pascal"],"suffixes":[]},{"propositions":[],"lastnames":["Candito"],"firstnames":["Marie"],"suffixes":[]}],"editor":[{"propositions":[],"lastnames":["Belinkov"],"firstnames":["Yonatan"],"suffixes":[]},{"propositions":[],"lastnames":["Hao"],"firstnames":["Sophie"],"suffixes":[]},{"propositions":[],"lastnames":["Jumelet"],"firstnames":["Jaap"],"suffixes":[]},{"propositions":[],"lastnames":["Kim"],"firstnames":["Najoung"],"suffixes":[]},{"propositions":[],"lastnames":["McCarthy"],"firstnames":["Arya"],"suffixes":[]},{"propositions":[],"lastnames":["Mohebbi"],"firstnames":["Hosein"],"suffixes":[]}],"booktitle":"Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP","month":"December","year":"2023","address":"Singapore","publisher":"Association for Computational Linguistics","pages":"212–221","abstract":"Several methodologies have recently been proposed to evaluate the ability of Pretrained Language Models (PLMs) to interpret negation. In this article, we build on Gubelmann and Handschuh (2022), which studies the modification of PLMs' predictions as a function of the polarity of inputs, in English. Crucially, this test uses ``self-contained'' inputs ending with a masked position: depending on the polarity of a verb in the input, a particular token is either semantically ruled out or allowed at the masked position. By replicating Gubelmann and Handschuh (2022) experiments, we have uncovered flaws that weaken the conclusions that can be drawn from this test. We thus propose an improved version, the Self-Contained Neg Test, which is more controlled, more systematic, and entirely based on examples forming minimal pairs varying only in the presence or absence of verbal negation in English. When applying our test to the roberta and bert base and large models, we show that only roberta-large shows trends that match the expectations, while bert-base is mostly insensitive to negation. For all the tested models though, in a significant number of test instances the top-1 prediction remains the token that is semantically forbidden by the context, which shows how much room for improvement remains for a proper treatment of the negation phenomenon.","url_pdf":"../Docs/papers/kletz23_bbnlp.pdf","url_poster":"../Docs/posters/kletz23_bbnlp.pdf","url_acl":"https://aclanthology.org/2023.blackboxnlp-1.16","doi":"10.18653/v1/2023.blackboxnlp-1.16","bibtex":"@InProceedings{\t kletz23.bbnlp,\n title\t\t= \"The Self-Contained Negation Test Set\",\n author\t= \"Kletz, David and Amsili, Pascal and Candito, Marie\",\n editor\t= \"Belinkov, Yonatan and Hao, Sophie and Jumelet, Jaap and\n\t\t Kim, Najoung and McCarthy, Arya and Mohebbi, Hosein\",\n booktitle\t= \"Proceedings of the 6th BlackboxNLP Workshop: Analyzing and\n\t\t Interpreting Neural Networks for NLP\",\n month\t\t= dec,\n year\t\t= \"2023\",\n address\t= \"Singapore\",\n publisher\t= \"Association for Computational Linguistics\",\n pages\t\t= \"212--221\",\n abstract\t= \"Several methodologies have recently been proposed to\n\t\t evaluate the ability of Pretrained Language Models (PLMs)\n\t\t to interpret negation. 
In this article, we build on\n\t\t Gubelmann and Handschuh (2022), which studies the\n\t\t modification of PLMs{'} predictions as a function of the\n\t\t polarity of inputs, in English. Crucially, this test uses\n\t\t {``}self-contained{''} inputs ending with a masked\n\t\t position: depending on the polarity of a verb in the input,\n\t\t a particular token is either semantically ruled out or\n\t\t allowed at the masked position. By replicating Gubelmann\n\t\t and Handschuh (2022) experiments, we have uncovered flaws\n\t\t that weaken the conclusions that can be drawn from this\n\t\t test. We thus propose an improved version, the\n\t\t Self-Contained Neg Test, which is more controlled, more\n\t\t systematic, and entirely based on examples forming minimal\n\t\t pairs varying only in the presence or absence of verbal\n\t\t negation in English. When applying our test to the roberta\n\t\t and bert base and large models, we show that only\n\t\t roberta-large shows trends that match the expectations,\n\t\t while bert-base is mostly insensitive to negation. For all\n\t\t the tested models though, in a significant number of test\n\t\t instances the top-1 prediction remains the token that is\n\t\t semantically forbidden by the context, which shows how much\n\t\t room for improvement remains for a proper treatment of the\n\t\t negation phenomenon.\",\n url_pdf\t= \"../Docs/papers/kletz23_bbnlp.pdf\",\n url_poster\t= \"../Docs/posters/kletz23_bbnlp.pdf\",\n url_acl\t= \"https://aclanthology.org/2023.blackboxnlp-1.16\",\n doi\t\t= {10.18653/v1/2023.blackboxnlp-1.16}\n}\n\n","author_short":["Kletz, D.","Amsili, P.","Candito, M."],"editor_short":["Belinkov, Y.","Hao, S.","Jumelet, J.","Kim, N.","McCarthy, A.","Mohebbi, H."],"key":"kletz23.bbnlp","id":"kletz23.bbnlp","bibbaseid":"kletz-amsili-candito-theselfcontainednegationtestset-2023","role":"author","urls":{" pdf":"http://www.linguist.univ-paris-diderot.fr/~amsili/Docs/papers/kletz23_bbnlp.pdf"," poster":"http://www.linguist.univ-paris-diderot.fr/~amsili/Docs/posters/kletz23_bbnlp.pdf"," acl":"https://aclanthology.org/2023.blackboxnlp-1.16"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"http://www.linguist.univ-paris-diderot.fr/~amsili/Rech/amsili.bib","dataSources":["G8v5dYGnBDTY4xtwo"],"keywords":[],"search_terms":["self","contained","negation","test","set","kletz","amsili","candito"],"title":"The Self-Contained Negation Test Set","year":2023}