PaCo: Preconditions Attributed to Commonsense Knowledge. Qasemi, E., Ilievski, F., Chen, M., & Szekely, P. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6781–6796, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics.
Humans can seamlessly reason with circumstantial preconditions of commonsense knowledge. We understand that a glass is used for drinking water, unless the glass is broken or the water is toxic. Despite state-of-the-art (SOTA) language models’ (LMs) impressive performance on inferring commonsense knowledge, it is unclear whether they understand the circumstantial preconditions. To address this gap, we propose a novel challenge of reasoning with circumstantial preconditions. We collect a dataset, called PaCo, consisting of 12.4 thousand preconditions of commonsense statements expressed in natural language. Based on this dataset, we create three canonical evaluation tasks and use them to examine the capability of existing LMs to understand situational preconditions. Our results reveal a 10-30% gap between machine and human performance on our tasks, which shows that reasoning with preconditions is an open challenge.
@inproceedings{qasemi2022paco,
    title = "PaCo: Preconditions Attributed to Commonsense Knowledge",
    author = "Qasemi, Ehsan and Ilievski, Filip and Chen, Muhao and Szekely, Pedro",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing-Findings",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/emnlp-22-ingestion/2022.findings-emnlp.505/",
    doi = "10.48550/ARXIV.2104.08712",
    pages = "6781–6796",
    abstract = "Humans can seamlessly reason with circumstantial preconditions of commonsense knowledge. We understand that a glass is used for drinking water, unless the glass is broken or the water is toxic. Despite state-of-the-art (SOTA) language models’ (LMs) impressive performance on inferring commonsense knowledge, it is unclear whether they understand the circumstantial preconditions. To address this gap, we propose a novel challenge of reasoning with circumstantial preconditions. We collect a dataset, called PaCo, consisting of 12.4 thousand preconditions of commonsense statements expressed in natural language. Based on this dataset, we create three canonical evaluation tasks and use them to examine the capability of existing LMs to understand situational preconditions. Our results reveal a 10-30\% gap between machine and human performance on our tasks, which shows that reasoning with preconditions is an open challenge.",
    ISIArea = {NLP} 
}
