{"_id":"SaYp8XPd7SKdbxfPY","bibbaseid":"hsu-huang-boschee-miller-natarajan-chang-peng-degreeadataefficientgenerationbasedeventextractionmodel-2022","author_short":["Hsu, I.","Huang, K.","Boschee, E.","Miller, S.","Natarajan, P.","Chang, K.","Peng, N."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"DEGREE: A Data-Efficient Generation-Based Event Extraction Model","author":[{"propositions":[],"lastnames":["Hsu"],"firstnames":["I-Hung"],"suffixes":[]},{"propositions":[],"lastnames":["Huang"],"firstnames":["Kuan-Hao"],"suffixes":[]},{"propositions":[],"lastnames":["Boschee"],"firstnames":["Elizabeth"],"suffixes":[]},{"propositions":[],"lastnames":["Miller"],"firstnames":["Scott"],"suffixes":[]},{"propositions":[],"lastnames":["Natarajan"],"firstnames":["Prem"],"suffixes":[]},{"propositions":[],"lastnames":["Chang"],"firstnames":["Kai-Wei"],"suffixes":[]},{"propositions":[],"lastnames":["Peng"],"firstnames":["Nanyun"],"suffixes":[]}],"booktitle":"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","month":"July","year":"2022","address":"Seattle, United States","publisher":"Association for Computational Linguistics","url":"https://aclanthology.org/2022.naacl-main.138","doi":"10.18653/v1/2022.naacl-main.138","pages":"1890–1908","abstract":"Event extraction requires high-quality expert human annotations, which are usually expensive. Therefore, learning a data-efficient event extraction model that can be trained with only a few labeled examples has become a crucial challenge. In this paper, we focus on low-resource end-to-end event extraction and propose DEGREE, a data-efficient model that formulates event extraction as a conditional generation problem. Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern. The final event predictions are then extracted from the generated sentence with a deterministic algorithm. DEGREE has three advantages to learn well with less training data. First, our designed prompts provide semantic guidance for DEGREE to leverage DEGREE and thus better capture the event arguments. Moreover, DEGREE is capable of using additional weakly-supervised information, such as the description of events encoded in the prompts. Finally, DEGREE learns triggers and arguments jointly in an end-to-end manner, which encourages the model to better utilize the shared knowledge and dependencies among them. Our experimental results demonstrate the strong performance of DEGREE for low-resource event extraction.","bibtex":"@inproceedings{hsu-etal-2022-degree,\r\n title = \"{DEGREE}: A Data-Efficient Generation-Based Event Extraction Model\",\r\n author = \"Hsu, I-Hung and\r\n Huang, Kuan-Hao and\r\n Boschee, Elizabeth and\r\n Miller, Scott and\r\n Natarajan, Prem and\r\n Chang, Kai-Wei and\r\n Peng, Nanyun\",\r\n booktitle = \"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies\",\r\n month = jul,\r\n year = \"2022\",\r\n address = \"Seattle, United States\",\r\n publisher = \"Association for Computational Linguistics\",\r\n url = \"https://aclanthology.org/2022.naacl-main.138\",\r\n doi = \"10.18653/v1/2022.naacl-main.138\",\r\n pages = \"1890--1908\",\r\n abstract = \"Event extraction requires high-quality expert human annotations, which are usually expensive. 
Therefore, learning a data-efficient event extraction model that can be trained with only a few labeled examples has become a crucial challenge. In this paper, we focus on low-resource end-to-end event extraction and propose DEGREE, a data-efficient model that formulates event extraction as a conditional generation problem. Given a passage and a manually designed prompt, DEGREE learns to summarize the events mentioned in the passage into a natural sentence that follows a predefined pattern. The final event predictions are then extracted from the generated sentence with a deterministic algorithm. DEGREE has three advantages to learn well with less training data. First, our designed prompts provide semantic guidance for DEGREE to leverage DEGREE and thus better capture the event arguments. Moreover, DEGREE is capable of using additional weakly-supervised information, such as the description of events encoded in the prompts. Finally, DEGREE learns triggers and arguments jointly in an end-to-end manner, which encourages the model to better utilize the shared knowledge and dependencies among them. Our experimental results demonstrate the strong performance of DEGREE for low-resource event extraction.\",\r\n}\r\n\r\n\r\n","author_short":["Hsu, I.","Huang, K.","Boschee, E.","Miller, S.","Natarajan, P.","Chang, K.","Peng, N."],"bibbaseid":"hsu-huang-boschee-miller-natarajan-chang-peng-degreeadataefficientgenerationbasedeventextractionmodel-2022","role":"author","urls":{"Paper":"https://aclanthology.org/2022.naacl-main.138"},"metadata":{"authorlinks":{}}},"bibtype":"inproceedings","biburl":"https://bibbase.org/f/SKBwv9n9W4YYh9SfC/boschee-2023.bib","dataSources":["6xESkCofuRDYuE4dM","dfnxo2P7wcDdnT5Pz"],"keywords":[],"search_terms":["degree","data","efficient","generation","based","event","extraction","model","hsu","huang","boschee","miller","natarajan","chang","peng"],"title":"DEGREE: A Data-Efficient Generation-Based Event Extraction Model","year":2022}
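
The abstract describes the core formulation: the passage plus a manually designed prompt goes into a sequence-to-sequence model, the model generates a natural sentence that follows a predefined pattern, and a deterministic step reads the event predictions back out of that sentence. The sketch below is only an illustration of that flow under stated assumptions; the template wording, the facebook/bart-large base model, the build_input and extract_events helpers, and the regex-based parsing are hypothetical choices made here, not the authors' released implementation, and the model would need to be fine-tuned on passage-prompt inputs paired with filled templates before its outputs actually follow the pattern.

# Minimal sketch of a generation-based event extraction loop in the spirit of the
# abstract: prompt-conditioned generation followed by deterministic template parsing.
# All names, templates, and the regex below are illustrative assumptions.
import re
from transformers import BartForConditionalGeneration, BartTokenizer

MODEL_NAME = "facebook/bart-large"  # assumed base model for this sketch
tokenizer = BartTokenizer.from_pretrained(MODEL_NAME)
model = BartForConditionalGeneration.from_pretrained(MODEL_NAME)

# Hypothetical prompt pieces for one event type (e.g. Conflict:Attack).
EVENT_DESCRIPTION = "The event is related to conflict and physical violence."
TEMPLATE = ("Event trigger is <trigger>. "
            "some attacker attacked some target using some instrument.")

def build_input(passage: str) -> str:
    # Concatenate the passage, the event description, and the pattern the
    # model is asked to follow in its output.
    return f"{passage} </s> {EVENT_DESCRIPTION} </s> {TEMPLATE}"

def generate_pattern_sentence(passage: str) -> str:
    # Run the (assumed fine-tuned) seq2seq model to produce the filled template.
    inputs = tokenizer(build_input(passage), return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def extract_events(generated: str) -> dict:
    # Deterministic parse: align the generated sentence with the known template
    # and read off whatever replaced each placeholder phrase. A fill identical to
    # its placeholder (e.g. "some attacker") is treated as an absent role.
    pattern = (r"Event trigger is (?P<trigger>.+?)\. "
               r"(?P<attacker>.+?) attacked (?P<target>.+?) using (?P<instrument>.+?)\.")
    match = re.search(pattern, generated)
    if not match:
        return {}
    placeholders = {"trigger": "<trigger>", "attacker": "some attacker",
                    "target": "some target", "instrument": "some instrument"}
    return {role: span for role, span in match.groupdict().items()
            if span != placeholders[role]}

if __name__ == "__main__":
    passage = "Yesterday masked gunmen attacked a convoy with rocket launchers."
    generated = generate_pattern_sentence(passage)
    print(extract_events(generated))

Because the extraction step relies only on string alignment between the generated sentence and the known template, no additional learned component is needed at decoding time, which is consistent with the "deterministic algorithm" the abstract mentions.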