{"_id":"N3NwZE68akGi8Euzn","bibbaseid":"mhamdi-freedman-may-contextualizedcrosslingualeventtriggerextractionwithminimalresources-2019","author_short":["M'hamdi, M.","Freedman, M.","May, J."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Contextualized Cross-Lingual Event Trigger Extraction with Minimal Resources","author":[{"propositions":[],"lastnames":["M'hamdi"],"firstnames":["Meryem"],"suffixes":[]},{"propositions":[],"lastnames":["Freedman"],"firstnames":["Marjorie"],"suffixes":[]},{"propositions":[],"lastnames":["May"],"firstnames":["Jonathan"],"suffixes":[]}],"booktitle":"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)","month":"November","year":"2019","address":"Hong Kong, China","publisher":"Association for Computational Linguistics","url":"https://www.aclweb.org/anthology/K19-1061","doi":"10.18653/v1/K19-1061","pages":"656–665","abstract":"Event trigger extraction is an information extraction task of practical utility, yet it is challenging due to the difficulty of disambiguating word sense meaning. Previous approaches rely extensively on hand-crafted language-specific features and are applied mainly to English for which annotated datasets and Natural Language Processing (NLP) tools are available. However, the availability of such resources varies from one language to another. Recently, contextualized Bidirectional Encoder Representations from Transformers (BERT) models have established state-of-the-art performance for a variety of NLP tasks. However, there has not been much effort in exploring language transfer using BERT for event extraction. In this work, we treat event trigger extraction as a sequence tagging problem and propose a cross-lingual framework for training it without any hand-crafted features. We experiment with different flavors of transfer learning from high-resourced to low-resourced languages and compare the performance of different multilingual embeddings for event trigger extraction. Our results show that training in a multilingual setting outperforms language-specific models for both English and Chinese. Our work is the first to experiment with two event architecture variants in a cross-lingual setting, to show the effectiveness of contextualized embeddings obtained using BERT, and to explore and analyze its performance on Arabic.","bibtex":"@inproceedings{mhamdi-etal-2019-contextualized,\n title = \"Contextualized Cross-Lingual Event Trigger Extraction with Minimal Resources\",\n author = \"M{'}hamdi, Meryem and\n Freedman, Marjorie and\n May, Jonathan\",\n booktitle = \"Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)\",\n month = nov,\n year = \"2019\",\n address = \"Hong Kong, China\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://www.aclweb.org/anthology/K19-1061\",\n doi = \"10.18653/v1/K19-1061\",\n pages = \"656--665\",\n abstract = \"Event trigger extraction is an information extraction task of practical utility, yet it is challenging due to the difficulty of disambiguating word sense meaning. Previous approaches rely extensively on hand-crafted language-specific features and are applied mainly to English for which annotated datasets and Natural Language Processing (NLP) tools are available. However, the availability of such resources varies from one language to another. Recently, contextualized Bidirectional Encoder Representations from Transformers (BERT) models have established state-of-the-art performance for a variety of NLP tasks. However, there has not been much effort in exploring language transfer using BERT for event extraction. In this work, we treat event trigger extraction as a sequence tagging problem and propose a cross-lingual framework for training it without any hand-crafted features. We experiment with different flavors of transfer learning from high-resourced to low-resourced languages and compare the performance of different multilingual embeddings for event trigger extraction. Our results show that training in a multilingual setting outperforms language-specific models for both English and Chinese. Our work is the first to experiment with two event architecture variants in a cross-lingual setting, to show the effectiveness of contextualized embeddings obtained using BERT, and to explore and analyze its performance on Arabic.\",\n}\n\n","author_short":["M'hamdi, M.","Freedman, M.","May, J."],"key":"mhamdi-etal-2019-contextualized","id":"mhamdi-etal-2019-contextualized","bibbaseid":"mhamdi-freedman-may-contextualizedcrosslingualeventtriggerextractionwithminimalresources-2019","role":"author","urls":{"Paper":"https://www.aclweb.org/anthology/K19-1061"},"metadata":{"authorlinks":{}}},"bibtype":"inproceedings","biburl":"https://jonmay.github.io/webpage/cutelabname/cutelabname.bib","dataSources":["ZdhKtP2cSp3Aki2ge","XrmWr2bSbrn83kSAg","6vWm95zHaeFyBZmFs","X5WBAKQabka5TW5z7","ic8NkoWgYyQa8ga6A","vurcukuNjQhut4Q2x","TRA7coGYryriucaFr","BnZgtH7HDESgbxKxt","hbZSwot2msWk92m5B","TuE5hi7j4WXXPH7Ri","D7uT8WysJetCvvFX7","fcWjcoAgajPvXWcp7","GvHfaAWP6AfN6oLQE","kEea7YES5bdJiBa3M","j3Qzx9HAAC6WtJDHS","5eM3sAccSEpjSDHHQ","mdKvQEkTwJWHLGhfR"],"keywords":[],"search_terms":["contextualized","cross","lingual","event","trigger","extraction","minimal","resources","m'hamdi","freedman","may"],"title":"Contextualized Cross-Lingual Event Trigger Extraction with Minimal Resources","year":2019}