Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction. Yu, P., May, J., & Ji, H. In Findings of the Association for Computational Linguistics: EACL 2023, pages 754–769, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.
Abstract: Recent research in cross-lingual learning has found that combining large-scale pretrained multilingual language models with machine translation can yield good performance. We explore this idea for cross-lingual event extraction with a new model architecture that jointly encodes a source-language input sentence with its translation into the target language during training, and takes a target-language sentence with its translation back into the source language as input during evaluation. However, we observe a significant representational gap between the native source-language texts seen during training and the texts translated into the source language seen during evaluation, as well as between the texts translated into the target language during training and the native target-language texts during evaluation. This representational gap undermines the effectiveness of cross-lingual transfer learning for event extraction with machine-translated data. To mitigate this problem, we propose an adversarial training framework that encourages the language model to produce more similar representations for translated text and native text. Specifically, we train the language model such that its hidden representations are able to fool a jointly trained discriminator that distinguishes translated texts' representations from native texts' representations. We conduct experiments on cross-lingual event extraction across three languages. Results demonstrate that our proposed adversarial training can effectively incorporate machine translation to improve event extraction, while simply adding machine-translated data yields unstable performance due to the representational gap.
@inproceedings{yu-etal-2023-bridging,
title = "Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction",
author = "Yu, Pengfei and
May, Jonathan and
Ji, Heng",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.57",
doi = "10.18653/v1/2023.findings-eacl.57",
pages = "754--769",
abstract = "Recent research in cross-lingual learning has found that combining large-scale pretrained multilingual language models with machine translation can yield good performance. We explore this idea for cross-lingual event extraction with a new model architecture that jointly encodes a source language input sentence with its translation to the target language during training, and takes a target language sentence with its translation back to the source language as input during evaluation. However, we observe significant representational gap between the native source language texts during training and the texts translated into source language during evaluation, as well as the texts translated into target language during training and the native target language texts during evaluation. This representational gap undermines the effectiveness of cross-lingual transfer learning for event extraction with machine-translated data. In order to mitigate this problem, we propose an adversarial training framework that encourages the language model to produce more similar representations for the translated text and the native text. To be specific, we train the language model such that its hidden representations are able to fool a jointly trained discriminator that distinguishes translated texts{'} representations from native texts{'} representations. We conduct experiments on cross-lingual for event extraction across three languages. Results demonstrate that our proposed adversarial training can effectively incorporate machine translation to improve event extraction, while simply adding machine-translated data yields unstable performance due to the representational gap.",
}
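
The abstract's core mechanism, an encoder trained to fool a jointly learned discriminator so that translated and native texts receive similar representations, matches the standard adversarial domain-adaptation recipe. The sketch below is a minimal, hypothetical illustration of that recipe using a gradient reversal layer (one common way to implement the min-max objective; the paper itself may use alternating updates instead). All names here (GradReverse, Discriminator, the pooling and loss choices) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of the adversarial
# objective described in the abstract: a discriminator learns to distinguish
# translated-text representations from native-text representations, while a
# gradient reversal layer trains the encoder to fool it.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The encoder receives the *negated* discriminator gradient, so it
        # learns to make native and translated representations look alike.
        return -ctx.lamb * grad_output, None

class Discriminator(nn.Module):
    """Binary classifier: native text (label 0) vs. machine-translated (label 1)."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, 1),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.net(h).squeeze(-1)

if __name__ == "__main__":
    hidden_size, batch = 768, 4
    disc = Discriminator(hidden_size)

    # Stand-ins for pooled encoder states of native and translated sentences;
    # in the real setup these would come from the multilingual language model.
    h_native = torch.randn(batch, hidden_size, requires_grad=True)
    h_translated = torch.randn(batch, hidden_size, requires_grad=True)

    h = torch.cat([h_native, h_translated], dim=0)
    labels = torch.cat([torch.zeros(batch), torch.ones(batch)])

    # The scale factor (1.0 here) weights the adversarial term against the
    # event extraction loss in the full objective.
    logits = disc(GradReverse.apply(h, 1.0))
    adv_loss = F.binary_cross_entropy_with_logits(logits, labels)
    adv_loss.backward()  # disc minimizes adv_loss; the encoder, via reversed
                         # gradients, maximizes it, i.e. tries to fool disc.
    print("adversarial loss:", adv_loss.item())
```

In the full training objective this adversarial term would be added to the event extraction loss; per the abstract, it is what makes machine-translated data help reliably, whereas simply mixing in translated data yields unstable performance due to the representational gap.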
{"_id":"3jLPDKDvJ5xvWDEos","bibbaseid":"yu-may-ji-bridgingthegapbetweennativetextandtranslatedtextthroughadversariallearningacasestudyoncrosslingualeventextraction-2023","author_short":["Yu, P.","May, J.","Ji, H."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","title":"Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction","author":[{"propositions":[],"lastnames":["Yu"],"firstnames":["Pengfei"],"suffixes":[]},{"propositions":[],"lastnames":["May"],"firstnames":["Jonathan"],"suffixes":[]},{"propositions":[],"lastnames":["Ji"],"firstnames":["Heng"],"suffixes":[]}],"booktitle":"Findings of the Association for Computational Linguistics: EACL 2023","month":"May","year":"2023","address":"Dubrovnik, Croatia","publisher":"Association for Computational Linguistics","url":"https://aclanthology.org/2023.findings-eacl.57","doi":"10.18653/v1/2023.findings-eacl.57","pages":"754–769","abstract":"Recent research in cross-lingual learning has found that combining large-scale pretrained multilingual language models with machine translation can yield good performance. We explore this idea for cross-lingual event extraction with a new model architecture that jointly encodes a source language input sentence with its translation to the target language during training, and takes a target language sentence with its translation back to the source language as input during evaluation. However, we observe significant representational gap between the native source language texts during training and the texts translated into source language during evaluation, as well as the texts translated into target language during training and the native target language texts during evaluation. This representational gap undermines the effectiveness of cross-lingual transfer learning for event extraction with machine-translated data. In order to mitigate this problem, we propose an adversarial training framework that encourages the language model to produce more similar representations for the translated text and the native text. To be specific, we train the language model such that its hidden representations are able to fool a jointly trained discriminator that distinguishes translated texts' representations from native texts' representations. We conduct experiments on cross-lingual for event extraction across three languages. Results demonstrate that our proposed adversarial training can effectively incorporate machine translation to improve event extraction, while simply adding machine-translated data yields unstable performance due to the representational gap.","bibtex":"@inproceedings{yu-etal-2023-bridging,\n title = \"Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction\",\n author = \"Yu, Pengfei and\n May, Jonathan and\n Ji, Heng\",\n booktitle = \"Findings of the Association for Computational Linguistics: EACL 2023\",\n month = may,\n year = \"2023\",\n address = \"Dubrovnik, Croatia\",\n publisher = \"Association for Computational Linguistics\",\n url = \"https://aclanthology.org/2023.findings-eacl.57\",\n doi = \"10.18653/v1/2023.findings-eacl.57\",\n pages = \"754--769\",\n abstract = \"Recent research in cross-lingual learning has found that combining large-scale pretrained multilingual language models with machine translation can yield good performance. 
We explore this idea for cross-lingual event extraction with a new model architecture that jointly encodes a source language input sentence with its translation to the target language during training, and takes a target language sentence with its translation back to the source language as input during evaluation. However, we observe significant representational gap between the native source language texts during training and the texts translated into source language during evaluation, as well as the texts translated into target language during training and the native target language texts during evaluation. This representational gap undermines the effectiveness of cross-lingual transfer learning for event extraction with machine-translated data. In order to mitigate this problem, we propose an adversarial training framework that encourages the language model to produce more similar representations for the translated text and the native text. To be specific, we train the language model such that its hidden representations are able to fool a jointly trained discriminator that distinguishes translated texts{'} representations from native texts{'} representations. We conduct experiments on cross-lingual for event extraction across three languages. Results demonstrate that our proposed adversarial training can effectively incorporate machine translation to improve event extraction, while simply adding machine-translated data yields unstable performance due to the representational gap.\",\n}\n\n","author_short":["Yu, P.","May, J.","Ji, H."],"key":"yu-etal-2023-bridging","id":"yu-etal-2023-bridging","bibbaseid":"yu-may-ji-bridgingthegapbetweennativetextandtranslatedtextthroughadversariallearningacasestudyoncrosslingualeventextraction-2023","role":"author","urls":{"Paper":"https://aclanthology.org/2023.findings-eacl.57"},"metadata":{"authorlinks":{}},"downloads":5},"bibtype":"inproceedings","biburl":"https://jonmay.github.io/webpage/cutelabname/cutelabname.bib","dataSources":["j3Qzx9HAAC6WtJDHS","5eM3sAccSEpjSDHHQ"],"keywords":[],"search_terms":["bridging","gap","between","native","text","translated","text","through","adversarial","learning","case","study","cross","lingual","event","extraction","yu","may","ji"],"title":"Bridging the Gap between Native Text and Translated Text through Adversarial Learning: A Case Study on Cross-Lingual Event Extraction","year":2023,"downloads":8}