PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. Yang, Y., Zhang, Y., Tar, C., & Baldridge, J. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3685–3690, Hong Kong, China, 2019. Association for Computational Linguistics.
@inproceedings{yang_paws-x_2019,
	address = {Hong Kong, China},
	title = {{PAWS-X}: A Cross-lingual Adversarial Dataset for Paraphrase Identification},
	shorttitle = {{PAWS-X}},
	url = {https://www.aclweb.org/anthology/D19-1382},
	doi = {10.18653/v1/D19-1382},
	abstract = {Most existing work on adversarial data generation focuses on English. For example, PAWS (Paraphrase Adversaries from Word Scrambling) consists of challenging English paraphrase identification pairs from Wikipedia and Quora. We remedy this gap with PAWS-X, a new dataset of 23,659 human translated PAWS evaluation pairs in six typologically distinct languages: French, Spanish, German, Chinese, Japanese, and Korean. We provide baseline numbers for three models with different capacity to capture non-local context and sentence structure, and using different multilingual training and evaluation regimes. Multilingual BERT fine-tuned on PAWS English plus machine-translated data performs the best, with a range of 83.1-90.8 accuracy across the non-English languages and an average accuracy gain of 23\% over the next best model. PAWS-X shows the effectiveness of deep, multilingual pre-training while also leaving considerable headroom as a new challenge to drive multilingual research that better captures structure and contextual information.},
	language = {en},
	urldate = {2025-03-12},
	booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing ({EMNLP-IJCNLP})},
	publisher = {Association for Computational Linguistics},
	author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},
	year = {2019},
	pages = {3685--3690},
}