Improving Formality Style Transfer with Context-Aware Rule Injection. Yao, Z. & Yu, H. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1561–1570, Online, August 2021. Association for Computational Linguistics. Abstract: Models pre-trained on large-scale regular text corpora often do not work well for user-generated data, where the language style differs significantly from mainstream text. Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST) that injects multiple rules into an end-to-end BERT-based encoder and decoder model. CARI learns to select optimal rules based on context. The intrinsic evaluation showed that CARI achieved the new highest performance on the FST benchmark dataset. Our extrinsic evaluation showed that CARI can greatly improve regular pre-trained models' performance on several tweet sentiment analysis tasks. Our contributions are as follows: 1. We propose a new method, CARI, to integrate rules into pre-trained language models; CARI is context-aware and can be trained end-to-end with downstream NLP applications. 2. We achieve new state-of-the-art results for FST on the benchmark GYAFC dataset. 3. We are the first to evaluate FST methods with extrinsic evaluation, specifically on sentiment classification tasks, and we show that CARI outperforms existing rule-based FST approaches for sentiment classification.
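The core idea the abstract describes, applying several candidate informal-to-formal rewrite rules and letting an end-to-end model pick among them by context rather than committing to one hard rewrite up front, can be sketched at the input-construction level. This is a minimal illustrative sketch, not the authors' implementation: the RULES dictionary, the [SEP] separator convention, and the inject_rules helper are all assumptions made for illustration.

```python
# Minimal sketch of context-aware rule injection at the input level.
# NOT the paper's implementation: the rule set, the separator tokens,
# and the scheme of appending every matching rewrite to the source
# sentence are assumptions for illustration only.

# Hypothetical informal -> formal rewrite rules. A single token may map
# to several candidates; the model must choose by context.
RULES = {
    "u": ["you"],
    "r": ["are", "or"],          # ambiguous: context decides
    "gonna": ["going to"],
    "2": ["to", "too", "two"],   # ambiguous: context decides
}

def inject_rules(sentence: str, sep: str = " [SEP] ") -> str:
    """Append every candidate rule rewrite to the source sentence.

    The encoder then sees the original sentence plus all candidate
    rewrites; trained end-to-end, it can learn to attend to the
    rewrite that fits the context instead of applying one rule
    blindly before encoding.
    """
    candidates = []
    for tok in sentence.split():
        for rewrite in RULES.get(tok.lower(), []):
            candidates.append(f"{tok} -> {rewrite}")
    return sentence + (sep + sep.join(candidates) if candidates else "")

print(inject_rules("u r gonna love it"))
# u r gonna love it [SEP] u -> you [SEP] r -> are [SEP] r -> or [SEP] gonna -> going to
```

Per the abstract, the injected input feeds a BERT-based encoder-decoder trained jointly with the downstream task, so rule selection is learned implicitly rather than decided by a separate hard-matching step.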
@inproceedings{yao_improving_2021,
address = {Online},
title = {Improving {Formality} {Style} {Transfer} with {Context}-{Aware} {Rule} {Injection}},
url = {https://aclanthology.org/2021.acl-long.124},
doi = {10.18653/v1/2021.acl-long.124},
	abstract = {Models pre-trained on large-scale regular text corpora often do not work well for user-generated data where the language styles differ significantly from the mainstream text. Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST) by injecting multiple rules into an end-to-end BERT-based encoder and decoder model. CARI is able to learn to select optimal rules based on context. The intrinsic evaluation showed that CARI achieved the new highest performance on the FST benchmark dataset. Our extrinsic evaluation showed that CARI can greatly improve the regular pre-trained models' performance on several tweet sentiment analysis tasks. Our contributions are as follows: 1. We propose a new method, CARI, to integrate rules for pre-trained language models. CARI is context-aware and can be trained end-to-end with the downstream NLP applications. 2. We have achieved new state-of-the-art results for FST on the benchmark GYAFC dataset. 3. We are the first to evaluate FST methods with extrinsic evaluation and specifically on sentiment classification tasks. We show that CARI outperformed existing rule-based FST approaches for sentiment classification.},
urldate = {2021-09-21},
booktitle = {Proceedings of the 59th {Annual} {Meeting} of the {Association} for {Computational} {Linguistics} and the 11th {International} {Joint} {Conference} on {Natural} {Language} {Processing} ({Volume} 1: {Long} {Papers})},
publisher = {Association for Computational Linguistics},
author = {Yao, Zonghai and Yu, Hong},
month = aug,
year = {2021},
pages = {1561--1570},
}
{"_id":"CcYt8uo5WYgtsuWi8","bibbaseid":"yao-yu-improvingformalitystyletransferwithcontextawareruleinjection-2021","author_short":["Yao, Z.","Yu, H."],"bibdata":{"bibtype":"inproceedings","type":"inproceedings","address":"Online","title":"Improving Formality Style Transfer with Context-Aware Rule Injection","url":"https://aclanthology.org/2021.acl-long.124","doi":"10.18653/v1/2021.acl-long.124","abstract":"Models pre-trained on large-scale regular text corpora often do not work well for user-generated data where the language styles differ significantly from the mainstream text. Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST) by injecting multiple rules into an end-to-end BERT-based encoder and decoder model. CARI is able to learn to select optimal rules based on context. The intrinsic evaluation showed that CARI achieved the new highest performance on the FST benchmark dataset. Our extrinsic evaluation showed that CARI can greatly improve the regular pre-trained models' performance on several tweet sentiment analysis tasks. Our contributions are as follows: 1.We propose a new method, CARI, to integrate rules for pre-trained language models. CARI is context-aware and can trained end-to-end with the downstream NLP applications. 2.We have achieved new state-of-the-art results for FST on the benchmark GYAFC dataset. 3.We are the first to evaluate FST methods with extrinsic evaluation and specifically on sentiment classification tasks. We show that CARI outperformed existing rule-based FST approaches for sentiment classification.","urldate":"2021-09-21","booktitle":"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)","publisher":"Association for Computational Linguistics","author":[{"propositions":[],"lastnames":["Yao"],"firstnames":["Zonghai"],"suffixes":[]},{"propositions":[],"lastnames":["Yu"],"firstnames":["Hong"],"suffixes":[]}],"month":"August","year":"2021","pages":"1561–1570","bibtex":"@inproceedings{yao_improving_2021,\n\taddress = {Online},\n\ttitle = {Improving {Formality} {Style} {Transfer} with {Context}-{Aware} {Rule} {Injection}},\n\turl = {https://aclanthology.org/2021.acl-long.124},\n\tdoi = {10.18653/v1/2021.acl-long.124},\n\tabstract = {Models pre-trained on large-scale regular text corpora often do not work well for user-generated data where the language styles differ significantly from the mainstream text. Here we present Context-Aware Rule Injection (CARI), an innovative method for formality style transfer (FST) by injecting multiple rules into an end-to-end BERT-based encoder and decoder model. CARI is able to learn to select optimal rules based on context. The intrinsic evaluation showed that CARI achieved the new highest performance on the FST benchmark dataset. Our extrinsic evaluation showed that CARI can greatly improve the regular pre-trained models' performance on several tweet sentiment analysis tasks. Our contributions are as follows: 1.We propose a new method, CARI, to integrate rules for pre-trained language models. CARI is context-aware and can trained end-to-end with the downstream NLP applications. 2.We have achieved new state-of-the-art results for FST on the benchmark GYAFC dataset. 3.We are the first to evaluate FST methods with extrinsic evaluation and specifically on sentiment classification tasks. 
We show that CARI outperformed existing rule-based FST approaches for sentiment classification.},\n\turldate = {2021-09-21},\n\tbooktitle = {Proceedings of the 59th {Annual} {Meeting} of the {Association} for {Computational} {Linguistics} and the 11th {International} {Joint} {Conference} on {Natural} {Language} {Processing} ({Volume} 1: {Long} {Papers})},\n\tpublisher = {Association for Computational Linguistics},\n\tauthor = {Yao, Zonghai and Yu, Hong},\n\tmonth = aug,\n\tyear = {2021},\n\tpages = {1561--1570},\n}\n\n","author_short":["Yao, Z.","Yu, H."],"key":"yao_improving_2021","id":"yao_improving_2021","bibbaseid":"yao-yu-improvingformalitystyletransferwithcontextawareruleinjection-2021","role":"author","urls":{"Paper":"https://aclanthology.org/2021.acl-long.124"},"metadata":{"authorlinks":{}},"html":""},"bibtype":"inproceedings","biburl":"http://fenway.cs.uml.edu/papers/pubs-all.bib","dataSources":["TqaA9miSB65nRfS5H"],"keywords":[],"search_terms":["improving","formality","style","transfer","context","aware","rule","injection","yao","yu"],"title":"Improving Formality Style Transfer with Context-Aware Rule Injection","year":2021}