Reprint: a randomized extrapolation based on principal components for data augmentation. Wei, J., Chen, Q., Peng, P., Guedj, B., & Li, L. 2022. Submitted.
Abstract: Data scarcity and data imbalance have attracted a lot of attention in many fields. Data augmentation, explored as an effective approach to tackle them, can improve the robustness and efficiency of classification models by generating new samples. This paper presents REPRINT, a simple and effective hidden-space data augmentation method for imbalanced data classification. Given hidden-space representations of samples in each class, REPRINT extrapolates, in a randomized fashion, augmented examples for the target class, using subspaces spanned by principal components to summarize the distribution structure of both the source and target classes. Consequently, the generated examples diversify the target class while preserving the original geometry of the target distribution. In addition, the method includes a label refinement component that synthesizes new soft labels for the augmented examples. Compared with a range of NLP data augmentation approaches under various data-imbalance scenarios on four text classification benchmarks, REPRINT shows clear improvements. Through comprehensive ablation studies, we further show that label refinement outperforms label preservation for augmented examples, and that the method yields stable and consistent improvements across suitable choices of principal components. Finally, REPRINT is easy to use: it has a single hyperparameter, the dimension of the subspace, and requires few computational resources.
@unpublished{wei2022reprint,
title={Reprint: a randomized extrapolation based on principal components for data augmentation},
author={Jiale Wei and Qiyuan Chen and Pai Peng and Benjamin Guedj and Le Li},
year={2022},
note = "Submitted.",
abstract = {Data scarcity and data imbalance have attracted a lot of attention in many fields. Data augmentation, explored as an effective approach to tackle them, can improve the robustness and efficiency of classification models by generating new samples. This paper presents REPRINT, a simple and effective hidden-space data augmentation method for imbalanced data classification. Given hidden-space representations of samples in each class, REPRINT extrapolates, in a randomized fashion, augmented examples for the target class, using subspaces spanned by principal components to summarize the distribution structure of both the source and target classes. Consequently, the generated examples diversify the target class while preserving the original geometry of the target distribution. In addition, the method includes a label refinement component that synthesizes new soft labels for the augmented examples. Compared with a range of NLP data augmentation approaches under various data-imbalance scenarios on four text classification benchmarks, REPRINT shows clear improvements. Through comprehensive ablation studies, we further show that label refinement outperforms label preservation for augmented examples, and that the method yields stable and consistent improvements across suitable choices of principal components. Finally, REPRINT is easy to use: it has a single hyperparameter, the dimension of the subspace, and requires few computational resources.},
url = {https://arxiv.org/abs/2204.12024},
url_PDF = {https://arxiv.org/pdf/2204.12024.pdf},
url_Code = {https://github.com/bigdata-ccnu/REPRINT},
eprint={2204.12024},
archivePrefix={arXiv},
primaryClass={cs.CL},
keywords={mine}
}
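
The abstract outlines the core mechanism: difference vectors between source-class and target-class hidden representations are projected onto a low-dimensional subspace spanned by principal components and used to extrapolate new target examples, with soft labels adjusted accordingly. The following NumPy sketch is only a rough illustration of that idea, not the authors' released implementation (see the url_Code repository in the BibTeX above); the helper names pca_subspace, extrapolate and refine_labels, the subspace dimension k, the step-size range scale, and the label-mixing rule are all assumptions made for this example.

import numpy as np

def pca_subspace(X, k):
    # Top-k principal directions (rows) of the embedding matrix X,
    # obtained from an SVD of the centred data.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k]                                   # shape (k, d)

def extrapolate(target_X, source_X, k=5, n_new=100, scale=0.5, seed=0):
    # Randomized extrapolation of new target-class representations along the
    # principal subspace of the pooled source/target embeddings (assumed scheme).
    rng = np.random.default_rng(seed)
    P = pca_subspace(np.vstack([target_X, source_X]), k)        # (k, d)
    src = source_X[rng.integers(len(source_X), size=n_new)]
    tgt = target_X[rng.integers(len(target_X), size=n_new)]
    diff_proj = (tgt - src) @ P.T @ P               # keep only the in-subspace part
    lam = rng.uniform(0.0, scale, size=(n_new, 1))  # random extrapolation steps
    return tgt + lam * diff_proj, lam

def refine_labels(y_target, y_source, lam):
    # Soft-label refinement (assumed rule): mix one-hot labels in proportion
    # to the extrapolation step instead of keeping the target label unchanged.
    return (1.0 - lam) * y_target + lam * y_source

For example, with sentence embeddings of shape (n, 768), extrapolate(minority_X, majority_X, k=5) would return 100 synthetic minority-class vectors together with the mixing weights that refine_labels uses to produce their soft labels.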
{"_id":"sdbWzGKTEZWNAYAQB","bibbaseid":"wei-chen-peng-guedj-li-reprintarandomizedextrapolationbasedonprincipalcomponentsfordataaugmentation-2022","author_short":["Wei, J.","Chen, Q.","Peng, P.","Guedj, B.","Li, L."],"bibdata":{"bibtype":"unpublished","type":"unpublished","title":"Reprint: a randomized extrapolation based on principal components for data augmentation","author":[{"firstnames":["Jiale"],"propositions":[],"lastnames":["Wei"],"suffixes":[]},{"firstnames":["Qiyuan"],"propositions":[],"lastnames":["Chen"],"suffixes":[]},{"firstnames":["Pai"],"propositions":[],"lastnames":["Peng"],"suffixes":[]},{"firstnames":["Benjamin"],"propositions":[],"lastnames":["Guedj"],"suffixes":[]},{"firstnames":["Le"],"propositions":[],"lastnames":["Li"],"suffixes":[]}],"year":"2022","note":"Submitted.","abstract":"Data scarcity and data imbalance have attracted a lot of attention in many fields. Data augmentation, explored as an effective approach to tackle them, can improve the robustness and efficiency of classification models by generating new samples. This paper presents REPRINT, a simple and effective hidden-space data augmentation method for imbalanced data classification. Given hidden-space representations of samples in each class, REPRINT extrapolates, in a randomized fashion, augmented examples for target class by using subspaces spanned by principal components to summarize distribution structure of both source and target class. Consequently, the examples generated would diversify the target while maintaining the original geometry of target distribution. Besides, this method involves a label refinement component which allows to synthesize new soft labels for augmented examples. Compared with different NLP data augmentation approaches under a range of data imbalanced scenarios on four text classification benchmark, REPRINT shows prominent improvements. Moreover, through comprehensive ablation studies, we show that label refinement is better than label-preserving for augmented examples, and that our method suggests stable and consistent improvements in terms of suitable choices of principal components. Moreover, REPRINT is appealing for its easy-to-use since it contains only one hyperparameter determining the dimension of subspace and requires low computational resource.","url":"https://arxiv.org/abs/2204.12024","url_pdf":"https://arxiv.org/pdf/2204.12024.pdf","url_code":"https://github.com/bigdata-ccnu/REPRINT","eprint":"2204.12024","archiveprefix":"arXiv","primaryclass":"cs.CL","keywords":"mine","bibtex":"@unpublished{wei2022reprint,\ntitle={Reprint: a randomized extrapolation based on principal components for data augmentation},\nauthor={Jiale Wei and Qiyuan Chen and Pai Peng and Benjamin Guedj and Le Li},\nyear={2022},\nnote = \"Submitted.\",\nabstract = {Data scarcity and data imbalance have attracted a lot of attention in many fields. Data augmentation, explored as an effective approach to tackle them, can improve the robustness and efficiency of classification models by generating new samples. This paper presents REPRINT, a simple and effective hidden-space data augmentation method for imbalanced data classification. Given hidden-space representations of samples in each class, REPRINT extrapolates, in a randomized fashion, augmented examples for target class by using subspaces spanned by principal components to summarize distribution structure of both source and target class. 
Consequently, the examples generated would diversify the target while maintaining the original geometry of target distribution. Besides, this method involves a label refinement component which allows to synthesize new soft labels for augmented examples. Compared with different NLP data augmentation approaches under a range of data imbalanced scenarios on four text classification benchmark, REPRINT shows prominent improvements. Moreover, through comprehensive ablation studies, we show that label refinement is better than label-preserving for augmented examples, and that our method suggests stable and consistent improvements in terms of suitable choices of principal components. Moreover, REPRINT is appealing for its easy-to-use since it contains only one hyperparameter determining the dimension of subspace and requires low computational resource.},\nurl = {https://arxiv.org/abs/2204.12024},\nurl_PDF = {https://arxiv.org/pdf/2204.12024.pdf},\nurl_Code = {https://github.com/bigdata-ccnu/REPRINT},\neprint={2204.12024},\narchivePrefix={arXiv},\nprimaryClass={cs.CL},\nkeywords={mine}\n}\n\n","author_short":["Wei, J.","Chen, Q.","Peng, P.","Guedj, B.","Li, L."],"key":"wei2022reprint","id":"wei2022reprint","bibbaseid":"wei-chen-peng-guedj-li-reprintarandomizedextrapolationbasedonprincipalcomponentsfordataaugmentation-2022","role":"author","urls":{"Paper":"https://arxiv.org/abs/2204.12024"," pdf":"https://arxiv.org/pdf/2204.12024.pdf"," code":"https://github.com/bigdata-ccnu/REPRINT"},"keyword":["mine"],"metadata":{"authorlinks":{}},"html":""},"bibtype":"unpublished","biburl":"https://bguedj.github.io/files/bguedj-publications.bib","dataSources":["suE7RgYeZEnSYr5Fy"],"keywords":["mine"],"search_terms":["reprint","randomized","extrapolation","based","principal","components","data","augmentation","wei","chen","peng","guedj","li"],"title":"Reprint: a randomized extrapolation based on principal components for data augmentation","year":2022}