Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative. Dery, L. M., Michel, P., Talwalkar, A., & Neubig, G. February, 2022. arXiv:2109.07437 [cs]
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued-pretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.
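The abstract describes two training schemes: fixed multi-tasking of the end-task with auxiliary objectives, and an online meta-learning variant that learns the mixing weights. The sketch below is a minimal, illustrative PyTorch example of how such a weighted multi-task loss could be wired up, not the authors' implementation; all class and variable names (WeightedMultiTaskLoss, aux_logits, etc.) are assumptions.

	# A minimal sketch, assuming a PyTorch setup, of the weighted multi-task
	# objective the abstract describes: the end-task loss is optimized jointly
	# with auxiliary objectives whose weights are learnable. Illustrative only.
	import torch
	import torch.nn as nn

	class WeightedMultiTaskLoss(nn.Module):
	    """Combine an end-task loss with auxiliary losses via learnable weights."""

	    def __init__(self, num_aux_tasks: int):
	        super().__init__()
	        # One logit per auxiliary task; softmax keeps the weights positive
	        # and summing to one.
	        self.aux_logits = nn.Parameter(torch.zeros(num_aux_tasks))

	    def forward(self, end_task_loss, aux_losses):
	        aux_weights = torch.softmax(self.aux_logits, dim=0)
	        # The end-task loss always enters with weight 1; only the auxiliary
	        # objectives are reweighted.
	        return end_task_loss + (aux_weights * aux_losses).sum()

	# Dummy scalar losses standing in for real task losses.
	mixer = WeightedMultiTaskLoss(num_aux_tasks=2)
	end_task_loss = torch.tensor(0.7)
	aux_losses = torch.tensor([1.2, 0.9])
	total_loss = mixer(end_task_loss, aux_losses)

	# Here the weight logits receive gradient from the combined loss. In the
	# meta-learning variant the abstract mentions, their update would instead
	# be derived from the end-task loss on held-out data (a meta-gradient),
	# while the model parameters are trained on the combined loss as usual.
	weight_optimizer = torch.optim.SGD(mixer.parameters(), lr=0.1)
	total_loss.backward()
	weight_optimizer.step()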
@misc{dery_should_2022,
	title = {Should {We} {Be} {Pre}-training? {An} {Argument} for {End}-task {Aware} {Training} as an {Alternative}},
	shorttitle = {Should {We} {Be} {Pre}-training?},
	url = {http://arxiv.org/abs/2109.07437},
	doi = {10.48550/arXiv.2109.07437},
	abstract = {In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued-pretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.},
	urldate = {2023-02-06},
	publisher = {arXiv},
	author = {Dery, Lucio M. and Michel, Paul and Talwalkar, Ameet and Neubig, Graham},
	month = feb,
	year = {2022},
	note = {arXiv:2109.07437 [cs]},
	keywords = {Computer Science - Computation and Language, Computer Science - Machine Learning},
}
