Should We Be Pre-training? An Argument for End-task Aware Training as an Alternative. Dery, L. M., Michel, P., Talwalkar, A., & Neubig, G. February, 2022. arXiv:2109.07437 [cs]
In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued-pretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.
@misc{dery_should_2022,
title = {Should {We} {Be} {Pre}-training? {An} {Argument} for {End}-task {Aware} {Training} as an {Alternative}},
shorttitle = {Should {We} {Be} {Pre}-training?},
url = {http://arxiv.org/abs/2109.07437},
doi = {10.48550/arXiv.2109.07437},
abstract = {In most settings of practical concern, machine learning practitioners know in advance what end-task they wish to boost with auxiliary tasks. However, widely used methods for leveraging auxiliary data like pre-training and its continued-pretraining variant are end-task agnostic: they rarely, if ever, exploit knowledge of the target task. We study replacing end-task agnostic continued training of pre-trained language models with end-task aware training of said models. We argue that for sufficiently important end-tasks, the benefits of leveraging auxiliary data in a task-aware fashion can justify forgoing the traditional approach of obtaining generic, end-task agnostic representations as with (continued) pre-training. On three different low-resource NLP tasks from two domains, we demonstrate that multi-tasking the end-task and auxiliary objectives results in significantly better downstream task performance than the widely-used task-agnostic continued pre-training paradigm of Gururangan et al. (2020). We next introduce an online meta-learning algorithm that learns a set of multi-task weights to better balance among our multiple auxiliary objectives, achieving further improvements on end-task performance and data efficiency.},
urldate = {2023-02-06},
publisher = {arXiv},
author = {Dery, Lucio M. and Michel, Paul and Talwalkar, Ameet and Neubig, Graham},
month = feb,
year = {2022},
note = {arXiv:2109.07437 [cs]},
keywords = {Computer Science - Computation and Language, Computer Science - Machine Learning},
}
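The core recipe described in the abstract, jointly optimizing the end-task loss together with a weighted sum of auxiliary losses, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the encoder, task heads, and synthetic batches are stand-ins, and the auxiliary-task weights are left fixed here, whereas the paper learns them online with a meta-learning algorithm.

# Hypothetical sketch of end-task aware multi-task training.
# One shared encoder, one head per task; head 0 is the end-task.
import torch
import torch.nn as nn

torch.manual_seed(0)

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
heads = nn.ModuleList([nn.Linear(32, 2) for _ in range(3)])
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(heads.parameters()), lr=1e-3
)

# Auxiliary-task weights: fixed logits here; the paper meta-learns them online
# to better balance the auxiliary objectives against the end-task.
aux_logits = torch.zeros(2)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # Synthetic batches standing in for end-task and auxiliary data.
    batches = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(3)]

    aux_weights = torch.softmax(aux_logits, dim=0)

    # End-task loss always carries weight 1; auxiliary losses are down- or
    # up-weighted by the (here static) softmax weights.
    x, y = batches[0]
    loss = loss_fn(heads[0](encoder(x)), y)
    for i, (x, y) in enumerate(batches[1:]):
        loss = loss + aux_weights[i] * loss_fn(heads[i + 1](encoder(x)), y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()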