Balancing Average and Worst-case Accuracy in Multitask Learning. Michel, P., Ruder, S., & Yogatama, D. 2021. arXiv:2110.05838. Under review.
@misc{michel2021balancing,
abstract = {When training and evaluating machine learning models on a large number of
tasks, it is important to not only look at average task accuracy -- which may
be biased by easy or redundant tasks -- but also worst-case accuracy (i.e. the
performance on the task with the lowest accuracy). In this work, we show how to
use techniques from the distributionally robust optimization (DRO) literature
to improve worst-case performance in multitask learning. We highlight several
failure cases of DRO when applied off-the-shelf and present an improved method,
Lookahead-DRO (L-DRO), which mitigates these issues. The core idea of L-DRO is
to anticipate the interaction between tasks during training in order to choose
a dynamic re-weighting of the various task losses, which will (i) lead to
minimal worst-case loss and (ii) train on as many tasks as possible. After
demonstrating the efficacy of L-DRO on a small controlled synthetic setting, we
evaluate it on two realistic benchmarks: a multitask version of the CIFAR-100
image classification dataset and a large-scale multilingual language modeling
experiment. Our empirical results show that L-DRO achieves a better trade-off
between average and worst-case accuracy with little computational overhead
compared to several strong baselines.},
author = {Michel, Paul and Ruder, Sebastian and Yogatama, Dani},
keywords = {multitask},
 note = {arXiv:2110.05838. Under review.},
title = {Balancing Average and Worst-case Accuracy in Multitask Learning},
 eprint = {2110.05838},
 archiveprefix = {arXiv},
 url = {http://arxiv.org/abs/2110.05838},
year = 2021
}
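As a rough illustration of the idea described in the abstract — dynamically re-weighting task losses so that the worst-performing task gets more training signal — the sketch below uses a generic multiplicative-weights update from the DRO literature. This is not the authors' L-DRO algorithm (which additionally anticipates inter-task interactions via a lookahead step); the function name, step size `eta`, and the example losses are all hypothetical.

```python
import math

def dro_reweight(weights, losses, eta=0.5):
    """One multiplicative-weights step: tasks with higher loss
    receive exponentially larger weight, then weights are renormalized.
    This is a generic DRO-style update, not L-DRO itself."""
    scaled = [w * math.exp(eta * l) for w, l in zip(weights, losses)]
    total = sum(scaled)
    return [w / total for w in scaled]

# Hypothetical per-task losses: the third task is lagging behind.
weights = [1 / 3, 1 / 3, 1 / 3]
losses = [0.2, 0.3, 1.5]
weights = dro_reweight(weights, losses)
# The weighted training objective would then be sum_i weights[i] * loss_i,
# so the hardest task dominates the next gradient step.
```

The weighted loss `sum(w * l for w, l in zip(weights, losses))` upper-bounds the uniform average, which is why pure DRO can over-focus on a single hard task — the failure mode L-DRO's lookahead re-weighting is designed to mitigate.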