Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Finn, C., Abbeel, P., & Levine, S. July, 2017. arXiv:1703.03400 [cs]
Abstract: We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
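The meta-training loop the abstract describes can be sketched in a few lines: adapt a copy of the parameters with one gradient step on a task's support set, then update the meta-parameters against the post-adaptation loss on a query set. The sketch below uses a toy one-parameter linear regression and a first-order approximation of the meta-gradient (FOMAML-style, rather than the full second-order update in the paper); the step sizes, task distribution, and model are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_batch(slope, n=10):
    # Each "task" is a 1-D linear regression y = slope * x (hypothetical task family).
    x = rng.uniform(-1.0, 1.0, n)
    return x, slope * x

def loss_grad(theta, x, y):
    # Squared-error loss and its gradient for the model f(x) = theta * x.
    err = theta * x - y
    return np.mean(err**2), np.mean(2.0 * err * x)

alpha, beta = 0.1, 0.01   # inner-loop and meta step sizes (illustrative values)
theta = 0.0               # meta-parameters, trained to be easy to fine-tune

for step in range(2000):
    slope = rng.uniform(-2.0, 2.0)        # sample a task from the task distribution
    xs, ys = task_batch(slope)            # support set: small amount of training data
    xq, yq = task_batch(slope)            # query set: measures post-adaptation generalization
    _, g = loss_grad(theta, xs, ys)
    theta_prime = theta - alpha * g       # one adaptation step on the support set
    _, g_meta = loss_grad(theta_prime, xq, yq)
    theta = theta - beta * g_meta         # first-order meta-update on the query loss
```

Because the meta-objective is the loss *after* adaptation, the learned initialization is one for which a single small gradient step already yields a large improvement on a new task.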
@misc{finn_model-agnostic_2017,
title = {Model-{Agnostic} {Meta}-{Learning} for {Fast} {Adaptation} of {Deep} {Networks}},
url = {http://arxiv.org/abs/1703.03400},
doi = {10.48550/arXiv.1703.03400},
abstract = {We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on two few-shot image classification benchmarks, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.},
language = {en},
urldate = {2023-07-05},
publisher = {arXiv},
author = {Finn, Chelsea and Abbeel, Pieter and Levine, Sergey},
month = jul,
year = {2017},
note = {arXiv:1703.03400 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing},
}