Three scenarios for continual learning. van de Ven, G. M. & Tolias, A. S. arXiv:1904.07734 [cs, stat], April 2019.

Abstract: Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and, in case it is not, whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.
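The distinction between the three scenarios is easiest to see at test time. Below is a minimal PyTorch sketch, not the authors' code, of how prediction could differ for split MNIST (five tasks of two digits each) using a single shared 10-unit output layer; `SimpleNet`, `predict`, and the marginalization used for the domain-incremental case are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """Hypothetical classifier with one output unit per MNIST digit."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.head = nn.Linear(256, 10)

    def forward(self, x):
        return self.head(self.body(x))

def predict(model, x, scenario, task_id=None):
    logits = model(x)  # shape: (batch, 10)
    if scenario == "task-IL":
        # Task identity is provided at test time: choose only among the
        # two digits belonging to that task.
        active = torch.tensor([2 * task_id, 2 * task_id + 1])
        return active[logits[:, active].argmax(dim=1)]
    if scenario == "domain-IL":
        # Task identity is neither provided nor inferred: the model only
        # reports the within-task label (first or second digit). Here that
        # is approximated by marginalizing class probabilities over tasks.
        probs = logits.softmax(dim=1).view(-1, 5, 2)
        return probs.sum(dim=1).argmax(dim=1)  # 0 or 1
    if scenario == "class-IL":
        # Task identity must be inferred: choose among all ten digits,
        # including digits that were never trained on together.
        return logits.argmax(dim=1)
    raise ValueError(f"unknown scenario: {scenario}")

model = SimpleNet()
x = torch.randn(4, 1, 28, 28)  # a fake batch of MNIST-sized images
print(predict(model, x, "task-IL", task_id=2))  # labels in {4, 5}
print(predict(model, x, "domain-IL"))           # labels in {0, 1}
print(predict(model, x, "class-IL"))            # labels in {0, ..., 9}
```

One way to read the difficulty ordering the abstract reports: task-IL restricts each decision to two known classes, while class-IL forces a choice over all classes across tasks, which is where the authors find regularization-based methods such as elastic weight consolidation break down and replay appears necessary.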
@article{van_de_ven_three_2019,
title = {Three scenarios for continual learning},
url = {http://arxiv.org/abs/1904.07734},
abstract = {Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and--in case it is not--whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.},
urldate = {2022-03-19},
journal = {arXiv:1904.07734 [cs, stat]},
author = {van de Ven, Gido M. and Tolias, Andreas S.},
month = apr,
year = {2019},
note = {arXiv: 1904.07734},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Statistics - Machine Learning},
}
{"_id":"ijbpvr9QTWr97kLkK","bibbaseid":"vandeven-tolias-threescenariosforcontinuallearning-2019","authorIDs":[],"author_short":["van de Ven, G. M.","Tolias, A. S."],"bibdata":{"bibtype":"article","type":"article","title":"Three scenarios for continual learning","url":"http://arxiv.org/abs/1904.07734","abstract":"Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and–in case it is not–whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.","urldate":"2022-03-19","journal":"arXiv:1904.07734 [cs, stat]","author":[{"propositions":["van","de"],"lastnames":["Ven"],"firstnames":["Gido","M."],"suffixes":[]},{"propositions":[],"lastnames":["Tolias"],"firstnames":["Andreas","S."],"suffixes":[]}],"month":"April","year":"2019","note":"arXiv: 1904.07734","keywords":"Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Statistics - Machine Learning","bibtex":"@article{van_de_ven_three_2019,\n\ttitle = {Three scenarios for continual learning},\n\turl = {http://arxiv.org/abs/1904.07734},\n\tabstract = {Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and--in case it is not--whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.},\n\turldate = {2022-03-19},\n\tjournal = {arXiv:1904.07734 [cs, stat]},\n\tauthor = {van de Ven, Gido M. 
and Tolias, Andreas S.},\n\tmonth = apr,\n\tyear = {2019},\n\tnote = {arXiv: 1904.07734},\n\tkeywords = {Computer Science - Artificial Intelligence, Computer Science - Computer Vision and Pattern Recognition, Computer Science - Machine Learning, Statistics - Machine Learning},\n}\n\n\n\n","author_short":["van de Ven, G. M.","Tolias, A. S."],"key":"van_de_ven_three_2019","id":"van_de_ven_three_2019","bibbaseid":"vandeven-tolias-threescenariosforcontinuallearning-2019","role":"author","urls":{"Paper":"http://arxiv.org/abs/1904.07734"},"keyword":["Computer Science - Artificial Intelligence","Computer Science - Computer Vision and Pattern Recognition","Computer Science - Machine Learning","Statistics - Machine Learning"],"metadata":{"authorlinks":{}},"downloads":0,"html":""},"bibtype":"article","biburl":"https://bibbase.org/zotero/mh_lenguyen","creationDate":"2021-02-12T21:37:01.481Z","downloads":0,"keywords":["computer science - artificial intelligence","computer science - computer vision and pattern recognition","computer science - machine learning","statistics - machine learning"],"search_terms":["three","scenarios","continual","learning","van de ven","tolias"],"title":"Three scenarios for continual learning","year":2019,"dataSources":["qLJ7Ld8T2ZKybATHB","iwKepCrWBps7ojhDx"]}