Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models. Ahrens, K., Lehmann, H. H., Lee, J. H., & Wermter, S. arXiv preprint arXiv:2312.08888, 2023.
Abstract: We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.
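The abstract names the key ingredients: a frozen pre-trained backbone, features read from several intermediate layers rather than only the final one, class prototypes, second-order feature statistics, and no replay buffer. The PyTorch sketch below illustrates that general recipe only; it is not the authors' exact LayUP algorithm. The backbone choice (a timm ViT), the number of tapped layers k, the use of per-block [CLS] tokens, and the ridge strength lam are assumptions made purely for illustration.

# Minimal sketch (not the paper's exact method): class prototypes from the
# [CLS] tokens of the last k transformer blocks of a frozen ViT, scored with
# ridge-regularized second-order (Gram) statistics. Backbone, k, and lam are
# illustrative assumptions.
import torch
import timm

model = timm.create_model("vit_base_patch16_224", pretrained=False).eval()
k = 4                                      # tap the last k blocks (assumption)
feats = []                                 # filled by forward hooks

def make_hook(store):
    def hook(_module, _inputs, output):
        store.append(output[:, 0])         # [CLS] token of this block: (B, D)
    return hook

for blk in model.blocks[-k:]:              # hook the last k transformer blocks
    blk.register_forward_hook(make_hook(feats))

@torch.no_grad()
def extract(x):
    feats.clear()
    model(x)                               # hooks collect per-block [CLS] tokens
    return torch.cat(feats, dim=1)         # (B, k*D) concatenated multi-layer feature

num_classes = 10
dim = k * model.embed_dim
G = torch.zeros(dim, dim)                  # Gram matrix (second-order statistics)
C = torch.zeros(dim, num_classes)          # per-class prototype accumulators

@torch.no_grad()
def update(x, y):                          # incremental update, no replay buffer
    f = extract(x)
    G.add_(f.T @ f)
    C.index_add_(1, y, f.T)                # add each feature to its class column

@torch.no_grad()
def predict(x, lam=1.0):
    f = extract(x)
    W = torch.linalg.solve(G + lam * torch.eye(dim), C)   # ridge-regularized readout
    return (f @ W).argmax(dim=1)

# Toy usage with random data (pretrained weights would be used in practice):
x = torch.randn(2, 3, 224, 224)
y = torch.randint(0, num_classes, (2,))
update(x, y)
print(predict(x))

In a continual-learning loop one would call update once per batch of each incoming task; only G and C are carried across tasks, which is consistent with the abstract's claim of a small memory footprint compared with replay-based baselines.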
@article{ahrens_read_2023,
title = {Read {{Between}} the {{Layers}}: {{Leveraging Intra-Layer Representations}} for {{Rehearsal-Free Continual Learning}} with {{Pre-Trained Models}}},
shorttitle = {Read {{Between}} the {{Layers}}},
author = {Ahrens, Kyra and Lehmann, Hans Hergen and Lee, Jae Hee and Wermter, Stefan},
year = {2023},
eprint = {2312.08888},
primaryclass = {cs},
publisher = {arXiv},
urldate = {2023-12-18},
abstract = {We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.},
archiveprefix = {arXiv},
journal = {arXiv preprint arXiv:2312.08888}
}
{"_id":"kR4Le7C77Ktib5N8s","bibbaseid":"ahrens-lehmann-lee-wermter-readbetweenthelayersleveragingintralayerrepresentationsforrehearsalfreecontinuallearningwithpretrainedmodels-2023","author_short":["Ahrens, K.","Lehmann, H. H.","Lee, J. H.","Wermter, S."],"bibdata":{"bibtype":"article","type":"article","title":"Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models","shorttitle":"Read Between the Layers","author":[{"propositions":[],"lastnames":["Ahrens"],"firstnames":["Kyra"],"suffixes":[]},{"propositions":[],"lastnames":["Lehmann"],"firstnames":["Hans","Hergen"],"suffixes":[]},{"propositions":[],"lastnames":["Lee"],"firstnames":["Jae","Hee"],"suffixes":[]},{"propositions":[],"lastnames":["Wermter"],"firstnames":["Stefan"],"suffixes":[]}],"year":"2023","eprint":"2312.08888","primaryclass":"cs","publisher":"arXiv","urldate":"2023-12-18","abstract":"We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.","archiveprefix":"arxiv","file":"/Users/jae/Zotero/storage/CSPUZH4W/Ahrens et al. - 2023 - Read Between the Layers Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learnin.pdf","journal":"arXiv preprint arXiv:2312.08888","bibtex":"@article{ahrens_read_2023,\n title = {Read {{Between}} the {{Layers}}: {{Leveraging Intra-Layer Representations}} for {{Rehearsal-Free Continual Learning}} with {{Pre-Trained Models}}},\n shorttitle = {Read {{Between}} the {{Layers}}},\n author = {Ahrens, Kyra and Lehmann, Hans Hergen and Lee, Jae Hee and Wermter, Stefan},\n year = {2023},\n eprint = {2312.08888},\n primaryclass = {cs},\n publisher = {arXiv},\n urldate = {2023-12-18},\n abstract = {We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. 
However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.},\n archiveprefix = {arxiv},\n file = {/Users/jae/Zotero/storage/CSPUZH4W/Ahrens et al. - 2023 - Read Between the Layers Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learnin.pdf},\n journal = {arXiv preprint arXiv:2312.08888}\n}\n\n","author_short":["Ahrens, K.","Lehmann, H. H.","Lee, J. H.","Wermter, S."],"bibbaseid":"ahrens-lehmann-lee-wermter-readbetweenthelayersleveragingintralayerrepresentationsforrehearsalfreecontinuallearningwithpretrainedmodels-2023","role":"author","urls":{},"metadata":{"authorlinks":{}}},"bibtype":"article","biburl":"https://bibbase.org/f/LH9Dpt57nicKPPtLv/Exported Items.bib","dataSources":["cwGdKRqJNaMgaAhQN","cEc4uKmbByTGnEFSd","QLKcoK33WeKznwHFz","3aNkZpB96uqzFFxLW"],"keywords":[],"search_terms":["read","between","layers","leveraging","intra","layer","representations","rehearsal","free","continual","learning","pre","trained","models","ahrens","lehmann","lee","wermter"],"title":"Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models","year":2023}