Read Between the Layers: Leveraging Intra-Layer Representations for Rehearsal-Free Continual Learning with Pre-Trained Models. Ahrens, K., Lehmann, H. H., Lee, J. H., & Wermter, S. arXiv preprint arXiv:2312.08888, 2023.
We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.
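The abstract describes the method only at a high level. Below is a minimal sketch of that general idea, not the paper's exact LayUP algorithm: concatenate features from several intermediate layers of a frozen pre-trained backbone, accumulate second-order statistics (a Gram matrix) together with per-class prototype sums, and classify with a ridge-regularised linear readout. The toy backbone, the choice of tapped layers, the regulariser lam, and the classification rule are illustrative assumptions.

# Minimal sketch, assuming PyTorch; not the paper's exact formulation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a frozen pre-trained backbone (e.g. a ViT); a small MLP here.
backbone = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 16),
)
backbone.eval()
for p in backbone.parameters():
    p.requires_grad_(False)

# Tap intermediate layers, not only the final embedding (assumed layer choice).
tapped = [backbone[1], backbone[3], backbone[4]]
feats = {}
for i, layer in enumerate(tapped):
    layer.register_forward_hook(lambda m, inp, out, i=i: feats.__setitem__(i, out))

def multilayer_features(x):
    """Run the frozen backbone and concatenate the tapped representations."""
    with torch.no_grad():
        backbone(x)
    return torch.cat([feats[i] for i in range(len(tapped))], dim=1)

num_classes, lam = 10, 1.0                               # lam: assumed ridge term
dim = multilayer_features(torch.zeros(1, 32)).shape[1]   # concatenated feature size
G = torch.zeros(dim, dim)                                # running Gram matrix (2nd-order stats)
C = torch.zeros(dim, num_classes)                        # running class-prototype sums

def update(x, y):
    """Accumulate statistics for a batch of the current task (no replay buffer)."""
    h = multilayer_features(x)
    G.add_(h.T @ h)
    C.index_add_(1, y, h.T)

def predict(x):
    """Classify via W = (G + lam * I)^-1 C, i.e. a regularised least-squares readout."""
    h = multilayer_features(x)
    W = torch.linalg.solve(G + lam * torch.eye(dim), C)
    return (h @ W).argmax(dim=1)

# Toy class-incremental run: two sequential tasks with disjoint label sets.
for task_classes in ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9]):
    x = torch.randn(100, 32)
    y = torch.tensor(task_classes)[torch.randint(0, len(task_classes), (100,))]
    update(x, y)
print(predict(torch.randn(4, 32)))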
@article{ahrens_read_2023,
  title = {Read {{Between}} the {{Layers}}: {{Leveraging Intra-Layer Representations}} for {{Rehearsal-Free Continual Learning}} with {{Pre-Trained Models}}},
  shorttitle = {Read {{Between}} the {{Layers}}},
  author = {Ahrens, Kyra and Lehmann, Hans Hergen and Lee, Jae Hee and Wermter, Stefan},
  year = {2023},
  eprint = {2312.08888},
  primaryclass = {cs},
  publisher = {arXiv},
  urldate = {2023-12-18},
  abstract = {We address the Continual Learning (CL) problem, where a model has to learn a sequence of tasks from non-stationary distributions while preserving prior knowledge as it encounters new experiences. With the advancement of foundation models, CL research has shifted focus from the initial learning-from-scratch paradigm to the use of generic features from large-scale pre-training. However, existing approaches to CL with pre-trained models only focus on separating the class-specific features from the final representation layer and neglect the power of intermediate representations that capture low- and mid-level features naturally more invariant to domain shifts. In this work, we propose LayUP, a new class-prototype-based approach to continual learning that leverages second-order feature statistics from multiple intermediate layers of a pre-trained network. Our method is conceptually simple, does not require any replay buffer, and works out of the box with any foundation model. LayUP improves over the state-of-the-art on four of the seven class-incremental learning settings at a considerably reduced memory and computational footprint compared with the next best baseline. Our results demonstrate that fully exhausting the representational capacities of pre-trained models in CL goes far beyond their final embeddings.},
  archiveprefix = {arxiv},
  journal = {arXiv preprint arXiv:2312.08888}
}