Awesome Incremental Learning with Pre-trained Models

📝 🎉 A curated list of awesome papers for incremental learning with pre-trained models

🤗 Contributing

🚀 Feel free to contact me or open a pull request if you find 👀 that any interesting paper is missing.

We Need You!

📋 Markdown format:

- Paper Name. (**Conference Year**) [[paper](link)] [[code](link)]
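For example, an entry for "Learning to Prompt for Continual Learning" (listed under 2022 below) would follow the template like this, with each `link` placeholder replaced by the actual paper or code URL:

- Learning to Prompt for Continual Learning. (**CVPR 2022**) [[paper](link)] [[code](link)]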

🛠️ Toolbox

  • PILOT: A Pre-Trained Model-Based Continual Learning Toolbox (arXiv23) [paper] [code]

📝 Survey

  • Continual Learning with Pre-Trained Models: A Survey (arXiv23) [paper] [code]
  • Deep Class-Incremental Learning: A Survey (arXiv23) [paper] [code]

📑 Papers

2023

  • Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (arXiv23) [paper] [code]
  • PromptFusion: Decoupling Stability and Plasticity for Continual Learning (arXiv23) [paper]
  • CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning (CVPR23) [paper] [code]
  • Isolation and Impartial Aggregation: A Paradigm of Incremental Learning without Interference (AAAI23) [paper] [code]
  • PIVOT: Prompting for Video Continual Learning (CVPR23) [paper]
  • DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning (ICML23) [paper]
  • Learning Expressive Prompting With Residuals for Vision Transformers (CVPR23) [paper]
  • Multimodal Parameter-Efficient Few-Shot Class Incremental Learning (arXiv23) [paper]
  • Real-Time Evaluation in Online Continual Learning: A New Hope (CVPR23 Highlight) [paper] [code]
  • Remind of the Past: Incremental Learning with Analogical Prompts (arXiv23) [paper] [code]
  • On the Usage of Continual Learning for Out-of-Distribution Generalization in Pre-trained Language Models of Code (arXiv23) [paper]
  • AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning (CVPR23) [paper]
  • Incrementer: Transformer for Class-Incremental Semantic Segmentation With Knowledge Distillation Focusing on Old Class (CVPR23) [paper]
  • Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation (CVPR23) [paper]
  • Continual Detection Transformer for Incremental Object Detection (CVPR23) [paper] [code]
  • Principles of Forgetting in Domain-Incremental Semantic Segmentation in Adverse Weather Conditions (CVPR23) [paper]
  • Computationally Budgeted Continual Learning: What Does Matter? (CVPR23) [paper] [code]
  • Unsupervised Continual Semantic Adaptation through Neural Rendering (CVPR23) [paper]
  • ConStruct-VL: Data-Free Continual Structured VL Concepts Learning (CVPR23) [paper] [code]
  • Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning (CVPR23) [paper] [code]
  • Learning without Forgetting for Vision-Language Models (arXiv23) [paper]
  • Image-Object-Specific Prompt Learning for Few-Shot Class-Incremental Learning (arXiv23) [paper]
  • Introducing Language Guidance in Prompt-based Continual Learning (ICCV23) [paper]
  • When Prompt-based Incremental Learning Does Not Meet Strong Pretraining (ICCV23) [paper] [code]
  • Class Incremental Learning with Pre-trained Vision-Language Models (arXiv23) [paper]
  • Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS23) [paper] [code]
  • FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning (NeurIPS23) [paper] [code]
  • RanPAC: Random Projections and Pre-trained Models for Continual Learning (NeurIPS23) [paper] [code]
  • Continual Learners are Incremental Model Generalizers (ICML23) [paper]
  • DDGR: Continual Learning with Deep Diffusion-based Generative Replay (ICML23) [paper] [code]
  • Continual Vision-Language Representation Learning with Off-Diagonal Information (ICML23) [paper]
  • Self-regulating Prompts: Foundational Model Adaptation without Forgetting (ICCV23) [paper] [code]
  • CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation (ICCV23) [paper] [code]
  • Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning (ICCV23) [paper] [code]
  • First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning (ICCV23) [paper]
  • Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models (ICCV23) [paper] [code]
  • A Unified Continual Learning Framework with General Parameter-Efficient Tuning (ICCV23) [paper] [code]
  • SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model (ICCV23) [paper] [code]

2022

  • Class-Incremental Learning with Strong Pre-trained Models (CVPR22) [paper] [code]
  • Learning to Prompt for Continual Learning (CVPR22) [paper] [code]
  • S-Prompts Learning with Pre-trained Transformers: An Occam's Razor for Domain Incremental Learning (NeurIPS22) [paper] [code]
  • Don't Stop Learning: Towards Continual Learning for the CLIP Model (arXiv22) [paper]
  • DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning (ECCV22) [paper] [code]
  • Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection (COLING22) [paper]
  • Momentum-based Weight Interpolation of Strong Zero-Shot Models for Continual Learning (NeurIPS22) [paper]
  • Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue (EMNLP22) [paper]
  • CLIP model is an Efficient Continual Learner (arXiv22) [paper] [code]
  • Memory Efficient Continual Learning with Transformers (NeurIPS22) [paper]
  • Continual Pre-Training Mitigates Forgetting in Language and Vision (arXiv22) [paper] [code]
  • Fine-tuned Language Models are Continual Learners (arXiv22) [paper] [code]
  • Continual Learning with Foundation Models: An Empirical Study of Latent Replay (CoLLAs22) [paper] [code]
  • Effect of scale on catastrophic forgetting in neural networks (ICLR22) [paper]
  • Continual Training of Language Models for Few-Shot Learning (arXiv22) [paper] [code]
  • CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks (NeurIPS22) [paper] [code]
  • A Simple Baseline that Questions the Use of Pretrained-Models in Continual Learning (arXiv22) [paper] [code]
  • ELLE: Efficient Lifelong Pre-training for Emerging Data (ACL22) [paper] [code]

2021

  • An Empirical Investigation of the Role of Pre-training in Lifelong Learning (arXiv21) [paper] [code]
  • Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning (EMNLP21) [paper] [code]
