This codebase implements several state-of-the-art (SOTA) continual / incremental / lifelong learning methods in PyTorch.
It is also the official repository of *Adapter Learning in Pretrained Feature Extractor for Continual Learning of Diseases* (MICCAI 2023).
One-step baseline method:
- Finetune: Upper-bound baseline for continual learning, which updates parameters with data of all classes available at the same time.
Continual methods already supported:
- Finetune: Baseline method that simply updates parameters when new task data arrive (with or without memory replay of old-class data).
- iCaRL: Incremental Classifier and Representation Learning. CVPR2017 [paper]
- GEM: Gradient Episodic Memory for Continual Learning. NIPS2017 [paper]
- UCIR: Learning a Unified Classifier Incrementally via Rebalancing. CVPR2019 [paper]
- BiC: Large Scale Incremental Learning. CVPR2019 [paper]
- PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. ECCV2020 [paper]
- WA: Maintaining Discrimination and Fairness in Class Incremental Learning. CVPR2020 [paper]
- Dark Experience for General Continual Learning: a Strong, Simple Baseline. NeurIPS2020 [paper]
- DER: Dynamically Expandable Representation for Class Incremental Learning. CVPR2021 [paper]
- L2P: Learning to Prompt for Continual Learning. CVPR2022 [paper]
- DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning. ECCV2022 [paper]
- CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning. CVPR2023 [paper]
- ACL: Adapter Learning in Pretrained Feature Extractor for Continual Learning of Diseases. MICCAI2023 [paper]
Contrastive model pretraining methods already supported:
- MoCov2: Improved Baselines with Momentum Contrastive Learning. [paper]
- SimSiam: Exploring Simple Siamese Representation Learning. [paper]
Coming soon:
- LwF: Learning without Forgetting. ECCV2016 [paper]
- EWC: Overcoming catastrophic forgetting in neural networks. PNAS2017 [paper]
- LwM: Learning without Memorizing. CVPR2019 [paper]
- Layerwise Optimization by Gradient Decomposition for Continual Learning. CVPR2021 [paper]
- FOSTER: Feature Boosting and Compression for Class-incremental Learning. ECCV 2022 [paper]
- Class-Incremental Continual Learning into the eXtended DER-verse. TPAMI 2022 [paper]
- Continual Learning with Bayesian Model based on a Fixed Pre-trained Feature Extractor. MICCAI2021 [paper]
- Continual Learning of New Diseases with Dual Distillation and Ensemble Strategy. MICCAI2020 [paper]
```shell
conda create -n CL_Pytorch python=3.8
conda activate CL_Pytorch
pip install -r requirement.txt
```
- Edit the hyperparameters in the corresponding `options/XXX/XXX.yaml` file.
- Train models:

```shell
python main.py --config options/XXX/XXX.yaml
```
- Test models with a checkpoint (ensure the `save_model` option is `True` before training):

```shell
python main.py --checkpoint_dir logs/XXX/XXX.pkl
```
If you want to temporarily change the GPU device for an experiment, pass `--device GPU_ID` on the command line instead of changing `device` in the `.yaml` config file.
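A command-line override like this is typically implemented with `argparse.parse_known_args`; the sketch below is a hypothetical illustration (the function name and config dictionary are assumptions, not the repo's actual code):

```python
import argparse

def parse_device_override(argv, config):
    """Let a --device flag on the command line take precedence over
    the 'device' entry loaded from the YAML config (illustrative only)."""
    parser = argparse.ArgumentParser()
    parser.add_argument('--device', type=str, default=None)
    # parse_known_args ignores any other flags the real script may define
    args, _ = parser.parse_known_args(argv)
    if args.device is not None:
        config['device'] = args.device  # CLI value wins over the YAML value
    return config

# Config loaded from YAML said cuda:0; the CLI overrides it to cuda:1
cfg = parse_device_override(['--device', 'cuda:1'], {'device': 'cuda:0'})
```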
Add the corresponding dataset `.py` file to `datasets/` and you are done: the program automatically imports newly added datasets.
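Automatic import of this kind usually works by scanning the package directory at startup; a minimal sketch of such a discovery mechanism (this is an assumption about the approach, not the repository's actual code):

```python
import importlib
import pkgutil

def discover_modules(package_path, package_name):
    """Import every .py module found in a package directory and
    return the imported module objects keyed by module name."""
    modules = {}
    for info in pkgutil.iter_modules([package_path]):
        # e.g. a new file datasets/my_dataset.py would be picked up here
        modules[info.name] = importlib.import_module(
            f"{package_name}.{info.name}")
    return modules
```

With a scheme like this, dropping a new dataset file into the package is enough for it to be registered on the next run.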
We put continual learning method implementations in the `methods/multi_steps` folder, pretraining methods in the `methods/pretrain` folder, and normal one-step training methods in the `methods/singel_steps` folder.
Supported datasets:
- Natural image datasets: CIFAR-10, CIFAR-100, ImageNet100, ImageNet1K, ImageNet-R, TinyImageNet, CUB-200
- Medical image datasets: MedMNIST, path16, Skin7, Skin8, Skin40
More information about the supported datasets can be found in `datasets/`.
We use `os.environ['DATA']` to locate image data. You can set this environment variable by editing `~/.bashrc`, or just change the code.
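Resolving the dataset root from that environment variable can be sketched as follows (the fallback default and the helper name are assumptions for illustration):

```python
import os

def get_data_root(default='./data'):
    """Resolve the dataset root directory. The repository reads
    os.environ['DATA']; the fallback default here is an assumption,
    not the repository's actual behaviour."""
    return os.environ.get('DATA', default)

# e.g. add `export DATA=/path/to/datasets` to ~/.bashrc, then:
# cifar_dir = os.path.join(get_data_root(), 'cifar100')  # hypothetical layout
```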
More details can be found in `Reproduce_results.md`.
We sincerely thank the following repositories for their help:
- https://github.com/zhchuu/continual-learning-reproduce
- https://github.com/G-U-N/PyCIL
- https://github.com/GT-RIPL/CODA-Prompt
- https://github.com/aimagelab/mammoth
- Results that need to be checked: EWC
- Methods that need to be modified: MAS, LwF
- A multi-GPU processing module needs to be added.
- Detailed documentation is coming soon.