Qiankun Gao, Chen Zhao, Yifan Sun, Teng Xi, Gang Zhang, Bernard Ghanem, Jian Zhang
[Paper] [Supp] [arXiv] [BibTex]
- [2023/08/19] Camera ready is submitted.
- [2023/07/14] Accepted to ICCV 2023 as a poster presentation; code is released to the public!
- Install all dependencies via pip:
  `pip install -r requirements.txt`
  ⚠️ Remove `torch` and `torchvision` from `requirements.txt` first if another version of PyTorch is already installed.
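If you would rather not edit `requirements.txt` by hand, the pinned lines can be filtered out before installing. A minimal sketch, assuming each requirement sits on its own line; the `requirements.notorch.txt` filename is made up here:

```shell
# Hypothetical helper: install requirements without the pinned torch/torchvision.
# Assumes one requirement per line in requirements.txt.
grep -vE '^(torch|torchvision)([=<>[:space:]]|$)' requirements.txt > requirements.notorch.txt
pip install -r requirements.notorch.txt
```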
- Create a dataset root directory, e.g., `data`.
- The `CIFAR100` and `ImageNet-R` datasets will be downloaded automatically, while `DomainNet` requires manual download.
- Overview of the dataset root directory:

  ```
  ├── cifar100
  │   └── cifar-100-python
  ├── domainnet
  │   ├── clipart
  │   ├── infograph
  │   ├── painting
  │   ├── quickdraw
  │   ├── real
  │   └── sketch
  └── imagenet-r
      ├── imagenet-r
      ├── train_list.txt
      └── val_list.txt
  ```

  ⚠️ The train/validation split of the ImageNet-R dataset is consistent with the L2P JAX code. Replace `train_list.txt` and `val_list.txt` with `train_list_coda-p.txt` and `val_list_coda-p.txt` if you want to use the train/validation split of CODA-Prompt.
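Before the first run, it can save time to sanity-check that the layout matches the overview above. A hypothetical snippet; the `data` root name follows the earlier example:

```shell
# Hypothetical sanity check: report any expected dataset directory
# missing under the root (assumed to be "data", as in the example above).
for d in cifar100 domainnet imagenet-r; do
  [ -d "data/$d" ] || echo "missing: data/$d"
done
```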
- Generate a config file (replace `<root>` with your dataset root path):
  `python main.py data.root=<root> data.dataset=cifar100 --print_config > cifar100.yaml`
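The generated YAML mirrors the dotted command-line keys. A hypothetical fragment showing only the keys that appear in the commands in this README; the actual file emitted by `--print_config` contains many more options and is the authoritative version:

```yaml
# Hypothetical fragment of cifar100.yaml (not the full generated file).
data:
  root: data                  # your dataset root path
  dataset: cifar100
  num_increment_classes: 10   # 5 for the 20-task CIFAR100 setting
```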
- Run the code with an experiment config file:
  `python main.py --config=cifar100.yaml`
- Reproduce the results in the paper

  We provide configs and a Makefile to quickly reproduce the ten-task experimental results reported in the paper. Run the following commands if `make` is installed:

  ```
  make vit_adapter
  make vit_lora
  make vit_prefix
  make swin_adapter
  make convnext_adapter
  ```

  Run the `make` command with the `BASE` argument (default: `base/cifar100_order1.yaml`) to reproduce other experiments, e.g.:
  `make BASE="base/imagenet-r_order1.yaml" vit_adapter`

  Modify `data.num_increment_classes` (`5`/`10` for CIFAR100/ImageNet-R) in the base config files to reproduce 20-task experiments.
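Assuming the key appears literally in the base config, the 20-task edit can also be scripted. A hypothetical one-liner for CIFAR100; the exact YAML formatting in the repo may differ, and `sed -i.bak` keeps a backup of the original file:

```shell
# Hypothetical: switch base/cifar100_order1.yaml to the 20-task setting
# (5 classes per increment); assumes the key/value appear literally as shown.
sed -i.bak 's/num_increment_classes: 10/num_increment_classes: 5/' base/cifar100_order1.yaml
```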
- PyTorch implementation of L2P and DualPrompt.
- JAX implementation of L2P and DualPrompt: https://github.com/google-research/l2p.
- CODA-Prompt, state-of-the-art work from CVPR 2023.
- ESN, state-of-the-art work from AAAI 2023.
- Continuum, an awesome data loading library for continual learning.
@inproceedings{gao2023lae,
title={A Unified Continual Learning Framework with General Parameter-Efficient Tuning},
author={Gao, Qiankun and Zhao, Chen and Sun, Yifan and Xi, Teng and Zhang, Gang and Ghanem, Bernard and Zhang, Jian},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
pages={11483--11493},
year={2023}
}