.
├── checkpoints_perceiver_io/ # Directory where model checkpoints are saved
├── logs_perceiver_io/ # Log output directory after running `baseline.sh`
├── models/ # Pretrained models required for CPG training
├── packnet_models/ # Pretrained PackNet models
├── packnet_models_pickaback/ # Models used for ModelDiff comparison
├── run_pickaback/ # Experiment scripts
├── CPG_cifar100_main_normal.py # Script for CPG experiments (baseline)
├── CPG_MM_main_normal.py # Multimodal version of CPG for CrossBack
├── transfer_kv_perceiver_io.py # Script to swap key/value projections
├── packnet_cifar100_main_normal.py # PackNet training script
├── pickaback_perceiverIO.py # PerceiverIO-based script for ModelDiff
├── tools/
├── utils/
└── utils_pickaback/
git clone https://github.com/EWHA-Tespa/CrossBack.git
cd CrossBack
📊 CUB-200-2011:
wget https://data.caltech.edu/records/65de6-vp158/files/CUB_200_2011.tgz?download=1 -O CUB_200_2011.tgz # Download images
tar -xvzf CUB_200_2011.tgz # Extract archive
Text data (separate download): https://drive.google.com/file/d/0B0ywwgffWnLLZW9uVHNjb2JmNlE/view?resourcekey=0-8y2UVmBHAlG26HafWYNoFQ
Preprocessing: download the notebook from https://github.com/EWHA-Tespa/CrossBack-Data-Preprocess/blob/main/cub.ipynb, place it in the dataset directory, and run it to preprocess the data.
📊 MSCOCO:
wget http://images.cocodataset.org/zips/train2017.zip # Download training images
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip # Download annotations
unzip train2017.zip # Extract images
unzip annotations_trainval2017.zip # Extract annotations
Preprocessing: download the notebook from https://github.com/EWHA-Tespa/CrossBack-Data-Preprocess/blob/main/mscoco/preprocess_0430.ipynb, place it in the dataset directory, and run it.
📊 Oxford-102-flowers:
kaggle competitions download -c oxford-102-flower-pytorch
Text data (separate download): https://drive.google.com/file/d/0B0ywwgffWnLLcms2WWJQRFNSWXM/view?resourcekey=0-Av8zFbeDDvNcF1sSjDR32w
Preprocessing: download the notebook from https://github.com/EWHA-Tespa/CrossBack-Data-Preprocess/blob/main/oxford.ipynb, place it in the dataset directory, and run it.
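Once a notebook has been copied next to the extracted data, it can also be executed headlessly instead of through the Jupyter UI. A minimal sketch, assuming the CUB extraction path below (adjust it, or swap in preprocess_0430.ipynb / oxford.ipynb for the other datasets):

```python
# Sketch: execute a downloaded preprocessing notebook headlessly with nbconvert.
# Assumes the notebook (e.g. cub.ipynb) has been copied into the dataset
# directory; DATASET_DIR is a hypothetical extraction path.
import subprocess
from pathlib import Path

DATASET_DIR = Path("CUB_200_2011")           # adjust to wherever you extracted the archive
NOTEBOOK = DATASET_DIR / "cub.ipynb"         # notebook copied from CrossBack-Data-Preprocess

subprocess.run(
    ["jupyter", "nbconvert", "--to", "notebook", "--execute", "--inplace", str(NOTEBOOK)],
    check=True,
)
```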
CrossBack experiments are executed via the run_pickaback/ scripts.
1. Baseline
bash run_pickaback/baseline.sh cub # Specify dataset: cub, mscoco, or oxford
This will generate logs_perceiver_io/baseline_cub_acc_scratch.txt and save checkpoints under checkpoints_perceiver_io/baseline_scratch.
2-1. Train Backbone Task
The following command trains the first task, which will serve as the backbone model for all subsequent tasks. This command runs both the training and pruning steps together.
bash run_pickaback/wo_backbone.sh cub
Saves checkpoints to checkpoints_perceiver_io/CPG_single_scratch_woexp.
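To sanity-check the run, you can open one of the saved checkpoints with PyTorch. The file extension and dict layout below are assumptions, so adjust them to what the directory actually contains.

```python
# Sketch: inspect a saved backbone checkpoint. The *.pth pattern and the
# internal layout are assumptions; list the directory first and adapt the path.
from pathlib import Path
import torch

ckpt_dir = Path("checkpoints_perceiver_io/CPG_single_scratch_woexp")
ckpt_path = next(ckpt_dir.rglob("*.pth"))            # pick any checkpoint file found
state = torch.load(ckpt_path, map_location="cpu")

if isinstance(state, dict):
    print(list(state.keys())[:20])                   # top-level keys (weights, masks, ...)
```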
2-2. Pruning ratio
Choose an appropriate pruning ratio for the backbone model.
bash run_pickaback/select_pruning_ratio_of_backbone.sh
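For intuition, CPG/PackNet-style methods free capacity for later tasks by magnitude pruning: a ratio of, say, 0.6 zeroes out the 60% smallest-magnitude weights and keeps the rest as the frozen backbone. The sketch below only illustrates the idea; it is not the pruning code used in this repository.

```python
# Sketch: magnitude pruning at a given ratio (conceptual, not the repo's code).
import torch

def magnitude_prune(weight: torch.Tensor, ratio: float) -> torch.Tensor:
    """Return a 0/1 mask keeping the (1 - ratio) largest-magnitude weights."""
    k = int(weight.numel() * ratio)                       # number of weights to drop
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values # k-th smallest magnitude
    return (weight.abs() > threshold).float()

w = torch.randn(256, 256)
mask = magnitude_prune(w, ratio=0.6)
print(f"kept {mask.mean().item():.2%} of weights")        # ~40% kept for the backbone
```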
3. Find Similar Models
Using ModelDiff, select the model whose decision patterns most closely match those of the target task.
bash run_pickaback/find_backbone.sh
The backbone selected for the target task is reported in the log output, for example:
Selected backbone for target 14 = (euc)4
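The (euc) tag suggests the closest backbone is picked by Euclidean distance between the models' decision patterns on shared probe inputs. A minimal sketch of that selection idea follows; the actual ModelDiff-based logic lives in pickaback_perceiverIO.py and is more involved.

```python
# Sketch: pick the candidate whose outputs on shared probe inputs are closest
# (in Euclidean distance) to the target model's outputs. Conceptual only;
# assumes all models produce outputs of the same shape on the probe inputs.
import torch

@torch.no_grad()
def select_backbone(target_model, candidates, probe_inputs):
    """candidates: dict mapping model id -> model. Returns the closest id."""
    target_out = target_model(probe_inputs).flatten(1)
    distances = {}
    for model_id, model in candidates.items():
        cand_out = model(probe_inputs).flatten(1)
        distances[model_id] = torch.dist(target_out, cand_out).item()  # Euclidean
    return min(distances, key=distances.get)
```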
4. Reconstruct Backbone
Swap the target model’s key/value projections with those of the backbone model to prepare for subsequent cross-modal training.
bash run_pickaback/transfer_kv.sh
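Conceptually, this step loads the backbone and target checkpoints and overwrites the target's key/value projection weights with the backbone's. The sketch below shows the idea with plain state_dicts; the parameter-name pattern and file names are assumptions, so refer to transfer_kv_perceiver_io.py for the actual mapping.

```python
# Sketch: overwrite the target checkpoint's key/value projection weights with
# the backbone's. The "to_k"/"to_v" name pattern and the file paths are
# assumptions; see transfer_kv_perceiver_io.py for the real logic.
import torch

backbone_sd = torch.load("backbone.pth", map_location="cpu")   # assumed plain state_dict
target_sd = torch.load("target.pth", map_location="cpu")

for name, param in backbone_sd.items():
    if ("to_k" in name or "to_v" in name) and name in target_sd:
        target_sd[name] = param.clone()          # swap in the backbone's K/V projections

torch.save(target_sd, "target_with_backbone_kv.pth")
```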
5. Train Target Model
Run the following command to train the continual learning model for the target task:
bash run_pickaback/w_backbone_MM.sh
This will save model checkpoints to checkpoints_perceiver_io/CPG_fromsingle_scratch_woexp_target.
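If you want to chain all of the steps above for a single dataset, a small driver like the following works. Note that the pruning-ratio and backbone-selection steps produce output you may want to inspect before continuing, so treat this as a convenience sketch only.

```python
# Sketch: run the CrossBack pipeline end-to-end for one dataset by chaining the
# scripts shown above. Assumes execution from the repository root.
import subprocess

DATASET = "cub"   # or "mscoco", "oxford"

steps = [
    ["bash", "run_pickaback/baseline.sh", DATASET],
    ["bash", "run_pickaback/wo_backbone.sh", DATASET],
    ["bash", "run_pickaback/select_pruning_ratio_of_backbone.sh"],
    ["bash", "run_pickaback/find_backbone.sh"],
    ["bash", "run_pickaback/transfer_kv.sh"],
    ["bash", "run_pickaback/w_backbone_MM.sh"],
]

for cmd in steps:
    subprocess.run(cmd, check=True)
```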
This project is based on the following open-source projects: