This repository provides the codebase for reproducing the experiments presented in the paper "Revisiting Learnable Affines for Batch Norm in Few-Shot Transfer Learning". The paper investigates the role of learnable affine parameters in Batch Normalization layers during few-shot transfer learning. The code allows you to train and fine-tune models on MiniImageNet, the CD-FSL benchmark datasets, and ImageNet.
The codebase has been tested with the following versions of packages:
```
h5py==3.1.0
joypy==0.2.5
matplotlib==3.4.2
numpy==1.21.0
pandas==1.2.3
Pillow==8.4.0
scikit_learn==1.0.1
scipy==1.6.0
seaborn==0.11.2
torch==1.8.1
torchvision==0.9.1
tqdm==4.60.0
```
To install all the required packages, run:
```bash
pip install -r requirements.txt
```
The experiments use MiniImageNet, the CD-FSL benchmark datasets, and ImageNet. Below are instructions to prepare each dataset:
To prepare MiniImageNet and CD-FSL datasets, follow the steps detailed in the CD-FSL benchmark repository.
You can download the ImageNet dataset from the Kaggle ImageNet Object Localization Challenge.
All the dataset training and validation split files are located in the datasets/split_seed_1 directory.
Set the appropriate dataset paths in the configs.py file.
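For reference, the dataset path entries in `configs.py` might look like the following. This is only an illustrative sketch; the variable names are hypothetical, so match them to the actual contents of `configs.py` in this repository.

```python
# Illustrative sketch only: the variable names below are hypothetical and may
# differ from the actual configs.py shipped with this repository.
miniImageNet_path = "/path/to/miniImageNet"
ImageNet_path = "/path/to/ILSVRC/Data/CLS-LOC"

# CD-FSL target datasets
EuroSAT_path = "/path/to/EuroSAT"
CropDisease_path = "/path/to/CropDisease"
ChestX_path = "/path/to/ChestX"
ISIC_path = "/path/to/ISIC"
```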
- Source Dataset Names: "ImageNet", "miniImageNet"
- Target Dataset Names: "EuroSAT", "CropDisease", "ChestX", "ISIC"
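In the commands below, `{Target dataset name}` is a placeholder for one of the target dataset names listed above. A minimal sketch of how the placeholder expands into the split file paths used by the commands (this helper script is not part of the repository):

```python
# Hypothetical helper, not part of the repository: it only shows how the
# {Target dataset name} placeholder expands into the split file paths.
TARGET_DATASETS = ["EuroSAT", "CropDisease", "ChestX", "ISIC"]

for name in TARGET_DATASETS:
    labeled_split = f"datasets/split_seed_1/{name}_labeled_80.csv"      # used by src/finetune.py
    unlabeled_split = f"datasets/split_seed_1/{name}_unlabeled_20.csv"  # used by src/AdaBN.py
    print(name, labeled_split, unlabeled_split)
```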
- To Train: Refer to this link.
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/baseline_teacher --target_dataset {Target dataset name} --subset_split datasets/split_seed_1/{Target dataset name}_labeled_80.csv --embedding_load_path ./logs/baseline_teacher/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
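The `--freeze_backbone` flag corresponds to linear-probe style fine-tuning: the pre-trained embedding stays fixed and only a new classification head is trained on the labeled target split. A minimal PyTorch sketch of this idea; the actual logic lives in `src/finetune.py` and may differ (e.g. in the head architecture and optimizer settings):

```python
import torch
import torch.nn as nn

# Sketch of fine-tuning with a frozen backbone (what --freeze_backbone implies);
# the real implementation is in src/finetune.py and may differ.
def build_linear_probe(backbone: nn.Module, feat_dim: int, n_classes: int) -> nn.Module:
    for p in backbone.parameters():
        p.requires_grad = False           # keep the pre-trained embedding fixed
    return nn.Linear(feat_dim, n_classes)  # only this head is trained

# Toy backbone standing in for the ResNet-10 embedding used in the repository.
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = build_linear_probe(backbone, feat_dim=64, n_classes=5)
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)

x, y = torch.randn(8, 3, 84, 84), torch.randint(0, 5, (8,))
loss = nn.functional.cross_entropy(head(backbone(x)), y)
loss.backward()
optimizer.step()
```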
- To Train: Refer to this link.
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/baseline_na_teacher --target_dataset {Target dataset name} --subset_split datasets/split_seed_1/{Target dataset name}_labeled_80.csv --embedding_load_path ./logs/baseline_na_teacher/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
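The `_na` variants (presumably "no affine") correspond to the paper's setting in which the learnable affine parameters (gamma, beta) of Batch Normalization are removed. A minimal PyTorch sketch of two ways this can be done; the repository's model code may implement it differently:

```python
import torch.nn as nn

# Sketch only: the repository's models may disable BN affines differently.

# (1) Construct BN layers without learnable gamma/beta in the first place.
bn_no_affine = nn.BatchNorm2d(64, affine=False)

# (2) Or take an existing model and pin gamma/beta to the identity transform.
def remove_bn_affines(model: nn.Module) -> nn.Module:
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d) and m.affine:
            nn.init.ones_(m.weight)    # gamma = 1
            nn.init.zeros_(m.bias)     # beta = 0
            m.weight.requires_grad = False
            m.bias.requires_grad = False
    return model
```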
- To Train:
```bash
python src/AdaBN.py --dir ./logs/AdaBN/{dataset_name} --base_dictionary logs/baseline_teacher/checkpoint_best.pkl --target_dataset $target_testset --target_subset_split datasets/split_seed_1/${target_testset}_unlabeled_20.csv --bsize 256 --epochs 10 --model resnet10
```
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/AdaBN/{Target dataset name} --target_dataset {Target dataset name} --subset_split datasets/split_seed_1/{Target dataset name}_labeled_80.csv --embedding_load_path ./logs/AdaBN/{Target dataset name}/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
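AdaBN adapts the Batch Normalization running statistics (mean and variance) to the target domain by forwarding unlabeled target data through the network in training mode without updating any weights. A minimal sketch of this idea, assuming `src/AdaBN.py` follows the standard AdaBN recipe (details such as resetting statistics, momentum, and the number of epochs may differ):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_bn_statistics(model: nn.Module, unlabeled_loader) -> nn.Module:
    # Sketch of the AdaBN idea: re-estimate BN running_mean/running_var on
    # unlabeled target data. Only BN buffers change; no weights are updated.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.reset_running_stats()  # optionally start estimation from scratch
    model.train()                     # train mode so BN updates running stats
    for x in unlabeled_loader:
        model(x)
    model.eval()
    return model
```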
- To Train:
```bash
python src/AdaBN_na.py --dir ./logs/AdaBN_na/{dataset_name} --base_dictionary logs/baseline_na_teacher/checkpoint_best.pkl --target_dataset $target_testset --target_subset_split datasets/split_seed_1/${target_testset}_unlabeled_20.csv --bsize 256 --epochs 10 --model resnet10
```
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/AdaBN_na/{Target dataset name} --target_dataset {Target dataset name} --subset_split datasets/split_seed_1/{Target dataset name}_labeled_80.csv --embedding_load_path ./logs/AdaBN_na/{Target dataset name}/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
- To Train:
```bash
python src/ImageNet.py --dir ./logs/ImageNet/ --arch resnet18 --data ./data/ILSVRC/Data/CLS-LOC --gpu 0
```
- To Fine-Tune:
```bash
python src/ImageNet_finetune.py --save_dir ./logs/ImageNet --target_dataset {Target dataset name} --subset_split datasets/split_seed_1/{Target dataset name}_labeled_80.csv --embedding_load_path ./logs/baseline_teacher/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/eval/baseline_teacher --target_dataset ImageNet_test --subset_split datasets/split_seed_1/ImageNet_val_labeled.csv --embedding_load_path ./logs/baseline_teacher/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
- To Adapt:
```bash
python src/AdaBN.py --dir ./logs/AdaBN_teacher/miniImageNet --base_dictionary logs/baseline_teacher/checkpoint_best.pkl --target_dataset ImageNet_test --target_subset_split datasets/split_seed_1/ImageNet_val_labeled.csv --bsize 256 --epochs 10 --model resnet10
```
- To Fine-Tune:
```bash
python src/finetune.py --save_dir ./logs/AdaBN_teacher/miniImageNet --target_dataset ImageNet_test --subset_split datasets/split_seed_1/ImageNet_val_labeled.csv --embedding_load_path ./logs/AdaBN_teacher/miniImageNet/checkpoint_best.pkl --freeze_backbone
```
- Pre-trained Model: Checkpoint
Pre-trained models are available for each experiment, enabling easy replication and validation of results. Refer to the links provided in each experiment section to download the corresponding pre-trained models.
If you find this work useful, please consider citing the paper:
```bibtex
@inproceedings{yazdanpanah2022revisiting,
  title={Revisiting learnable affines for batch norm in few-shot transfer learning},
  author={Yazdanpanah, Moslem and Rahman, Aamer Abdul and Chaudhary, Muawiz and Desrosiers, Christian and Havaei, Mohammad and Belilovsky, Eugene and Kahou, Samira Ebrahimi},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={9109--9118},
  year={2022}
}
```
- Added instructions for ImageNet training and fine-tuning.
- Improved documentation for dataset preparation.
- Included a `hps.yaml` configuration file: added a `hps.yaml` file to streamline the process of replicating results. The file contains all hyperparameters used in our experiments and can be found in the `config` directory.
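A sketch of how the hyperparameters could be read from `config/hps.yaml` (assuming PyYAML is available); the file's actual keys are whatever is defined in the repository, so nothing below should be taken as its real contents:

```python
import yaml  # PyYAML; not listed in requirements.txt, install separately if needed

# Illustrative only: the actual hyperparameter names are defined in config/hps.yaml.
with open("config/hps.yaml") as f:
    hps = yaml.safe_load(f)

print(hps)  # e.g. a dict of hyperparameters (learning rate, batch size, epochs, ...)
```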