- Python 3.7
- CUDA 11.2
- PyTorch 1.9.0
- miniImagenet [Google Drive]
  - Download and extract it to a certain folder, let's say `/data/FSLDatasets/miniImagenet`, then set `_MINI_IMAGENET_DATASET_DIR` in `data/mini_imagenet.py` to this folder.
- tieredImageNet [Google Drive]
  - Download and extract it to a certain folder, let's say `/data/FSLDatasets/tieredImageNet`, then set `_TIERED_IMAGENET_DATASET_DIR` in `data/tiered_imagenet.py` to this folder.
- CIFAR-FS [Google Drive]
  - Download and extract it to a certain folder, let's say `/data/FSLDatasets/CIFAR-FS`, then set `_CIFAR_FS_DATASET_DIR` in `data/CIFAR_FS.py` to this folder.
- CUB-FS [Google Drive]
  - Download and extract it to a certain folder, let's say `/data/FSLDatasets/cub`, then set `_CUB_FS_DATASET_DIR` in `data/CUB_FS.py` to this folder.
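Since a mistyped dataset folder only surfaces once a data loader fails mid-run, it can help to verify the paths up front. The sketch below is our own helper (not part of the repository); the paths are the example folders from the steps above and should be adjusted to wherever you extracted the datasets.

```python
import os

# Example extraction folders from the setup steps above; adjust to your own.
DATASET_DIRS = {
    "miniImagenet": "/data/FSLDatasets/miniImagenet",
    "tieredImageNet": "/data/FSLDatasets/tieredImageNet",
    "CIFAR-FS": "/data/FSLDatasets/CIFAR-FS",
    "CUB-FS": "/data/FSLDatasets/cub",
}

def check_dataset_dirs(dirs):
    """Return the names of datasets whose folder is missing on disk."""
    return [name for name, path in dirs.items() if not os.path.isdir(path)]
```

Calling `check_dataset_dirs(DATASET_DIRS)` before the first training stage reports any missing or misconfigured folder early.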
Note: the above datasets are the same as those used in previous works (e.g. FewShotWithoutForgetting, DeepEMD), EXCEPT that we include additional semantic embeddings (GloVe word embeddings for the first three datasets and attribute embeddings for CUB-FS). Thus, remember to change the argparse argument `semantic_path` in the training and testing scripts.
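As a sketch of what setting that argument might look like (the parser fragment below is hypothetical, and the embedding file name is a placeholder; the real argument definitions live in the notebooks):

```python
import argparse

# Hypothetical parser fragment; the notebooks define their own arguments,
# and the embedding file name below is only a placeholder.
parser = argparse.ArgumentParser()
parser.add_argument("--semantic_path", type=str, required=True,
                    help="path to the semantic embeddings (GloVe word embeddings, "
                         "or attribute embeddings for CUB-FS)")

args = parser.parse_args(
    ["--semantic_path", "/data/FSLDatasets/miniImagenet/semantic_embeddings.pkl"]
)
```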
Our training and testing scripts all live under `scripts/` and are Jupyter notebooks, so both the argparse arguments and the output logs can be easily found.
Take training and testing on miniImagenet as an example: for the first-stage training, run all cells in `scripts/01_miniimagenet_stage1.ipynb`; for the second-stage training and final testing, run all cells in `scripts/01_miniimagenet_stage2_SEGA_5W1S.ipynb`.
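If you prefer to execute the notebooks headlessly (e.g. on a remote GPU server), a small sketch using `jupyter nbconvert` is shown below. This helper is our own suggestion, assuming Jupyter is installed; the notebook paths are the ones named above.

```python
import subprocess

def nbconvert_command(notebook_path):
    """Build the jupyter nbconvert invocation that runs a notebook in place."""
    return ["jupyter", "nbconvert", "--to", "notebook",
            "--execute", "--inplace", notebook_path]

def run_notebook(notebook_path):
    """Execute one training/testing notebook headlessly; raises on failure."""
    subprocess.run(nbconvert_command(notebook_path), check=True)

# e.g. run_notebook("scripts/01_miniimagenet_stage1.ipynb")
```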
The 1-shot and 5-shot classification results can be found in the corresponding jupyter notebooks.
The pre-trained models for all 4 datasets after our first training stage can be downloaded from here.
If you find our paper or codes useful, please consider citing our paper:
```bibtex
@inproceedings{yang2022sega,
  title={SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning},
  author={Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1056--1066},
  year={2022}
}
```
Our code is based on Dynamic Few-Shot Visual Learning without Forgetting and MetaOptNet, and we really appreciate their work.
If you have any questions, feel free to contact me at fengyuan.yang@vipl.ict.ac.cn.