
SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning
(WACV 2022)

1. Requirements

  • Python 3.7
  • CUDA 11.2
  • PyTorch 1.9.0

2. Datasets

  • miniImagenet [Google Drive]
    • Download and extract it to a folder of your choice, e.g. /data/FSLDatasets/miniImagenet, then set _MINI_IMAGENET_DATASET_DIR in data/mini_imagenet.py to that folder (the other datasets follow the same pattern; see the sketch after this list).
  • tieredImageNet [Google Drive]
    • Download and extract it to a folder of your choice, e.g. /data/FSLDatasets/tieredImageNet, then set _TIERED_IMAGENET_DATASET_DIR in data/tiered_imagenet.py to that folder.
  • CIFAR-FS [Google Drive]
    • Download and extract it to a folder of your choice, e.g. /data/FSLDatasets/CIFAR-FS, then set _CIFAR_FS_DATASET_DIR in data/CIFAR_FS.py to that folder.
  • CUB-FS [Google Drive]
    • Download and extract it to a folder of your choice, e.g. /data/FSLDatasets/cub, then set _CUB_FS_DATASET_DIR in data/CUB_FS.py to that folder.
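For reference, setting these paths amounts to editing one module-level constant per dataset file. A minimal sketch, assuming the example folders above (adjust to wherever you extracted the data):

```python
# data/mini_imagenet.py
_MINI_IMAGENET_DATASET_DIR = '/data/FSLDatasets/miniImagenet'

# data/tiered_imagenet.py
_TIERED_IMAGENET_DATASET_DIR = '/data/FSLDatasets/tieredImageNet'

# data/CIFAR_FS.py
_CIFAR_FS_DATASET_DIR = '/data/FSLDatasets/CIFAR-FS'

# data/CUB_FS.py
_CUB_FS_DATASET_DIR = '/data/FSLDatasets/cub'
```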

Note: the above datasets are the same as in previous works (e.g. FewShotWithoutForgetting, DeepEMD), EXCEPT that we include additional semantic embeddings (GloVe word embeddings for the first three datasets and attribute embeddings for CUB-FS). Remember to set the argparse argument semantic_path accordingly in the training and testing scripts.
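The embedding location reaches the code through argparse, so a quick way to sanity-check your setup is to mirror that argument outside the notebooks. This is a sketch only: the argument spelling follows the README, while the embedding filename is hypothetical (use the file shipped in the downloaded archive):

```python
import argparse

# Mirror of the notebooks' semantic-embedding argument; the default path below
# is a placeholder, not the actual filename from the dataset archive.
parser = argparse.ArgumentParser()
parser.add_argument('--semantic_path', type=str,
                    default='/data/FSLDatasets/miniImagenet/semantic_embeddings.pkl')
args = parser.parse_args([])  # in the notebooks the arguments are set in-cell
print('semantic embeddings expected at:', args.semantic_path)
```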

3. Usage

Our training and testing scripts all live in scripts/ and are provided as Jupyter notebooks, in which both the argparse arguments and the output logs can be easily found.

Take miniImageNet as an example. For the first-stage training, run all cells in scripts/01_miniimagenet_stage1.ipynb. For the second-stage training and final testing, run all cells in scripts/01_miniimagenet_stage2_SEGA_5W1S.ipynb.
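If you prefer to run a notebook non-interactively (standard Jupyter tooling, not something provided by this repository), nbconvert can execute it headlessly; a sketch:

```bash
# Execute all cells of the stage-1 notebook and write the outputs back into the file.
jupyter nbconvert --to notebook --execute --inplace scripts/01_miniimagenet_stage1.ipynb
```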

4. Results

The 1-shot and 5-shot classification results can be found in the corresponding Jupyter notebooks.

5. Pre-trained Models

The pre-trained models for all 4 datasets after our first training stage can be downloaded from here.
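If you just want to inspect a downloaded checkpoint outside the notebooks, a minimal sketch follows. It assumes a standard torch.save checkpoint; the filename and any dictionary keys are hypothetical, so check the stage-2 notebook for the exact loading code:

```python
import torch

# Load a stage-1 checkpoint on CPU for inspection; the filename is a placeholder.
ckpt = torch.load('miniImageNet_stage1.pth', map_location='cpu')

# Many few-shot codebases nest the weights under a key such as 'network' or
# 'state_dict'; print the top-level keys to see how this checkpoint is organized.
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))
```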

Citation

If you find our paper or code useful, please consider citing:

@inproceedings{yang2022sega,
  title={SEGA: Semantic Guided Attention on Visual Prototype for Few-Shot Learning},
  author={Yang, Fengyuan and Wang, Ruiping and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={1056--1066},
  year={2022}
}

Acknowledgments

Our code is based on Dynamic Few-Shot Visual Learning without Forgetting and MetaOptNet, and we sincerely thank the authors of both.

Further

If you have any questions, feel free to contact me at fengyuan.yang@vipl.ict.ac.cn.
