This repository contains the PyTorch implementation of the TNNLS paper "Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning".
The code is based on CVPR19_Incremental_Learning.
PyTorch 1.10 with CUDA 10.0 and cuDNN 7.4 on Ubuntu 18.04
We follow the FSCIL setting and use the same data index_list for training.
For CUB200, you can download the dataset from this link. Please put the downloaded file under the data/cub folder and unzip it.
We provide a trained model for the first session. Please download it from this link and put it in ./checkpoint.
# train; checkpoints will be saved in ./checkpoint
CUDA_VISIBLE_DEVICES=0 python train_cub.py --resume --uncertainty_distillation --frozen_backbone_part --flip_on_means --adapt_lamda
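The --uncertainty_distillation flag enables the paper's uncertainty-aware distillation term. As a rough illustration of the general idea (not the repository's actual implementation), one can down-weight the per-sample distillation loss where the teacher is uncertain; the entropy-based weighting and all names below are assumptions for this sketch:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_weighted_distillation(teacher_logits, student_logits, T=2.0):
    """Per-sample KL distillation, down-weighted where the teacher is uncertain.

    Sketch only: the entropy-based weight `w` is an illustrative assumption,
    not the loss used in this repository.
    """
    p = softmax(teacher_logits / T)   # teacher soft targets
    q = softmax(student_logits / T)   # student predictions
    # per-sample KL(p || q)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=1)
    # teacher predictive entropy, normalized to [0, 1]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(p.shape[1])
    w = 1.0 - entropy                 # confident teacher -> weight near 1
    return float((w * kl).mean() * T * T)
```

When teacher and student agree, the per-sample KL is zero and the loss vanishes; uncertain (high-entropy) teacher predictions contribute less to the distillation signal.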
If you find our project useful in your research, please cite:
@article{cui2023uncertainty,
title={Uncertainty-Aware Distillation for Semi-Supervised Few-Shot Class-Incremental Learning},
author={Cui, Yawen and Deng, Wanxia and Chen, Haoyu and Liu, Li},
journal={IEEE Transactions on Neural Networks and Learning Systems},
year={2023},
publisher={IEEE}
}