
Few-Shot Classification with Feature Map Reconstruction Networks

This repository contains the reference PyTorch source code for the following paper:

Few-Shot Classification with Feature Map Reconstruction Networks

Davis Wertheimer*, Luming Tang*, Bharath Hariharan (* denotes equal contribution)

CVPR 2021 (video)

If you find our code or paper useful in your research, please consider citing our work with the following bibtex:

    @InProceedings{Wertheimer_2021_CVPR,
        author    = {Wertheimer, Davis and Tang, Luming and Hariharan, Bharath},
        title     = {Few-Shot Classification With Feature Map Reconstruction Networks},
        booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
        month     = {June},
        year      = {2021},
        pages     = {8012-8021}
    }

Code environment

This code requires PyTorch 1.7.0 and torchvision 0.8.0 or higher with CUDA support. It has been tested on Ubuntu 16.04.

You can create a conda environment with the correct dependencies using the following command lines:

conda env create -f environment.yml
conda activate FRN

Setting up data

You must first specify the value of data_path in config.yml. This should be the absolute path of the folder where you plan to store all the data.
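For reference, a minimal config.yml might contain only this entry (the path below is a placeholder for your own storage location):

```yaml
# absolute path to the folder that will hold all few-shot datasets
data_path: /absolute/path/to/your/data
```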

Our paper uses the following datasets: CUB-200-2011, FGVC-Aircraft, iNaturalist 2017 (for meta-iNat and tiered meta-iNat), mini-ImageNet, and tiered-ImageNet.

There are two options to prepare data for few-shot classification:

  • DIRECT DOWNLOAD: Access the pre-processed few-shot datasets used in our experiments directly. You can do this automatically or manually:

    • Use the provided shell script to download and extract all datasets.
      cd data
    • Download individual tar files from this Google Drive Link and extract few-shot datasets into your chosen data_path folder one by one.
  • MANUAL CONFIGURATION: Download original datasets, then pre-process them into individual few-shot versions one by one. Allows for greater control over pre-processing and provides access to the original source data.

    1. Download the original datasets. Again, this can be done automatically or manually:

      • Use the provided shell script to download and extract all datasets.
        cd data
      • Download individual datasets using the download links provided above, and then extract them into your chosen data_path folder. Note that meta-iNat / tiered meta-iNat and tiered-ImageNet_DeepEMD require some extra processing steps:
        cd data_path
        # For meta-iNat / tiered meta-iNat:
        mkdir inat2017
        mv train_val_images inat2017/train_val_images
        mv train_2017_bboxes.json inat2017/train_2017_bboxes.json
        # For tiered-ImageNet_DeepEMD:
        mv tiered_imagenet tiered-ImageNet_DeepEMD
    2. Pre-process each dataset one-by-one into corresponding few-shot versions.

      cd data

After setting up few-shot datasets following the steps above, the following folders will exist in your data_path:

  • CUB_fewshot_cropped: 100/50/50 classes for train/validation/test, using bounding-box cropped images as input
  • CUB_fewshot_raw: class split same as above, using raw un-cropped images as input
  • Aircraft_fewshot: 50/25/25 classes for train/validation/test
  • meta_iNat: 908/227 classes for train/test
  • tiered_meta_iNat: 781/354 classes for train/test, split by superclass
  • mini-ImageNet: 64/16/20 classes for train/validation/test
  • tiered-ImageNet: 351/91/160 classes for train/validation/test, images are 84x84
  • tiered-ImageNet_DeepEMD: derived from DeepEMD's implementation, images are 224x224

Under each folder, images are organized into train, val, and test folders. In addition, you may also find folders named val_pre and test_pre, which contain validation and testing images pre-resized to 84x84 for the sake of speed.
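Since each split stores one folder per class, a few-shot episode can be sampled directly from the directory tree. Below is a minimal, hypothetical sketch of N-way K-shot episode sampling (the function name and defaults are our own, not part of this repository):

```python
import os
import random

def sample_episode(split_dir, n_way=5, k_shot=1, q_queries=16, seed=None):
    """Sample an N-way K-shot episode from an ImageFolder-style split.

    Each subdirectory of split_dir is treated as one class; support and
    query images are drawn without replacement from each sampled class.
    Returns two lists of (image_path, episode_label) pairs.
    """
    rng = random.Random(seed)
    classes = sorted(
        d for d in os.listdir(split_dir)
        if os.path.isdir(os.path.join(split_dir, d))
    )
    episode_classes = rng.sample(classes, n_way)
    support, query = [], []
    for label, cls in enumerate(episode_classes):
        cls_dir = os.path.join(split_dir, cls)
        images = sorted(os.listdir(cls_dir))
        chosen = rng.sample(images, k_shot + q_queries)
        for img in chosen[:k_shot]:
            support.append((os.path.join(cls_dir, img), label))
        for img in chosen[k_shot:]:
            query.append((os.path.join(cls_dir, img), label))
    return support, query
```

Under this convention, a 5-way 1-shot episode with 16 queries per class yields 5 support images and 80 query images.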

You can use the jupyter notebook data/visualize.ipynb to explore and randomly visualize the images inside these few-shot datasets.

Train and test

For fine-grained few-shot classification, we provide the training and inference code for both FRN and our Prototypical Network (Proto) baseline, as they appear in the paper.
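For context, FRN classifies a query by reconstructing its feature map from each class's pooled support features via ridge regression, which admits a closed-form solution; see the paper for the full derivation. Writing the query feature map as Q ∈ R^{r×d} and a class's support features as S ∈ R^{kr×d}, the reconstruction and the resulting class distance are:

```latex
\bar{Q} = Q S^{\top} \left( S S^{\top} + \lambda I \right)^{-1} S,
\qquad
d(Q, S) = \left\lVert Q - \bar{Q} \right\rVert^{2}
```

The query is assigned to the class whose support features reconstruct it with the smallest error (in practice, negative distances are passed through a softmax).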

To train a model from scratch, simply navigate to the appropriate dataset/model subfolder in experiments/. Each folder contains three files; running the provided shell script will train and evaluate the model with hyperparameters matching our paper. Explanations of these hyperparameters can be found in trainers/

For example, to train Proto on CUB_fewshot_cropped with Conv-4 as the network backbone under the 1-shot setting, run the following command lines:

cd experiments/CUB_fewshot_cropped/Proto/Conv-4_1-shot

For general few-shot classification on ImageNet variants, we provide code for FRN pre-training and subsequent episodic fine-tuning in the corresponding subfolders in experiments.

For example, to train FRN on mini-ImageNet, run the following command lines:

# first run pre-training
cd experiments/mini-ImageNet/FRN/ResNet-12_pretrain

# then run episodic fine-tuning
cd experiments/mini-ImageNet/FRN/ResNet-12_finetune

Pre-training is usually very slow, so we also provide pre-trained FRN model weights for each general few-shot dataset at this Google Drive Link. You can download the network weights and run the fine-tuning script directly, without pre-training from scratch. Directions for this are in the following section.

All training scripts log training and validation accuracies in both the std output and a generated *.log file. These logs can be visualized via tensorboard. The tensorboard summary is located in the log_* folder. The model snapshot with the current best validation performance is saved as model_*.pth.

After training concludes, the test accuracy and 95% confidence interval are logged in the std output and the *.log file. To re-evaluate a trained model, run the evaluation script after setting its internal model_path variable to the saved *.pth model you want to evaluate.
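The reported interval is the standard normal-approximation 95% confidence interval over per-episode accuracies. A small stand-alone sketch of the computation (our own helper, not part of the repository):

```python
import math
import statistics

def mean_confidence_interval(accs, z=1.96):
    """Return (mean, half_width): the mean per-episode accuracy and the
    half-width of the normal-approximation 95% confidence interval,
    i.e. the value reported as "mean +/- interval" in the logs."""
    m = statistics.mean(accs)
    # statistics.stdev uses the sample (n-1) standard deviation
    half = z * statistics.stdev(accs) / math.sqrt(len(accs))
    return m, half
```

For example, `mean_confidence_interval(per_episode_accs)` over the usual batch of test episodes yields the two numbers printed at the end of evaluation.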

Trained model weights

We provide trained model weights for all FRN and Proto models with a ResNet-12 network backbone. You can download these either manually or automatically:

  • Download the tar file from this Google Drive Link and extract it into the trained_model_weights/ folder.
  • Use the provided shell script to download and extract the models automatically:
    cd trained_model_weights/

The directory structure for trained_model_weights/ mirrors experiments/. For example, the trained model.pth for 5-shot ResNet-12 Proto on raw image CUB is located at the following path:


For ImageNet variants, we provide both pre-trained and final (fine-tuned) weights. For example, the two sets of FRN weights model.pth for tiered-ImageNet_DeepEMD are located at the following paths:

# pre-trained model weights

# final (fine-tuned) model weights 

You can evaluate these trained models by changing the value of model_path in the corresponding files. For example, to evaluate our final FRN model on mini-ImageNet, navigate to the following file:


then change the value of model_path by overwriting the following code line (line 16):


then run the evaluation script with python in the command line, and you should get the final evaluation results in the std output.

Every set of model weights contained in the folder trained_model_weights/ has this option for model_path available as a comment within the corresponding file, as in the example above.

Fine-tuning with pre-trained model weights

You can also skip the long pre-training stage by directly fine-tuning the downloaded pre-trained FRN model weights episodically. For example, to fine-tune the pre-trained FRN model on mini-ImageNet, navigate to the following file:


change the value of pretrained_model_path by overwriting the following code line (line 33):


then run the shell script in the command line, and the fine-tuning process will start.

Every set of pre-trained weights contained in the folder trained_model_weights/ has this option for pretrained_model_path available as a comment within the corresponding file, as in the example above.

Selected few-shot classification results

Here we quote some performance comparisons from our paper on CUB, mini-ImageNet, tiered-ImageNet and mini-ImageNet → CUB.


We have tried our best to verify the correctness of our released data, code, and trained model weights. However, there are a large number of experiment settings, all of which have been extracted and reorganized from our original codebase, so there may be some undetected bugs or errors in the current release. If you encounter any issues or have questions about using this code, please feel free to contact us via email.