# Interpolating Convolutional Neural Networks Using Batch Normalization
This repository provides training and utility scripts to reproduce the results reported in [1]. For brevity, this readme only covers the most important argument options. Users who wish to use other options, or to adapt the techniques implemented here for their own work, can reuse our modules and refer to the main scripts (`experiment1.py` and `experiment2.py`).
- Download ImageNet32.
- Extract both the training and validation archives.
- Download `map_clsloc.txt` from here and save it to the data directory.
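ImageNet32 batches are commonly distributed as CIFAR-style Python pickles with flat `data` rows and integer `labels`; the loader below is a minimal sketch under that assumption (the exact filenames and dictionary keys the training scripts expect may differ).

```python
import pickle
import numpy as np

def load_imagenet32_batch(path):
    """Load one ImageNet32 batch file, assuming a CIFAR-style pickle
    with flat uint8 RGB rows under 'data' and class ids under 'labels'."""
    with open(path, "rb") as f:
        batch = pickle.load(f)
    data = np.asarray(batch["data"], dtype=np.uint8)
    # Each row is a flattened 3x32x32 image; reshape to NCHW.
    images = data.reshape(-1, 3, 32, 32)
    labels = np.asarray(batch["labels"])
    return images, labels
```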
## Experiment 1 (Learning CIFAR10 from ImageNet Template)
Simply run the following script to begin training the models reported in Table 1 of [1]:

```
python experiment1.py
```
This runs all experiments and will take some time to complete. By default, the script supports resuming: models that have already been trained are not overwritten (mid-training checkpointing is not supported yet).
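The resume behaviour described above amounts to a skip-if-exists check per experiment; the helper below is an illustrative sketch, not the script's actual logic (the `models/` directory and `.pt` suffix are assumptions).

```python
import os

def maybe_train(experiment, train_fn, model_dir="models"):
    """Run train_fn only if no saved model exists for this experiment.
    Hypothetical helper; the real scripts may use different paths."""
    os.makedirs(model_dir, exist_ok=True)
    path = os.path.join(model_dir, experiment + ".pt")
    if os.path.exists(path):
        # Model already trained: skip, do not overwrite (the "resume").
        return path
    train_fn(path)  # expected to write the trained model to `path`
    return path
```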
If you wish to run only a subset of the experiments, pass `--experiments` as an argument, e.g.

```
python experiment1.py --experiments last full bn
```

This will only train the models for "Last", "Full", and "BN" in Table 1.
Here are all the valid keywords for `--experiments`:

```
last full bn combn pcbn bn_random combn_random pcbn_random
```
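A plausible reconstruction of this command-line interface with `argparse` (the defaults and exact flag handling in `experiment1.py` may differ):

```python
import argparse

ALL_EXPERIMENTS = ["last", "full", "bn", "combn", "pcbn",
                   "bn_random", "combn_random", "pcbn_random"]

def build_parser():
    # Hypothetical sketch of the script's CLI, not the actual source.
    parser = argparse.ArgumentParser()
    parser.add_argument("--experiments", nargs="+", choices=ALL_EXPERIMENTS,
                        default=ALL_EXPERIMENTS,
                        help="subset of Table 1 experiments to run")
    parser.add_argument("--evaluate", action="store_true",
                        help="print test accuracies instead of training")
    return parser
```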
However, note that `pcbn` requires the models for `bn` to have finished training, and similarly for their `_random` counterparts.
Running the following script will print out the test accuracies of all models that have finished training:

```
python experiment1.py --evaluate
```
As before, one can select which results to print:

```
python experiment1.py --evaluate --experiments last full bn
```

This will only print the results for "Last", "Full", and "BN".
## Experiment 2 (Few-shot Learning ImageNet32 from CIFAR10 Template)
The following script executes the necessary steps to reproduce the results in Table 2 of [1]:

```
python experiment2.py
```
By default, the script performs 1-shot experiments. To change this, use the `--shot` argument, e.g.

```
python experiment2.py --shot 5
```

This causes the script to perform 5-shot experiments.
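An n-shot experiment trains on only n labelled examples per class; a minimal sampler sketch of that idea (the actual scripts' sampling and seeding will differ):

```python
import random
from collections import defaultdict

def sample_n_shot(labels, n, seed=0):
    """Return indices of n examples per class, drawn at random.
    Illustrative only; mirrors the meaning of the --shot option."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    rng = random.Random(seed)
    chosen = []
    for y in sorted(by_class):
        chosen.extend(rng.sample(by_class[y], n))
    return chosen
```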
As before, specifying `--experiments` allows you to choose which experiments will be run. The full list is:

```
last full bn combn_loss_3 combn_loss_5 combn_loss_10 combn_accuracy_3 combn_accuracy_5 combn_accuracy_10 combn_threshold_0.75 pcbn_loss_3 pcbn_loss_5 pcbn_loss_10 pcbn_accuracy_3 pcbn_accuracy_5 pcbn_accuracy_10 pcbn_threshold_0.75 sgm l2
```
Note that the script attempts to parse any experiment name that contains `combn` or `pcbn` as a substring. Such an experiment string should follow the format `[module]_[component_selection]_[num_components]`. For example, `combn_loss_3` means that 3 BN components will be selected by few-shot loss and combined using the `combn` module. This allows quick testing of different configurations without modifying the source code.
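The `[module]_[component_selection]_[num_components]` convention can be split apart with a small helper; this is a sketch of the format described above, not the script's actual parser.

```python
def parse_experiment(name):
    """Split e.g. 'combn_loss_3' into (module, selection, value).
    For 'threshold' the trailing number is an accuracy threshold, so it
    is kept as a float; otherwise it is a component count (int)."""
    module, selection, value = name.split("_")
    number = float(value) if selection == "threshold" else int(value)
    return module, selection, number
```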
Note that the only valid values for `[component_selection]` are `loss` (few-shot loss), `accuracy` (few-shot accuracy), and `threshold` (max-shot accuracy). If `threshold` is specified, `[num_components]` instead specifies the accuracy threshold used for component selection.
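Under the `threshold` mode, component selection can be pictured as keeping every BN component whose accuracy clears the threshold, while `accuracy` mode keeps a fixed number of top-scoring components; the helper below is a sketch under that reading (the names and tie-breaking are assumptions).

```python
def select_components(accuracies, selection, value):
    """Pick BN component ids from a {component_id: accuracy} dict.
    'threshold' keeps all components with accuracy >= value; 'accuracy'
    keeps the `value` highest-scoring components. Hypothetical helper
    matching the README's description, not the repository's code."""
    if selection == "threshold":
        return sorted(k for k, acc in accuracies.items() if acc >= value)
    ranked = sorted(accuracies, key=accuracies.get, reverse=True)
    return sorted(ranked[:value])
```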
Running the following script will print the mean validation accuracy, per experiment, over all models that have finished training:

```
python experiment2.py --evaluate
```
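The per-experiment mean reported by `--evaluate` amounts to averaging validation accuracies over each experiment's finished models; a minimal sketch (the `results` data structure is an assumption):

```python
def mean_accuracy_per_experiment(results):
    """Average validation accuracy over each experiment's finished runs.
    `results` maps experiment name -> list of per-model accuracies;
    experiments with no finished models are skipped."""
    return {name: sum(accs) / len(accs)
            for name, accs in results.items() if accs}
```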
[1] G. Wesley P. Data, Kirjon Ngu, David W. Murray, Victor A. Prisacariu, "Interpolating Convolutional Neural Networks Using Batch Normalization," in European Conference on Computer Vision (ECCV), 2018.