This repository contains all experiments presented in our publication *Can Functional Transfer Methods Capture Simple Inductive Biases?*
The code underlying the experiments described in this repository can be found in `orbit_transfer`.
For details on how to run the experiments, please see `nntransfer_recipes`.
The experiments further require the installation of:
The configurations for the following experiments can be found in `orbit_transfer_recipes/_2021_09_24_aistats`:
- `mnist_1d_hypersearch.py`: Grid search for initial hyperparameters across the different transfer methods and shifts in the range [0, 30] (see the shift sketch after this list).
- `mnist_1d_with_pooling_hypersearch.py`: Same as above, but with a student network that includes a pooling layer.
- `mnist_1d_shift.py`: Using the hyperparameters found in the grid search, trains models across all possible shift settings.
- `mnist_1d_with_pooling_shift.py`: Same as above, but with a student network that includes a pooling layer.
- `mnist_2d_cnn_linear.py`: Comparison of different functional transfer methods on centered and translated MNIST (see the dataset and distillation sketches after this list).
- `mnist_2d_resnet_vit.py`: Same as above, but transferring between a ResNet18 and a small ViT.
- `mnist_2d_cnn_linear_loss_ablation.py`: Orbit transfer loss ablation.
- `mnist_2d_rotation.py`: Transferring from a rotation-equivariant teacher to an MLP.
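
To make the shift setting concrete, here is a minimal, hypothetical sketch of how translated variants of a single MNIST-1D example could be generated. The signal length of 40 matches the standard MNIST-1D format and the shift range follows the grid-search description above, but the cyclic shift via `np.roll` and the `shift_signal` helper are assumptions, not the repository's actual code:

```python
import numpy as np

# Hypothetical helper: cyclically translate a 1-D signal by `shift` positions.
# A cyclic (wrap-around) shift is an assumption; the experiments may pad instead.
def shift_signal(signal: np.ndarray, shift: int) -> np.ndarray:
    return np.roll(signal, shift)

rng = np.random.default_rng(0)
signal = rng.normal(size=40)  # stand-in for one MNIST-1D example (length 40)

# One shifted variant per shift setting in [0, 30], as in the grid search above.
shifted_variants = [shift_signal(signal, s) for s in range(31)]
print(len(shifted_variants), shifted_variants[5].shape)  # 31 (40,)
```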
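Similarly, the centered vs. translated MNIST comparison can be illustrated with a small torchvision sketch, where the translated variant places each digit at a random offset via an affine translation. The translation fraction of 0.3 is an assumption; the experiments define their own dataset loaders in `orbit_transfer`:

```python
from torchvision import datasets, transforms

# Centered MNIST: digits stay at their default position.
centered = datasets.MNIST(
    root="./data", train=True, download=True,
    transform=transforms.ToTensor(),
)

# Translated MNIST: each digit is shifted by a random affine translation.
# The fraction (0.3, 0.3) of image size is an assumed value for illustration.
translated = datasets.MNIST(
    root="./data", train=True, download=True,
    transform=transforms.Compose([
        transforms.RandomAffine(degrees=0, translate=(0.3, 0.3)),
        transforms.ToTensor(),
    ]),
)
```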
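Finally, as a rough orientation for what a functional transfer method looks like, below is a minimal distillation-style training step in which the student is trained to match the teacher's output distribution on the same inputs. This is a generic sketch of the family of methods being compared, not the paper's orbit transfer loss; the function name, temperature, and loss weighting are assumptions:

```python
import torch
import torch.nn.functional as F

# Generic output-matching (distillation) step: the student's softened logits
# are pulled toward the teacher's via a KL divergence. This only illustrates
# the functional-transfer family; the orbit transfer loss differs.
def distillation_step(student, teacher, x, optimizer, temperature=4.0):
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2  # standard rescaling so gradients stay comparable
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```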
If you find a bug, please open an issue or contact any of the contributors.