This repository is the official implementation of Feature Fusion for Online Mutual Knowledge Distillation (FFL). The source code reproduces the results reported in Table 1 of the original paper.
To install the requirements, create an environment from environment.yml (refer to the conda documentation for details).
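The standard conda workflow for an environment.yml file looks like this (the environment name to activate depends on the `name:` field inside environment.yml, so "ffl" below is only a placeholder):

```shell
# Create the environment from the repository's environment.yml
conda env create -f environment.yml
# Activate it; replace "ffl" with the name declared in environment.yml
conda activate ffl
```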
train_FFL.py trains FFL with Mutual Knowledge Distillation (MKD). To train the model(s) in the paper, run this command:
# The results from the original paper can be reproduced by running:
python train_FFL.py --lr 0.1 --cu_num 0 --depth 32
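For reference, the mutual knowledge distillation step trains the sub-networks and the fused classifier to mimic each other's softened predictions. A minimal sketch of such a distillation loss is below; the function name, temperature default, and the T² scaling convention are illustrative assumptions, not the repository's exact implementation:

```python
import torch
import torch.nn.functional as F

def mkd_loss(student_logits, teacher_logits, T=3.0):
    """KL divergence between temperature-softened distributions.

    Both arguments are raw logits of shape (batch, num_classes).
    T is a hypothetical temperature; the T**2 factor is the usual
    scaling that keeps gradient magnitudes comparable across T.
    """
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T ** 2)

# In mutual distillation this loss is applied in both directions,
# e.g. mkd_loss(sub_logits, fused_logits.detach()) and vice versa,
# alongside the ordinary cross-entropy on the ground-truth labels.
```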
If this repository is useful for your research, please cite:
@article{kim2019feature,
  title={Feature fusion for online mutual knowledge distillation},
  author={Kim, Jangho and Hyun, Minsung and Chung, Inseop and Kwak, Nojun},
  journal={arXiv preprint arXiv:1904.09058},
  year={2019}
}