This repository contains the source code to train SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data (IJCB 2023). You can access the arXiv version here.
The installation instructions are based on conda and a Linux system, so please install conda before continuing. Download the source code of this repository and unpack it, then create a conda environment with the following commands:
$ cd synthdistill
# create the environment
$ conda env create -f environment.yml
# activate the environment
$ conda activate synthdistill
In our knowledge distillation framework, we use StyleGAN as a pretrained face generator network. Therefore, you need to clone the StyleGAN repository and download its model weights:
$ git clone https://github.com/NVlabs/stylegan3
NOTE: To download the pretrained StyleGAN, please visit the official page and download the stylegan2-ffhq-256x256.pkl model.
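The generator can then be used from Python through the stylegan3 code base. The following snippet (adapted from the StyleGAN3 README; it assumes the cloned stylegan3 directory is on your PYTHONPATH so that dnnlib and torch_utils can be imported) shows how to load the pickle and sample a batch of synthetic faces:

import pickle
import torch

# load the pretrained generator (exponential-moving-average weights)
with open('stylegan2-ffhq-256x256.pkl', 'rb') as f:
    G = pickle.load(f)['G_ema'].cuda().eval()

# sample random latent codes and synthesize faces
z = torch.randn([8, G.z_dim], device='cuda')
imgs = G(z, None)  # FFHQ is unconditional, so the class label c is None
# imgs has shape [8, 3, 256, 256] with values roughly in [-1, 1]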
To train models, you can use the following command:
$ python train.py --model TinyFaR_A --resampling_coef 1.0
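Conceptually, each training step samples fresh synthetic faces from the generator and trains the lightweight student to reproduce the teacher's embeddings on them; the --resampling_coef flag controls the paper's dynamic resampling of difficult latents. The sketch below illustrates one plain distillation step only: the names student and teacher and the choice of MSE loss are illustrative assumptions rather than the repository's exact API, and the dynamic resampling is omitted.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, G, optimizer, batch_size=32, device='cuda'):
    # sample a fresh batch of synthetic faces from the frozen generator
    z = torch.randn([batch_size, G.z_dim], device=device)
    with torch.no_grad():
        imgs = G(z, None)
        target = teacher(imgs)       # teacher embeddings are the distillation targets
    pred = student(imgs)             # student embeddings
    loss = F.mse_loss(pred, target)  # match the teacher in embedding space
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()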
Checkpoints of the trained models (TinyFaR-A, TinyFaR-B, and TinyFaR-C) using SynthDistill are available in the official repository.
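Once downloaded, a checkpoint can be loaded for feature extraction. The sketch below is hypothetical: the TinyFaR_A import path, the checkpoint file name, and the 112x112 input size are assumptions; see the official repository for the exact loading code.

import torch
from models import TinyFaR_A  # hypothetical import path

model = TinyFaR_A()
model.load_state_dict(torch.load('tinyfar_a.ckpt', map_location='cpu'))
model.eval()

face = torch.rand([1, 3, 112, 112])  # placeholder for an aligned face crop
with torch.no_grad():
    embedding = model(face)  # identity embedding used for face matching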
If you use this repository, please cite the following paper, published in the proceedings of the 2023 IEEE International Joint Conference on Biometrics (IJCB 2023). A pre-print of the paper is available on arXiv, and the complete source code for reproducing all experiments in the paper (including evaluation instructions) is publicly available in the official repository.
@inproceedings{synthdistill_IJCB2023,
  title={SynthDistill: Face Recognition with Knowledge Distillation from Synthetic Data},
  author={Otroshi Shahreza, Hatef and George, Anjith and Marcel, S{\'e}bastien},
  booktitle={2023 IEEE International Joint Conference on Biometrics (IJCB)},
  year={2023},
  organization={IEEE}
}