This repository contains code for reproducing experiments on unconditional image generation with MMD GANs and other benchmark GAN models.
If you're only interested in the new KID metric, check out
- Uses a gradient penalty analogous to WGAN-GP (Gulrajani et al., Improved Training of Wasserstein GANs).
- Evaluates models using three different metrics: Inception Score, Fréchet Inception Distance (FID), and the proposed Kernel Inception Distance (KID).
- Adaptively decreases the learning rate using a three-sample test. If KID does not improve (compared to the evaluation 20k steps earlier) three times in a row, the learning rate is halved.
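To make the KID metric above concrete: KID is the squared MMD between Inception features of real and generated images, computed with the polynomial kernel k(x, y) = (⟨x, y⟩/d + 1)³, where d is the feature dimension. A minimal NumPy sketch of the unbiased estimator (the function name and defaults here are illustrative, not this repository's API; the actual code also averages over subsets of the features):

```python
import numpy as np

def polynomial_mmd2(feats_x, feats_y, degree=3, gamma=None, coef0=1.0):
    """Unbiased MMD^2 estimate with the polynomial kernel
    k(x, y) = (gamma * <x, y> + coef0) ** degree.
    With gamma = 1/dim, coef0 = 1, degree = 3 this is the kernel
    used to define KID. feats_x, feats_y: (n_samples, dim) arrays
    of Inception features for real and generated images."""
    m, d = feats_x.shape
    n = feats_y.shape[0]
    if gamma is None:
        gamma = 1.0 / d
    k_xx = (gamma * feats_x @ feats_x.T + coef0) ** degree
    k_yy = (gamma * feats_y @ feats_y.T + coef0) ** degree
    k_xy = (gamma * feats_x @ feats_y.T + coef0) ** degree
    # Unbiased estimator: exclude the diagonal (self-similarity) terms
    # of the within-sample kernel matrices.
    return ((k_xx.sum() - np.trace(k_xx)) / (m * (m - 1))
            + (k_yy.sum() - np.trace(k_yy)) / (n * (n - 1))
            - 2.0 * k_xy.mean())
```

Unlike FID, this estimator is unbiased, so KID values computed on small sample sets remain comparable; matching distributions give values near zero, while mismatched ones give clearly positive values.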
- python >= 3.6
- tensorflow-gpu >= 1.3
- PIL, lmdb, numpy, matplotlib
- a machine with GPU(s); at least 2 GPUs are needed for experiments with the Celeb-A dataset.
The code works with several common datasets at different resolutions. The experiments include
- 28x28 MNIST,
- 32x32 CIFAR-10,
- 64x64 LSUN Bedrooms,
- 160x160 Celeb-A.
The LSUN, MNIST and Celeb-A datasets can be downloaded using the script.
Running the code
Each of the following scripts launches the training of MMD GAN on the respective dataset:
celeba.sh. To train the benchmark models, change the variable
CRAMER. To train all three models, set
Feel free to contact Mikołaj Bińkowski (
mikbinkowski at gmail.com) with any
questions and issues.