
Two time-scale update rule for training GANs

This repository contains code accompanying the paper GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium.

Fréchet Inception Distance (FID)

The FID is the performance measure used to evaluate the experiments in the paper. A detailed description can be found in the experiment section of the paper as well as in Section A1 of the appendix.

In short: The Fréchet distance between two multivariate Gaussians X_1 ~ N(mu_1, C_1) and X_2 ~ N(mu_2, C_2) is

                   d^2 = ||mu_1 - mu_2||^2 + Tr(C_1 + C_2 - 2*sqrt(C_1*C_2)).

The FID is calculated by assuming that X_1 and X_2 are the activations of the pool_3 coding layer of the Inception model (see below) for generated samples and real-world samples, respectively. mu_n is the mean and C_n the covariance of the activations of the coding layer over all real-world or generated samples.
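The formula above can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration of the Fréchet distance between two Gaussians, not the repository's own implementation; the function name is ours.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, cov1, mu2, cov2):
    """d^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*sqrt(C1*C2))."""
    diff = mu1 - mu2
    # Matrix square root of the product of the two covariances.
    covmean = linalg.sqrtm(cov1 @ cov2)
    if np.iscomplexobj(covmean):
        # Discard small imaginary parts caused by numerical error.
        covmean = covmean.real
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)

# Identical Gaussians have distance ~0.
mu = np.zeros(4)
cov = np.eye(4)
print(frechet_distance(mu, cov, mu, cov))  # ~0.0
```

In practice mu and cov would be the mean and covariance of pool_3 activations over real and generated samples, and the two calls would use different statistics.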

IMPORTANT: The number of samples used to calculate the Gaussian statistics (mean and covariance) should be greater than the dimension of the coding layer (here 2048 for the Inception pool_3 layer). Otherwise the covariance matrix is not of full rank, which results in complex numbers and NaNs when calculating the matrix square root.

We recommend using a minimum sample size of 10,000 to calculate the FID; otherwise the true FID of the generator is underestimated.
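The rank problem described above is easy to reproduce. The sketch below uses a small stand-in dimension instead of the real 2048-dim pool_3 layer: with fewer samples than dimensions, the sample covariance is necessarily rank deficient.

```python
import numpy as np

dim = 64   # small stand-in for the 2048-dim Inception pool_3 layer
n = 20     # fewer samples than dimensions

rng = np.random.default_rng(0)
acts = rng.standard_normal((n, dim))   # fake "activations"
cov = np.cov(acts, rowvar=False)

# With n < dim the sample covariance has rank at most n - 1
# (one degree of freedom is lost to the mean), so its matrix
# square root picks up complex entries.
print(np.linalg.matrix_rank(cov))  # 19
```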

Compatibility notice

Previous versions of this repository contained two implementations for calculating the FID, an "unbatched" and a "batched" version. The "unbatched" version contained a bug and should no longer be used. If you downloaded this code previously, please update to the new version immediately.

Provided Code

Requirements: TF 1.1+, Python 3.x

This file contains the implementation of all functions necessary to calculate the FID. It can be used either as a Python module imported into your own code, or as a standalone script to calculate the FID between precalculated (training set) statistics and a directory of images, or between two directories of images.

To compare a directory of images with precalculated statistics (e.g., the precalculated statistics linked below), use: /path/to/images /path/to/precalculated_stats.npz

To compare two directories of images, use: /path/to/images /path/to/other_images

See --help for more details.

Example code showing how to use the FID module in your own Python scripts.

Example code to show how to calculate and save training set statistics.
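Saving training-set statistics amounts to storing the mean and covariance of the activations in an .npz file. The sketch below uses random toy data in place of real pool_3 activations, and the `mu`/`sigma` key names are an assumption about the expected file layout.

```python
import numpy as np

# Toy stand-in for pool_3 activations of the training images
# (real shape would be n_samples x 2048).
rng = np.random.default_rng(0)
activations = rng.standard_normal((100, 8))

mu = np.mean(activations, axis=0)
sigma = np.cov(activations, rowvar=False)

# Save statistics; key names `mu`/`sigma` are assumed here.
np.savez("fid_stats.npz", mu=mu, sigma=sigma)

# Reload to verify the stored arrays.
stats = np.load("fid_stats.npz")
print(stats["mu"].shape, stats["sigma"].shape)  # (8,) (8, 8)
```

A precalculated .npz of this form can then be passed to the standalone script in place of a second image directory.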


Improved WGAN (WGAN-GP) implementation, forked with added FID evaluation for the image model and switchable TTUR/original settings. The language model adds JSD TensorBoard logging and switchable TTUR/original settings.

Precalculated Statistics for FID calculation

Precalculated statistics for the datasets are provided at:

Additional Links

For FID evaluation, download the Inception model from

The cropped CelebA dataset can be downloaded here

To download the LSUN bedroom dataset go to:

The 64x64 downsampled ImageNet training and validation datasets can be found here
