# Compare GAN code

This repository contains the code used in the papers "Are GANs Created Equal? A Large-Scale Study" (https://arxiv.org/abs/1711.10337) and "The GAN Landscape: Losses, Architectures, Regularization, and Normalization" (https://arxiv.org/abs/1807.04720).

For the version used only in the first paper, see the `v1` branch of this repository.

## Pre-trained models

The pre-trained models are available on TensorFlow Hub. Please see this colab for an example of how to use them.

## Best hyperparameters

This repository also contains the best hyperparameter values for different combinations of models, regularizations, and penalties. You can find them in the `generate_tasks_lib.py` file and train with them using `--experiment=best_models_sndcgan`.
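For example, generating and running the best-model tasks can be sketched with the two binaries described in the Running section below (the `--experiment` value comes from this README; the work directory is an arbitrary choice):

```shell
# Generate the task files for the best SNDCGAN models.
compare_gan_generate_tasks --workdir=/tmp/best_models --experiment=best_models_sndcgan

# Train and evaluate the first generated task.
compare_gan_run_one_task --workdir=/tmp/best_models --task_num=0 --dataset_root=/tmp/datasets
```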

## Installation

To install, run:

```shell
python -m pip install -e . --user
```

After installing, make sure to run

```shell
compare_gan_prepare_datasets.sh
```

This script downloads all the necessary datasets and frozen TF graphs. By default it stores them in `/tmp/datasets`.

WARNING: by default this script only downloads and installs small datasets; it does not download CelebA-HQ or LSUN bedrooms.

  • LSUN bedrooms dataset: if you want to install lsun-bedrooms, you need to run `t2t-datagen` yourself (this dataset will take a couple of hours to download and unpack).

  • CelebA-HQ dataset: currently it is not available in tensor2tensor. Please follow the instructions in the ProgressiveGAN GitHub repository to prepare it.

## Running

compare_gan has two binaries:

  • `generate_tasks` - creates a list of files with parameters to execute.
  • `run_one_task` - executes a given task (both training and evaluation) and stores the results in a CSV file.
```shell
# Create tasks for experiment "test" in directory /tmp/results.
# See "src/generate_tasks_lib.py" for other possible experiments.
compare_gan_generate_tasks --workdir=/tmp/results --experiment=test

# Run task 0 (training and eval)
compare_gan_run_one_task --workdir=/tmp/results --task_num=0 --dataset_root=/tmp/datasets

# Run task 1 (training and eval)
compare_gan_run_one_task --workdir=/tmp/results --task_num=1 --dataset_root=/tmp/datasets
```

Results (FID and inception scores for checkpoints) are stored in `/tmp/results/TASK_NUM/scores.csv`.
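Once a run finishes, you can pick out the checkpoint with the best (lowest) FID directly from the CSV. This is a minimal sketch: the column layout and values below are hypothetical (inspect the header of your own `scores.csv` first), and lower FID is taken as better per the metric's definition.

```shell
# Create a small example scores.csv with an assumed column layout.
cat > /tmp/scores_example.csv <<'EOF'
checkpoint,fid,inception_score
model.ckpt-10000,45.2,5.1
model.ckpt-20000,38.7,5.9
model.ckpt-30000,41.3,5.6
EOF

# Skip the header, sort numerically on the FID column, keep the best row.
tail -n +2 /tmp/scores_example.csv | sort -t, -k2 -n | head -n 1
```

With the example data above this prints the `model.ckpt-20000` row, since 38.7 is the lowest FID.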