An easily usable, performant and extensible cross-platform interface for all your GAN needs.
Training Generative Adversarial Networks (GANs) is a tedious and difficult task. This is primarily because GAN training is still not well understood, even after a great deal of research into creating, understanding and implementing new methods for regularizing and improving it (such as virtual batch normalization, spectral normalization or boundary equilibria).
In an effort to improve and accelerate research in GAN training, this repository collects state-of-the-art approaches and best practices in a modular fashion, enabling users to quickly switch between different methods and find out what works best for their current situation. For that, I provide a Jupyter Notebook. A pure Python version can be generated from the notebook, in which parameters are supplied through command line arguments, making automated experiments much easier.
This framework resulted in an analysis of combinations of normalization functions for GANs. In one day, 72 GANs were trained on 36 combinations of normalization functions (a small and a large GAN for each combination) and the Fréchet Inception Distance (FID) of their generated images was compared. The report is available here.
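The FID used for that comparison is the Fréchet distance between two Gaussians fitted to feature activations of real and generated images. A minimal NumPy sketch of that distance, on made-up feature vectors rather than actual Inception activations (the function name and toy data are illustrative only):

```python
import numpy as np

def frechet_distance(feat_a, feat_b):
    """Fréchet distance between Gaussians fitted to two feature sets."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    # Tr(sqrtm(cov_a @ cov_b)) equals the sum of the square roots of the
    # eigenvalues of cov_a @ cov_b, which avoids needing scipy's sqrtm.
    eigvals = np.linalg.eigvals(cov_a @ cov_b)
    tr_sqrt = np.sqrt(np.clip(eigvals.real, 0.0, None)).sum()
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a) + np.trace(cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))           # stand-in "real" features
close = rng.normal(size=(1000, 8))          # same distribution
far = rng.normal(3.0, 1.0, size=(1000, 8))  # mean-shifted distribution

same = frechet_distance(real, close)
shifted = frechet_distance(real, far)
```

As expected, the distance between two samples from the same distribution is near zero, while the mean-shifted sample scores far higher.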
Only recent versions of Python 3 are supported.
Supported framework:
- PyTorch
Other:
- Tried and tested normalizations for GANs (including a virtual batch normalization layer)
- Experimental setup for testing and saving many different parameters
- Many GAN hacks
- Generate Python source code from the notebook
- Cross platform
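To illustrate the first point, virtual batch normalization (from "Improved Techniques for Training GANs") normalizes each example with statistics computed from a fixed reference batch plus the example itself, so the output for one example does not depend on the rest of its minibatch. A NumPy sketch of the idea; the class name and the per-example loop are mine, not the repository's actual layer:

```python
import numpy as np

class VirtualBatchNorm:
    """NumPy sketch of virtual batch normalization (Salimans et al., 2016)."""

    def __init__(self, reference_batch, eps=1e-5):
        self.ref = np.asarray(reference_batch, dtype=float)  # fixed at start
        self.eps = eps

    def __call__(self, x):
        r = self.ref.shape[0]
        w = 1.0 / (r + 1)  # weight of the current example among r + 1 samples
        out = np.empty((len(x), self.ref.shape[1]))
        for i, xi in enumerate(x):
            # Statistics over the reference batch plus this one example.
            mean = w * xi + (1 - w) * self.ref.mean(axis=0)
            var = (w * (xi - mean) ** 2
                   + (1 - w) * ((self.ref - mean) ** 2).mean(axis=0))
            out[i] = (xi - mean) / np.sqrt(var + self.eps)
        return out

rng = np.random.default_rng(1)
vbn = VirtualBatchNorm(rng.normal(size=(64, 4)))
batch = rng.normal(size=(8, 4))
normalized = vbn(batch)
```

The defining property: normalizing an example alone gives the same result as normalizing it inside a larger minibatch.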
This repository offers working multiprocessing on multiple GPUs and CPUs as well as a collection of smart parameters that are documented and easy to use. In the end, you get an understandable environment for GAN training that works out of the box.
You will need NumPy, Matplotlib and PyTorch (including Torchvision). If you want to use the Jupyter notebook, install that as well (otherwise generate the source code as described below).
To save animations of your GAN training, you may want either FFmpeg (mp4) or Pillow (gif).
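If you are unsure which of the two you have, Matplotlib can report at run time which animation writers are usable; a small sketch (the variable names are illustrative):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; no display required
from matplotlib import animation

# Matplotlib registers the 'ffmpeg' writer (mp4) only when the FFmpeg
# binary is found; the Pillow-based 'pillow' writer (gif) is the fallback.
writer = 'ffmpeg' if animation.writers.is_available('ffmpeg') else 'pillow'
extension = '.mp4' if writer == 'ffmpeg' else '.gif'
```

The chosen name can then be passed to `Animation.save(filename, writer=writer)`.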
Using Anaconda, everything except for PyTorch and FFmpeg can be installed in one line:
conda install numpy matplotlib jupyter pillow
Due to the differences between CPU and GPU packages and CUDA versions for PyTorch, I do not offer installation instructions or a setup file for it. Please look this up yourself using the following links.
Installation instructions for:
You can generate source code from the Jupyter notebook. Every parameter will be passable as a command line argument, with the default values being the ones currently set in the notebook.
For the parameters, there are some rules you need to follow to generate source code. These are listed in src/ipynb_to_py.py.
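To illustrate the general idea (not the actual rules from src/ipynb_to_py.py), a simple notebook assignment such as `batch_size = 64` can become an argparse flag with that value as its default:

```python
import argparse
import ast

def params_to_parser(source):
    """Turn simple ``name = literal`` assignments into argparse flags.

    Purely illustrative: the real conversion rules live in
    src/ipynb_to_py.py; this only sketches the concept.
    """
    parser = argparse.ArgumentParser()
    for node in ast.parse(source).body:
        if (isinstance(node, ast.Assign) and len(node.targets) == 1
                and isinstance(node.targets[0], ast.Name)):
            try:
                default = ast.literal_eval(node.value)
            except ValueError:
                continue  # right-hand side is not a plain literal
            parser.add_argument('--' + node.targets[0].id,
                                type=type(default), default=default)
    return parser

# A notebook cell like this ...
cell = "batch_size = 64\nlearning_rate = 0.0002\n"
# ... yields flags whose defaults are the notebook values:
args = params_to_parser(cell).parse_args(['--batch_size', '128'])
```

Here `--batch_size 128` overrides the notebook default, while `learning_rate` keeps its value of 0.0002.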
To generate, simply execute the following:
./src/ipynb_to_py.py
If you then want to start the experiments, modify and execute src/run_experiments.py. Edit the test_params dictionary in that file and enter the following in your command line of choice:
./src/run_experiments.py --debug
That only starts a dry run. To start the tests for real, omit the --debug argument and execute the command again to see your computer go to work.
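The exact shape of test_params is defined in src/run_experiments.py; as a rough illustration (with made-up parameter names, not the repository's real ones), a dictionary mapping each parameter to candidate values can be expanded into one run per combination:

```python
from itertools import product

# Hypothetical grid: each key maps to the values to try for that parameter.
test_params = {
    'normalization': ['batch', 'spectral', 'virtual_batch'],
    'learning_rate': [0.0002, 0.001],
}

def expand(grid):
    """Yield one parameter dictionary per combination in the grid."""
    keys = sorted(grid)
    for values in product(*(grid[key] for key in keys)):
        yield dict(zip(keys, values))

runs = list(expand(test_params))  # 3 normalizations * 2 rates = 6 runs
```

Each resulting dictionary describes one experiment to launch.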
A --distributed flag is planned that will start each test on a separately created cloud machine. That, however, is still a TODO and will only support one cloud service provider out of the box.
The system was set up using CelebA, but for the experiments, FFHQ will be used.
At the moment, the notebook does not adhere to the PEP 8 line length guideline. I feel that notebooks are in most cases supposed to be viewed in fullscreen, and readability, especially of the documentation, would suffer otherwise. Sorry! If there are enough complaints or I change my mind, I will modify the code accordingly.
Sorry for the lack of documentation; that will change with time.
Methods to implement (always growing):
- feature matching
- minibatch discrimination
- historical averaging
- boundary equilibrium GANs
- some missing stuff from ganhacks
Evaluation methods:
- semi-supervised learning
Other:
- more documentation of methods
- automatic experiments for parameter combinations
- convolutional layer visualization
- distributed computing version
- grid search supporting cloud computing providers
- more datasets
- data other than images (sound, 3D objects, ...)
- arbitrary data (anything else)
- CelebA-HQ generation
- perhaps more frameworks
- Julia version
This is a collection of the papers used for this project.
- Generative Adversarial Nets
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
- Improved Techniques for Training GANs
- BEGAN: Boundary Equilibrium Generative Adversarial Networks
- Spectral Normalization for Generative Adversarial Networks
- StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
- Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Conditional Generative Adversarial Nets
- Large Scale GAN Training For High Fidelity Natural Image Synthesis
- PyTorch DCGAN Tutorial (on which much of this is based)
- ganhacks