TensorFlow implementation of Generative Adversarial Networks
- This repository contains TensorFlow implementations of GANs, inspired by several other repositories of GANs or generative models (generative-models, tensorflow-generative-model-collections).
- I use this repository to learn about and experiment with various GANs.
- All the GANs are tested on MNIST and CelebA. The architecture of each GAN is the same as, or slightly modified from, the original paper to make it compatible with 28 x 28 and 64 x 64 images.
- Results of the GAN models implemented in this repository are briefly shown on this page. Implementation details and full results of each GAN can be found on the individual page for each model.
Here are my other implementations related to GANs:

Requirements
- Python 3.3+
- TensorFlow 1.10+
- TensorFlow Probability
- imageio 2.4.1+
|Name|Paper|Details|Description|
|:--:|:--:|:--:|:--|
|DCGAN|paper|details|DCGAN improves GAN performance by using a more advanced architecture than the original GAN, including batch normalization, a fully convolutional structure, ReLU and LeakyReLU activations, and the removal of pooling layers.|
|LSGAN|paper|details|LSGAN uses least-squares losses instead of the original cross-entropy losses to pull the generated data closer to the real data.|
|InfoGAN|paper|details|InfoGAN learns disentangled representations from data in a completely unsupervised manner by maximizing the mutual information between a small subset of the input latent codes and the generated data.|
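The difference between the original GAN loss and LSGAN's least-squares loss can be sketched numerically. This is an illustrative plain-Python sketch, not the repository's TensorFlow code, and the function names are hypothetical:

```python
import math

def ce_d_loss(d_real, d_fake):
    # Original GAN discriminator loss on sigmoid outputs in (0, 1):
    # -log D(x) - log(1 - D(G(z)))
    return -math.log(d_real) - math.log(1.0 - d_fake)

def ls_d_loss(d_real, d_fake):
    # LSGAN discriminator loss with target labels 1 (real) and 0 (fake):
    # 0.5 * (D(x) - 1)^2 + 0.5 * D(G(z))^2
    return 0.5 * (d_real - 1.0) ** 2 + 0.5 * d_fake ** 2

# A perfectly classified pair costs nothing under the least-squares loss,
# while an undecided discriminator (0.5 on both) is penalized by both losses.
print(ce_d_loss(0.5, 0.5))
print(ls_d_loss(1.0, 0.0))
```

Because the least-squares loss penalizes outputs by their distance from the target label rather than by log-likelihood, generated samples that are classified correctly but still far from the real data continue to receive a gradient.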
- GAN models are defined in
- The script `examples/gans.py` contains experiments for all the GAN models.
- Download the MNIST dataset from here and the CelebA dataset from here.
- Set up the paths:
  - `MNIST_PATH` is the directory for the MNIST dataset.
  - `CELEBA_PATH` is the directory for the CelebA dataset.
  - `SAVE_PATH` is the directory for saving output images and trained models.
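For example, the three paths might be set like this (the variable names come from this README; the directories are hypothetical placeholders):

```python
# Hypothetical local directories -- adjust these to your own setup.
MNIST_PATH = '/data/mnist/'      # where the MNIST files are stored
CELEBA_PATH = '/data/celeba/'    # where the CelebA images are stored
SAVE_PATH = '/output/gans/'      # where output images and checkpoints go

print(MNIST_PATH, CELEBA_PATH, SAVE_PATH)
```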
Run the script `examples/gans.py` to train the GAN models. Here are all the arguments:
- `--train`: Train the model.
- `--generate`: Randomly sample images from the trained model.
- `--load`: Epoch ID of the pre-trained model to be restored.
- `--gan_type`: Type of GAN for the experiment. Default: `dcgan`. Other options:
- `--dataset`: Dataset used for the experiment. Default: `mnist`. Other options:
- `--zlen`: Length of the input random vector z. Default:
- `--lr`: Initial learning rate. Default:
- `--keep_prob`: Keep probability for dropout. Default:
- `--bsize`: Batch size. Default:
- `--maxepoch`: Maximum number of epochs. Default:
- `--ng`: Number of generator updates per training step. Default:
- `--nd`: Number of discriminator updates per training step. Default:
- `--w_mutual`: Weight of the mutual information loss for InfoGAN. Default:
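The `--nd` and `--ng` flags control how many discriminator and generator updates run in each training step. The resulting schedule can be sketched as follows (a hypothetical helper for illustration, not the repository's actual training loop):

```python
def update_schedule(num_steps, nd=1, ng=1):
    # For each training step, run `nd` discriminator updates
    # followed by `ng` generator updates.
    schedule = []
    for _ in range(num_steps):
        schedule.extend(['D'] * nd)  # discriminator updates
        schedule.extend(['G'] * ng)  # generator updates
    return schedule

# e.g. two steps with --nd 2 --ng 1:
print(update_schedule(2, nd=2, ng=1))  # ['D', 'D', 'G', 'D', 'D', 'G']
```

Training the discriminator more often per step (`--nd` > `--ng`) is a common way to keep it ahead of the generator.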
- Go to `examples/`, then run `python gans.py --train --gan_type GAN_NAME --dataset DATASET_NAME`.
The trained model and images sampled from the model during training will be saved in `SAVE_PATH`.
Here are example results of each GAN model.
Details of the implementation and more results for each GAN model can be accessed by clicking `details` under the model names.