TensorFlow implementation of Generative Adversarial Networks

  • This repository contains TensorFlow implementations of GANs, inspired by several other repositories of GANs and generative models (generative-models, tensorflow-generative-model-collections).
  • I use this repository to learn about and experiment with various GANs.
  • All the GANs are tested on MNIST and CelebA. The architecture of each GAN is the same as, or slightly modified from, the original paper to make it compatible with images of size 28 x 28 and 64 x 64.
  • Results of the GAN models implemented in this repository are briefly shown on this page. Implementation details and full results of each GAN can be found on the individual page for each model.

Related implementations

Here are my other implementations related to GANs:

Requirements

Models

| Name | Paper | Implementation Details | Description |
| --- | --- | --- | --- |
| DCGAN | paper | details | DCGAN improves GAN performance by using a more advanced architecture than the original GAN: batch normalization, a fully convolutional structure, ReLU and LeakyReLU activations, and no pooling layers. |
| LSGAN | paper | details | LSGAN uses least-squares losses instead of the original cross-entropy losses to pull the generated data closer to the real data. |
| InfoGAN | paper | details | InfoGAN learns disentangled representations from data in a completely unsupervised manner by maximizing the mutual information between a small subset of the input latent codes and the generated data. |
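To make the LSGAN row concrete, here is a minimal NumPy sketch (not the repository's TensorFlow code) contrasting the original cross-entropy discriminator loss with the least-squares loss LSGAN substitutes for it; the function names are illustrative only:

```python
import numpy as np

def gan_d_loss(d_real, d_fake):
    # Original GAN: cross-entropy on sigmoid outputs, d_real/d_fake in (0, 1].
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def lsgan_d_loss(d_real, d_fake):
    # LSGAN: least-squares loss that pulls discriminator outputs on real
    # data toward 1 and outputs on generated data toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)
```

Both losses are zero when the discriminator is perfect (outputs 1 on real data, 0 on fake data), but the least-squares penalty keeps a gradient even for generated samples the discriminator already classifies confidently, which is the "pulling" effect described above.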

Usage

Preparation

  • Download the MNIST dataset from here and CelebA from here.
  • Set up the paths in examples/gans.py: MNIST_PATH is the directory containing the MNIST dataset, CELEBA_PATH is the directory containing the CelebA dataset, and SAVE_PATH is the directory where output images and the trained model are saved.

Argument

Run the script examples/gans.py to train GAN models. Here are all the arguments:

  • --train: Train the model.
  • --generate: Randomly sample images from the trained model.
  • --load: Epoch ID of the pre-trained model to restore.
  • --gan_type: Type of GAN for the experiment. Default: dcgan. Other options: lsgan, infogan.
  • --dataset: Dataset used for the experiment. Default: mnist. Other option: celeba.
  • --zlen: Length of the input random vector z. Default: 100.
  • --lr: Initial learning rate. Default: 2e-4.
  • --keep_prob: Keep probability of dropout. Default: 1.0.
  • --bsize: Batch size. Default: 128.
  • --maxepoch: Maximum number of epochs. Default: 50.
  • --ng: Number of generator updates per training step. Default: 1.
  • --nd: Number of discriminator updates per training step. Default: 1.
  • --w_mutual: Weight of the mutual information loss for InfoGAN. Default: 1.0.
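The flag list above can be reconstructed as a standard argparse parser. This is a hypothetical sketch based only on the flags and defaults listed, not the actual parser in examples/gans.py (the default for --load, in particular, is an assumption):

```python
import argparse

def get_parser():
    # Hypothetical parser mirroring the documented flags and defaults.
    parser = argparse.ArgumentParser(description='Train or sample GAN models')
    parser.add_argument('--train', action='store_true', help='Train the model')
    parser.add_argument('--generate', action='store_true', help='Sample images from the trained model')
    parser.add_argument('--load', type=int, default=None, help='Epoch ID of pre-trained model to restore (default assumed)')
    parser.add_argument('--gan_type', type=str, default='dcgan', choices=['dcgan', 'lsgan', 'infogan'])
    parser.add_argument('--dataset', type=str, default='mnist', choices=['mnist', 'celeba'])
    parser.add_argument('--zlen', type=int, default=100, help='Length of input random vector z')
    parser.add_argument('--lr', type=float, default=2e-4, help='Initial learning rate')
    parser.add_argument('--keep_prob', type=float, default=1.0, help='Keep probability of dropout')
    parser.add_argument('--bsize', type=int, default=128, help='Batch size')
    parser.add_argument('--maxepoch', type=int, default=50, help='Maximum number of epochs')
    parser.add_argument('--ng', type=int, default=1, help='Generator updates per training step')
    parser.add_argument('--nd', type=int, default=1, help='Discriminator updates per training step')
    parser.add_argument('--w_mutual', type=float, default=1.0, help='Weight of mutual information loss (InfoGAN)')
    return parser
```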

Train models

  • Go to examples/, then run
python gans.py --train --gan_type GAN_NAME --dataset DATASET_NAME

The trained model and images sampled from the model during training will be saved in SAVE_PATH.

Result

Here are example results of each GAN model. Details of the implementation and more results for each GAN model can be accessed by clicking details under the model names.

MNIST

| Name | Random Sampling | Interpolation |
| --- | --- | --- |
| DCGAN (details) | | |
| LSGAN (details) | | |
| InfoGAN (details) | | |
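The interpolation columns above are typically produced by decoding points on a line between two latent vectors. A minimal NumPy sketch of that step (the generator itself is omitted; interpolate_z is an illustrative name, not a function from this repository):

```python
import numpy as np

def interpolate_z(z1, z2, n_steps=8):
    # Linearly interpolate between two latent vectors z1 and z2.
    # Feeding each row to the generator yields one interpolation strip.
    alphas = np.linspace(0.0, 1.0, n_steps)
    return np.stack([(1.0 - a) * z1 + a * z2 for a in alphas])
```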

CelebA

| Name | Random Sampling | Interpolation |
| --- | --- | --- |
| DCGAN (details) | | |
| LSGAN (details) | | |
| InfoGAN (details) | | |

Author

Qian Ge
