Metropolis-Hastings GAN (MHGAN)

MHGAN implemented in TensorFlow, (mostly) as described in the original paper:

https://arxiv.org/pdf/1811.11357.pdf

Overview

The base network is a WGAN with a DCGAN-style generator and discriminator. Instead of the standard LeakyReLU activation we use GELU, which has been shown to generally improve performance:

https://arxiv.org/pdf/1606.08415.pdf
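
For reference, a minimal sketch of the tanh approximation of GELU from that paper (the exact form uses the Gaussian CDF; the activation in this repo may be implemented differently):

import numpy as np
import tensorflow as tf

def gelu(x):
    # Tanh approximation of the Gaussian Error Linear Unit (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + tf.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))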

Metropolis-Hastings GAN refers to a technique for improving trained GANs: draw k samples from the generator in MCMC fashion and use the discriminator (or critic) probabilities to compute an acceptance ratio, keeping the best sample found along the chain. The original paper argues that, given a perfect discriminator and k approaching infinity, this yields samples from the true data distribution.

Thus, even if the generator doesn't converge optimally, we can use the discriminator to draw enhanced samples from the network.
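
Concretely, each step of the chain accepts or rejects a proposal using a standard Metropolis-Hastings ratio computed from the (calibrated) discriminator probability D(x). A minimal NumPy sketch of that selection loop follows; the function name and signature are illustrative, not the module's actual API:

import numpy as np

def mh_select(proposal_scores, start_score):
    # proposal_scores: discriminator probabilities D(x) for k generated proposals
    # start_score:     D(x) of a real sample used to initialize the chain
    # Returns the index of the last accepted proposal, or -1 if none was accepted.
    current = start_score
    accepted = -1
    for i, proposal in enumerate(proposal_scores):
        # Acceptance ratio from the paper: min(1, (1/D(x) - 1) / (1/D(x') - 1))
        alpha = min(1.0, (1.0 / current - 1.0) / max(1.0 / proposal - 1.0, 1e-8))
        if np.random.uniform() <= alpha:
            current = proposal
            accepted = i
    return accepted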

The mhgan.py module provides a wrapper for a trained generator/discriminator pair, with utility methods for drawing better samples. The chain is calibrated with a score from real data as its starting point, which avoids the need for a burn-in period.

Training

(Figures: generator and discriminator training curves)

Examples

After 1500 epochs:

(Figures: basic sample vs. MH-enhanced sample, k=1000)

Convergence on MNIST subset:

(Figure: convergence plot)

Notes

Check the test_mnist.ipynb notebook for examples. The basic flow is this:

Train a (W)GAN:

gan = WGAN(
    Generator(
        input_shape=noise_dimensions,  # dimensionality of the noise prior
        output_shape=real_dimensions   # dimensionality of the real data
    ),
    Discriminator()
)
gan.train(
    sess,           # tf.Session used for the training run
    data_sampler,   # yields batches of real data
    noise_sampler,  # yields batches of generator input noise
    batch_size=32,
    n_epochs=100,
    n_accumulate=1
)
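
Here sess, data_sampler and noise_sampler are assumed to be supplied by the caller; a hypothetical sketch of what they might look like for MNIST (the actual definitions live in the notebook):

import numpy as np
import tensorflow as tf

# Hypothetical batch-yielding samplers; signatures are illustrative.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
images = x_train.reshape(-1, 784).astype(np.float32) / 127.5 - 1.0  # scale to [-1, 1]

def data_sampler(batch_size):
    # Random batch of flattened real images
    idx = np.random.randint(0, len(images), size=batch_size)
    return images[idx]

def noise_sampler(batch_size, dim=100):
    # Gaussian prior for the generator input
    return np.random.normal(size=(batch_size, dim)).astype(np.float32)

sess = tf.Session()  # TF1-style session, matching the repo's TensorFlow version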

Wrap the GAN in an MHGAN instance and draw enhanced samples:

mhgan = MHGAN(gan)
mhgan.generate_enhanced(
    sess,
    data_sampler,   # real data used to calibrate the chain's starting score
    noise_sampler,
    count=16,       # number of enhanced samples to produce
    k=1000          # MCMC chain length per sample
)

Future

Experiment with weight normalization vs. batch normalization:

https://arxiv.org/pdf/1704.03971.pdf
