
Convolutional Wasserstein GAN

May 11th, 2018

Description

Generative Adversarial Networks (GANs) are a deep learning architecture that learns to fabricate realistic data resembling a real data set.

The basic setup of a GAN consists of two networks. One of the two networks, known as the discriminator, tries to distinguish between real and generated images. The other network, known as the generator, generates images with the goal of fooling the first network.
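As a concrete sketch of these two competing objectives, the snippet below scores one real and one generated image using the standard binary cross-entropy GAN losses. The discriminator outputs here are made-up numbers, not values from this repo's networks (which are convolutional PyTorch models):

```python
import math

def bce(p, target):
    """Binary cross-entropy for a single predicted probability."""
    eps = 1e-12  # avoid log(0)
    return -(target * math.log(p + eps) + (1 - target) * math.log(1 - p + eps))

# Hypothetical discriminator outputs: probability that an image is real
d_real = 0.9  # on a real image
d_fake = 0.2  # on a generated image

# The discriminator wants d_real -> 1 and d_fake -> 0
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)

# The generator wants to fool the discriminator: push d_fake -> 1
g_loss = bce(d_fake, 1.0)
```

In training, the two networks alternate updates on these losses, each improving against the other.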

Wasserstein GANs use the Wasserstein Distance as the optimization metric between the real and generated distributions. This makes the GAN more stable during training, improves the diversity of the generated images, and reduces the sensitivity to hyperparameters.

A reason for these benefits is that the Wasserstein Distance is continuous and remains well defined even when the two distributions have disjoint supports (unlike the KL or JS Divergences and many others, which saturate or blow up in that case). This means that we can get a meaningful gradient even when the real and generated distributions do not overlap at all.
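The critic objective from the Wasserstein GAN paper can be sketched in a few lines: the critic maximizes its mean score on real data minus its mean score on generated data, and the original paper enforces the required Lipschitz constraint by clipping weights. The score lists and clipping constant below are illustrative, not taken from this repo:

```python
from statistics import mean

def critic_loss(real_scores, fake_scores):
    """WGAN critic loss: the negative of E[f(real)] - E[f(fake)],
    so minimizing this maximizes the Wasserstein estimate."""
    return -(mean(real_scores) - mean(fake_scores))

def generator_loss(fake_scores):
    """The generator tries to raise the critic's scores on its fakes."""
    return -mean(fake_scores)

def clip_weights(weights, c=0.01):
    """Original WGAN: clip every critic weight into [-c, c] after each
    critic update to (crudely) enforce the Lipschitz constraint."""
    return [max(-c, min(c, w)) for w in weights]
```

The improved-WGAN paper cited below replaces weight clipping with a gradient penalty on the critic, which trains more reliably.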

WGAN Results

CIFAR10

[Seven sample images generated from CIFAR10]

German Traffic Signs

[Seven sample images generated from the German Traffic Signs data set]

MNIST

[Seven sample images generated from MNIST]

MNIST Loss Figures

[Plots of the MNIST losses and the MNIST loss difference]

Loss difference is calculated as the difference between the loss from the real data and the loss from the generated data.
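A minimal sketch of that calculation, tracked per epoch (the loss values here are made up for illustration, not from this repo's runs):

```python
def loss_differences(real_losses, fake_losses):
    """Per-epoch difference between the loss on real data and the loss
    on generated data, as plotted in the loss-difference figure."""
    return [r - f for r, f in zip(real_losses, fake_losses)]

# Illustrative per-epoch loss values
real = [0.9, 0.6, 0.4]
fake = [0.2, 0.3, 0.35]
diffs = loss_differences(real, fake)
```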

Sources:

- Wasserstein GAN
- Improved Training of Wasserstein GANs
- Improved Training of Wasserstein GANs Github Repo
- Read-Through: Wasserstein GAN

About

Recreation of WGAN and improved WGAN papers.
