
GAN-IMPLEMENTATION-ON-MNIST-DATASET-PyTorch

The GAN, from the field of unsupervised learning, was first introduced in 2014 by Ian Goodfellow and others in Yoshua Bengio’s lab. A Generative Adversarial Network is composed of two neural networks: a generator G and a discriminator D.

GAN on MNIST in PyTorch

Generator:

The generator is the first neural network of the GAN. It takes randomly sampled noise z and tries to produce fake data G(z) that resembles the real data, i.e. it is the part that generates fake images. In each iteration, the generator learns to create images closer to the real ones so that the discriminator can no longer tell they are fake.
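As a minimal sketch, a simple MLP generator for 28×28 MNIST images in PyTorch could look like the following (the layer sizes and activations here are illustrative assumptions, not necessarily the exact architecture used in this repository):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=28 * 28):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, img_dim),
            nn.Tanh(),  # outputs in [-1, 1], matching MNIST normalized to that range
        )

    def forward(self, z):
        # z: (batch_size, latent_dim) random noise -> (batch_size, 1, 28, 28) fake image
        return self.model(z).view(-1, 1, 28, 28)
```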

Discriminator:

The data produced by the generator is then passed to the discriminator, a model that distinguishes real data from fake. Training continues until the generator succeeds in creating realistic data, i.e. until the discriminator can no longer identify a generated image as fake.
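A matching discriminator sketch under the same assumptions (again, the layer sizes are illustrative, not necessarily those used in this repository):

```python
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, img_dim=28 * 28):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(img_dim, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability that the input image is real
        )

    def forward(self, img):
        # img: (batch_size, 1, 28, 28) -> (batch_size, 1) real/fake probability
        return self.model(img.view(img.size(0), -1))
```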

MINIMAX GAME BY G & D
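For reference, the minimax objective of a GAN (Goodfellow et al., 2014) is:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]$$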

Above is the loss function of the GAN. From the discriminator’s side, the objective maximizes D(x) and minimizes D(G(z)), where x is a real image and G(z) is a generated image.

Let’s look at the loss function above from the generator’s perspective: since x is a real image, we want D(x) to be 1, while the generator tries to increase D(G(z)), i.e. the probability that its output is judged real. The training procedure for G is to maximize the probability of D making a mistake by generating data that is as realistic as possible.
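In PyTorch this view of the generator update is commonly implemented with the non-saturating trick: rather than minimizing log(1 − D(G(z))), G is trained with “real” labels for its fake images so that it maximizes log D(G(z)). A sketch, assuming G, D, an optimizer opt_G, latent_dim, and device are defined as in the hypothetical models above:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

def generator_step(G, D, opt_G, latent_dim, batch_size, device):
    """One generator update: try to make D label fake images as real (target = 1)."""
    opt_G.zero_grad()
    z = torch.randn(batch_size, latent_dim, device=device)   # random noise z
    fake = G(z)                                               # generated images G(z)
    real_labels = torch.ones(batch_size, 1, device=device)    # G wants D(G(z)) -> 1
    loss_G = criterion(D(fake), real_labels)                  # averages to -log D(G(z))
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```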

Let’s look at the loss function above from the discriminator’s perspective: since x is a real image, we want D(x) to be 1, while the discriminator tries to push D(G(z)) toward 0, i.e. to label the generated image as fake.
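The corresponding discriminator update pushes D(x) toward 1 for real images and D(G(z)) toward 0 for generated ones; a sketch under the same assumptions:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

def discriminator_step(G, D, opt_D, real_imgs, latent_dim, device):
    """One discriminator update: D(x) -> 1 for real images, D(G(z)) -> 0 for fakes."""
    opt_D.zero_grad()
    batch_size = real_imgs.size(0)
    real_labels = torch.ones(batch_size, 1, device=device)
    fake_labels = torch.zeros(batch_size, 1, device=device)

    # Real images: corresponds to maximizing log D(x)
    loss_real = criterion(D(real_imgs), real_labels)

    # Fake images: corresponds to maximizing log(1 - D(G(z))); detach so G is not updated here
    z = torch.randn(batch_size, latent_dim, device=device)
    fake = G(z).detach()
    loss_fake = criterion(D(fake), fake_labels)

    loss_D = loss_real + loss_fake
    loss_D.backward()
    opt_D.step()
    return loss_D.item()
```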

After training, the generator and discriminator reach a point at which neither can improve any further. This is the state where the generator produces realistic images and the discriminator can no longer distinguish them from real ones.

Read More...
