
GAN

I am experimenting with various well-known GAN architectures and trying out new ideas here.

What has been tried so far

  1. Co-Operative GANs: multiple generators are trained together, and the best-performing generator's weights are copied over to the others for the next iteration.

AutoEncoders

  • Basic AutoEncoder on MNIST
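Below is a minimal sketch of what such a basic MNIST autoencoder could look like, assuming PyTorch and fully connected layers; the layer sizes and latent dimension are illustrative, not taken from this repo.

```python
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Fully connected autoencoder for flattened 28x28 MNIST images (illustrative sizes)."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 784), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)               # flatten 28x28 -> 784
        return self.decoder(self.encoder(x))
```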

(Figure: AutoEncoder output)

VAE

Vanilla GAN

  • A simple fully connected Generative Adversarial Network (a code sketch follows below)
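
A minimal sketch of a fully connected generator/discriminator pair, assuming PyTorch and MNIST-sized images; the exact layer sizes, activations, and training loop in this repo may differ.

```python
import torch.nn as nn

class Generator(nn.Module):
    """Maps a noise vector to a flattened 28x28 image."""
    def __init__(self, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 784), nn.Tanh(),     # outputs in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a flattened image as real (close to 1) or fake (close to 0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(784, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```

Both networks would be trained with the standard adversarial (binary cross-entropy) objective.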

(Figure: Vanilla Min_GAN)

Co-Operative GAN on top of Vanilla GAN

  • Multiple generators are trained simultaneously; the best-performing one is chosen and its weights are copied by the other generators for the next iteration (see the sketch after this list).
  • The idea is to co-operatively improve over a period of time.
  • The best-performing generator can be the one with the Min/Max loss, or can be chosen randomly.
  • Winner-takes-all strategy.
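
A minimal sketch of the selection-and-copy step, assuming PyTorch generators and a list of per-generator losses from the current iteration; the function and parameter names here are illustrative, and the Min/Max/Random criterion is passed as an argument.

```python
import random

def select_and_copy(generators, gen_losses, strategy="max"):
    """Pick a winner generator and copy its weights into all the others.

    generators: list of nn.Module generators trained this iteration
    gen_losses: list of scalar generator losses, one per generator
    strategy:   "min", "max", or "random" -- how the winner is chosen
    """
    if strategy == "min":
        winner = min(range(len(generators)), key=lambda i: gen_losses[i])
    elif strategy == "max":
        winner = max(range(len(generators)), key=lambda i: gen_losses[i])
    else:  # "random"
        winner = random.randrange(len(generators))

    winner_state = generators[winner].state_dict()
    for i, g in enumerate(generators):
        if i != winner:
            g.load_state_dict(winner_state)  # winner takes all
    return winner
```

In the training loop this would be called once per iteration, after every generator has been updated against the shared discriminator.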

Results

With Min Loss:

(Figure: Vanilla Min_GAN loss)

With Max Loss:

(Figure: Vanilla Max GAN loss)

With Random selection:

(Figure: Vanilla Random GAN loss)

With Min Loss, but all generators receive the same noise input:

(Figure: Vanilla Min loss, same noise)

Min vs Max vs Random (200th epoch output)

(Figures: 200th epoch output for Min, Max, and Random selection)

Why Max loss outperforms Min loss

  • Every generator's loss is eventually decreasing.
  • Taking the Max every time avoids getting trapped in saddle points and local minima, as a higher learning-rate configuration helps escape them.
  • Taking the Max is the safer approach for eventually reaching the final solution.

Why Min loss does not work well enough

  • If one generator collapses, it can still produce the minimum loss, so it keeps being selected and copied, and the scheme fails.

DC-GAN