#EBGAN.tensorflow

Tensorflow implementation of Energy Based Generative Adversarial Networks (EBGAN).

Disclaimer: Still experimenting with higher-resolution GANs 😕 and the repo is still under edit ...
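For reference, these are the objectives from the EBGAN paper, where the discriminator's energy D(x) is the reconstruction error of its autoencoder and m is the margin discussed in the observations below:

```latex
L_D(x, z) = D(x) + \max\bigl(0,\, m - D(G(z))\bigr)
\qquad
L_G(z) = D(G(z))
```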

##Results

Randomly generated images - no cherry picking here❗️

Both results are after 8 epochs - will update with results from longer training later.

With Pull Away Loss term

Without Pull Away Loss term

##Observations

  • Setting up an energy-based objective did not, by my observation, make GAN training any easier or better. I felt the same way after reading the paper - the idea of using energy was the only novel idea presented; the comparative results and the details of the experiments all seemed weak.
  • In fact, with margin values I had no idea how my model was doing just from looking at the loss term - the original GAN loss had a nice probabilistic interpretation.
  • Also, the model seemed to collapse suddenly, and, more interestingly, it was able to recover later - this was surprising, but it doesn't really mean we can train GANs any more easily.
  • The margin term introduced in the loss was important to keep the GAN from collapsing. I believe the authors of the paper started with a low margin and increased it in steps of 10 to avoid model failure. The discriminator's loss on fake samples at the point of collapse also helps in choosing the margin.
  • One more thing the introduction of the margin does: it prevents the autoencoder from achieving zero reconstruction loss, which can be noticed in the imperfections of the decoded images.
  • The pull-away term with a weight of 0.1 seems to affect the model minimally. A sketch of both the margin loss and the pull-away term follows this list.
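For concreteness, here is a minimal sketch of these loss terms in TensorFlow 2.x style (the repo itself targets a much older TF API, and the function names here are mine, not the repo's). The energies are the autoencoder discriminator's per-sample reconstruction errors, and the pull-away term is the mean squared cosine similarity between distinct encoder embeddings in a batch:

```python
import tensorflow as tf

def discriminator_loss(real_energy, fake_energy, margin):
    # Push real-sample energy down; push fake-sample energy up,
    # but only while it is still below the margin.
    return tf.reduce_mean(real_energy) + tf.reduce_mean(
        tf.maximum(0.0, margin - fake_energy))

def generator_loss(fake_energy):
    # The generator simply tries to lower the energy of its samples.
    return tf.reduce_mean(fake_energy)

def pull_away_loss(embeddings):
    # Mean squared cosine similarity between distinct embeddings in the
    # batch; penalizing it encourages diverse generated samples.
    s = tf.nn.l2_normalize(embeddings, axis=1)
    similarity = tf.matmul(s, s, transpose_b=True)           # N x N cosine matrix
    n = tf.cast(tf.shape(s)[0], tf.float32)
    off_diagonal = tf.reduce_sum(tf.square(similarity)) - n  # diagonal entries are all 1
    return off_diagonal / (n * (n - 1.0))
```

With the 0.1 weight mentioned above, the generator objective would be something like `generator_loss(fake_energy) + 0.1 * pull_away_loss(fake_embeddings)`.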

Failed Attempts

  • Low margin values combined with a very low learning rate result in model failure

Successful Attempts

  • Results without the pull-away term and with the pull-away term. (The graphs look very different because of the y-scale.)

  • Example of autoencoder reconstruction - the margin term keeps the autoencoder from reaching zero loss. Also, the number of layers we can use for the autoencoder is limited by the choice of generator architecture, which is another reason the decoded images are not perfect. An illustrative sketch of such an autoencoder discriminator follows.
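To make that structural constraint concrete, here is an illustrative sketch (Keras/TF 2.x style; not the repo's actual code, and all layer sizes are assumptions) of an autoencoder discriminator whose encoder mirrors a generator with two strided transposed convolutions, which is what caps the autoencoder's depth:

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_discriminator(img_shape=(64, 64, 3), latent_dim=128):
    # Autoencoder discriminator: the "energy" of a sample is its
    # per-sample reconstruction error. Two strided conv layers mirror a
    # two-layer generator, capping how deep the autoencoder can be.
    inputs = tf.keras.Input(shape=img_shape)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(inputs)  # 32x32
    x = layers.Conv2D(128, 4, strides=2, padding="same", activation="relu")(x)      # 16x16
    embedding = layers.Dense(latent_dim)(layers.Flatten()(x))  # also feeds the pull-away term
    x = layers.Dense(16 * 16 * 128, activation="relu")(embedding)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)
    recon = layers.Conv2DTranspose(img_shape[-1], 4, strides=2, padding="same")(x)
    energy = layers.Lambda(
        lambda t: tf.reduce_mean(tf.square(t[0] - t[1]), axis=[1, 2, 3])
    )([recon, inputs])
    return tf.keras.Model(inputs, [energy, embedding])
```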

##Useful links
