
InfoGAN: Looking for a Colab implementing a GAN for MNIST, with both the saturating and non-saturating GAN loss #3

Open
avital opened this issue Jun 4, 2018 · 6 comments

Comments

@avital
Contributor

avital commented Jun 4, 2018

(Just a GAN implementation, no InfoGAN).

I'd like to show the specific differences in training for the two losses described in the original GAN paper.

A good Colab would include an abundance of text cells explaining exactly what each part is doing. You can put math in text cells, followed by TensorFlow code implementing that math.
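For concreteness, the two generator losses from the original GAN paper could be sketched like this (a minimal NumPy sketch for illustration only, not the requested TensorFlow notebook code; the function names are mine):

```python
import numpy as np

def saturating_g_loss(d_fake):
    # Saturating loss from the original GAN paper:
    # minimize E[log(1 - D(G(z)))]
    return np.mean(np.log(1.0 - d_fake))

def non_saturating_g_loss(d_fake):
    # Non-saturating alternative: maximize E[log D(G(z))],
    # i.e. minimize -E[log D(G(z))]
    return -np.mean(np.log(d_fake))

# Early in training the discriminator easily rejects fakes, so D(G(z)) is near 0:
d_fake = np.array([0.01, 0.02, 0.01])
print(saturating_g_loss(d_fake))      # near 0: the loss has already saturated
print(non_saturating_g_loss(d_fake))  # large: still provides a learning signal
```

The printed values illustrate the difference the issue asks the Colab to demonstrate: with confident discriminator rejections, the saturating loss is already close to its minimum while the non-saturating loss remains large.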

@maxisawesome
Contributor

Are you still looking for someone to do this? There's an example of a DCGAN on MNIST here: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb

I'd be willing to do this with both the saturating and non-saturating loss if you'd like. I would likely start from the code in that example, though I'd add material to demonstrate the differences between the two loss functions. I can also simplify it: use dense layers instead of the DCGAN architecture, remove batch norm, etc.

@cinjon
Contributor

cinjon commented Oct 8, 2018

That sounds great! Make it so and issue a PR when you are done 👍

@MicPie
Contributor

MicPie commented Feb 24, 2019

Thanks for setting up the Colab notebook!

However, when running it I stumbled over the following points:

  • the generator and discriminator classes are not instantiated (they should be instantiated before the tf.contrib.eager.defun calls)
  • a closing parenthesis is missing at the end of the training function: print("Done! %d epochs completed in %.2f minutes." % (EPOCHS, (time.time() - start) / 60))
  • after approximately 1500 epochs, I get the following warnings while plotting the generated images, which look like a numerical under-/overflow:
/usr/local/lib/python3.6/dist-packages/matplotlib/image.py:395: UserWarning: Warning: converting a masked element to nan.
  dv = (np.float64(self.norm.vmax) -
/usr/local/lib/python3.6/dist-packages/matplotlib/image.py:396: UserWarning: Warning: converting a masked element to nan.
  np.float64(self.norm.vmin))
/usr/local/lib/python3.6/dist-packages/matplotlib/image.py:403: UserWarning: Warning: converting a masked element to nan.
  a_min = np.float64(newmin)
/usr/local/lib/python3.6/dist-packages/matplotlib/image.py:408: UserWarning: Warning: converting a masked element to nan.
  a_max = np.float64(newmax)
/usr/local/lib/python3.6/dist-packages/matplotlib/colors.py:918: UserWarning: Warning: converting a masked element to nan.
  dtype = np.min_scalar_type(value)
/usr/local/lib/python3.6/dist-packages/numpy/ma/core.py:716: UserWarning: Warning: converting a masked element to nan.
  data = np.array(a, copy=False, subok=subok)

Unfortunately, I am not very familiar with Colab and TensorFlow, so maybe I am doing something wrong?

What brought me here in the first place was that I was looking into the loss functions and visualized them and their derivatives. Based on that, I think the implementation of the saturating loss in the Colab notebook should be:

def s_generator_loss(generated_output):
    # saturating generator loss: E[log(1 - D(G(z)))]
    return tf.reduce_mean(tf.log(1 - generated_output))

I.e., put the "1 -" into the log function.
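The difference this fix makes can be seen from the derivatives of the two losses with respect to the discriminator output (a small illustrative sketch, assuming the scalar output D(G(z)); the function names are mine):

```python
# Derivatives of the two generator losses w.r.t. the discriminator output d = D(G(z)).

def grad_saturating(d):
    # d/dd [log(1 - d)] = -1 / (1 - d)
    return -1.0 / (1.0 - d)

def grad_non_saturating(d):
    # d/dd [-log d] = -1 / d
    return -1.0 / d

# Early in training, the discriminator confidently rejects fakes: d is near 0.
d = 0.01
print(grad_saturating(d))      # roughly -1: a flat slope, the loss saturates
print(grad_non_saturating(d))  # roughly -100: a steep slope, strong learning signal
```

Near d = 0 the saturating loss is nearly flat, which is exactly the vanishing-gradient problem the non-saturating variant was introduced to avoid.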

I will try to implement a similar basic GAN example in PyTorch and get back to this thread when I have carried out further tests.

Kind regards
Michael

@MicPie
Contributor

MicPie commented Mar 3, 2019

Hello,

I now have a first draft of the notebook on GitHub: https://nbviewer.jupyter.org/github/MicPie/DepthFirstLearning/blob/master/InfoGAN/DCGAN_MNIST_v2.ipynb
It is heavily based on the PyTorch tutorial notebook and includes some nice visualizations.
The plotted gradient standard deviations look reasonable, so the implementation should be working.

I will now polish it and then contribute my notes back.

Kind regards
Michael

@avital
Contributor Author

avital commented Mar 3, 2019

@MicPie Wow, this looks really good! Looking forward to putting your notebook into our content, once you're comfortable with the level of polish.

BTW once you're done, could you copy it over to Colab? That makes it easier for others to try it out and fork it to run their own experiments.

@MicPie
Contributor

MicPie commented Mar 14, 2019

Hey @avital, I polished the notebook and uploaded it to GitHub:
https://nbviewer.jupyter.org/github/MicPie/DepthFirstLearning/blob/master/InfoGAN/DCGAN_MNIST_v5.ipynb

I also found an easy way to "colabify" GitHub notebooks just with a link:
https://colab.research.google.com/github/MicPie/DepthFirstLearning/blob/master/InfoGAN/DCGAN_MNIST_v5.ipynb

The explanation in the notebook is based on the issue I opened above.

If you have suggestions etc. just let me know! :-)

kumarkrishna pushed a commit to kumarkrishna/depthfirstlearning.com that referenced this issue Mar 19, 2020