InfoGAN: Looking for a Colab implementing a GAN for MNIST, with both the saturating and non-saturating GAN loss #3
Comments
Are you still looking for someone to do this? There's an example of a DCGAN on MNIST here: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/generative_examples/dcgan.ipynb I'd be willing to do it with both the saturating and non-saturating loss if you'd like. I would likely take code from that example, though I'd add material to demonstrate the differences between the two loss functions. I can also simplify it: use dense layers instead of the DCGAN convolutions, remove batch norm, etc. |
That sounds great! Make it so and issue a PR when you are done 👍 |
Thanks for setting up the Colab notebook! However, when running it I stumbled over the following points:
Unfortunately, I am not very familiar with Colab and TensorFlow, so maybe I am doing something wrong? What brought me here in the first place was that I was looking into the loss functions and visualizing them and their derivatives. I therefore think the implementation of the saturating loss in the Colab notebook should be:
I.e., put the "1 -" inside the log function. I will try to implement a similar basic GAN example in PyTorch and get back to this thread when I have run further tests. Kind regards |
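For reference, a minimal sketch of the two generator losses with the "1 -" placed inside the log, assuming a discriminator that ends in a sigmoid; the name `d_fake` (its output probabilities on generated samples) and the `1e-8` stabilizer are illustrative assumptions, not code from the notebook:

```python
import tensorflow as tf

def saturating_generator_loss(d_fake):
    # Saturating loss from the original GAN paper: the generator
    # minimizes E[log(1 - D(G(z)))]. The "1 -" sits INSIDE the log;
    # writing 1 - log(D(G(z))) would be a different (wrong) quantity.
    return tf.reduce_mean(tf.math.log(1.0 - d_fake + 1e-8))

def non_saturating_generator_loss(d_fake):
    # Non-saturating variant: the generator maximizes E[log D(G(z))],
    # implemented here as minimizing the negation.
    return -tf.reduce_mean(tf.math.log(d_fake + 1e-8))
```

The small constant only guards against log(0) when the sigmoid saturates; an implementation working on raw logits could instead lean on the numerically stable cross-entropy helpers.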
Hello, I now have a first draft of the notebook on GitHub: https://nbviewer.jupyter.org/github/MicPie/DepthFirstLearning/blob/master/InfoGAN/DCGAN_MNIST_v2.ipynb I will polish it and then contribute my notes back. Kind regards |
@MicPie Wow, this looks really good! Looking forward to putting your notebook into our content, once you're comfortable with the level of polish. BTW once you're done, could you copy it over to Colab? That makes it easier for others to try it out and fork it to run their own experiments. |
Hey @avital, I polished the notebook and uploaded it to GitHub: I also found an easy way to "colabify" GitHub notebooks just with a link: The explanation in the notebook is based on the issue I opened. If you have suggestions etc., just let me know! :-)
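(Presumably the link trick referred to is Colab's GitHub loader: any notebook hosted on GitHub can be opened directly in Colab by prefixing its repository path with https://colab.research.google.com/github/ — e.g., assuming the same path as the nbviewer link above, https://colab.research.google.com/github/MicPie/DepthFirstLearning/blob/master/InfoGAN/DCGAN_MNIST_v2.ipynb.)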
Miscellaneous small edits.
(Just a GAN implementation, no InfoGAN).
I'd like to show the specific differences in training for the two losses described in the original GAN paper.
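For reference, the two generator objectives from the original GAN paper (Goodfellow et al., 2014), where $D$ is the discriminator, $G$ the generator, and $p_z$ the noise prior:

```latex
% Saturating (minimax) generator objective:
\min_G \; \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]

% Non-saturating alternative proposed in the same paper
% (same fixed point, but stronger gradients early in training,
% when D confidently rejects generated samples):
\max_G \; \mathbb{E}_{z \sim p_z}\!\left[\log D(G(z))\right]
```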
A good Colab would include an abundance of text cells explaining exactly what each part is doing. You can put math in text cells, followed by TensorFlow code implementing that math.