[QUESTION] Chapter 17: Why does the optimization loop keep failing for the DCGAN? #514
Comments
Hi @Reisa14, could you please run the following and post the output?

```python
from sklearn import show_versions
import tensorflow as tf

show_versions()
print(tf.__version__)
print(tf.config.list_physical_devices())
```
Hi @ageron, this is the output that I get (I should also add that I am using Kaggle with the GPU enabled):
Thanks @Reisa14. Mmmh, this might be a TF bug, I see nothing wrong with your code. Could you please file a bug with TensorFlow?
Hi, I'm having the same issue. Did you find a solution?
In my case, the problem was that the batch size was 1 while a batch normalization layer was in use. Removing that layer solved the problem. Increasing the batch size also solved it, but at the cost of higher GPU memory consumption.
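One way to guard against this without removing batch normalization is to pass `drop_remainder=True` when batching, so the final partial batch, which can be as small as a single image, is discarded before it ever reaches the model. A minimal sketch, assuming a `tf.data` input pipeline like the notebook's (the array here is a random stand-in for the real training images):

```python
import numpy as np
import tensorflow as tf

# Random stand-in for the training images (the notebook uses Fashion MNIST).
X_train = np.random.rand(97, 28, 28, 1).astype(np.float32)

batch_size = 32
# drop_remainder=True discards the final partial batch (97 % 32 = 1 image here),
# so a batch of size 1 never reaches the BatchNormalization layers.
dataset = tf.data.Dataset.from_tensor_slices(X_train).shuffle(1000)
dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)

for X_batch in dataset:
    print(X_batch.shape)  # every batch is (32, 28, 28, 1)
```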
Running the code from the notebook for the DCGAN produces the reconstructed images after each epoch, but only after reporting that the optimization loop has repeatedly failed. This happens on every epoch and is slowing it down a lot. What am I missing? Thanks!
To Reproduce
Exception
Expected behavior
The run should produce the reconstructed images without the warning. Instead, it only produces them after reporting that the optimization loop failed, and since this happens on every epoch, the run takes a long time.
Versions
Additional context
This is the code for the generator and discriminator:
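The code itself did not survive extraction here. As a reference point only, a minimal sketch in the style of the chapter 17 DCGAN; the layer sizes and optimizer are assumptions, not necessarily the poster's exact code:

```python
import tensorflow as tf
from tensorflow import keras

codings_size = 100

# Generator: maps a random coding vector to a 28x28 grayscale image.
generator = keras.models.Sequential([
    keras.layers.Dense(7 * 7 * 128, input_shape=[codings_size]),
    keras.layers.Reshape([7, 7, 128]),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2,
                                 padding="same", activation="selu"),
    keras.layers.BatchNormalization(),
    keras.layers.Conv2DTranspose(1, kernel_size=5, strides=2,
                                 padding="same", activation="tanh"),
])

# Discriminator: classifies images as real (1) or fake (0).
discriminator = keras.models.Sequential([
    keras.layers.Conv2D(64, kernel_size=5, strides=2, padding="same",
                        activation=keras.layers.LeakyReLU(0.2),
                        input_shape=[28, 28, 1]),
    keras.layers.Dropout(0.4),
    keras.layers.Conv2D(128, kernel_size=5, strides=2, padding="same",
                        activation=keras.layers.LeakyReLU(0.2)),
    keras.layers.Dropout(0.4),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])

gan = keras.models.Sequential([generator, discriminator])
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
discriminator.trainable = False
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
```

Note the `BatchNormalization` layers in the generator: these are what make a trailing batch of size 1 a problem, as discussed above.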
And here's the code for the train_gan function:
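This code is likewise missing from the extracted thread. Again as a sketch rather than the poster's exact code, a custom training loop along the lines of the notebook (the two-phase structure is standard; hyperparameters are assumptions):

```python
import tensorflow as tf

def train_gan(gan, dataset, batch_size, codings_size, n_epochs=50):
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        for X_batch in dataset:
            # Phase 1: train the discriminator on half fake, half real images.
            noise = tf.random.normal(shape=[batch_size, codings_size])
            generated_images = generator(noise)
            X_fake_and_real = tf.concat([generated_images, X_batch], axis=0)
            y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(X_fake_and_real, y1)
            # Phase 2: train the generator (through the frozen discriminator)
            # to make the discriminator label fake images as real.
            noise = tf.random.normal(shape=[batch_size, codings_size])
            y2 = tf.constant([[1.]] * batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y2)
```

Note that if `dataset` is batched without `drop_remainder=True`, the last `X_batch` of an epoch can be smaller than `batch_size`, which is one way to hit the batch-size-1 problem mentioned earlier.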