
Potential bug of GAN example #56

Closed
dolaameng opened this issue Jul 26, 2017 · 2 comments
dolaameng commented Jul 26, 2017
Hi,

First thank you for sharing the tutorials!

I have a quick question about the GAN example: in my opinion, the generator output should be detached, e.g. fake_images = G(z).detach(), so that training the Discriminator does not influence the Generator's behaviour at this stage. Or did you have specific considerations here?

Thank you!

yunjey (Owner) commented Jul 26, 2017

@dolaameng No, it doesn't matter, because we only train the discriminator via d_optimizer.step() and reset the gradient buffers before calling g_loss.backward(). Writing fake_images = Variable(G(z).detach().cuda()) saves memory and speeds up training, but in my experience the difference is trivial.
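The point being made here can be illustrated with a minimal PyTorch sketch. This is an assumption-laden toy setup, not the tutorial's actual code: G, D, the shapes, and the loss are illustrative stand-ins, and modern PyTorch is used (the Variable wrapper is no longer needed). It shows that with detach() no gradients reach G during the discriminator step, while without detach() G's grad buffers are filled and must be zeroed before the generator's own backward pass.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-ins for the real networks (names are illustrative only).
G = nn.Linear(8, 4)    # "generator": noise -> fake sample
D = nn.Linear(4, 1)    # "discriminator": sample -> real/fake logit
d_optimizer = torch.optim.SGD(D.parameters(), lr=0.1)
criterion = nn.BCEWithLogitsLoss()

z = torch.randn(16, 8)

# --- Discriminator step WITH detach() ---
fake_images = G(z).detach()                  # cuts the graph at G's output
d_loss = criterion(D(fake_images), torch.zeros(16, 1))
d_loss.backward()                            # gradients flow into D only
g_grads_after_detach = [p.grad for p in G.parameters()]   # all None
d_optimizer.step()                           # updates D only

# --- Discriminator step WITHOUT detach() ---
# backward() now also fills G's grad buffers; d_optimizer.step() would
# still update only D's parameters, but those stale grads must be
# zeroed before g_loss.backward(), which is what the tutorial does.
d_loss2 = criterion(D(G(z)), torch.zeros(16, 1))
d_loss2.backward()
g_grads_without_detach = [p.grad for p in G.parameters()]  # now populated
```

Either way the discriminator update is identical; detach() just skips the backward pass through G, which is where the (small) memory and speed saving comes from.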

@yunjey yunjey closed this as completed Jul 26, 2017
dolaameng (Author)

Hi @yunjey, yeah, that makes sense. Thank you for the clarification.
