Why "no required computing gradients"? #4
Can you share your complete code? This problem may be caused by the type of the network. So could you paste your code, please?
I pasted it below. I am checking the variables' `requires_grad` attribute. The returned variable from
Currently, ConvNd doesn't support calculating higher-order gradients (this is a work in progress in pytorch), so this code shouldn't work. You can change the ConvNd in the Discriminator to Linear to test the code.
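The suggestion above can be checked directly: try differentiating twice through a layer and see whether the second backward pass raises an error. This is a minimal sketch (the helper name `supports_double_backward` is made up for illustration); swap `Linear` for `Conv2d` to test conv support. In current PyTorch both work, but at the time of this thread conv did not.

```python
import torch

def supports_double_backward(layer, x):
    # Differentiate the layer's output w.r.t. its input, keeping the graph,
    # then differentiate again. If the op lacks double-backward support,
    # the second pass raises a RuntimeError.
    x = x.clone().requires_grad_(True)
    out = layer(x).sum()
    try:
        (g,) = torch.autograd.grad(out, x, create_graph=True)
        g.sum().backward()  # second differentiation
        return True
    except RuntimeError:
        return False

print(supports_double_backward(torch.nn.Linear(4, 2), torch.randn(3, 4)))  # → True
```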
OK, thanks. Do you have any estimated time for ConvNd to be supported in the master branch?
Sorry, this is an existing bug, but my fix for it has been approved in pytorch, so I can give you a patch that resolves this error. In `torch/nn/_functions/thnn/activation.py`:
```diff
     else:
+        mask = input > ctx.threshold
+        grad_input = mask.type_as(grad_output) * grad_output
-        grad_input = grad_output.masked_fill(input > ctx.threshold, 0)
     return grad_input, None, None, None
```
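The point of the patch is that the backward pass itself must remain differentiable for double backward to work: at the time, `masked_fill` in the backward pass broke the second differentiation, while multiplying by a float mask keeps the computation in differentiable ops. A small sketch of the two forms producing the same first-order result (the tensors and threshold here are made up for illustration; the mask condition follows ReLU-style threshold semantics):

```python
import torch

grad_output = torch.tensor([1.0, 2.0, 3.0])
inp = torch.tensor([-0.5, 0.0, 2.0])
threshold = 0.0

# Original style: zero out gradient entries where the input failed the threshold.
old = grad_output.masked_fill(inp <= threshold, 0)

# Patched style: multiply by a float mask, which autograd can differentiate again.
mask = (inp > threshold).type_as(grad_output)
new = mask * grad_output

print(torch.equal(old, new))  # → True
```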
This is a bug in pytorch: pytorch/pytorch#1517
I think I have this same bug. When I try to run the MNIST GAN, I get this stack trace:
I used the same `calc_GradientPenalty` method as yours and the latest master branch of pytorch (`'0.1.12+625850c'`), but it got stuck at `penalty.backward()` with an error. I used `requires_grad = True` for the `interpolates` variable. Thanks!
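For reference, the setup described above (interpolates with `requires_grad` set, then `penalty.backward()`) can be sketched as follows. This is not the repository's actual `calc_GradientPenalty`; the function name, signature, and coefficient here are assumptions in the spirit of WGAN-GP, shown with a `Linear` discriminator, which supported double backward at the time:

```python
import torch

def gradient_penalty(discriminator, real, fake, lam=10.0):
    # Random interpolation between real and fake samples.
    eps = torch.rand(real.size(0), 1).expand_as(real)
    interpolates = eps * real + (1 - eps) * fake
    interpolates.requires_grad_(True)
    d_out = discriminator(interpolates)
    # First-order gradients w.r.t. the interpolates, keeping the graph
    # so that penalty.backward() can differentiate a second time.
    (grads,) = torch.autograd.grad(d_out.sum(), interpolates, create_graph=True)
    norms = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lam * (norms - 1).pow(2).mean()

D = torch.nn.Linear(4, 1)
real, fake = torch.randn(8, 4), torch.randn(8, 4)
penalty = gradient_penalty(D, real, fake)
penalty.backward()  # double backward: gradients flow into D's parameters
```

This is exactly the step that failed with conv layers in the discriminator, which is why the suggestion above was to swap ConvNd for Linear while testing.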