
RuntimeError: the derivative for 'target' is not implemented. When I use F.smooth_l1_loss(x, y, reduce=False) #3933

Closed
jiayi-wei opened this issue Nov 29, 2017 · 6 comments

Comments

@jiayi-wei

When I calculate the smooth L1 loss between two Variables with F.smooth_l1_loss(x, y, reduce=False), I get this:
[screenshot: RuntimeError: the derivative for 'target' is not implemented]
However, when switching to nn.SmoothL1Loss(x, y, reduce=False), I have no problem. It's weird.

@jiayi-wei
Author

But F.smooth_l1_loss works well when I use it with two random tensors. I am really confused.

@zou3519
Contributor

zou3519 commented Nov 29, 2017

That sounds like a bug. The Variable you pass in as target should have requires_grad=False (this is asserted in the case of other losses, I think). For now, a workaround is to ensure that your y (the second argument to F.smooth_l1_loss) has requires_grad=False:

import torch
from torch.autograd import Variable
import torch.nn.functional as F

x = Variable(torch.randn(2, 2), requires_grad=True)   # input (prediction)
t = Variable(torch.randn(2, 2), requires_grad=False)  # target carries no gradient
F.smooth_l1_loss(x, t, reduce=False)

@wadimkehl

Are you sure that the target is the second argument and not the first? You feed (GT, PRED) but it should be (PRED, GT)...
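In other words, a minimal sketch of the correct call order on current PyTorch, where `reduction='none'` replaces the deprecated `reduce=False` (variable names here are illustrative):

```python
import torch
import torch.nn.functional as F

pred = torch.randn(2, 2, requires_grad=True)  # model output (PRED) -- first argument
gt = torch.randn(2, 2)                        # ground truth (GT) -- second argument

# input first, target second; the target carries no gradient, so no
# "derivative for 'target'" is ever needed
loss = F.smooth_l1_loss(pred, gt, reduction='none')
loss.sum().backward()  # gradients flow into pred only
```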

@jiayi-wei
Author

@zou3519 you are right, it's because I passed the parameters in the wrong order.

@jiayi-wei
Author

@wadimkehl Thx, man. You are totally right! My bad. I will close this issue.

@pnsoni3

pnsoni3 commented Apr 22, 2019

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:2016: UserWarning: Using a target size (torch.Size([32, 784])) that is different to the input size (torch.Size([32, 1, 28, 28])) is deprecated. Please ensure they have the same size.
"Please ensure they have the same size.".format(target.size(), input.size()))

RuntimeError Traceback (most recent call last)
in ()
----> 1 train(encoder, decoder, train_loader, vae_loss, optimizer, num_epochs = 10)

in train(encoder, decoder, train_loader, loss_func, optimizer, num_epochs)
14 pred = decoder(z)
15
---> 16 loss = loss_func(pred, Variable(xb, requires_grad=False), mu, logvar)
17
18 acc = accuracy(yb, pred)

in vae_loss(x, x_hat, mu, logvar)
3 ## YOUR CODE HERE ##
4 # MSE LOSS + KL DIVERGENCE
----> 5 BCE = F.binary_cross_entropy(x_hat, x.view(-1, 784))
6 KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
7 # Normalise by same number of elements as in reconstruction

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in binary_cross_entropy(input, target, weight, size_average, reduce, reduction)
2025
2026 return torch._C._nn.binary_cross_entropy(
-> 2027 input, target, weight, reduction_enum)
2028
2029

RuntimeError: the derivative for 'target' is not implemented
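The same argument swap appears to be the culprit here: per the traceback, the gradient-carrying prediction (x, viewed to [32, 784]) is passed as the target, while the [32, 1, 28, 28] input batch is passed as the input. A minimal sketch of the corrected call with random stand-in data of those shapes, assuming the decoder output is already flat (as the x.view(-1, 784) in the traceback suggests):

```python
import torch
import torch.nn.functional as F

pred = torch.rand(32, 784, requires_grad=True)  # decoder output, values in [0, 1]
xb = torch.rand(32, 1, 28, 28)                  # original image batch, no grad

# prediction first (input), ground truth second (target),
# both flattened to the same [32, 784] shape
BCE = F.binary_cross_entropy(pred, xb.view(-1, 784))
BCE.backward()
```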
