
A problem about elu_backward #47671

Closed
linccnu opened this issue Nov 10, 2020 · 4 comments
Labels
high priority module: autograd Related to torch.autograd, and the autograd engine in general module: bc-breaking Related to a BC-breaking change triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@linccnu

linccnu commented Nov 10, 2020

🐛 Bug

Hi, when running the elu_backward operator, I get a confusing result when the param alpha is negative. I am not sure whether there is a bug inside.

To Reproduce

Steps to reproduce the behavior:

import torch
x = torch.tensor([-2, -1, 0, 1, 2], dtype=torch.float32, requires_grad=True)
y = torch.nn.functional.elu(x, alpha=-2)
print(y)
grads = torch.ones_like(y)  # wrapping this in torch.tensor(...) is redundant and only triggers a copy warning
y.backward(grads)
print(x.grad)

Expected behavior

When the param alpha is negative and the input x is less than zero, the elu_backward result should be in the range [alpha, 0] if we set the grads to ones. In other words, the result should be alpha*exp(x) in this case.
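
For reference, here is a small sketch of the analytic derivative (my own addition, not part of the original report): for x <= 0, d/dx elu(x) = alpha * exp(x), and for x > 0 it is 1.

import torch

# Analytic ELU derivative: 1 for x > 0, alpha * exp(x) for x <= 0.
x = torch.tensor([-2., -1., 0., 1., 2.])
alpha = -2.0
expected_grad = torch.where(x > 0, torch.ones_like(x), alpha * torch.exp(x))
print(expected_grad)  # tensor([-0.2707, -0.7358, -2.0000,  1.0000,  1.0000])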

Actual behavior

tensor([1.7293, 1.2642, -0.0000, 1.0000, 2.0000], grad_fn=<EluBackward>)
tensor([ 1.,  1., -2.,  1.,  1.])


Environment

  • PyTorch Version (e.g., 1.0): 1.5
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, source): pip3
  • Build command you used (if compiling from source):
  • Python version: 3.6
  • CUDA/cuDNN version: CPU version
  • GPU models and configuration:
  • Any other relevant information:

Additional context

Torch CPU code implementation link.
Torch GPU code implementation link.

cc @ezyang @gchanan @zou3519 @bdhirsh @albanD @gqchen @pearu @nikitaved

@gchanan
Contributor

gchanan commented Nov 11, 2020

it looks like a numerical precision issue -- you see the output is -0, not 0. I'm not sure if the right solution is to clamp the output, fix the calculation, etc.
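
A quick illustration of the -0 mentioned here (my own snippet, not from the report): at x = 0 with alpha = -2 the forward produces IEEE negative zero, so its sign bit is set even though the value compares equal to 0.

import torch

out = torch.nn.functional.elu(torch.zeros(1), alpha=-2)
print(out, torch.signbit(out))  # tensor([-0.]) tensor([True])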

@ailzhang ailzhang added triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module module: autograd Related to torch.autograd, and the autograd engine in general labels Nov 11, 2020
@linccnu
Author

linccnu commented Nov 12, 2020

Sorry, maybe I have not described the issue clearly, and I think it's not a numerical precision issue. The result is wrong in the case below.

x = torch.tensor([-4, -3, -2, -1], dtype=torch.float, requires_grad=True)
y = torch.nn.functional.elu(x, alpha=-2)
print(y) # tensor([1.9634, 1.9004, 1.7293, 1.2642], grad_fn=<EluBackward>)
grads = torch.ones_like(y)
y.backward(grads)
print(x.grad) # tensor([1., 1., 1., 1.])
Here in this case, if the input tensor values are less than zero and the param alpha is also less than zero, the forward result of elu is OK, but the backward result is unreasonable: they are all equal to one? I think the result should be alpha*exp(x), namely [-2*exp(-4), -2*exp(-3), -2*exp(-2), -2*exp(-1)].
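
One way to confirm that the analytical gradient is wrong (my own sketch, not from the original comment): torch.autograd.gradcheck compares the backward result against numerical finite differences, so it should fail for a negative alpha.

import torch

# gradcheck wants double precision inputs; it returns False here because the
# analytical gradient (all ones) does not match the numerical one (alpha * exp(x)).
x = torch.tensor([-4., -3., -2., -1.], dtype=torch.double, requires_grad=True)
fn = lambda t: torch.nn.functional.elu(t, alpha=-2)
print(torch.autograd.gradcheck(fn, (x,), raise_exception=False))  # False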

@linccnu
Author

linccnu commented Nov 12, 2020

#43389
According to the initial paper that introduced ELU (Section 3), a negative alpha is not valid. And the tf.keras doc here seems to only support alpha > 0.

@albanD
Collaborator

albanD commented Nov 12, 2020

Hi,

This is a similar issue to #31938.

Basically, we use the positivity of the output to check if the input was positive or negative.
But when you give a negative slope, the sign of the result is not the same as the sign of the input.

We should update the function to forbid negative alphas I think.

Bumping priority for silently wrong gradients
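
To make the failure mode concrete, here is a minimal sketch (my reconstruction, not the actual ATen code) of the dispatch described above, where the sign of the output is used as a proxy for the sign of the input:

import torch

def elu_backward_sketch(grad_output, output, alpha):
    # For x > 0, elu(x) = x, so the derivative is 1; for x <= 0, elu(x) = alpha*(exp(x)-1),
    # so the derivative is alpha*exp(x) = output + alpha. Picking the branch with
    # "output > 0" is only correct when alpha > 0.
    return torch.where(output > 0, grad_output, grad_output * (output + alpha))

x = torch.tensor([-4., -3., -2., -1.])
alpha = -2.0
out = torch.nn.functional.elu(x, alpha=alpha)
print(elu_backward_sketch(torch.ones_like(out), out, alpha))
# With alpha = -2 every output here is positive, so the x > 0 branch is chosen
# and the reported gradient is all ones, matching the issue.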
