[BE] More informative error messages in THPVariable_set_grad
#100174
Following up on this, consider the following two asserts:

pytorch/torch/csrc/autograd/python_variable.cpp, lines 920 to 923 (at 31f311a)

pytorch/torch/csrc/autograd/python_variable.cpp, lines 924 to 929 (at 31f311a)

Can the second one (which is checked after the first) ever be triggered? If I look at pytorch/aten/src/ATen/core/TensorBase.h, lines 557 to 561 (at 31f311a), I would expect […]

We would definitely be happy with a PR improving these errors!
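As a sketch of why the second assert is still reachable after the first one passes: the first check compares only dtypes, while the second compares full tensor "type" (roughly dtype plus the device-dependent dispatch key), so a dtype match with a device mismatch gets past the first check but trips the second. The Python mock-up below is purely illustrative (`check_set_grad` and its arguments are hypothetical names, not the actual C++ code):

```python
# Hypothetical mock-up of the two sequential checks in THPVariable_set_grad.
# The names and logic here are a sketch, not the real ATen implementation.

def check_set_grad(tensor_dtype, tensor_device, grad_dtype, grad_device):
    # First check: dtypes must match.
    if tensor_dtype != grad_dtype:
        raise TypeError("attempting to assign a gradient with a different dtype")
    # Second check: full "type" equality (roughly dtype + dispatch key, where
    # the dispatch key is derived from the device).  This can still fail even
    # after the dtype check passed, e.g. CPU tensor vs CUDA gradient.
    if (tensor_dtype, tensor_device) != (grad_dtype, grad_device):
        raise TypeError("attempting to assign a gradient of a different type")
    return True

check_set_grad("float32", "cpu", "float32", "cpu")    # passes both checks
# check_set_grad("float32", "cpu", "float32", "cuda") # dtype matches, but the
#                                                     # second check still fires
```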
pytorch/c10/core/TensorOptions.h, lines 364 to 367 (at e6f9bc5)

Maybe the first clause on the dispatch key equality is causing the non-intuitive behavior, since it seems to be a function of the device? pytorch/c10/core/TensorOptions.h, lines 439 to 442 (at e6f9bc5)

It looks like […]

For the first check involving […]
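To make the dispatch-key observation concrete: if the computed dispatch key is a function of the device (among other things), then a `type_equal`-style comparison implicitly compares devices even though it never mentions them. A minimal sketch of that assumed structure (simplified names and tuples, not the real c10 code):

```python
# Sketch of the assumed semantics: the dispatch key is derived from layout
# and device, so comparing dispatch keys implicitly compares devices.
# This is a simplification of c10's computeDispatchKey, not the real code.

def compute_dispatch_key(layout, device_type):
    # Simplified: the real key computation also folds in other properties.
    return (layout, device_type)

def type_equal(opts_a, opts_b):
    # opts = (dtype, layout, device_type).  Mirrors the "dispatch keys equal
    # AND dtypes equal" shape of the check under discussion.
    return (compute_dispatch_key(opts_a[1], opts_a[2]) ==
            compute_dispatch_key(opts_b[1], opts_b[2])
            and opts_a[0] == opts_b[0])

cpu_opts = ("float32", "strided", "cpu")
cuda_opts = ("float32", "strided", "cuda")
# Same dtype, different device -> different dispatch key -> not type_equal:
type_equal(cpu_opts, cuda_opts)                     # False
type_equal(cpu_opts, ("float32", "strided", "cpu")) # True
```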
pytorch/torch/csrc/autograd/python_variable.cpp, line 896 (at 31f311a)

When the gradient metadata does not match the corresponding tensor's metadata, THPVariable_set_grad() raises an error, e.g.:

[…]

Including what leads to the mismatch would be very helpful. For example, for the above message, if we could see something like:

[…]

A similar idea applies for mismatched devices and sizes.
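To illustrate the kind of improvement being requested (hypothetical wording and a made-up helper name, not a proposed patch), the error could report both the gradient's metadata and the tensor's instead of only stating that they differ:

```python
# Hypothetical sketch of a more informative mismatch message: surface both
# the gradient's dtype and the tensor's dtype so the user can see what led
# to the mismatch.  The wording and function name are illustrative only.

def format_grad_dtype_error(tensor_dtype, grad_dtype):
    return (f"attempting to assign a gradient with dtype '{grad_dtype}' "
            f"to a tensor with dtype '{tensor_dtype}'. Please ensure that "
            f"the gradient and the tensor have the same dtype")

# The same pattern extends naturally to device and size mismatches.
msg = format_grad_dtype_error("float32", "float64")
```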
cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7 @malfet