scatter_ throwing a RuntimeError #40359
Comments
cc @nikitaved - is something missing in the shape checks? We should not be getting an internal assert.
I cannot reproduce on master. This is what I see:

```
In [1]: import torch

In [2]: y = torch.zeros([9])

In [3]: y.scatter_(0, (y == 0).nonzero(), 1)
/home/nik/miniconda3/envs/pytorch-cuda-dev/bin/ipython:1: UserWarning: This overload of nonzero is deprecated:
	nonzero()
Consider using one of the following signatures instead:
	nonzero(*, bool as_tuple) (Triggered internally at ../torch/csrc/utils/python_arg_parser.cpp:766.)
  #!/home/nik/miniconda3/envs/pytorch-cuda-dev/bin/python
Out[3]: tensor([1., 1., 1., 1., 1., 1., 1., 1., 1.])
```
I can confirm this bug exists on 1.5.0 and 1.5.1 but appears to be fixed on master, so I am closing this issue.
There's still something wrong with the shape checking, it seems: the documentation says that the number of dimensions should be the same for src and idx, yet in this case idx has 2 dimensions (as a result of `nonzero()`).
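For illustration, the shape mismatch described above can be checked directly (a minimal sketch against a recent PyTorch build, following the repro in this thread):

```python
import torch

y = torch.zeros(9)        # self: 1-D, shape [9]
idx = (y == 0).nonzero()  # index: 2-D, shape [9, 1]

# Per the scatter_ documentation, index should have the same number
# of dimensions as self, which does not hold here.
print(y.dim(), idx.dim())  # 1 2
```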
@ngimel, it works in this case because the shape check iterates over …
I'm not sure what exactly we should allow, but in any case it should be documented, and it also should be more or less general. E.g. in the cases you describe, why stop at one extra dimension and not allow [N, 1, 1]? Also, can singleton dimensions be appended only, or appended and prepended? Seems easier to me to just disallow all questionable cases, with good error messages :-)
OK, it is much easier to fix the dim to be exactly the same for all inputs; otherwise we could allow broadcasting if needed, but that would change the documentation. All right, a simple fix will do :)
I'm a bit wary about allowing general broadcasting - would require some thought to get right. E.g. for in-place scatter self cannot be broadcasted, so values/indices should be broadcastable to self, right? For gather it's probably enough to require indices and self to be broadcastable to a common shape. Do you remember if there were enhancement requests to enable broadcasting for scatter/gather ops? In any case, imo it makes sense for now to bring behavior in agreement with the documentation, and discuss broadcasting separately.
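As a side note on why broadcasting would not rescue this particular case: under standard broadcasting rules, the [N, 1] index produced by `nonzero()` does not broadcast to a 1-D self at all. A sketch using `torch.broadcast_shapes` (added in releases newer than the ones discussed here):

```python
import torch

# self has shape [9]; nonzero() on a 1-D tensor yields indices of shape [9, 1]
self_shape, idx_shape = (9,), (9, 1)

# Broadcasting the two together yields a 2-D shape, not self's shape,
# so an in-place scatter (where self cannot grow) could not accept it.
print(torch.broadcast_shapes(self_shape, idx_shape))  # torch.Size([9, 9])
```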
No, I do not remember the broadcasting requests, but I do remember expectations that …
🐛 Bug
I'm trying to turn a tensor into three different tensors for a CNN with three channels. The following happened while testing.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
A tensor with `1` at every index.
Environment
Additional context
So squeeze solves it; however, I'm leaving this issue open since an internal assert failed.
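The squeeze workaround mentioned above can be sketched as follows: removing the trailing singleton dimension from the `nonzero()` output makes the index 1-D, matching `self`'s dimensionality as the documentation requires.

```python
import torch

y = torch.zeros(9)
# nonzero() returns shape [9, 1]; squeeze(1) collapses it to [9]
idx = (y == 0).nonzero().squeeze(1)
y.scatter_(0, idx, 1.0)
print(y)  # tensor([1., 1., 1., 1., 1., 1., 1., 1., 1.])
```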