Mysterious Tensor Indexing Problem #22013
Comments
Aha! Indeed it does. Thank you! That's an excellent workaround. I'm tempted to say this is still a bug: indexing by native Python lists should either be equivalent to indexing by a tensor or be disallowed entirely. At the very least, the error message should mention that indexing by native Python lists is unsupported.
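The workaround referred to above is presumably wrapping the list in `torch.tensor` before indexing. A minimal sketch (the tensor shapes and variable names here are assumptions, not from the thread):

```python
import torch

x = torch.arange(20).reshape(10, 2)
idx = [[0], [3], [7]]          # a 2d Python list of indices

# Indexing with a tensor is unambiguous: the tensor is always treated
# as an index array, never as a tuple of per-dimension indices.
out = x[torch.tensor(idx)]
print(out.shape)               # torch.Size([3, 1, 2]): advanced indexing on dim 0
```

Because the index is a tensor, its length no longer matters: the same code path is taken whether it has 3 elements or 300.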
In particular, it does seem that we should have the same behavior regardless of whether the list has more or fewer than 32 elements. But I don't know the exact PyTorch indexing rules; I will discuss with the rest of the team and update this issue accordingly.
Lists with 31 or fewer elements are interpreted as tuples in some cases. This matches NumPy's behavior, which in turn exists for backwards-compatibility reasons. NumPy recently added a warning for this case; we should probably do the same.
See pytorch/torch/csrc/autograd/python_variable_indexing.cpp, lines 226 to 264 at 38c9bb8.
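To illustrate why the tuple interpretation matters, here is the NumPy distinction the 32-element rule hinges on (shown with modern NumPy, where a plain integer list is always treated as an array index):

```python
import numpy as np

x = np.arange(12).reshape(3, 4)

# A tuple is a multidimensional index: one entry per dimension.
assert x[(0, 1)] == x[0][1] == 1

# A list is an advanced ("fancy") index along the first dimension.
assert x[[0, 1]].shape == (2, 4)
assert (x[[0, 1]] == x[np.array([0, 1])]).all()
```

A short list interpreted as a tuple thus indexes one dimension per element, which is a completely different operation from advanced indexing, and explains both the errors and the silent wrong results described in this thread.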
NumPy warns:
`FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use arr[tuple(seq)] instead of arr[seq].`
We should definitely improve the error message here.
I am surprised this bug is still silently active. With `def bernoulli_i(p, size): ...`, `c32 = bernoulli_i(0.5, 32)` works, while `c31 = bernoulli_i(0.5, 31)` returns a `tensor([], size=(0, s))` for an index created with size `s < 32`.
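Note that this commenter reports a silent wrong answer (an empty result) rather than an error. The body of `bernoulli_i` was not preserved, but one way a short list index can yield an empty `(0, s)` result is a hypothetical illustration like the following: an empty list index selects zero rows without raising:

```python
import torch

s = 31
x = torch.arange(2 * s).reshape(2, s)

# An empty list index is a valid (empty) advanced index:
# it selects zero rows along dim 0, producing shape (0, s).
print(x[[]].shape)   # torch.Size([0, 31])
```

An index that silently degenerates to an empty selection is arguably worse than the `IndexError` in the original report, since nothing signals that the result is wrong.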
🐛 Bug
Indexing into a tensor with a 2d list of indices sometimes fails, with a critical threshold when the number of indices is less than 32.
To Reproduce
n=31
fails with:
Expected behavior
I expected this indexing to work the same for any number of indices. This is problematic in my actual code as the size of the data varies.
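The original repro code block was not preserved; a minimal sketch of the kind of indexing described, with the threshold at 32 (the data shape and names are assumptions):

```python
import torch

data = torch.randn(100, 8)  # hypothetical data

idx32 = [[i, i + 1] for i in range(32)]  # 2d list with 32 rows
print(data[idx32].shape)                 # torch.Size([32, 2, 8]): works

idx31 = [[i, i + 1] for i in range(31)]  # the same thing with 31 rows
# data[idx31] raised an error in torch 1.1: a list shorter than 32
# elements whose entries are themselves sequences was interpreted as a
# tuple of per-dimension indices ("too many indices" for a 2-d tensor).
```

With 32 or more rows the list is converted to an index tensor of shape `(32, 2)` and advanced indexing proceeds normally; one row fewer flips the interpretation entirely, which is exactly the varying-data-size problem described above.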
Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 10.0.130
OS: Ubuntu 16.04.6 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609
CMake version: Could not collect
Python version: 3.6
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
Versions of relevant libraries:
[pip3] numpy==1.16.4
[pip3] pytorch-pretrained-bert==0.6.2
[pip3] torch==1.1.0
[conda] Could not collect
Additional context
cc @ezyang @gchanan @zou3519 @mruberry @rgommers @heitorschueroff