
"Trying to resize storage that is not resizable" when calling pin_memory() on some zero-dimensional tensors #15770

Closed
qbx2 opened this issue Jan 6, 2019 · 5 comments

@qbx2 (Contributor) commented Jan 6, 2019

🐛 Bug

A "Trying to resize storage that is not resizable" error is raised when calling pin_memory() on some tensors that have a zero-sized dimension.

To Reproduce

$ python
Python 3.7.0 (default, Jun 28 2018, 13:15:42) 
[GCC 7.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> x = torch.randn(2, 1); x.pin_memory()
tensor([[-0.0747],
        [ 0.4300]])
>>> x = torch.randn(0); x.pin_memory()
tensor([])
>>> x = torch.randn(0, 0); x.pin_memory()
tensor([], size=(0, 0))
>>> x = torch.randn(1, 0); x.pin_memory()
tensor([], size=(1, 0))
>>> x = torch.randn(0, 1); x.pin_memory()
tensor([], size=(0, 1))
>>> x = torch.randn(0, 5); x.pin_memory()
tensor([], size=(0, 5))
>>> x = torch.randn(0, 50); x.pin_memory()
tensor([], size=(0, 50))
>>> x = torch.randn(50, 0); x.pin_memory()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Trying to resize storage that is not resizable at /home/qbx2/pytorch/aten/src/TH/THStorageFunctions.cpp:70
>>> x = torch.randn(2, 0); x.pin_memory()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: Trying to resize storage that is not resizable at /home/qbx2/pytorch/aten/src/TH/THStorageFunctions.cpp:70
>>> x = torch.randn(0, 2); x.pin_memory()
tensor([], size=(0, 2))

Expected behavior

All tensors with a zero-sized dimension should behave in the same way.
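Until a fixed build is available, one workaround is to skip pinning for empty tensors, since there is nothing to copy in that case. The `safe_pin` helper below is a hypothetical sketch, not part of PyTorch:

```python
import torch

def safe_pin(t: torch.Tensor) -> torch.Tensor:
    """Hypothetical workaround helper (not part of PyTorch).

    Skips pin_memory() when the tensor has no elements, since the
    call is what triggers the resize error on shapes like (50, 0)
    and pinning an empty tensor has no benefit anyway.
    """
    if t.numel() == 0:
        return t  # empty tensor: return as-is instead of pinning
    return t.pin_memory()
```

With this guard, `safe_pin(torch.randn(50, 0))` returns the tensor unchanged instead of raising.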

Environment

PyTorch version: 1.0.0a0+db5d313
Is debug build: No
CUDA used to build PyTorch: 10.0.130

OS: Ubuntu 18.04.1 LTS
GCC version: (Ubuntu 7.3.0-27ubuntu1~18.04) 7.3.0
CMake version: version 3.12.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.0.130
GPU models and configuration: GPU 0: GeForce RTX 2080
Nvidia driver version: 410.48
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/lib64/libcudnn.so.7.4.1
/usr/local/cuda-10.0/lib64/libcudnn_static.a

Versions of relevant libraries:
[pip] Could not collect
[conda] magma-cuda100 2.4.0 1 pytorch
[conda] torch 1.0.0a0+db5d313

Additional context

@gchanan (Contributor) commented Jan 8, 2019

CC @yf225.

@yf225 (Contributor) commented Jan 8, 2019

@qbx2 This bug seems to be fixed on master. Could you try installing PyTorch via pip install torch_nightly -f https://download.pytorch.org/whl/nightly/cu100/torch_nightly.html and then run the code again?

@soumith closed this Jan 8, 2019

@qbx2 (Contributor, Author) commented Feb 16, 2019

Yes, it works. Thank you.

@zmabzug commented Apr 2, 2019

Has this bugfix been integrated into a stable (i.e., non-nightly) release yet?

@xieshuaix commented Apr 19, 2019

I got this error too when constructing a float32 tensor from a float64 NumPy array: torch.FloatTensor(array), torch.from_numpy(array).type(torch.float32), and torch.from_numpy(array.astype(np.float32)) all raise it. Constructing a float64 tensor instead works fine. This seems to be a device-specific issue.

Any plan on releasing the fix?
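To check whether an installed build includes the fix, one can probe the failing shape from the report directly. This is a sketch, with the caveat that on a CPU-only build pin_memory() raises a different RuntimeError, so a False result is not conclusive evidence that this particular bug is present:

```python
import torch

def has_pin_memory_fix() -> bool:
    """Return True if pin_memory() on a (50, 0) tensor no longer raises.

    Hypothetical probe, not part of PyTorch: a False result may also
    mean pinning is unavailable (e.g. a CPU-only build), not just that
    the resize bug is still present.
    """
    try:
        torch.randn(50, 0).pin_memory()
        return True
    except RuntimeError:
        return False
```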
