Cannot share cuda.Tensor on GPU 1 among processes. #1707
That happens because the context is incorrectly set on the 0-th device. Using

```python
with torch.cuda.device(storage.get_device()):
    metadata = storage._share_cuda_()
    cache_key = metadata[1]
    rebuild = rebuild_storage_cuda
```

gets you further, but there's still an error when the shared tensor gets freed (also because of an incorrectly set device):
colesbury added a commit to colesbury/pytorch that referenced this issue on Jun 5, 2017:
The correct device must be set when getting the base allocation and when calling cudaIpcCloseMemHandle. Store the device in the allocator's context, which was previously always NULL. Fixes pytorch#1707.
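The fix can be illustrated with a toy model (pure Python, no CUDA involved; `FakeRuntime`, `IpcAllocator`, and the handle names are invented for illustration): the allocator records the owning device at allocation time and makes it current before the close call, instead of leaving the context NULL, which effectively meant device 0.

```python
# Toy model of the fix in pytorch#1707: remember the device in the
# allocator's context so the free path runs with the right device current.
# All names here are hypothetical; no real CUDA IPC is involved.

class FakeRuntime:
    """Stands in for the CUDA runtime: an IPC handle can only be closed
    while its owning device is the current device."""
    def __init__(self):
        self.current_device = 0          # like cudaSetDevice state, defaults to 0
        self.handles = {}                # handle -> owning device

    def open_handle(self, handle, device):
        self.handles[handle] = device

    def close_handle(self, handle):
        # cudaIpcCloseMemHandle fails if the wrong device is current.
        if self.handles[handle] != self.current_device:
            raise RuntimeError("invalid argument")   # cuda error 11 in the report
        del self.handles[handle]


class IpcAllocator:
    def __init__(self, runtime):
        self.runtime = runtime
        self.context = {}                # handle -> device (always NULL before the fix)

    def allocate(self, handle, device):
        self.runtime.open_handle(handle, device)
        self.context[handle] = device    # the fix: store the device in the context

    def free(self, handle):
        saved = self.runtime.current_device
        self.runtime.current_device = self.context.pop(handle)  # set correct device
        try:
            self.runtime.close_handle(handle)
        finally:
            self.runtime.current_device = saved      # restore caller's device


rt = FakeRuntime()
alloc = IpcAllocator(rt)
alloc.allocate("buf-on-gpu1", device=1)
alloc.free("buf-on-gpu1")   # succeeds even though device 0 is current in the caller
```

Without the stored device, `free` would run `close_handle` with device 0 current and hit the same "invalid argument" error as the original report.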
houseroad added a commit to houseroad/pytorch that referenced this issue on Jan 4, 2019:
…b18ba1 (pytorch#15739) Summary: Pull Request resolved: pytorch#15739. Previous import was 765f5ee823a67a866f4bd28a9860e81f3c811ce8. Included changes:
- **[8384c78](onnx/onnx@8384c78)**: add constantofshape (pytorch#1582) <Rui Zhu>
- **[9afc06c](onnx/onnx@9afc06c)**: Set symbol visibility to hidden for non-Windows (pytorch#1707) <Paul Jesse Hellemn>
- **[6f8a9f0](onnx/onnx@6f8a9f0)**: Revert "Add NonMaxSupression operator (pytorch#1695)" (pytorch#1702) <Lu Fang>
- **[8b89544](onnx/onnx@8b89544)**: Add NonMaxSupression operator (pytorch#1695) <Hector Li>
- **[0a7cc48](onnx/onnx@0a7cc48)**: Add bfloat16 support. (pytorch#1699) <Dmitri Smirnov>
- **[da7c50c](onnx/onnx@da7c50c)**: ONNX does not maintain versions for experimental ops (pytorch#1696) <Ke Zhang>
- **[0c8d857](onnx/onnx@0c8d857)**: Correct type of value_info in Graph (pytorch#1694) <Maik Riechert>
- **[f612532](onnx/onnx@f612532)**: Fix typos (pytorch#1686) <Eundoo Song>

Reviewed By: zrphercule. Differential Revision: D13581674. fbshipit-source-id: a961667184b09d2822815ba5d3fa4198a4c57e88
facebook-github-bot pushed a commit that referenced this issue on Jan 4, 2019 (the same ONNX submodule update, #15739).
mrshenli pushed a commit to mrshenli/pytorch that referenced this issue on Jan 6, 2019 (the same ONNX submodule update, pytorch#15739).
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this issue on May 17, 2022.
Here is a minimal example. Is there any solution?

```
THCudaCheck FAIL file=/py/conda-bld/pytorch_1493670682084/work/torch/csrc/generic/StorageSharing.cpp line=248 error=11 : invalid argument
Traceback (most recent call last):
  File "/home/yuandong/anaconda3/lib/python3.5/multiprocessing/queues.py", line 241, in _feed
    obj = ForkingPickler.dumps(obj)
  File "/home/yuandong/anaconda3/lib/python3.5/multiprocessing/reduction.py", line 50, in dumps
    cls(buf, protocol).dump(obj)
  File "/home/yuandong/anaconda3/lib/python3.5/site-packages/torch/multiprocessing/reductions.py", line 104, in reduce_storage
    metadata = storage._share_cuda_()
RuntimeError: cuda runtime error (11) : invalid argument at /py/conda-bld/pytorch_1493670682084/work/torch/csrc/generic/StorageSharing.cpp:248
```
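The example script itself did not survive this page; a minimal repro consistent with the traceback (a hypothetical reconstruction, requiring PyTorch and at least two GPUs) would put a tensor on GPU 1 and send it through a multiprocessing queue, which invokes `_share_cuda_()` in the `ForkingPickler` feeder thread:

```python
# Hypothetical reconstruction of the repro: sharing a GPU-1 tensor across
# processes. On affected PyTorch versions this failed with cuda error 11,
# because the sharing call ran while device 0 was current.
import torch
import torch.multiprocessing as mp

def consumer(queue):
    t = queue.get()                       # receives the shared CUDA tensor
    print(t.device, t.sum().item())

if __name__ == "__main__" and torch.cuda.device_count() >= 2:
    mp.set_start_method("spawn", force=True)
    q = mp.Queue()
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    q.put(torch.ones(4).cuda(1))          # tensor lives on GPU 1, not GPU 0
    p.join()
```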