Sharing a torch.Tensor across processes using a Python Queue causes a core dump #56480
Labels
module: multiprocessing
Related to torch.multiprocessing
shadow review
Request the triage shadow to take a second look at your triage and see if they agree or not
triaged
This issue has been looked at by a team member, triaged, and prioritized into an appropriate module
🐛 Bug
Our process crashes when sending a torch.Tensor through a Queue, but when the tensor is converted to numpy first, the process works fine.
Even when we send small tensors, the process crashes again after a few successful sends!
To Reproduce
Steps to reproduce the behavior:
1. Run the following code on Linux with kernel 3.10.0-693.el7.x86_64
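The original snippet was not preserved in this report; the following is a minimal sketch of the pattern described (producer puts a tensor on a `torch.multiprocessing.Queue`, consumer reads it), with all names being assumptions:

```python
# Hypothetical minimal repro sketch; the original code was not included
# in the issue, so function and variable names here are assumptions.
import torch
import torch.multiprocessing as mp


def producer(q):
    t = torch.ones(1000, 1000)
    # Putting the tensor directly is what reportedly crashes;
    # q.put(t.numpy()) reportedly works around it.
    q.put(t)


def consumer(q):
    t = q.get()
    print(t.shape)


if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=producer, args=(q,))
    c = mp.Process(target=consumer, args=(q,))
    p.start()
    c.start()
    p.join()
    c.join()
```

On Linux, `torch.multiprocessing` moves tensor storage into shared memory and passes a handle through the Queue, so the crash may depend on the sharing strategy and kernel version in use.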
Expected behavior
The tensor should be delivered through the Queue without error; instead, the process calling queue.put crashes.
Environment
- Python: 3.6
- Linux kernel: 3.10.0-693.el7.x86_64
- torch: 1.2.0
cc @ezyang