The most important things are the requirements of the CUDA IPC API. I'd discourage its use unless you know the lower-level details of CUDA IPC. The most important limitation is that if you allocate a CUDA tensor X in process A and send it to process B, then X should never go out of scope and be freed in A until it has been used in B (NOTE: it is automatically freed upon A's exit). So it might be OK for sharing CUDA parameters across processes, but not, e.g., for data loading.
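A minimal sketch of that lifetime constraint, using `torch.multiprocessing`; the `consumer` function and the queue plumbing are illustrative, not taken from the docs:

```python
import torch
import torch.multiprocessing as mp

def consumer(q):
    # Process B: the tensor arrives through CUDA IPC; it is only valid
    # while process A keeps its original reference alive.
    x = q.get()
    print(x.sum().item())

if __name__ == "__main__":
    mp.set_start_method("spawn")  # required for CUDA + multiprocessing
    q = mp.Queue()
    x = torch.ones(4, device="cuda")  # allocated in process A
    p = mp.Process(target=consumer, args=(q,))
    p.start()
    q.put(x)
    # Process A must NOT drop `x` (e.g. via `del x`) before B has used it;
    # joining B first guarantees the allocation outlives the consumer.
    p.join()
```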
Fixes the assertion failure from pytorch#1325 on our devel branch.
1. Update alias information after graph mutation.
2. Patch unsqueeze: (i) support negative dimensions; (ii) fix the range check (see the sketch after this list).
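A brief sketch of the unsqueeze semantics involved, assuming the standard `torch.unsqueeze` contract: for an n-dimensional input, `dim` must lie in `[-(n + 1), n]`, and a negative `dim` is normalized to `dim + n + 1`:

```python
import torch

x = torch.zeros(2, 3)          # n = 2
print(x.unsqueeze(-1).shape)   # dim -1 -> 2, i.e. torch.Size([2, 3, 1])
print(x.unsqueeze(-3).shape)   # dim -3 -> 0, i.e. torch.Size([1, 2, 3])
try:
    x.unsqueeze(3)             # out of range: valid dims are [-3, 2]
except IndexError as e:
    print(e)
```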
In pytorch/docs/source/notes/cuda.rst, sharing CUDA tensors across processes is mentioned. IMO, more details on the requirements would be helpful for this part.