Closed
Labels
module: cuda — Related to torch.cuda, and CUDA support in general
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
Edit: I filed this issue because I believed the following code had a use-after-free (the pinned CPU tensor `A` goes out of scope while the asynchronous host-to-device copy may still be reading it):

```python
import torch

SIZE = 1024  # placeholder size for illustration

def foo():
    A = torch.rand(SIZE, device="cpu", pin_memory=True)
    B = A.cuda(non_blocking=True)  # async copy; returns before the memcpy finishes
    return B  # A's last Python reference dies here

C = foo()
# do stuff with C
```
It turns out that PyTorch actually keeps the pinned memory alive until the memcpy event is complete. There is no bug. Pretty cool!
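The lifetime trick described above can be sketched in plain Python. This is an illustrative model, not PyTorch's actual internals: the class and method names here (`FakeEvent`, `PinnedBlockAllocator`) are hypothetical. The idea is that when a pinned block is "freed", the allocator records the copy's completion event alongside it and only recycles the block once that event has fired, so the async memcpy never reads recycled memory.

```python
# Hypothetical sketch of the "defer reuse until the copy event completes"
# pattern. In real PyTorch this role is played by the caching host
# allocator and CUDA events; nothing below is the real implementation.

class FakeEvent:
    """Stands in for a CUDA event recorded on the copy stream."""
    def __init__(self):
        self._done = False

    def record_complete(self):
        # Simulates the GPU finishing the memcpy.
        self._done = True

    def query(self):
        # Analogue of cudaEventQuery: has the event fired yet?
        return self._done


class PinnedBlockAllocator:
    """Recycles pinned blocks only after their copy events complete."""
    def __init__(self):
        self._pending = []  # (block, event) pairs awaiting completion
        self._free = []     # blocks that are safe to hand out again

    def free(self, block, event):
        # Do NOT recycle immediately: the async copy may still be reading.
        self._pending.append((block, event))

    def process_events(self):
        still_pending = []
        for block, event in self._pending:
            if event.query():
                self._free.append(block)  # copy done; safe to reuse
            else:
                still_pending.append((block, event))
        self._pending = still_pending

    def available(self):
        return list(self._free)


alloc = PinnedBlockAllocator()
ev = FakeEvent()
alloc.free("blockA", ev)

alloc.process_events()
print(alloc.available())  # [] — copy not finished, block held back

ev.record_complete()
alloc.process_events()
print(alloc.available())  # ['blockA'] — event fired, block recyclable
```

Seen through this model, the snippet above is safe because dropping the last Python reference to `A` only hands the block back to the allocator's pending list; the memory itself outlives the copy.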