CUDA memory leak when interop with Pytorch #100
Closed
Description
I am using Warp to implement a custom operator for my PyTorch neural network and ran into a CUDA memory leak. I found that the cause is that the input I pass to wp.launch is wrapped from a PyTorch tensor created with requires_grad=True. Is this considered a bug? I have attached a piece of code that reproduces the error. If you run it, you will see CUDA memory usage grow continuously until the device runs out of memory.
import warp as wp
import torch

@wp.kernel
def op_kernel(x: wp.array(dtype=wp.float32), y: wp.array(dtype=wp.float32)):
    i = wp.tid()
    y[i] = 2.0 * x[i]

wp.init()

i = 0
while True:
    i += 1
    print(i)
    # Wrapping a requires_grad=True tensor is what triggers the leak
    x = torch.rand((4096,), dtype=torch.float32, device='cuda', requires_grad=True)
    wp_x = wp.from_torch(x)
    wp_y = wp.empty_like(wp_x)
    wp.launch(kernel=op_kernel, dim=x.shape, inputs=[wp_x], outputs=[wp_y])
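For what it's worth, the behavior is consistent with wp.from_torch allocating extra gradient storage for every tensor that has requires_grad=True, which is never released inside the loop. A possible mitigation, sketched here under the assumption that the operator does not need Warp's autograd, is to hand Warp a detached view of the tensor: detach() shares the same underlying storage (no copy, so kernel writes stay visible to PyTorch) but drops gradient tracking. The snippet below illustrates the detach semantics on CPU; the same applies to CUDA tensors:

```python
import torch

# Hypothetical mitigation sketch: detach the tensor before wrapping it,
# e.g. wp.from_torch(x.detach()), so Warp sees no requires_grad flag.
x = torch.rand(4096, dtype=torch.float32, requires_grad=True)
x_view = x.detach()

# detach() returns a view sharing storage with the original tensor
print(x_view.requires_grad)               # False: no autograd tracking
print(x_view.data_ptr() == x.data_ptr())  # True: same underlying memory
```

Note that this sidesteps gradient flow through the Warp kernel entirely, so it only applies if backpropagation through the operator is handled (or not needed) elsewhere.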