I get CUDA memory blowing up when running on larger datasets:
return torch.pairwise_distance(x1, x2, p, eps, keepdim)
RuntimeError: CUDA out of memory. Tried to allocate 724.00 MiB (GPU 0; 11.00 GiB total capacity; 7.62 GiB already allocated; 190.31 MiB free; 8.77 GiB reserved in total by PyTorch)
Is there anything that can be done here, like moving this computation to the CPU?
Hi @opassos, yes, as you said, we could make the device type optional in the components. Currently almost all of them (i.e., the model, the anomaly-map generators, and metric computation) run on the GPU, which may not always be efficient.
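As a stopgap until device selection is configurable, the distance computation could be run on the CPU in chunks so the full distance tensor is never materialized on the GPU. This is only a sketch: `chunked_pairwise_distance`, the `chunk_size` value, and the `device` argument are illustrative, not part of the library's API.

```python
import torch


def chunked_pairwise_distance(x1, x2, chunk_size=1024, device="cpu"):
    """Compute torch.pairwise_distance in row-wise chunks on `device`.

    x1 and x2 must have the same shape (N, D); the result has shape (N,).
    Moving each chunk to `device` (e.g. "cpu") keeps peak GPU memory low.
    """
    out = []
    for start in range(0, x1.shape[0], chunk_size):
        a = x1[start:start + chunk_size].to(device)
        b = x2[start:start + chunk_size].to(device)
        out.append(torch.pairwise_distance(a, b))
    return torch.cat(out)
```

Trading a few host-device transfers for a lower memory peak is usually acceptable here, since the distance step is rarely the throughput bottleneck.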