Hello!

I don't have much experience with PyTorch, and I was wondering whether Tangram could easily be modified to parallelize across multiple GPUs. I am trying to map onto a spatial dataset that is quite large (~500k cells) and am running into this error:
```
RuntimeError: CUDA out of memory. Tried to allocate 52.38 GiB
(GPU 0; 39.59 GiB total capacity; 860.74 MiB already allocated;
37.90 GiB free; 882.00 MiB reserved in total by PyTorch)
If reserved memory is >> allocated memory try setting max_split_size_mb
to avoid fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF
```
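For reference, my understanding is that the `max_split_size_mb` suggestion in the message is configured through the `PYTORCH_CUDA_ALLOC_CONF` environment variable, roughly like this (though I don't expect it to help when a single allocation is larger than the whole card):

```python
import os

# Must be set before the first CUDA allocation, so set it before
# importing torch / tangram or moving anything to the GPU.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import tangram as tg  # imported after setting the variable
```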
The GPUs I am using have 40 GB of capacity, so this error makes sense to me. Is there a way to split the computation across two GPUs in PyTorch? I also understand that using `mode="clusters"` can reduce the resources required, but I was curious about this issue nonetheless. A rough sketch of the kind of workaround I have in mind follows.
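To make the question concrete: this is my own speculation, not a documented Tangram feature. It assumes `tg.pp_adatas` has already been run, and I am not sure whether chunking the spatial voxels preserves the mapping's semantics, since I believe the mapping matrix is normalized across voxels.

```python
import numpy as np
import tangram as tg

# Split the spatial voxels into one chunk per GPU so that each
# mapping matrix (n_cells x n_voxels_in_chunk) fits in 40 GB.
n_gpus = 2
chunks = np.array_split(np.arange(adata_sp.n_obs), n_gpus)

ad_maps = []
for gpu_id, idx in enumerate(chunks):
    adata_sp_chunk = adata_sp[idx].copy()
    ad_map = tg.map_cells_to_space(
        adata_sc,
        adata_sp_chunk,
        device=f"cuda:{gpu_id}",  # one chunk per device
    )
    ad_maps.append(ad_map)
```

Run sequentially like this, it would only reduce per-GPU memory; actually using both GPUs at once would presumably require launching each chunk in its own process. Is something along these lines reasonable, or does it break the method?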
Thank you!