
Question about parallelizing over multiple GPUs #69

Open
brianysoong opened this issue Jul 28, 2022 · 2 comments

Comments


brianysoong commented Jul 28, 2022

Hello!

I don't have much experience with PyTorch, and I was wondering whether Tangram could easily be modified to parallelize over multiple GPUs. I am trying to map onto a spatial dataset which is quite large (~500k cells) and am running into this error:

RuntimeError: CUDA out of memory. 
Tried to allocate 52.38 GiB (GPU 0; 39.59 GiB total capacity; 
860.74 MiB already allocated; 
37.90 GiB free; 
882.00 MiB reserved in total by PyTorch) 
If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
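(For reference, the allocator option mentioned at the end of the error is set through an environment variable, and it has to be set before the first CUDA allocation. A minimal sketch follows; the 128 MiB split size is just an illustrative value, and this only mitigates fragmentation, so it cannot help when a single allocation is larger than the card's total memory.)

import os

# Configure the CUDA caching allocator before torch touches the GPU
# (ideally before torch is imported at all).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch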

The GPUs I am using have a 40 GB capacity, so this error makes sense to me. Is there a way to split the computation across 2 GPUs in PyTorch? I also understand that using mode = "cluster" can reduce the processing resources required, but I was curious about this issue nonetheless.

Thank you!
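(Regarding the cluster-level option mentioned above: as far as I can tell, Tangram's mapping function takes a single device argument rather than offering multi-GPU support, so the usual way to fit a large spatial dataset on one card is to map at cluster resolution. Below is a minimal sketch, assuming the standard tangram API, hypothetical input file names, and a hypothetical "cell_type" annotation column in the single-cell AnnData; note that the mode value is "clusters", plural.)

import scanpy as sc
import tangram as tg

# Hypothetical input files; substitute the real single-cell reference and spatial AnnData objects.
adata_sc = sc.read_h5ad("sc_reference.h5ad")
adata_sp = sc.read_h5ad("spatial_500k.h5ad")

# Restrict both objects to their shared training genes before mapping.
tg.pp_adatas(adata_sc, adata_sp)

# "clusters" mode learns one mapping row per cluster instead of one per cell,
# which shrinks the mapping matrix and the GPU memory needed to hold it.
ad_map = tg.map_cells_to_space(
    adata_sc,
    adata_sp,
    mode="clusters",
    cluster_label="cell_type",  # assumed annotation column in adata_sc.obs
    device="cuda:0",
)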

@caiquanyou

I have the same question: how can this be deployed on multiple GPUs?


HeesooSong commented Aug 3, 2023

Same! It would be awesome to have an option to parallelize the calculation over multiple GPUs.
