Multi-GPU support for the PyTorch bindings? #63
Hi there, yes, unfortunately tiny-cuda-nn does not support multi-GPU operation as of now. This is something that'd be cool to have in the future, but currently is not a high priority. I'm going to leave this issue open to serve as a TODO marker. Cheers!
@Tom94 Thanks for the quick response! Hopefully it can be implemented one day. But even if multi-GPU is not supported for now, it should still be possible to support the single-GPU case where both the input and the network are on a GPU other than the default one. I am guessing that the error was caused by a mismatch between the default GPU and the device the input tensors are on.
tcnn uses whichever CUDA device is "current" on the CPU thread, i.e. the device returned by cudaGetDevice() at the time the module is created. If the input tensors live on a different device than that one, errors like the one above can occur.
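For reference, a minimal sketch of the approach this suggests, assuming tinycudann is installed and a second GPU (cuda:1) exists; the HashGrid config and tensor shapes here are illustrative placeholders, not taken from the thread:

```python
import torch
import tinycudann as tcnn

device = torch.device("cuda:1")

# Make cuda:1 the "current" device on this CPU thread *before* constructing
# the tcnn module, so its internal parameters and buffers live on that GPU.
with torch.cuda.device(device):
    encoding = tcnn.Encoding(
        n_input_dims=3,
        encoding_config={"otype": "HashGrid"},  # remaining fields use library defaults
    )
    x = torch.rand(1024, 3, device=device)  # inputs on the same (current) device
    y = encoding(x)
```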
Yes, you are absolutely right. It's "current" device instead of "default" device.
Hi, do you have any plans to support multi-GPU training in the near future? Or do you have any hints about why the current code prevents using multiple GPUs with PyTorch distributed training? It would be super useful to be able to use tinycudann to train large-scale models with tiny MLPs. Thanks
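For context, a hypothetical sketch of the kind of distributed setup this question refers to, assuming a launch via torchrun with one process per GPU; the configs, dimensions, and environment variables are placeholders and not taken from the thread:

```python
import os
import torch
import torch.distributed as dist
import tinycudann as tcnn
from torch.nn.parallel import DistributedDataParallel as DDP

# One process per GPU, launched e.g. with torchrun.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = tcnn.NetworkWithInputEncoding(
    n_input_dims=3,
    n_output_dims=3,
    encoding_config={"otype": "HashGrid"},
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
)

# Wrapping the tcnn module in DDP is the step the thread reports as failing
# when multiple GPUs are involved.
ddp_model = DDP(model, device_ids=[local_rank])
```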
Hi,
I found that the following code would fail:
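The original snippet is not preserved here; a minimal sketch of the kind of code that reproduces the described failure, assuming tinycudann is installed and a second GPU (cuda:1) is available, might look like this (the network config is illustrative):

```python
import torch
import tinycudann as tcnn

device = torch.device("cuda:1")  # any GPU other than the default cuda:0

# The module is created while cuda:0 is still the current device,
# and only afterwards moved with .to(device).
network = tcnn.Network(
    n_input_dims=32,
    n_output_dims=3,
    network_config={
        "otype": "FullyFusedMLP",
        "activation": "ReLU",
        "output_activation": "None",
        "n_neurons": 64,
        "n_hidden_layers": 2,
    },
).to(device)

x = torch.rand(128, 32, device=device)
y = network(x)  # this forward pass is the kind of call reported to fail on a non-default GPU
```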
It seems the module does not have proper support for running on a different GPU even if we have called .to(device). Is it possible to fix this? In addition, I also tried using torch.nn.DataParallel together with the hash encoding & tiny MLP. They seem to fail in such use cases as well. Is it possible to fix this? Thanks a lot!