Multi GPU support #20
Comments
What error messages are you getting? It seems a similar issue was discussed in another project: ultralytics/ultralytics#1971. Try adjusting the batch size.
Adjusting the batch size doesn't work for me; I think I need to use DataParallel in PyTorch to make it work.
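For reference, a minimal sketch of what that would look like, assuming a generic PyTorch model (`MyModel` is a placeholder here, not a class from this repo):

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    """Placeholder model; substitute the project's actual network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(512, 512)

    def forward(self, x):
        return self.net(x)

model = MyModel()
if torch.cuda.device_count() > 1:
    # DataParallel replicates the model on every visible GPU and
    # splits each input batch across them during forward().
    model = nn.DataParallel(model)
model = model.to("cuda")

x = torch.randn(8, 512, device="cuda")
with torch.no_grad():
    y = model(x)  # the batch of 8 is scattered across the GPUs
```

Note that DataParallel only splits the batch; each GPU still holds a full copy of the weights, so it won't help if the model itself doesn't fit on one card.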
@D3lik I am running on a single RTX 4090, but it gives a CUDA out-of-memory error. Do I need two GPUs?
No. Please check issue 2 for solutions. |
Hi. I have a desktop with 2x Tesla T4s, and it should work because they have 32 GB of VRAM in total, while other people reported around 27 GB of VRAM usage when running inference. However, during inference only one GPU is used, which causes a CUDA out-of-memory error. Which parts of the code should I edit so that it can run on multiple GPUs? Thanks in advance.
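If the weights alone exceed a single T4's 16 GB, batch splitting (DataParallel) won't fix the OOM; the model has to be sharded across both cards instead. Below is a hand-rolled sketch of that idea, with hypothetical stage boundaries that would need to be mapped onto this project's actual layers:

```python
import torch
import torch.nn as nn

class TwoGPUModel(nn.Module):
    """Illustrative pipeline split: first half on cuda:0, second on cuda:1."""
    def __init__(self):
        super().__init__()
        self.stage0 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to("cuda:0")
        self.stage1 = nn.Linear(512, 512).to("cuda:1")

    def forward(self, x):
        x = self.stage0(x.to("cuda:0"))
        # move activations from the first card to the second
        return self.stage1(x.to("cuda:1"))

model = TwoGPUModel()
with torch.no_grad():
    out = model(torch.randn(4, 512))  # requires two visible GPUs
```

If the project happens to load its weights through Hugging Face `transformers`, passing `device_map="auto"` to `from_pretrained` (with `accelerate` installed) achieves a similar split automatically.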