
Multi GPU support #20

Open
D3lik opened this issue Jun 2, 2024 · 4 comments


D3lik commented Jun 2, 2024

Hi. I have a desktop with 2x Tesla T4s, which should be enough since they have 32 GB of VRAM in total, while other people have reported around 27 GB of VRAM usage when running inference. However, during inference only one GPU is being used, which causes a CUDA out of memory error. Which parts of the code should I edit so that it can run on multiple GPUs? Thanks in advance.

@supraxylon

What error messages are you getting? It seems a similar issue was discussed in another project: ultralytics/ultralytics#1971

Try adjusting the batch size.
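For reference, a minimal sketch of what "adjusting the batch size" can look like for inference in plain PyTorch, assuming a generic `model` and `inputs` as stand-ins (the repo's actual loading code will differ):

```python
import torch

# Hypothetical model and inputs; replace with the project's own loading code.
model = torch.nn.Linear(1024, 1024).cuda().eval()
inputs = torch.randn(64, 1024)

# Run inference in smaller chunks and without gradient tracking to lower peak VRAM usage.
outputs = []
with torch.no_grad():
    for chunk in inputs.split(8):  # smaller effective batch size
        outputs.append(model(chunk.cuda()).cpu())
outputs = torch.cat(outputs)
```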


D3lik commented Jun 3, 2024

Adjusting the batch size doesn't work for me. I think I need to use DataParallel in PyTorch to make it work.
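A minimal DataParallel sketch, assuming the inference code builds a standard `torch.nn.Module` (the model here is a placeholder, not this project's actual API):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model; replace with the model built by this repo's inference script.
model = nn.Linear(1024, 1024)

# Replicate the module on every visible GPU so each forward pass splits the batch across them.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda().eval()

with torch.no_grad():
    batch = torch.randn(16, 1024).cuda()
    out = model(batch)  # batch dimension is scattered across GPUs, outputs gathered on GPU 0
```

Note that `DataParallel` keeps a full copy of the weights on every GPU, so it mainly helps when the activations (batch size) are what overflow memory; to actually shard the weights across the two T4s you would need something like Hugging Face Accelerate's `device_map="auto"` or manual placement of submodules with `.to(device)`.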

@khawar-islam

@D3lik I am running on a single RTX 4090, but it gives a CUDA out of memory error. Do I need two GPUs?


D3lik commented Jun 5, 2024

> @D3lik I am running on a single RTX 4090, but it gives a CUDA out of memory error. Do I need two GPUs?

No. Please check issue #2 for solutions.
