How many GPUs will be used for training the code? #24
When running the training code with 2 GPUs, the following problem occurred:
In the experiments, I used 2 GPUs for training. You can use more GPUs or reduce the batch size, but that may affect the results.
OK, thank you very much!
I used four 1080 Tis and reduced the batch size from 64 to 32 during fine-tuning; the result is not as good as the one the paper reported. The mAP on VOC split1 is only 0.385, versus 0.475 in the paper. @Bohao-Lee
I have not tried the reduced-batch-size setting before, but I can reproduce the reported performance on two 3080 GPUs.
Thanks, I will try again.
I also encountered this error: RuntimeError: CUDA out of memory. Tried to allocate 422.00 MiB (GPU 0; 10.76 GiB total capacity; 9.72 GiB already allocated; 179.69 MiB free; 84.55 MiB cached). But I only have two 2080 Tis, so what should I do? Reduce the batch size? @xiaofeng-c @Bohao-Lee
Reducing the batch size may help, but it can affect performance. @Jxt5671
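One common workaround (not from this repository, just a sketch of the general idea) is to pair a smaller per-GPU batch with gradient accumulation, so the optimizer still sees the same effective batch size even though less memory is used per forward pass. The function and numbers below are illustrative, mirroring the 64-to-32 reduction discussed above:

```python
# Hypothetical sketch: when the per-GPU batch no longer fits in GPU memory,
# gradient accumulation can keep the effective batch size unchanged.
# This is a general technique, not part of this repository's code.

def effective_batch_size(per_gpu_batch: int, num_gpus: int, accum_steps: int) -> int:
    """Total number of samples contributing to each optimizer step."""
    return per_gpu_batch * num_gpus * accum_steps

# Original setting: batch 64 split across 2 GPUs, no accumulation.
original = effective_batch_size(per_gpu_batch=32, num_gpus=2, accum_steps=1)

# After halving the per-GPU batch to avoid CUDA OOM, accumulating
# gradients over 2 steps keeps the optimizer step at 64 samples.
reduced = effective_batch_size(per_gpu_batch=16, num_gpus=2, accum_steps=2)

print(original, reduced)  # both 64
```

Whether this recovers the paper's numbers is not guaranteed; batch-norm statistics, for instance, still see only the smaller per-step batch.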
I use two 3080 GPUs, but I also encountered this error: RuntimeError: CUDA out of memory. Tried to allocate 422.00 MiB. Could you please tell me the CUDA, torch, and Python versions you use?
Thank you for your work. When using your code for training, I would like to know how many GPUs are required.