GPU and Batch Size #11
By default we used 8 GPUs and set the per-GPU batch size to 2 for all experiments in the paper. However, when implementing the detector on MMDet 3.x for the code release, we encountered an issue with training speed when using SyncBN. We sidestepped this issue by using more GPUs (16), adjusting the learning rate following the linear scaling rule, and training for half of the original iterations. For the configs that do not use SyncBN, we retain the default setting of 8 GPUs.
Thank you for the response. I am asking because I plan to run on fewer GPUs (4) and may need to change the batch size in the codebase. Do you know where I can adjust the per-GPU batch size? By default, would the per-GPU batch size just remain 2 if using fewer GPUs?
Hi! Please refer to this config file.
You can increase the per-GPU batch size when you use fewer GPUs. Otherwise, adjust the learning rate following the linear scaling rule and increase the total number of iterations.
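Since the exact config file isn't linked here, below is a hedged sketch of where the per-GPU batch size typically lives in an MMDetection 3.x (MMEngine-style) config. The field names follow MMEngine conventions; the specific values and the omitted dataset settings are placeholders, not taken from this repo:

```python
# Fragment of an MMEngine-style config (MMDetection 3.x).
# batch_size is per GPU; the effective total batch is batch_size * num_gpus.
train_dataloader = dict(
    batch_size=2,   # per-GPU batch size; raise this when training on fewer GPUs
    num_workers=4,  # dataloader workers per GPU (placeholder value)
    # dataset=dict(...)  # dataset settings omitted
)
```

For example, moving from 8 GPUs at batch_size=2 (total 16) to 4 GPUs at batch_size=4 keeps the total batch size, learning rate, and iteration count unchanged.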
Great, thank you for the guidance.
Hello, I was wondering if there is information on the GPU count, batch size, and GPU type for the results reported in the paper. Thanks!
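The linear scaling rule mentioned above can be sketched as a small helper: scale the learning rate with the total batch size, and scale the iteration count inversely so the same number of samples is seen. This is a minimal illustration; the base learning rate and iteration count below are assumed placeholders, not values from the paper:

```python
def scale_schedule(base_lr, base_total_batch, new_total_batch, base_iters):
    """Apply the linear scaling rule to a training schedule.

    LR scales proportionally with the total batch size; the number of
    iterations scales inversely so total samples seen stays constant.
    """
    scale = new_total_batch / base_total_batch
    new_lr = base_lr * scale
    new_iters = round(base_iters / scale)
    return new_lr, new_iters

# Paper setting: 8 GPUs x 2 per GPU = total batch 16.
# Running on 4 GPUs at 2 per GPU halves the total batch to 8,
# so the LR halves and the iteration count doubles.
# (base_lr=1e-4 and base_iters=90000 are assumed placeholders.)
new_lr, new_iters = scale_schedule(
    base_lr=1e-4, base_total_batch=16, new_total_batch=8, base_iters=90000
)
# new_lr = 5e-5, new_iters = 180000
```

Conversely, doubling the total batch (e.g. the 16-GPU release setting) doubles the LR and halves the iterations, which matches the maintainer's description above.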