Reproduce DINO-R50 and get a higher result of 49.9 with batch_size=1 and nGPU=8 #150
Comments
wow! nice results~ Would you like to share your training log with us? And would you like to provide your checkpoints and config for us by creating a new …
@FelixCaae Your result is normal. You use a smaller total batch size of 8 and more training iterations, which leads to better performance in the early stage of training. However, if you continue to run it until convergence, the result should be no higher than the result with a total batch size of 16. In fact, we have observed the same phenomenon when training other models.
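For concreteness, a quick back-of-the-envelope check of the two schedules (the COCO train2017 size of ~118,287 images is an assumption here): both runs see exactly the same number of images, i.e. they are epoch-matched, but the half-size-batch run takes twice as many optimizer steps to get through them.

```python
# Epoch-matched comparison of the two schedules (COCO train2017 size assumed).
COCO_TRAIN_IMAGES = 118_287

images_default = 16 * 90_000   # total_batch_size=16, 90k iterations (stock)
images_reported = 8 * 180_000  # total_batch_size=8, 180k iterations (this issue)

print(images_default == images_reported)                   # True: same data seen
print(f"{images_default / COCO_TRAIN_IMAGES:.2f} epochs")  # ~12.17 for both runs
```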
As there is no more activity, I am closing the issue~ Feel free to reopen it if necessary, or open a new issue if you run into other problems.
I reproduced DINO with dino_r50_4scale_12ep.py, setting batch_size=1. I used max_iter = 90000 × 2 and dropped the learning rate at the 165000th iteration. This gave a result higher than the one this repo reports. Since this result (49.9) is clearly better than the current result (49.2), is there something wrong with my setting? Or could this be a better training setting than the default one (batch_size=2)?
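For reference, here is a minimal sketch of how this schedule could be written on top of the stock config in detrex's LazyConfig style. The file name, the relative import path, and the milestone layout of lr_multiplier are assumptions, not the repo's actual code:

```python
# dino_r50_4scale_bs8_180k.py -- hypothetical file, assumed to sit next to
# the stock dino_r50_4scale_12ep.py config.
from .dino_r50_4scale_12ep import train, dataloader, optimizer, lr_multiplier, model

# 1 image per GPU on 8 GPUs -> total batch size 8 (the stock config uses 16).
dataloader.train.total_batch_size = 8

# Twice the iterations keeps the epoch count equal to the stock schedule;
# the learning rate drops at iteration 165000 instead of 82500.
train.max_iter = 90000 * 2                             # 180000
lr_multiplier.scheduler.milestones = [165000, 180000]  # stock: [82500, 90000]

train.output_dir = "./output/dino_r50_4scale_bs8_180k"
```

Assuming the usual detrex entry point, this would be launched with something like `python tools/train_net.py --config-file projects/dino/configs/dino_r50_4scale_bs8_180k.py --num-gpus 8`.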