Questions about experiments #2

Thanks for your ICCV work. However, I find that you directly use the test set of ImageNet-LT to select the best models, which may lead to overfitting in practice and seems unfair to the other compared methods. Could you please provide results on ImageNet-LT using the validation set to select models? That would make it easier for us to compare with PaCo in our work. Thanks very much.

Comments
Hi, thank you very much for your suggestion! In fact, we observe no difference between selecting models on the validation set and on the test set in our experiments. Furthermore, we use the same setting for all re-implemented baselines, so the comparisons are consistent.
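For reference, below is a minimal PyTorch sketch of the validation-set selection protocol being asked about: checkpoints are kept based on validation accuracy, and the test set is only evaluated once at the end. The helper names (`evaluate`, `maybe_save_best`) and the checkpoint path are illustrative assumptions, not code from this repo.

```python
import torch

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    # Top-1 accuracy on a held-out split (the validation set here, not the test set).
    model.eval()
    correct = total = 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        correct += (model(images).argmax(dim=1) == targets).sum().item()
        total += targets.numel()
    return correct / max(total, 1)

def maybe_save_best(model, val_loader, epoch, best_val_acc, path="best_on_val.pth"):
    # Save a checkpoint only when validation accuracy improves, so the test set
    # is touched once for the final report rather than used for model selection.
    val_acc = evaluate(model, val_loader)
    if val_acc > best_val_acc:
        torch.save({"epoch": epoch,
                    "state_dict": model.state_dict(),
                    "val_acc": val_acc}, path)
        best_val_acc = val_acc
    return best_val_acc
```

Per the reply above, either split reportedly selects the same checkpoints in the authors' experiments, but the validation-set protocol avoids any appearance of tuning on the test set.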
Hi, do you run the experiment on 4 GPUs with a total batch size of 128? What is the running time? I find I cannot reproduce the result with a batch size of 128 on a single GPU; it is much worse than the reported numbers.
Hi,
Thanks for your response. 400 epochs at a batch size of 128 might be very time-consuming, and my experiment has not finished yet. I use 4 GPUs with a batch size of 128*4 for 400 epochs. Here is the log, and here is the log for 2 GPUs. How many GPU hours does your code take to run 400 epochs on ImageNet-LT?
Hi, I think that running the scripts in this repo with 4 GPUs can reproduce the results reported in the paper.
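For anyone trying to match the 4-GPU setup, here is a rough DistributedDataParallel sketch showing how per-GPU batch size relates to total batch size. It is not the repo's actual training script; the model, learning rate, file name, and the use of FakeData in place of ImageNet-LT are illustrative assumptions.

```python
# Minimal DDP sketch (illustrative only, not the repo's script).
# Launch with, e.g.:  torchrun --nproc_per_node=4 ddp_sketch.py
import os
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler
from torchvision import transforms

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A "total batch size of 128" split across all GPUs: 32 per GPU on 4 GPUs.
    total_batch = 128
    per_gpu_batch = total_batch // dist.get_world_size()

    model = torchvision.models.resnet50(weights=None).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # FakeData stands in for ImageNet-LT so the sketch runs anywhere.
    dataset = torchvision.datasets.FakeData(size=1024, image_size=(3, 224, 224),
                                            num_classes=1000,
                                            transform=transforms.ToTensor())
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=per_gpu_batch, sampler=sampler,
                        num_workers=4, pin_memory=True)

    criterion = torch.nn.CrossEntropyLoss().cuda(local_rank)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)

    model.train()
    for images, targets in loader:
        images, targets = images.cuda(local_rank), targets.cuda(local_rank)
        loss = criterion(model(images), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

With 4 processes, a per-GPU batch of 32 gives a total batch of 128, whereas passing 128 per GPU yields an effective batch of 512; this distinction is one common reason single-GPU and multi-GPU runs with the same nominal batch size behave differently.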
Thanks, I will check your released log. I want to use a larger batch size to reduce the training time.