Effective batch size #41
Another question: when training on multiple GPUs, the console output `[handled samples/total samples]` in the iteration summary often shows wrong numbers.
The ImageNet-21k dataset is significantly larger than the ImageNet-1k dataset. To train faster, we use a larger batch size (similar to other works, e.g., ConvNeXt). Regarding your other question: this is not true. If the dataset size is not a multiple of the batch size, we pad the batch whose size falls short. Also, we use a variable batch size, wherein each iteration uses a different batch size. As a result, some epochs process the entire dataset faster while others are slightly slower. I recommend reading about the variable batch sampler in the docs.
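To make the idea concrete, here is a minimal sketch of a variable batch sampler. It is not the repo's actual implementation; the function name, the resolution set, and the scaling rule (keep `batch_size * res^2` roughly constant so smaller crops get larger batches) are illustrative assumptions. It also pads the final batch when the dataset size is not a multiple of the batch size, as described above.

```python
import random

def variable_batches(num_samples, base_batch=128, base_res=224,
                     resolutions=(160, 192, 224, 256), seed=0):
    """Yield (resolution, index-list) pairs with a per-batch batch size.

    The batch size is scaled so batch_size * res**2 stays roughly
    constant: smaller crops fit more samples per batch at similar
    memory cost. Names and constants are illustrative only.
    """
    rng = random.Random(seed)
    indices = list(range(num_samples))
    rng.shuffle(indices)
    i = 0
    while i < num_samples:
        res = rng.choice(resolutions)
        bsz = max(1, int(base_batch * (base_res / res) ** 2))
        batch = indices[i:i + bsz]
        # Pad the last batch with already-seen indices so every
        # batch has its full size (dataset size need not divide bsz).
        if len(batch) < bsz:
            batch += indices[:bsz - len(batch)]
        yield res, batch
        i += bsz
```

Because the batch size varies per iteration, the number of iterations per epoch also varies, which explains why some epochs finish faster than others.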
Thanks for your reply. Regarding the other question, I got output like the following: `2022-08-04 23:06:03 - DEBUG - Training epoch 0 with 66072 samples`. Do you have any recommendation?
Nothing is wrong with it. You are seeing one iteration per epoch because you are using a very high logging frequency: the entire epoch finishes before the log-frequency interval is reached. If you want logs printed more frequently, reduce the log-frequency value.
Another small question: in `data/datasets/imagenet.py`, under `if input_img is None:`, when an image is corrupt the code raises an exception and stops loading data. I also found that `self.img_type` is not defined, so I changed it to `torch.float`. Is that OK?
Thanks again. I'm a beginner in machine learning and have a basic question about top-1 accuracy: which checkpoint should I choose?
You should evaluate both the best checkpoint and the best EMA checkpoint on the validation set, and use whichever performs better for the test set.
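For context, an EMA checkpoint holds an exponential moving average of the model weights, updated after each training step. A minimal sketch of that update, with illustrative names and a decay value that is an assumption, not the repo's setting:

```python
def ema_update(ema_weights, model_weights, decay=0.9995):
    """One EMA step: ema <- decay * ema + (1 - decay) * model.

    Operates on a dict of scalars here for clarity; in practice the
    same rule is applied tensor-by-tensor to the model's parameters.
    """
    return {k: decay * ema_weights[k] + (1 - decay) * model_weights[k]
            for k in ema_weights}
```

After many steps the EMA weights track a smoothed version of the raw weights, which is why the EMA checkpoint can score differently from the best raw checkpoint and both are worth evaluating.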
Note that we ignore corrupt samples in the collate function.
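The point above can be sketched as a collate function that drops samples which failed to load (i.e., came back as `None`) before stacking the batch. This is an illustrative sketch, not the repo's actual collate function; it assumes each sample is an `(image_tensor, label)` pair or `None`.

```python
import torch

def safe_collate(batch):
    """Filter out corrupt samples (None) before building the batch.

    Assumes each sample is (image_tensor, label) or None. Returns
    None when every sample in the batch was corrupt, so the caller
    can skip that batch entirely.
    """
    batch = [s for s in batch if s is not None]
    if not batch:
        return None
    images = torch.stack([img for img, _ in batch])
    labels = torch.tensor([lbl for _, lbl in batch])
    return images, labels
```

With this in place, the dataset's `__getitem__` can return `None` for a corrupt image instead of raising, and loading continues with the remaining samples.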
That's all. Thank you very much. |
In the README doc:
For ImageNet-1k, the effective batch size is 1k?
For ImageNet-21k, the effective batch size is 4k?
Why?