
Effective batch size #41

Closed
sdeven95 opened this issue Aug 9, 2022 · 9 comments
sdeven95 commented Aug 9, 2022

In the README doc:

For ImageNet-1k, the effective batch size is 1k?
For ImageNet-21k, the effective batch size is 4k?

Why?


sdeven95 commented Aug 9, 2022

Another question:

When training on multiple GPUs, the console output `[processed samples/total samples]` in the iteration summary often shows wrong numbers.


sacmehta commented Aug 9, 2022

The ImageNet-21k dataset is significantly larger than the ImageNet-1k dataset. To train faster, we use a larger batch size (similar to other works, e.g., ConvNeXt).
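For context, the effective (global) batch size is just the per-GPU batch size multiplied by the number of GPUs and any gradient-accumulation steps. This is general distributed-training arithmetic, not numbers taken from the CVNets configs; the values below are hypothetical:

```python
def effective_batch_size(per_gpu_batch: int, num_gpus: int, grad_accum: int = 1) -> int:
    """Global batch size seen by the optimizer per weight update."""
    return per_gpu_batch * num_gpus * grad_accum

# Hypothetical setups: 128 images/GPU on 8 GPUs gives ~1k,
# and the same per-GPU batch on 32 GPUs gives ~4k.
print(effective_batch_size(128, 8))   # 1024
print(effective_batch_size(128, 32))  # 4096
```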

Regarding your other question: this is not true. If the dataset size is not a multiple of the batch size, we pad the last batch. Also, we use a variable batch size, wherein each iteration uses a different batch size. As a result, some epochs process the entire data faster while others process it slightly slower. I recommend reading about the variable batch sampler in the docs.
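The idea behind a variable batch sampler can be sketched roughly as follows. This is an illustrative toy, not the CVNets implementation: the resolution set, the constant-pixel-budget heuristic, and the padding strategy here are all assumptions.

```python
import random

def variable_batches(num_samples, base_batch, base_res=224,
                     resolutions=(160, 192, 224, 256, 288), seed=0):
    """Yield (resolution, index_batch) pairs with variable batch sizes.

    Each iteration samples an image resolution and scales the batch size
    so that the per-batch pixel count stays roughly constant; the final
    batch is padded with repeated samples when the remaining data does
    not fill a whole batch.
    """
    rng = random.Random(seed)
    indices = list(range(num_samples))
    rng.shuffle(indices)
    i = 0
    while i < num_samples:
        res = rng.choice(resolutions)
        bsz = max(1, int(base_batch * (base_res / res) ** 2))
        batch = indices[i:i + bsz]
        while len(batch) < bsz:  # pad a short final batch
            batch.append(indices[rng.randrange(num_samples)])
        yield res, batch
        i += bsz

# Batch sizes differ per iteration, which is why per-epoch iteration
# counts (and the samples/total-samples readout) vary between epochs.
batches = list(variable_batches(1000, base_batch=32))
```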


sdeven95 commented Aug 9, 2022

Thanks for your reply. Regarding the other question, I got output like the following:

2022-08-04 23:06:03 - DEBUG - Training epoch 0 with 66072 samples
2022-08-04 23:06:28 - LOGS - Epoch: 0 [ 1/10000000], loss: 5.1873, LR: [1e-06, 1e-06], Avg. batch load time: 24.739, Elapsed time: 25.24
2022-08-04 23:15:51 - LOGS - *** Training summary for epoch 0
loss=5.0682
2022-08-04 23:16:07 - LOGS - Epoch: 0 [ 100/ 22085], loss: 3.8622, top1: 37.0000, top5: 69.5000, LR: [0.000117, 0.000117], Avg. batch load time: 0.000, Elapsed time: 14.74
2022-08-04 23:16:51 - LOGS - *** Validation summary for epoch 0
loss=4.7565 || top1=4.5339 || top5=14.7240
2022-08-04 23:17:05 - LOGS - Epoch: 0 [ 100/ 22085], loss: 5.3559, top1: 0.0000, top5: 0.0000, LR: [0.000117, 0.000117], Avg. batch load time: 0.000, Elapsed time: 12.28
2022-08-04 23:17:35 - LOGS - *** Validation (Ema) summary for epoch 0
loss=5.3627 || top1=0.5837 || top5=2.6041
2022-08-04 23:17:35 - LOGS - Best checkpoint with score 4.53 saved at mobilevitv2_results/vireo_food/width_0_5_0/run_1/checkpoint_best.pt
2022-08-04 23:17:36 - LOGS - Best EMA checkpoint with score 0.58 saved at mobilevitv2_results/vireo_food/width_0_5_0/run_1/checkpoint_ema_best.pt
2022-08-04 23:17:36 - INFO - Checkpoints saved at: mobilevitv2_results/vireo_food/width_0_5_0/run_1

Do you have any recommendation?


sacmehta commented Aug 9, 2022

Nothing is wrong with it. You are seeing one iteration log per epoch because you are using a very high logging frequency: the entire epoch finishes before the log-frequency interval is reached.

If you want to print logs more frequently, reduce the value of the log frequency.
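The effect can be illustrated with a toy calculation (the `num_iteration_logs` helper and the numbers below are hypothetical; the exact CVNets logging logic may differ):

```python
def num_iteration_logs(iters_per_epoch: int, log_freq: int) -> int:
    """Roughly: one line for iteration 1, plus one every log_freq iterations."""
    return 1 + iters_per_epoch // log_freq

# Hypothetical epoch of ~258 iterations (e.g. 66072 samples / batch 256):
print(num_iteration_logs(258, 100))    # 3 -> a few lines per epoch
print(num_iteration_logs(258, 10000))  # 1 -> only the first-iteration line
```

With `log_freq` far above the number of iterations per epoch, only the first-iteration line is ever printed, which matches the `Epoch: 0 [ 1/...]` output above.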


sdeven95 commented Aug 9, 2022

Another small question, in data/datasets/imagenet.py:

if input_img is None:
    logger.log("Img index {} is possibly corrupt.".format(img_index))
    input_tensor = torch.zeros(
        size=(3, crop_size_h, crop_size_w), dtype=torch.float
    )
    target = -1
    data = {"image": input_tensor}

When the image is corrupt, the code raises an exception and stops loading data. I found that self.img_type is not defined, so I changed it to torch.float. Is that OK?


sdeven95 commented Aug 9, 2022

Thanks again. I'm a beginner in machine learning, and I have a basic question about top-1 accuracy. Between:

the last epoch's top-1,
the best validation epoch's top-1,
the EMA top-1,

which one should I choose?


sacmehta commented Aug 9, 2022

You should evaluate on the validation set using both the best checkpoint and the best EMA checkpoint, and use the one with the better performance on the test set.


sacmehta commented Aug 9, 2022

Note that we ignore corrupt samples in the collate function.
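A rough sketch of the idea (not the actual CVNets collate code): the dataset marks a corrupt sample with target -1, as in the imagenet.py snippet above, and the collate function filters such entries before batching, so the placeholder never reaches the model.

```python
def collate_skip_corrupt(batch):
    """Drop samples whose label is the corrupt-image placeholder (-1)."""
    kept = [item for item in batch if item["label"] != -1]
    return {
        # in practice these lists would be stacked into tensors
        "image": [item["image"] for item in kept],
        "label": [item["label"] for item in kept],
    }

good = {"image": "img_a", "label": 5}
bad = {"image": "img_b", "label": -1}  # corrupt placeholder from the dataset
out = collate_skip_corrupt([good, bad, good])
print(len(out["image"]))  # 2
```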


sdeven95 commented Aug 9, 2022

That's all. Thank you very much.

@sacmehta sacmehta closed this as completed Aug 9, 2022