
About batch_size in training configuration? #46

Closed · xiao2mo opened this issue Dec 9, 2021 · 3 comments

@xiao2mo commented Dec 9, 2021

Hi Luo, thanks for your remarkable work!
I am wondering why you use a batch size of 128 for 4-GPU DDP training. That is 32 per GPU, which uses less than half of each GPU's memory. Is there a special purpose behind that?

@ArrowLuo (Owner) commented Dec 9, 2021

Hi @xiao2mo, there is no special purpose. We set the batch size to 32 per GPU because some of our machines have 16G cards, and we needed to test other hyper-parameters such as the frame number. It is an appropriate batch size for the hyper-parameter study. If your card has more than 16G, a suggestion is to test a larger frame number first, then a larger batch size.
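
For reference, under PyTorch DDP the global batch is the per-GPU batch times the number of processes, and each process consumes its own shard of every global batch. A minimal sketch of that arithmetic (the names here are illustrative, not taken from this repo's code):

# Minimal sketch: relation between global and per-GPU batch under DDP.
# 'global_batch' and 'world_size' are illustrative names, not from the repo.
def per_gpu_batch(global_batch: int, world_size: int) -> int:
    # Each DDP process loads its own shard of every global batch.
    assert global_batch % world_size == 0, "global batch must split evenly"
    return global_batch // world_size

print(per_gpu_batch(128, 4))  # 32, the setting discussed above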

@xiao2mo (Author) commented Dec 13, 2021

I see. The main problem in my experiments is that different batch sizes in the DDP configuration can lead to different results. Thank you.
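
One standard way to compensate when the effective batch size changes is the linear learning-rate scaling rule (Goyal et al., "Accurate, Large Minibatch SGD"); whether it helps here is untested, and the helper below is only an illustrative sketch, not something this repo is confirmed to use:

# Illustrative helper (not from this repo): linear LR scaling when the
# effective batch size differs from the one a config was tuned for.
def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    # Linear scaling rule: learning rate moves proportionally to batch size.
    return base_lr * new_batch / base_batch

print(scaled_lr(1e-4, 128, 32))  # 2.5e-05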

@dengfenglai321 commented Jan 10, 2022

Hi,
I have two GPUs, 16G each, but the batch size I can set is only 16. Why is that?

The training configuration is as follows:

01/10/2022 16:18:00 - INFO -   ***** Running test *****
01/10/2022 16:18:00 - INFO -     Num examples = 497
01/10/2022 16:18:00 - INFO -     Batch size = 16
01/10/2022 16:18:00 - INFO -     Num steps = 32
01/10/2022 16:18:00 - INFO -   ***** Running val *****
01/10/2022 16:18:00 - INFO -     Num examples = 497
222
333
01/10/2022 16:18:12 - INFO -   ***** Running training *****
01/10/2022 16:18:12 - INFO -     Num examples = 130260
01/10/2022 16:18:12 - INFO -     Batch size = 16
01/10/2022 16:18:12 - INFO -     Num steps = 40705
01/10/2022 16:21:34 - INFO -   Epoch: 1/5, Step: 50/8141, Lr: 0.000000001, Loss: 0.455173, Time/step: 4.041784

The GPU usage is shown in this screenshot:
[screenshot: 企业微信截图_1641803088178]

The run command is as follows:

python -m torch.distributed.launch --nproc_per_node=2 \
main_task_retrieval.py --do_train --num_thread_reader=0 \
--epochs=5 --batch_size=16 --n_display=50 \
--output_dir ckpts/ckpt_msrvtt_retrieval_looseType \
--lr 1e-4 --max_words 32 --max_frames 12 --batch_size_val 16 \
--datatype msrvtt --expand_msrvtt_sentences  \
--feature_framerate 1 --coef_lr 1e-3 \
--freeze_layer_num 0  --slice_framepos 2 \
--loose_type --linear_patch 2d --sim_header meanP \
--pretrained_clip_name ViT-B/16
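
For what it's worth, the logged numbers above are consistent with --batch_size=16 being the global batch across both processes (8 per GPU), which would explain why each 16G card looks under-used, and a larger --batch_size should fit. A quick check, assuming steps per epoch are computed as floor(examples / global batch) with incomplete batches dropped (an assumption, not verified against the repo's code):

# Sanity check on the logged numbers, under the assumptions stated above.
num_examples = 130260   # "Num examples = 130260"
epochs = 5              # --epochs=5
global_batch = 16       # --batch_size=16

steps_per_epoch = num_examples // global_batch
print(steps_per_epoch)           # 8141, matches "Step: 50/8141"
print(steps_per_epoch * epochs)  # 40705, matches "Num steps = 40705"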
