Hi! Thanks for your work! My question: when I train SpCL with the default config on a single 2080 Ti GPU instead of two 2080 Ti GPUs, mAP drops by about 6%. I'd like to know the reason for this. Thank you!
Re-ID experiments are sensitive to the number of GPUs, and I guess the main reason is the batch size seen by the BN layers. I conducted all the UDA re-ID experiments on 4 GPUs with a batch_size of 64, which means a mini-batch of 16 images is fed into each GPU's BN layers. I did not try training with 2 GPUs or 1 GPU. If you want to achieve better performance on one GPU, I think you need to tune the batch_size, the iters, and the learning rate.
For example, we use the following setup on 4 GPUs:
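To make the BN reasoning above concrete, here is a small sketch (not the repository's actual config) of how the per-GPU BN batch and a linearly scaled learning rate change with the number of GPUs. It assumes DataParallel-style splitting, where each GPU's BN layers see `batch_size / num_gpus` images; the function names and the base LR value are illustrative, not taken from the repo.

```python
# Hypothetical sketch: per-GPU BN batch and linear LR scaling.
# Assumes DataParallel-style splitting (each GPU's BatchNorm layers
# see batch_size // num_gpus images per step).

def bn_batch_per_gpu(batch_size: int, num_gpus: int) -> int:
    """Number of images seen by each GPU's BatchNorm layers per step."""
    return batch_size // num_gpus

def scaled_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Linear LR scaling rule: learning rate proportional to batch size."""
    return base_lr * new_batch / base_batch

# Reference setup from this thread: 4 GPUs, batch_size 64 -> 16 images per BN.
print(bn_batch_per_gpu(64, 4))   # 16

# On 1 GPU with the same batch_size of 64, BN sees all 64 images,
# so its statistics differ from the 4-GPU run.
print(bn_batch_per_gpu(64, 1))   # 64

# If you instead shrink batch_size to 16 on 1 GPU to match the BN
# behavior, linear scaling suggests reducing the LR proportionally
# (0.00035 here is an illustrative base LR, not the repo's value).
print(scaled_lr(0.00035, 64, 16))
```

This is one way to reason about the tuning the answer suggests: either match the per-GPU BN batch of the multi-GPU run, or keep the global batch and retune the learning rate and iters.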
Thanks for your reply! Usually a single GPU gives better results than multiple GPUs because of BN, so I was confused by my result. I'll try your suggestion and train again. Thank you very much!
Yes. In fully-supervised re-ID experiments, a single GPU does seem better. However, on unsupervised re-ID tasks, I found that 4 GPUs with a batch_size of 64 generally perform better.