
gap between single 2080ti GPU and double 2080ti GPU #11

Closed
JackyZDHu opened this issue Aug 10, 2020 · 3 comments
@JackyZDHu

Hi! Thanks for your work! My question: when I train SpCL with the default config on a single 2080 Ti GPU instead of two 2080 Ti GPUs, mAP drops by about 6%. I'd like to know the reason. Thank you!

@yxgeee
Owner

yxgeee commented Aug 10, 2020

Re-ID experiments are sensitive to the number of GPUs, and I guess the main reason is the per-GPU batch size seen by the BN layers. I ran all the UDA re-ID experiments on 4 GPUs with a batch_size of 64, which means a mini-batch of 16 images is fed into each BN layer. I did not try training with 2 GPUs or 1 GPU. To achieve better performance on one GPU, I think you need to tune the batch_size, the iters, and the learning rate.
For example, we use the following setup on 4 GPUs:

batch size: 64
iters: 400
learning rate: 0.00035

When adapting to one GPU, it's better to use:

batch size: 64/4 = 16
iters: 400*4 = 1600
learning rate: 0.00035/4 = 0.0000875
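The scaling above can be sketched as a small helper. This is a hypothetical function, not part of the SpCL codebase: it divides the batch size and learning rate, and multiplies the iterations, by the GPU-count ratio, so the per-GPU BN batch (16 images) and the total number of images processed stay roughly the same as in the 4-GPU reference setup.

```python
# Hypothetical helper (not part of SpCL): scale the 4-GPU reference
# hyperparameters to a smaller number of GPUs.
def scale_config(ref_batch=64, ref_iters=400, ref_lr=0.00035,
                 ref_gpus=4, gpus=1):
    factor = ref_gpus // gpus
    return {
        "batch_size": ref_batch // factor,  # keeps ~16 images per BN layer
        "iters": ref_iters * factor,        # same total images per epoch
        "lr": ref_lr / factor,              # linear LR scaling with batch size
    }

print(scale_config(gpus=1))
# {'batch_size': 16, 'iters': 1600, 'lr': 8.75e-05}
```

The linear relationship between batch size and learning rate here is the common "linear scaling rule"; whether it transfers exactly to this task is something to verify empirically.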

@JackyZDHu
Author

Thanks for your reply! Usually a single GPU gets better results than multiple GPUs because of BN, so I was confused by my result. I'll try your suggestion and train again. Thank you very much!

@yxgeee
Owner

yxgeee commented Aug 10, 2020

Yes. In fully-supervised re-ID experiments, a single GPU seems better. However, on unsupervised re-ID tasks, I found that 4 GPUs with a batch_size of 64 generally perform better.

@yxgeee yxgeee closed this as completed Aug 23, 2020