GPU and batch size? #30

Closed
whatsups opened this issue Jun 4, 2021 · 4 comments

@whatsups

whatsups commented Jun 4, 2021

Thanks for your great work!
I noticed that your paper says the model is trained on 4 TITAN-Xp GPUs with batch size 8 for 8 epochs.
However, when I train SEAM on 4 2080Ti GPUs with batch size 8, each card only uses about 4 GB of memory.
So I wonder: are 4×12 GB GPUs really necessary?
Thanks for your reply.

@YudeWang
Owner

YudeWang commented Jun 4, 2021

Hi @pigcv89
Maybe the number of GPUs does matter, because nn.BatchNorm2d is used instead of SynchronizedBatchNorm, so batch-norm statistics are computed independently on each GPU rather than over the whole batch.
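
For context, a minimal sketch (not taken from the SEAM repository; layer sizes and resolution are illustrative) of why this matters: under nn.DataParallel a plain nn.BatchNorm2d normalizes each replica's slice of the batch separately, while nn.SyncBatchNorm would aggregate statistics across GPUs.

```python
# Minimal sketch, not from the SEAM repo: a total batch of 8 is split
# across 4 GPUs by nn.DataParallel, so each nn.BatchNorm2d replica
# computes its statistics over only 2 samples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),  # toy layer, sizes are illustrative
    nn.BatchNorm2d(64),                          # per-replica statistics under DataParallel
    nn.ReLU(inplace=True),
)

dp_model = nn.DataParallel(model, device_ids=[0, 1, 2, 3]).cuda()
x = torch.randn(8, 3, 448, 448).cuda()           # batch size 8 in total -> 2 per GPU
out = dp_model(x)

# To share statistics across GPUs one would instead convert the model to
# SyncBatchNorm and train with DistributedDataParallel:
#   sync_model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
```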

@whatsups
Author

whatsups commented Jun 4, 2021

I followed the default experiment settings (bs=8 with 4 GPU cards), but each card only uses about 4 GB of memory. Am I doing something wrong, or does 'bs=8' mean 'bs=8 for each GPU'?

@YudeWang
Owner

YudeWang commented Jun 4, 2021

@pigcv89
bs=8 in total, not bs=8 for each GPU.
You can reduce the number of GPUs and give it a try.
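
For illustration, a hedged sketch of one way to try this (the environment-variable approach is generic PyTorch/CUDA usage, not a documented option of this repo): expose fewer devices before any CUDA call, so the same total batch of 8 is split across fewer cards and each BatchNorm2d sees more samples.

```python
# Hypothetical sketch: expose only 2 of the 4 GPUs so that the same total
# batch of 8 gives each replica (and each BatchNorm2d) 4 samples instead of 2.
# CUDA_VISIBLE_DEVICES must be set before the first CUDA call.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import torch
print(torch.cuda.device_count())  # -> 2 with the environment variable above
```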

@whatsups
Author

whatsups commented Jun 4, 2021

Thanks for your reply. I'll close this issue.

@whatsups whatsups closed this as completed Jun 4, 2021