
How to set the batch size on a single card? #39

Closed
anfengmin100 opened this issue Sep 17, 2021 · 7 comments

Comments

@anfengmin100

My test result is 6% lower than yours. How is your batch size set across 8 cards? How do you think the batch size should be set on a single card? Thanks for your reply!

@yztongzhan
Collaborator

As described in the README, we use 8 GPUs for training and the total batch size is 64 (8×8). We didn't try training TDN on a single GPU, but you can try this yourself :)
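If you do train on one card with a smaller batch, a common heuristic (not something the TDN README prescribes) is the linear scaling rule: scale the learning rate in proportion to the batch size. A minimal sketch, where `base_lr = 0.02` is purely illustrative and not the repo's actual value:

```python
# Linear scaling rule sketch (illustrative values, not TDN's actual config):
# when the batch size shrinks by a factor k, shrink the learning rate by k too.
def scale_lr(base_lr: float, base_batch: int, new_batch: int) -> float:
    """Return a learning rate scaled proportionally to the batch size."""
    return base_lr * new_batch / base_batch

# Reported setup: total batch 64 across 8 GPUs (8 per GPU).
# On a single card with batch 8, the same rule suggests lr / 8.
print(scale_lr(0.02, 64, 8))  # 0.0025
```

Whether this heuristic transfers cleanly to TDN is an empirical question; it is a starting point, not a guarantee of matching the 8-GPU accuracy.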

@anfengmin100
Author

anfengmin100 commented Sep 17, 2021 via email

@cbiras

cbiras commented Sep 20, 2021

Hello @1145335145. Can you tell me on which dataset your training results were lower?

@anfengmin100
Author

anfengmin100 commented Sep 22, 2021 via email

@ZChengLong578

@1145335145
Hello, I ran into errors with single-node multi-GPU training on the STHv1 dataset. Could you please tell me how you train on a single card? If possible, please share your run command and the relevant changes. Thank you very much!
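For reference, one common way to approximate the 64-sample effective batch on a single card is gradient accumulation: average the gradients of several micro-batches before each update. A minimal plain-Python sketch (not the repo's actual training code) showing that the accumulated gradient equals the full-batch gradient for a toy linear model:

```python
# Gradient accumulation sketch (toy example, not TDN's training loop):
# averaging gradients over k micro-batches reproduces the gradient of
# one batch k times larger, so the update is equivalent.
def grad(batch, w):
    """Mean-squared-error gradient for a 1-D linear model y = w * x."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
w = 0.0

# Full-batch gradient vs. two accumulated micro-batches of size 2.
full = grad(data, w)
accum = sum(grad(data[i:i + 2], w) for i in (0, 2)) / 2
print(full, accum)  # -30.0 -30.0 -- the two gradients match
```

In a real PyTorch loop this corresponds to calling `loss.backward()` on each micro-batch and `optimizer.step()` only every k-th micro-batch, with the loss divided by k. Note this matches the effective batch statistics for the optimizer but not for BatchNorm, which still sees only the micro-batch.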

@ZChengLong578

@1145335145
Problem solved!

@anfengmin100
Author

anfengmin100 commented Mar 31, 2022 via email
