
[Feature] Need an intelligent way of picking batch size. #353

Closed
adamltyson opened this issue Feb 27, 2020 · 2 comments · Fixed by #432
Labels
enhancement New feature or request

Comments

@adamltyson
Member

The default batch size for classification (32) was chosen so it would run on pretty much any good GPU, but as @larsrollik has found, this can be increased a lot, reducing inference time (128 works with the default resnet on an RTX 2080Ti).

Maybe use tf to get available GPU memory and pick batch size based on that? Allow users to specify max GPU memory to use?
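A minimal sketch of the idea, written with PyTorch rather than TF since the codebase has since moved to torch (see below); the per-sample memory cost and the user-facing memory fraction are hypothetical parameters that would need to be measured/exposed for the real network:

```python
import torch


def pick_batch_size(
    per_sample_bytes: int,             # hypothetical: measured VRAM cost of one sample
    max_memory_fraction: float = 0.9,  # hypothetical user-facing cap on VRAM use
    fallback: int = 32,                # current default batch size
) -> int:
    """Choose a batch size from the free VRAM on the current CUDA device."""
    if not torch.cuda.is_available():
        return fallback
    # free/total device memory in bytes
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    budget = int(free_bytes * max_memory_fraction)
    return max(1, budget // per_sample_bytes)
```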

@adamltyson adamltyson added the enhancement New feature or request label Feb 27, 2020
@adamltyson adamltyson self-assigned this Feb 27, 2020
@adamltyson adamltyson removed their assignment Feb 14, 2022
@willGraham01 willGraham01 transferred this issue from brainglobe/cellfinder Dec 13, 2023
@adamltyson adamltyson transferred this issue from brainglobe/brainglobe-workflows Jan 4, 2024
@adamltyson
Member Author

@IgorTatarnikov in your move to torch, have you found anything that could help here? Do you have an idea about GPU memory usage as a function of batch size?

@IgorTatarnikov
Member

I haven't played with batch size during inference. In theory, there should be a way to pre-calculate a maximal batch size that uses as much VRAM as possible during inference. I'd want to do some profiling first, though, to see how much of an effect it has before going through the effort of implementing it!
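One rough way to sketch the "pre-calculate a maximal batch size" idea is a dry run: double the batch size until CUDA reports out-of-memory, then back off. The model and input shape here are placeholders, not cellfinder's actual network, and the real implementation would need profiling as noted above:

```python
import torch


def find_max_batch_size(model, input_shape, device="cuda", start=32, limit=4096):
    """Dry-run the model with doubling batch sizes until CUDA runs out of memory."""
    model = model.to(device).eval()
    best = 0
    batch = start
    while batch <= limit:
        try:
            with torch.no_grad():
                dummy = torch.zeros((batch, *input_shape), device=device)
                model(dummy)
            best = batch
            batch *= 2
        except RuntimeError as e:
            # CUDA OOM surfaces as a RuntimeError mentioning "out of memory"
            if "out of memory" in str(e).lower():
                torch.cuda.empty_cache()
                break
            raise
    return best
```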
