
GPU out of memory error #24

Closed
humayunr7 opened this issue Aug 19, 2019 · 3 comments
Comments

@humayunr7

I am getting an out-of-memory error. How much GPU memory is required to train with batch size 32?

@kentaroy47 (Owner)

It depends on your data, but bs=32 may take up about 16 GB of GPU memory. Try halving the batch size.
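As a rough illustration of why halving the batch size helps: activation memory grows roughly linearly with batch size. A minimal sketch, where `gb_per_image` is a made-up constant for illustration (the 16 GB figure above is the maintainer's estimate, not a measurement from this repo):

```python
def approx_activation_gb(batch_size, gb_per_image=0.5):
    """Rough activation-memory estimate.

    gb_per_image is a hypothetical per-image cost used only to
    show the linear scaling; the real cost depends on the model,
    image size, and framework.
    """
    return batch_size * gb_per_image

# Activation memory scales linearly with batch size,
# so bs=16 needs about half of what bs=32 does.
half_bs = approx_activation_gb(16)
full_bs = approx_activation_gb(32)
```

So if bs=32 overflows a 16 GB card, bs=16 or bs=8 is a reasonable first thing to try.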

@alezanga

I have a similar problem, but how can I change the batch size?
Thanks.

@kentaroy47 (Owner)

If you run out of memory, try reducing the number of ROIs processed simultaneously by passing a lower -n to train_frcnn.py. Alternatively, try reducing the image size from the default of 600 (this setting is in config.py).
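Shrinking the image size helps more than it might look, because convolutional feature-map memory scales roughly with the number of pixels, i.e. quadratically in the resized side length. A hedged back-of-the-envelope sketch (the quadratic model is an approximation, not a measurement from this repo):

```python
def relative_activation_cost(im_size, base_size=600):
    """Approximate activation-memory cost relative to the default
    im_size of 600, assuming cost scales with pixel count
    (side length squared)."""
    return (im_size / base_size) ** 2

# Dropping im_size from 600 to 400 cuts this term to about 44%
# of the original; 300 cuts it to 25%.
cost_400 = relative_activation_cost(400)
cost_300 = relative_activation_cost(300)
```

So combining a smaller im_size in config.py with a lower -n usually brings memory down quickly.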
