How much memory is needed for inference? #51
I have a similar problem. I just want to test the whole thing on my GTX 970 (4 GB of memory), but I run out of GPU memory.

I tried halving the BATCH_SIZE_PER_IMAGE and IMS_PER_BATCH settings in the config, but I still get memory problems. I don't want to make them too small, since I suspect that would hurt the results (I'm not an expert, though). Did anyone find a solution?
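For context, a minimal sketch of where those keys usually live in a detectron2-style config; the key paths follow detectron2 conventions, and the halved values shown are illustrative assumptions, not this repo's actual defaults:

```yaml
# Hypothetical excerpt of a detectron2-style config.
# Note: BATCH_SIZE_PER_IMAGE exists under both RPN and ROI_HEADS in detectron2.
MODEL:
  RPN:
    BATCH_SIZE_PER_IMAGE: 128    # e.g. halved from detectron2's default of 256
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 256    # e.g. halved from detectron2's default of 512
SOLVER:
  IMS_PER_BATCH: 8               # e.g. halved from a common default of 16
```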
OK, so I continued trying to get it to work. I found success when setting SOLVER.IMS_PER_BATCH to 1 in configs/fsod/Base-FSOD-C4.yaml. I did not run a complete training process, since it would have taken 2 days and 11 hours, but training started without issues.
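That change would look like this in the YAML (a sketch; all other keys omitted):

```yaml
# In configs/fsod/Base-FSOD-C4.yaml
SOLVER:
  IMS_PER_BATCH: 1   # total images per step across all GPUs
```

In detectron2, SOLVER.IMS_PER_BATCH is the total batch size summed over all GPUs, so on a single card this means one image per iteration, which saves memory at the cost of slower and noisier training.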
It depends on your support set. Maybe you can try making RPN.POST_NMS_TOPK_TEST smaller.
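Assuming the repo follows detectron2's standard key layout for the RPN, that override would look like:

```yaml
# Hypothetical override; detectron2's default POST_NMS_TOPK_TEST is 1000.
MODEL:
  RPN:
    POST_NMS_TOPK_TEST: 500   # keep fewer proposals after NMS at test time
```

Keeping fewer proposals per image reduces inference memory; since each support image adds to that cost in a few-shot setup, the workable value likely depends on how large your support set is.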
My graphics board is a GTX 1660 Ti with 6 GB of memory. Running the code gives this error:

```
RuntimeError: CUDA out of memory. Tried to allocate 1.00 GiB (GPU 0; 5.81 GiB total capacity; 2.90 GiB already allocated; 420.50 MiB free; 3.84 GiB reserved in total by PyTorch)
```