6 GB GPU memory, batch_size=1 with D1 network, still got CUDA out of memory #32
Comments
Same here. A 2080 Ti (11 GB) with batch_size = 1 still runs out of memory. Here's the traceback:
You can try NVIDIA apex with mixed-precision (amp) training to reduce memory usage.
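Not part of the original comment, but here is a minimal sketch of the apex amp wiring being suggested. The model, optimizer, and loss below are placeholders standing in for this repo's EfficientDet training loop:

```python
import torch
from torch import nn, optim
from apex import amp  # https://github.com/NVIDIA/apex

# Placeholder model/optimizer -- substitute the EfficientDet model and
# loss from this repo; this only illustrates the amp wiring.
model = nn.Conv2d(3, 8, 3).cuda()
optimizer = optim.SGD(model.parameters(), lr=1e-3)

# opt_level "O1" casts most ops to fp16, roughly halving activation memory.
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

images = torch.randn(1, 3, 512, 512, device="cuda")
loss = model(images).mean()  # stand-in for the detection loss

# Scale the loss so fp16 gradients do not underflow.
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```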
Same problem. Two 2080 Ti (11 GB × 2) with batch_size = 6. Here's the traceback:
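Not from the thread, but when debugging these OOM reports it can help to log PyTorch's allocator stats right before the failing step, to see whether the model itself or a single large batch is the culprit. A small sketch using standard `torch.cuda` APIs:

```python
import torch

device = torch.device("cuda:0")
# How much memory tensors currently occupy vs. what the caching
# allocator has reserved from the driver, plus the peak so far.
print(f"allocated: {torch.cuda.memory_allocated(device) / 1e9:.2f} GB")
print(f"reserved:  {torch.cuda.memory_reserved(device) / 1e9:.2f} GB")
print(f"peak:      {torch.cuda.max_memory_allocated(device) / 1e9:.2f} GB")
```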
@AlexLuya @RayOnFire @shengyuqing |
Thanks! I have updated to the latest code, but I still get the same problem. Very strange.
@toandaominh1997
But I want to use D0-D7 on just one 2080 Ti, with batch_size >= 4 for any backbone and an input shape of at least (448, 448) or (640, 640).
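Not suggested in the thread itself, but a common way to get an effective batch size >= 4 on a single 11 GB card is gradient accumulation: run several small forward/backward passes and step the optimizer once. A minimal sketch with placeholder model and data standing in for the real training loop:

```python
import torch
from torch import nn, optim

# Placeholder model/data -- substitute the real EfficientDet training loop.
model = nn.Conv2d(3, 8, 3).cuda()
optimizer = optim.SGD(model.parameters(), lr=1e-3)
accumulation_steps = 4  # effective batch = per-step batch * 4

optimizer.zero_grad()
for i in range(8):  # stand-in for iterating the dataloader
    images = torch.randn(1, 3, 512, 512, device="cuda")
    loss = model(images).mean()  # stand-in for the detection loss
    # Divide so the accumulated gradients match one large-batch step.
    (loss / accumulation_steps).backward()
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

The trade-off is wall-clock time: peak memory drops to that of a single small batch, but batch-norm statistics are still computed per small batch, which can matter for detection backbones.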
I don't understand. What do you mean by the explicit way?
Have you solved the problem?
Have you solved the out-of-memory error?
I got the same problem on my Titan RTX.
Your default batch size is 32. What GPU did you use for training?