About STPLS3D training error: CUDA out of memory #9
Hi, I met the same problem before. Since I use a cloud GPU, I can switch GPUs easily; with a single A40 (48 GB) the problem was solved.
OK, thanks for your reply, I will try a cloud GPU.
You can solve this problem by reducing batch_size to 16 or lower. I set it to 1 and it worked.
I encountered the same problem. If I don't use a cloud GPU, how can I solve it?
Due to my limited hardware, I went with the solution proposed by TobyZhouWei and used a cloud GPU.
You can solve this problem by decreasing batch_size. In my case, I set batch_size=1.
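Since the advice above boils down to shrinking batch_size until the job fits in GPU memory, here is a minimal sketch of that reasoning as a helper function. All names and numbers are illustrative assumptions (the per-sample cost and fixed overhead are made up, not measured from this repository's training on STPLS3D); in practice you would measure real usage, e.g. with nvidia-smi or torch.cuda.max_memory_allocated.

```python
def largest_fitting_batch_size(gpu_mem_gb, mem_per_sample_gb,
                               overhead_gb=2.0, max_batch=16):
    """Return the largest batch size whose estimated footprint fits in
    GPU memory, or 0 if even batch_size=1 does not fit.

    Estimated footprint = fixed overhead (model weights, CUDA context)
    plus a per-sample cost (activations, point-cloud buffers).
    """
    for bs in range(max_batch, 0, -1):
        if overhead_gb + bs * mem_per_sample_gb <= gpu_mem_gb:
            return bs
    return 0

# Hypothetical numbers: a 24 GB card, ~5 GB per sample, ~2 GB overhead.
print(largest_fitting_batch_size(24, 5.0))  # -> 4
# On a 48 GB A40 the same workload fits a larger batch:
print(largest_fitting_batch_size(48, 5.0))  # -> 9
```

If this returns 0 even for batch_size=1 (as in the validation crash reported below), reducing batch_size alone cannot help; the per-sample cost itself must shrink (e.g. smaller crops or fewer points per scene) or a larger GPU is needed.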
Hello!
Thanks for your excellent work on 3D instance segmentation. When I trained your network on STPLS3D, I used an NVIDIA A5000 25 GB GPU. After training the first epoch, validation fails with a CUDA out of memory error, even with batch_size set to 1.
Can you give me some advice? Thank you!