
About stpls3d training error CUDA out of memory #9

Closed
kellieda opened this issue May 15, 2023 · 6 comments

Comments

@kellieda

Hello!
Thanks for your excellent work on 3D instance segmentation. When I trained your network on STPLS3D, I used an NVIDIA A5000 25GB GPU. After the first training epoch, validation failed with a CUDA out-of-memory error, even with batch_size set to 1.
Can you give me some advice?
Looking forward to your help.
Thank you!

@kellieda changed the title (originally posted in Chinese) to About stpls3d training error CUDA out of memory on May 15, 2023
@TobyZhouWei

Hi, I ran into the same problem before. I use a cloud GPU, so I can switch GPUs easily. I switched to a single A40 (48 GB) and the problem was solved.

@kellieda
Author

Hi, I ran into the same problem before. I use a cloud GPU, so I can switch GPUs easily. I switched to a single A40 (48 GB) and the problem was solved.

OK, thanks for your reply; I will try a cloud GPU.

@xiaotiancai899

You can solve this problem by reducing batch_size to 16 or lower. I used 1, and it worked.
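
For reference, a minimal sketch of what that change looks like in plain PyTorch (the `val_dataset` name is a placeholder; the repo's own config may expose batch_size differently):

```python
import torch
from torch.utils.data import DataLoader

# `val_dataset` is a placeholder for the STPLS3D validation dataset object.
val_loader = DataLoader(
    val_dataset,
    batch_size=1,      # smallest batch size to limit peak GPU memory
    shuffle=False,
    num_workers=4,
    pin_memory=True,
)
```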

@LinLin1031

I encountered the same problem, but if I don't use a cloud GPU, how should I solve it?

@kellieda
Author

I encountered the same problem, but if I don't use a cloud GPU, how should I solve it?

Due to my limited hardware, I only tried the solution proposed by TobyZhouWei and used a cloud GPU to solve it.

@xiaotiancai899

I encountered the same problem, but if I don't use a cloud GPU, how should I solve it?

You can solve this problem by decreasing batch_size. In my case, I set batch_size=1.
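
Since the OOM happens during validation, it also helps to make sure inference runs without gradients. A minimal sketch, assuming a standard PyTorch validation loop (the model and dataloader names are placeholders, not taken from this repo):

```python
import torch

def validate(model, val_loader, device="cuda"):
    # eval mode + no_grad avoids storing activations for backprop,
    # which is usually the biggest memory saving at validation time.
    model.eval()
    with torch.no_grad():
        for batch in val_loader:
            batch = batch.to(device)      # assumes the batch object supports .to()
            _ = model(batch)              # run inference; collect metrics as needed
            torch.cuda.empty_cache()      # release cached blocks between scenes
```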
