Training got stuck #20

Hi ywyue!
Thank you for your wonderful work.
I tried to train on the Structured3D dataset; however, the training got stuck midway without any error being reported.
I tried setting --num_workers=0, but the problem wasn't resolved.
I've tried terminating and resuming training multiple times, but the epoch at which it gets stuck varies each time. Do you have any suggestions for a solution?
I'm using PyTorch 1.9.0+cu111 and running main.py in WSL2 Ubuntu 20.04.
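For context, `--num_workers=0` forces the PyTorch `DataLoader` to load batches in the main process instead of in worker subprocesses, which are a common source of silent hangs. Below is a minimal sketch of the setup that flag typically controls; the dataset and argument wiring here are illustrative, not this repo's actual code:

```python
import argparse

import torch
from torch.utils.data import DataLoader, TensorDataset

parser = argparse.ArgumentParser()
# num_workers=0 keeps data loading in the main process, ruling out
# deadlocks in worker subprocesses as the cause of a silent hang.
parser.add_argument("--num_workers", type=int, default=0)
args = parser.parse_args()

# Placeholder dataset: 8 random samples with 3 features and 1 target.
dataset = TensorDataset(torch.randn(8, 3), torch.randn(8, 1))
loader = DataLoader(dataset, batch_size=2, num_workers=args.num_workers)

for inputs, targets in loader:
    ...  # training step would go here
```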
Comments
Hi @Linya-Peng, thank you for your interest in our work! The problem you described sounds strange to me (I haven't encountered it). What GPU card are you using (and how much memory does it have)? Could you try a smaller batch size (e.g., 2) and see whether that solves your problem?
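As an aside, checking the card model and total memory from PyTorch itself is straightforward; this uses standard `torch.cuda` calls and is not specific to this repo:

```python
import torch

# Report the GPU model and total memory, to compare against the
# 24 GB TITAN RTX the code was tested on.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB")
else:
    print("CUDA is not available")
```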
Hi ywyue, thank you for your reply!
Hi @Linya-Peng, our code was tested on an NVIDIA TITAN RTX with 24 GB of memory. Do you run into similar issues when running other programs on your current GPU?
Thank you for your reply! I haven't run into similar issues with other programs. I'm not sure if it's because I'm using WSL, but I've run other projects successfully with the same configuration.
Hey, sorry for the late follow-up. I cannot reproduce your issue without WSL. Have you tried training on native Ubuntu or other Linux systems (not WSL)? |
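As a general debugging aside (standard-library Python, not something from this repo): when a process hangs silently, you can register a signal handler via `faulthandler` so the stuck process can be asked to dump the stack of every thread, showing exactly where it is blocked:

```python
import faulthandler
import signal

# Register SIGUSR1 so the stuck training process dumps the Python
# traceback of every thread on demand, without being killed.
# (faulthandler.register is Unix-only, which covers WSL2 here.)
faulthandler.register(signal.SIGUSR1)
```

With this added near the top of main.py, running `kill -USR1 <pid>` from another terminal prints all thread tracebacks to stderr while the process keeps running.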
Closing for now. Feel free to reopen if you have any concerns.