
Training got stuck #20

Closed
Linya-Peng opened this issue Jan 10, 2024 · 6 comments
Comments

@Linya-Peng

Hi, ywyue!
Thank you for your wonderful work.
I tried to train on the Structured3D dataset; however, training got stuck midway without any error being reported.
[screenshot attached]
I tried setting --num_workers=0, but the problem wasn't resolved.
I've tried terminating and resuming training multiple times, but the epoch at which it gets stuck varies each time. Do you have any suggestions for a solution?

I'm using PyTorch 1.9.0+cu111 and running main.py in WSL2 Ubuntu 20.04.
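The `--num_workers=0` experiment above can be sketched as follows; the dataset and sizes here are illustrative stand-ins, not the project's actual Structured3D pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def make_loader(batch_size=2, num_workers=0):
    # Dummy tensors standing in for Structured3D samples (illustrative only).
    data = TensorDataset(torch.zeros(8, 3), torch.zeros(8))
    # num_workers=0 loads data in the main process, which rules out
    # worker-process deadlocks -- a common cause of silent hangs under WSL2.
    return DataLoader(data, batch_size=batch_size, num_workers=num_workers)

loader = make_loader()
print(sum(1 for _ in loader))  # prints 4: eight samples in batches of two
```

If the hang disappeared with `num_workers=0`, the data-loading workers would be the likely culprit; since it did not here, the cause points elsewhere.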

@ywyue
Owner

ywyue commented Jan 10, 2024

Hi @Linya-Peng, thank you for your interest in our work! The problem you describe is strange to me (I haven't encountered it). What's your GPU card (and how much memory does it have)? Could you try a smaller batch size (e.g. 2) and see whether that solves the problem?

@Linya-Peng
Author


Hi ywyue, thank you for your reply!
I've tried setting the batch size to 2, but the problem still exists. After several attempts and some searching, I haven't been able to solve it.
I'm working with an RTX 4070 Ti GPU with 12 GB of memory. Do you have any suggestions for the hardware capacity needed for training?

@ywyue
Owner

ywyue commented Feb 8, 2024

Hi @Linya-Peng, our code is tested on an NVIDIA TITAN RTX with 24 GB of memory! Do you also run into similar issues when running other programs on your current GPU?

@Linya-Peng
Author


Thank you for your reply! I haven't had similar issues when running other programs. I'm not sure if it's because I'm using WSL, but I've run other projects successfully with the same configuration.

@ywyue
Owner

ywyue commented Jul 2, 2024

Hey, sorry for the late follow-up. I cannot reproduce your issue without WSL. Have you tried training on native Ubuntu or other Linux systems (not WSL)?

@ywyue
Owner

ywyue commented Jul 21, 2024

Closing for now. Feel free to reopen it if you have any concerns.

@ywyue ywyue closed this as completed Jul 21, 2024