
About training duration #6

Open
Petrichor214 opened this issue May 17, 2024 · 3 comments


@Petrichor214

Thank you very much for sharing your code; it is very detailed and thorough!

How many GPUs did you use for training, and how long did it take?

My training seems to take much longer than expected.

[attached screenshot: train]

@lbc12345
Owner

lbc12345 commented May 17, 2024

Hi,
Thank you for your interest in our work!
We used four 16 GB V100 GPUs and trained our model for about 35 hours. I checked my training log: the interval between consecutive log entries is about 80 seconds. Your iteration time seems to be much longer.

@YunYunY

YunYunY commented Jun 2, 2024

Dear authors,
Thanks for the detailed training info. I am using a single GPU with batch size = 32, but the time between log entries is around 3 minutes. I have checked that there is no CPU overload issue. Could you please confirm that the 80-second interval was measured with the following training config?

python train.py --opt options/train_rrdb_P+SeD.yml --resume pretrained/RRDB.pth

Thank you very much.
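
(A minimal, repo-agnostic sketch for anyone debugging slow iterations: timing the dataloader wait separately from the GPU step can rule out an I/O bottleneck. The names below, dataloader, model, optimizer, and loss_fn, are hypothetical placeholders, not symbols from the SeD codebase.)

import time
import torch

def time_one_epoch(dataloader, model, optimizer, loss_fn, device="cuda"):
    # Accumulate time spent waiting on the dataloader vs. running the GPU step.
    data_time, gpu_time = 0.0, 0.0
    end = time.time()
    for lr_img, hr_img in dataloader:
        data_time += time.time() - end   # time spent waiting on the dataloader
        t0 = time.time()
        lr_img, hr_img = lr_img.to(device), hr_img.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(lr_img), hr_img)
        loss.backward()
        optimizer.step()
        torch.cuda.synchronize()         # wait for GPU work so the timing is real
        gpu_time += time.time() - t0     # forward/backward/step
        end = time.time()
    print(f"dataloader wait: {data_time:.1f}s, GPU step: {gpu_time:.1f}s")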

@lbc12345
Owner

lbc12345 commented Jun 3, 2024

We used four 16 GB V100 GPUs. If you train the model on a single GPU, this duration is normal.
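
(A minimal sketch of a 4-GPU launch with PyTorch's torchrun, assuming train.py initializes torch.distributed when launched this way; this is not confirmed in the thread, and the repository's README is the authoritative reference for its actual multi-GPU launch command.)

torchrun --nproc_per_node=4 train.py --opt options/train_rrdb_P+SeD.yml --resume pretrained/RRDB.pth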
