Hi,
Thank you for your interest in our work!
We used four 16GB V100 GPUs and trained the model for about 35 hours. I checked my training log: the interval between consecutive log entries is about 80 s. Your time does seem much longer.
Dear authors,
Thanks for the detailed training info. I use a single GPU with batch size = 32, but the interval between log entries is around 3 minutes. I checked that there is no CPU overload issue. Could you please verify that the 80 s figure was measured with the following training config?
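For a rough sanity check of the timing discrepancy above (my own back-of-the-envelope estimate, not from the authors): if logging happens every fixed number of steps, and the 4-GPU run processes roughly 4x the samples per step, then under an assumed near-linear data-parallel scaling a single GPU would be expected to take about 4x the 80 s interval per log.

```python
def expected_interval_single_gpu(multi_gpu_interval_s: float, num_gpus: int) -> float:
    """Estimate the per-log interval on 1 GPU from a multi-GPU run,
    assuming near-linear data-parallel scaling (a simplification:
    it ignores communication overhead and per-GPU batch differences)."""
    return multi_gpu_interval_s * num_gpus

est = expected_interval_single_gpu(80.0, 4)
print(f"~{est / 60:.1f} min per log interval on 1 GPU")  # ~5.3 min
```

By this crude estimate, ~3 minutes on one GPU would actually be faster than a naive 4x extrapolation of the authors' 80 s, so the difference may come from a different effective batch size or logging frequency rather than raw GPU speed.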
Thank you very much for your code sharing, which is very detailed and specific!
How many GPUs did you use for training, and how long did it take?
It seems that I need much more training time than expected.