Training time too long #32
Please provide your data_time and train_time from the log. Thanks.
Here are some of the training logs: 2021-07-21 04:20:05 | INFO | yolox.core.trainer:248 - epoch: 10/300, iter: 370/925, mem: 34832Mb, iter_time: 0.692s, data_time: 0.000s, total_loss: 7.1, iou_loss: 2.6, l1_loss: 0.0, conf_loss: 2.8, cls_loss: 1.7, lr: 1.999e-02, size: 768, ETA: 2 days, 9:28:22
Hi, we reproduced your training setup, and your training time seems normal. In fact, if you change yolox-s to yolox-l, the total training time is almost the same! We suspect most of the time is consumed by our data augmentation operations, and we plan to accelerate them.
@ruinmessi Hello, do you know what causes the variation in time between iterations? As the log shows, the maximum iteration time is 0.940 s and the minimum is 0.544 s.
I trained yolox-s with batch size 128 on 8 x V100, and the run takes about 2 days 6 hours for 300 epochs. Is this training time normal?
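The reported wall-clock time can be sanity-checked from the log line above (iter_time ≈ 0.692 s, 925 iterations per epoch, 300 epochs). A minimal back-of-envelope sketch, assuming the logged average iteration time holds for the whole run:

```python
# Estimate total training time from the figures reported in the log above.
# These constants come from that log; real iteration times fluctuate
# (0.544-0.940 s in this thread), so this is only a rough estimate.

ITER_TIME_S = 0.692      # average seconds per iteration, from the log
ITERS_PER_EPOCH = 925    # from the log: "iter: 370/925"
EPOCHS = 300             # from the log: "epoch: 10/300"

total_s = ITER_TIME_S * ITERS_PER_EPOCH * EPOCHS
days, rem = divmod(total_s, 86_400)
hours = rem / 3_600
print(f"estimated total: {int(days)} days {hours:.1f} hours")
# → estimated total: 2 days 5.3 hours
```

This lands close to the reported 2 days 6 hours, which supports the maintainer's reply that the timing is normal for this setup.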