Is it normal for me to spend 22 hours per epoch of training? #6
Comments
Hello!
In my case it took 10 hours per epoch, but it varies with GPU spec. I recommend post-training the model for about 5–10 epochs, because that is enough to improve performance (though not to SOTA).
--
JangHoon Han
AI Research Engineer | Professional
Language Lab | LG AI Research, Seoul, Korea
Mobile : (+82)10-8591-1081
lgresearch.ai
Iteration: 100%|██████████| 81865/81865 [22:53:38<00:00, 1.01s/it]
Thanks for your reply!
When I post-train with the Ubuntu dataset, it takes 22 hours per epoch, so 25 epochs would take close to a month, which seems too long. Did training also take this long for the author? Is this normal in my case? Thanks!
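For reference, the arithmetic behind this estimate can be checked against the tqdm readout in the log (81865 iterations at about 1.01 s/it). This is just a back-of-the-envelope sketch using the figures quoted above, not anything from the repository itself:

```python
# Figures taken from the tqdm line in this issue:
# "81865/81865 [22:53:38<00:00, 1.01s/it]"
iterations_per_epoch = 81865
seconds_per_iteration = 1.01

# Wall-clock time for one epoch, in hours
epoch_hours = iterations_per_epoch * seconds_per_iteration / 3600
print(f"per epoch: {epoch_hours:.1f} h")  # ~23 h, matching the log

# Projected totals for the full 25-epoch run vs. the
# 5-10 epochs suggested in the reply above
for epochs in (10, 25):
    days = epochs * epoch_hours / 24
    print(f"{epochs} epochs: {days:.1f} days")
```

At roughly 23 hours per epoch, 25 epochs does come to about 24 days, so stopping after 5–10 epochs cuts the run to well under two weeks.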