-
You do not have to train all 1000 epochs.
I have no experience with multi-GPU training, but it might help improve training speed.
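A minimal sketch of what trimming the epoch budget could look like in the recipe script (this assumes the recipe builds its config with `Tacotron2Config`, as in current Coqui TTS; `epochs=300` is just an example value, and the multi-GPU launcher mentioned in the comment is how the docs describe `trainer.distribute`, which may differ depending on your TTS/Trainer version):

```python
# Sketch only: assumes recipes/ljspeech/tacotron2-DDC/train_tacotron_ddc.py builds its
# training config with Tacotron2Config. Lowering `epochs` (or stopping early and keeping
# the best checkpoint) avoids running the full 1000 epochs.
from TTS.tts.configs.tacotron2_config import Tacotron2Config

config = Tacotron2Config(
    epochs=300,            # example value; the recipe default is 1000
    mixed_precision=True,  # may also speed things up on a 16 GB GPU, if your setup supports it
)

# For multi-GPU training, the Coqui docs describe a launcher along the lines of:
#   CUDA_VISIBLE_DEVICES="0,1,2,3" python -m trainer.distribute --script train_tacotron_ddc.py
# (the exact entry point depends on the installed TTS/Trainer version)
```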
-
IMHO 50k steps is too low. Maybe check quality at around 100k.
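For what it's worth, auditioning an intermediate checkpoint could look roughly like this (the paths are placeholders, and the `Synthesizer` argument names are from `TTS.utils.synthesizer` as I recall them, so treat it as a sketch rather than the exact API):

```python
# Rough sketch for listening to a mid-training checkpoint; the paths below are placeholders
# and should point at a checkpoint and config inside your run's output folder.
from TTS.utils.synthesizer import Synthesizer

synth = Synthesizer(
    tts_checkpoint="output/<run_folder>/checkpoint_100000.pth",  # placeholder checkpoint path
    tts_config_path="output/<run_folder>/config.json",           # placeholder config path
    use_cuda=True,
)
wav = synth.tts("A quick quality check at one hundred thousand steps.")
synth.save_wav(wav, "check_100k.wav")
```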
-
Hi,
I have been training a "recipes/ljspeech/tacotron2-DDC" model on a single 16 GB GPU since Aug 15, and I noticed that I am only at epoch 83/1000.
At this speed, it is going to take months to reach 1000 epochs. I didn't change any parameters in train_tacotron2-DDC; everything is as cloned from GitHub.
Am I doing this correctly? I have a 4-GPU system. Should I switch to multi-GPU training?