How can I continue training one of my finetuned models? I'm using Google Colab, so it was only able to run for about 36k steps before stopping. I see that there is a generator model and a discriminator model; how does this work when continuing fine-tuning? Do I just load the generator and lose the training for the discriminator?
I tried starting again, feeding in the path of my finetuned G model instead of ljs_base, but the quality is considerably worse and training seems to have started nearly from the beginning.
Thanks!
TaoTeCha changed the title from "Labels for training output values?" to "How to continue fine tuning of model?" on Jun 22, 2021.
Never mind, I figured it out. Since I am using Google Colab I had to change model_dir, and on the second run I pointed it to a different folder than the first. The fix: don't download the base model again, and keep using the same model_dir as the first run. Training will then pick up from the latest checkpoints in that folder.
Hi !
I have trained a Vietnamese female voice model on my computer for 500k steps, and I find the voice quite clear. Now I want to train another, male Vietnamese voice.
I've learned there is a training method that starts from a previously trained model, which shortens training time.
Could you help me with that method?