How to use pretrain.model for continuing training? #8
I want to add some Chinese audio to the training data. Can I use your pretrain.model and continue training with my data, or do I have to download all the VoxCeleb2 data plus my data and train from the beginning? Thank you for your reply.

Comments
I think both are OK. You can use my pretrained model to train on the Chinese audio; it will be faster than training from random initialization. I guess you just want to finetune, which means your number of Chinese utterances is much smaller than VoxCeleb2, so you need to start with a small learning rate. You can also download VoxCeleb2 and train on both together. However, if your amount of Chinese data is much smaller than VoxCeleb2, I do not suggest doing that.
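For reference, a minimal PyTorch sketch of this finetuning setup. The stand-in model, the checkpoint layout, and the learning-rate values are assumptions for illustration, not this repo's actual code:

```python
import torch
import torch.nn as nn

# Stand-in for the repo's speaker encoder; substitute the real model class.
model = nn.Sequential(nn.Linear(80, 192))

# Load the released checkpoint (file name from the issue title; the exact
# state-dict layout is an assumption and must match your model class).
state = torch.load("pretrain.model", map_location="cpu")
model.load_state_dict(state)

# Finetuning rule of thumb from this thread: start with a learning rate
# roughly an order of magnitude smaller than the from-scratch value
# (the numbers here are examples, not the repo's settings).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```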
My Chinese audio data is almost the same size as VoxCeleb2.
I think you can use the pretrained model and reduce the initial learning rate, then train only on your data. You can run experiments to compare. Here is my understanding of the options:

1. Train from random initialization on VoxCeleb2 plus your Chinese data.
2. Start from the pretrained model and finetune on your Chinese data only, with a reduced learning rate.
3. Start from the pretrained model and continue training on VoxCeleb2 plus your Chinese data (see the sketch below).

I guess 2 and 3 might give similar results, which might be better than 1. That is my understanding; you can do experiments to verify it.
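A rough sketch of option 3, merging the two corpora at the dataset level with `ConcatDataset`; the `UtteranceDataset` class and the directory names are hypothetical, not part of this repo:

```python
import glob
from torch.utils.data import ConcatDataset, DataLoader, Dataset

class UtteranceDataset(Dataset):
    """Toy dataset: one item per audio file path (real code would load audio)."""
    def __init__(self, root):
        self.files = sorted(glob.glob(f"{root}/**/*.wav", recursive=True))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, i):
        return self.files[i]

# Directory names are assumptions, not paths this repo requires.
combined = ConcatDataset([UtteranceDataset("data/voxceleb2"),
                          UtteranceDataset("data/chinese_audio")])
loader = DataLoader(combined, batch_size=128, shuffle=True)
```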
Thank you so much, it is very clear.
That is used for data augmentation. You can add it or not in all experiments: adding it can make the results better, while removing it can make training faster.
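A toy sketch of what such an optional additive-noise augmentation step could look like; `add_noise` and its SNR-based scaling are a generic illustration, not the exact augmentation this repo uses:

```python
import numpy as np

def add_noise(waveform, noise, snr_db=10.0, enabled=True):
    """Optionally mix a noise clip into a waveform at a target SNR (in dB)."""
    if not enabled:                  # skipping augmentation speeds up training
        return waveform
    noise = np.resize(noise, waveform.shape)  # tile or trim noise to length
    speech_power = np.mean(waveform ** 2)
    noise_power = np.mean(noise ** 2) + 1e-10
    # Scale the noise so the mixture has the requested signal-to-noise ratio.
    scale = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return waveform + scale * noise

# Example: one second of a 440 Hz tone mixed with white noise at 10 dB SNR.
t = np.linspace(0, 1, 16000, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = add_noise(clean, np.random.randn(16000))
```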
Got it~
Thanks for the information about continuing training. For a given N, say N = 10, what value of X would be suitable for acceptable performance? Will the size of X influence the model size much? Thanks!