I'm using the method from this repo https://github.com/innnky/emotional-vits to try to implement emotional voice cloning. I fine-tuned the pretrained synthesizer on a small dataset of about 24 speakers, each with 100 audio clips; those 100 clips are divided into roughly four or five emotion categories, with the same text in each category but spoken with different emotions. I run inference with the fine-tuned synthesizer and the pretrained encoder and vocoder, but the results are not very good. Does anyone know what the problem is, or how it should be trained?
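For reference, the dataset layout described above (parallel texts recorded in several emotions per speaker) can be sketched like this. This is a minimal illustration with hypothetical field names and paths, not the repo's actual filelist schema:

```python
from collections import defaultdict

def group_parallel_utterances(metadata):
    """Group utterances so each (speaker, text) pair maps to its
    emotion-labelled recordings.

    `metadata` is a list of (speaker, text, emotion, wav_path) tuples --
    a hypothetical format chosen for illustration only.
    """
    groups = defaultdict(dict)
    for speaker, text, emotion, wav_path in metadata:
        groups[(speaker, text)][emotion] = wav_path
    return dict(groups)

# Example: one sentence recorded by one speaker in three emotions.
meta = [
    ("spk01", "hello world", "neutral", "spk01/000_neutral.wav"),
    ("spk01", "hello world", "happy",   "spk01/000_happy.wav"),
    ("spk01", "hello world", "sad",     "spk01/000_sad.wav"),
]
groups = group_parallel_utterances(meta)
# Each key is a (speaker, text) pair; each value maps emotion -> wav path.
```

Grouping the data this way makes it easy to check that every text actually has recordings in all the intended emotion categories before fine-tuning.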
I am not sure about the quality either. If I use the provided samples, I can generate reasonably good speech; but with my own audio (e.g., recorded through the UI), I was not able to produce any usable output.