This repository has been archived by the owner on Jan 18, 2024. It is now read-only.

why do we need multiple languages & multiple speakers? #94

Open
thivux opened this issue Jan 17, 2024 · 2 comments

Comments

@thivux

thivux commented Jan 17, 2024

hi there, thank you for the interesting work!

I want to train a model to perform code-switching TTS / voice conversion for just two languages: Vietnamese and English. I assume the model should perform well with training data from one speaker in Vietnamese and one in English, each with a decent number of utterances (~15 hrs). My reasoning is that since there are only two speakers and a lot of data, the model should be able to learn the speaker embedding for each, even by memorizing (overfitting); similarly for the language-dependent encoder. But I've seen some of your comments saying it's better to include other languages, each with multiple speakers, in the training data, even if you don't use them at inference. Why?

@tuetschek
Collaborator

Hi @thivux , I can't speak for @Tomiinek and only remember this vaguely, but I believe the reason you want to have more languages AND more speakers is that you need the model to dissociate the languages and speakers from each other, so that it can generalize and code-switch. If there's only 1 speaker per language, the speaker embeddings are too closely tied to the language and the model won't be very good at switching to the other language – you'll see how your training goes. I believe you shouldn't need more languages, but having more speakers in each of the two languages, even with less data or lower-quality data, should help.
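
To make the entanglement concrete, here is a minimal NumPy sketch (not the repo's actual code; table sizes, names, and the concatenation scheme are illustrative assumptions) of how separate speaker and language embedding tables condition the encoder output. With only one speaker per language, speaker ID and language ID are perfectly correlated in training, so gradients cannot tell the two tables apart; with several speakers per language, the same language embedding is seen with varying speaker embeddings, which is what lets you swap the language at inference while keeping the voice:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_SPEAKERS, NUM_LANGUAGES, DIM = 4, 2, 8  # illustrative sizes

# Separate lookup tables. They can only be learned as independent factors
# if the training data contains multiple speakers per language.
speaker_table = rng.normal(size=(NUM_SPEAKERS, DIM))
language_table = rng.normal(size=(NUM_LANGUAGES, DIM))

def condition(encoder_frames, speaker_id, language_id):
    """Concatenate speaker and language embeddings to every encoder frame."""
    T = encoder_frames.shape[0]
    spk = np.tile(speaker_table[speaker_id], (T, 1))
    lang = np.tile(language_table[language_id], (T, 1))
    return np.concatenate([encoder_frames, spk, lang], axis=1)

# Code-switching at inference: keep the Vietnamese speaker's embedding
# (speaker_id=0) while switching the language embedding to English
# (language_id=1). This only works if the two tables were disentangled.
frames = rng.normal(size=(5, DIM))  # dummy encoder output, 5 frames
out = condition(frames, speaker_id=0, language_id=1)
print(out.shape)  # (5, 24): DIM frame features + DIM speaker + DIM language
```

With 1 speaker per language, `speaker_table[0]` and `language_table[0]` always appear together during training, so any mix of the two vectors that sums to the same conditioning signal fits the data equally well; the combination above would then be out of distribution.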

@Tomiinek
Owner

@tuetschek is completely right, thank you!!!
