This repository has been archived by the owner on Oct 1, 2021. It is now read-only.

Multi-Language Training #17

Open
satan17 opened this issue Sep 18, 2020 · 3 comments

Comments

@satan17

satan17 commented Sep 18, 2020

Hi,

I'm slightly confused about the G2P model. Suppose I need to train a model that works specifically from Russian to English (only). Do I still need to add a dictionary or train a G2P model?

Also, I don't understand the significance of the G2P model here. We have the synthesizer, which already seems to do the same work.

Thanks!

@vlomme
Owner

vlomme commented Sep 18, 2020

G2P does not translate from Russian to English. It converts graphemes into phonemes. The synthesizer converts any characters into a spectrogram.
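To illustrate the distinction, here is a minimal toy sketch of the G2P step (a hypothetical dictionary lookup, not the actual model used in this repo): it maps graphemes (spelled words) to phonemes before the synthesizer ever sees the text.

```python
# Toy lexicon with ARPAbet-style phonemes -- purely illustrative,
# not the dictionary or trained G2P model from this repository.
LEXICON = {
    "cat": ["K", "AE1", "T"],
    "dog": ["D", "AO1", "G"],
}

def g2p(text):
    """Convert a sentence of graphemes to a flat phoneme sequence.

    Unknown words fall back to their raw characters, mirroring the
    common strategy of passing unrecognized input to the synthesizer
    as-is.
    """
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON.get(word, list(word)))
    return phonemes

print(g2p("cat dog"))  # ['K', 'AE1', 'T', 'D', 'AO1', 'G']
```

The synthesizer is a separate model downstream of this step: it takes the character or phoneme sequence and predicts a spectrogram, so the two components do different jobs.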

@satan17
Author

satan17 commented Sep 18, 2020

Hi,

Sorry, "translation" was the wrong word. What I meant to ask is whether I need to train a G2P model for English-only output. I've trained my encoder on another language (German), and I want output in English.

Thanks

@vlomme
Owner

vlomme commented Sep 18, 2020

If you only need synthesis in English, then you need to train it on English. G2P won't help here. The encoder can be trained on any language; it doesn't matter. Use the original implementation.
