
Awesome in English but no support for other languages - please add an example for another language (German, Italian, French, etc.) #41

Open
cmp-nct opened this issue Nov 20, 2023 · 80 comments
Labels
help wanted (Extra attention is needed)

Comments

@cmp-nct

cmp-nct commented Nov 20, 2023

The readme makes it sound very simple: "Replace bert with xphonebert".
Looking a bit closer, it seems to be quite a feat to make StyleTTS2 talk in non-English languages (#28).

StyleTTS2 looks like the best approach we have right now, but English-only is a deal-breaker for many, since any app built on it will be limited to English with no prospect of reaching other users.

Some help to get this going in foreign languages would be awesome.

It appears we need to change the inference code and re-train the text and phonetics components. Any demo/guide would be great.

Alternatively, the current PL-BERT could be re-trained for other languages, though that needs a corpus and I have no idea what the cost would be.
(https://github.com/yl4579/PL-BERT)

yl4579 added the help wanted label Nov 20, 2023
@yl4579
Owner

yl4579 commented Nov 20, 2023

The repo so far is a research project; its main purpose is more a proof of concept for the paper than a full-fledged open source project. I agree that PL-BERT is the major obstacle to generalizing to other languages, but training large-scale language models, particularly on multiple languages, can be very challenging. With the resources I have at school, training PL-BERT on an English-only corpus with 3 A40s took me a month; with all the ablation studies and experiments, I spent an entire summer on this project for a single language.

I'm not affiliated with any company, I'm only a PhD student, and the GPU resources in our lab need to be prioritized for new research projects. I don't think I will have the resources to train a multilingual PL-BERT model for the time being, so PL-BERT is probably not the best approach to multilingual models for StyleTTS 2.

I have never tried XPhoneBERT myself, but it seems to be a promising alternative to PL-BERT. The only problem is that it uses a different phonemizer, which is also related to #40. The current phonemizer was taken from VITS, which also raises license issues (MIT vs. GPL). It would be great if someone could help switch the phonemizer and BERT model to something like XPhoneBERT that is compatible with the MIT license and also supports multiple languages.

The basic idea is to re-train the ASR model (https://github.com/yl4579/AuxiliaryASR) using XPhoneBERT's phonemizer, replace PL-BERT with XPhoneBERT, and re-train the model from scratch. Since the models, especially the LibriTTS model, took about 2 weeks to train on 4 A100s, I do not think I have enough GPU resources to work on this for the time being. If anyone is willing to sponsor GPUs and datasets for either multilingual PL-BERT or an XPhoneBERT-based StyleTTS 2, I'm happy to extend this project in the multilingual direction.
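For reference, a minimal sketch of what loading XPhoneBERT as a phoneme encoder might look like, assuming the vinai/xphonebert-base checkpoint on Hugging Face; the pre-phonemized input string and its word-boundary convention are only illustrative, and the ASR retraining and StyleTTS 2 integration are not shown.

# Minimal sketch: loading XPhoneBERT as a phoneme-level encoder.
# Assumes the vinai/xphonebert-base checkpoint; XPhoneBERT expects an
# already-phonemized, space-separated phoneme sequence (normally produced by
# its own G2P tooling), so the string below is only illustrative.
from transformers import AutoModel, AutoTokenizer

xphonebert = AutoModel.from_pretrained("vinai/xphonebert-base")
tokenizer = AutoTokenizer.from_pretrained("vinai/xphonebert-base")

phonemes = "ð ɪ s ▁ ɪ z ▁ ɐ ▁ t ɛ s t"  # illustrative pre-phonemized input
inputs = tokenizer(phonemes, return_tensors="pt")
features = xphonebert(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(features.shape)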

@cmp-nct
Author

cmp-nct commented Nov 20, 2023

I think it would be doable to get the GPU time, maybe 1 week of 8x A100 in exchange for naming the resulting model after the sponsor. One of the cloud providers might be interested, or some people from the ML Discords who train a lot might have it to spare.
I was offered GPU time once; I could ask that person.
But without datasets that wouldn't help.
That said: if you need GPU time, let me know and I'll ask.

Datasets:
German: TTS dataset from a university (high quality, 6 main speakers, I think 40-50 hours of studio quality recordings)
https://opendata.iisys.de/dataset/hui-audio-corpus-german/ (https://github.com/iisys-hof/HUI-Audio-Corpus-German)
https://github.com/thorstenMueller/Thorsten-Voice (11 hours, one person)

Italian: TTS datasets, LJSpeech-affiliated?
https://huggingface.co/datasets/z-uo/female-LJSpeech-italian
https://huggingface.co/datasets/z-uo/male-LJSpeech-italian

Multilingual:
https://www.openslr.org/94/ (audiobook based libritts)
https://github.com/freds0/CML-TTS-Dataset (more than 3,000 hours, CC licensed)

Side note: for detecting unclean audio, LAION's "CLAP" could possibly be used.
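As a rough sketch of that CLAP idea, assuming the Hugging Face transformers port of LAION CLAP (laion/clap-htsat-unfused); the prompts, file name, and any decision threshold are made up for illustration.

# Rough sketch: scoring audio "cleanliness" with LAION CLAP via transformers.
# Assumes the laion/clap-htsat-unfused checkpoint; the prompts and whatever
# cutoff you pick are illustrative, not tuned values.
import librosa
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

prompts = ["clean studio speech recording", "noisy or distorted speech recording"]
audio, sr = librosa.load("sample.wav", sr=48000)  # CLAP models expect 48 kHz audio

inputs = processor(text=prompts, audios=audio, sampling_rate=sr, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_audio.softmax(dim=-1)
print(probs)  # keep the clip only if the "clean studio speech" probability is high enough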

@yl4579
Owner

yl4579 commented Nov 20, 2023

Multilingual speech datasets are more difficult to get than text datasets. XPhoneBERT, for example, was trained entirely on Wikipedia in 100+ languages, but getting speech data with transcriptions in 100+ languages is much harder. XTTS has multilingual support, but the data used seems to be private. I believe the creator @erogol was once interested in StyleTTS but did not integrate it into the Coqui API for some reason. It would be great if he could help with multilingual support. I will ping him to see if he is still interested.

@cmp-nct
Author

cmp-nct commented Nov 20, 2023

I found quite good datasets for Italian and German and will take another look for more. I will update the previous comment.
Roughly how much data (length, number of speakers) is needed for training?

@yl4579
Owner

yl4579 commented Nov 20, 2023

If you want cross-lingual generalization, I think each language should have at least 100 hours. The data you provided is probably good for a single-speaker model, but not enough for zero-shot models like XTTS. It is not feasible to get a model like that with publicly available data. We probably have to rely on something like Multilingual LibriSpeech (https://www.openslr.org/94/) and use some speech restoration models to remove bad samples. This is not a single person's effort, so everyone else is welcome to contribute.

@mzdk100

mzdk100 commented Nov 21, 2023

It's a pity that Chinese isn't supported.

@hobodrifterdavid

hobodrifterdavid commented Nov 21, 2023

I can make an 8x 3090 (24GB) machine available, if it's of use: 2x Xeon E5-2698 v3 CPUs, 128GB RAM. Alternatively: a 4x 3090 box with NVLink, Epyc 7443P, 256GB RAM, PCIe 4.0. Send a mail to dioco@dioco.io

@tosunozgun

I can support training a Turkish model; I just need help training PL-BERT on a Turkish Wikipedia dataset.

@yl4579
Owner

yl4579 commented Nov 21, 2023

@hobodrifterdavid Thanks so much for your help. What you have now is probably good for multilingual PL-BERT training, as long as you can keep the machine running for at least a couple of months. Just sent you an email about multilingual PL-BERT training.

@yl4579
Owner

yl4579 commented Nov 21, 2023

I think the GPUs provided by @hobodrifterdavid would be a great start for multilingual PL-BERT training. Before proceeding though, I need some people who speak as many languages as possible (hopefully with some knowledge of IPA) to help with the data preparation. I only speak English, Chinese and Japanese, so I can only help with these 3 languages.

My plan is to use this multilingual BERT tokenizer: https://huggingface.co/bert-base-multilingual-cased. Tokenize the text, get the corresponding tokens, use phonemizer to get the corresponding phonemes, and align the phonemes with the tokens. Since this tokenizer works on subwords, we cannot predict the grapheme tokens directly. So my idea is that instead of predicting the grapheme tokens (which are not full graphemes anyway, and we cannot really align part of a grapheme sequence to some of its phonemes; in English, for example, "phonemes" can be tokenized into phone#, #me#, #s, but its actual phonemes are /ˈfəʊniːmz/, which cannot be aligned cleanly with phone#, #me#, or #s), we predict the contextualized embeddings from a pre-trained BERT model.

For example, for the sentence "This is a test sentence", we get the six tokens [this, is, a, test, sen#, #tence] and their corresponding phonemes. In particular, the two tokens [sen#, #tence] correspond to ˈsɛnʔn̩ts. The goal is to map each phoneme representation in ˈsɛnʔn̩ts to the average contextualized BERT embedding of [sen#, #tence]. This requires running the teacher BERT model, but we can extract the contextualized BERT embeddings online (during training) and maximize the cosine similarity between the predicted embeddings of these words and those from the teacher model (multilingual BERT).

Now the biggest challenge is aligning the tokenizer output to the phonemes, which may require some expertise in the specific languages. There could be quirks, inaccuracies, or traps for certain languages. For example, phonemizer doesn't work on Japanese and Chinese directly; you have to first convert the graphemes into an alphabet (e.g. romaji or pinyin) and then run phonemizer. Characters in these languages do not always have the same pronunciation, as it depends on the context, so expertise in these languages is needed when doing NLP with them. To make sure the data preprocessing goes as smoothly and accurately as possible, any help from those who speak any language in this list (or know some linguistics about these languages) is greatly appreciated.
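A minimal sketch of the alignment and distillation idea above, under some assumptions: per-word alignment is done with the fast tokenizer's word_ids(), the IPA comes from an espeak-backed phonemizer, and PhonemeStudent is only a toy stand-in for the phoneme-level PL-BERT that would actually be trained.

# Sketch of the token-to-phoneme alignment and embedding distillation above.
# Requires espeak-ng for the phonemizer backend; PhonemeStudent is a dummy.
import torch
from phonemizer import phonemize
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
teacher = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

class PhonemeStudent(torch.nn.Module):
    # Toy stand-in: embeds IPA characters and mean-pools them into one vector per word.
    def __init__(self, dim=768):
        super().__init__()
        self.embed = torch.nn.Embedding(512, dim)
    def forward(self, ipa_words):
        return torch.stack([
            self.embed(torch.tensor([ord(c) % 512 for c in w])).mean(dim=0)
            for w in ipa_words
        ])

sentence = "This is a test sentence"
words = sentence.split()

# 1) IPA per word: the student's input side.
ipa = phonemize(words, language="en-us", backend="espeak", strip=True)

# 2) Teacher targets: average the contextualized subword embeddings per word,
#    so "sentence" -> mean of the hidden states of its subword tokens.
enc = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = teacher(**enc).last_hidden_state[0]      # (seq_len, 768)
word_ids = enc.word_ids(0)                            # subword index -> word index (None for [CLS]/[SEP])
targets = torch.stack([
    hidden[[i for i, w in enumerate(word_ids) if w == k]].mean(dim=0)
    for k in range(len(words))
])                                                    # (num_words, 768)

# 3) Objective: maximize cosine similarity between the student's per-word
#    phoneme embeddings and the teacher's averaged embeddings (online distillation).
student = PhonemeStudent()
pred = student(ipa)                                   # (num_words, 768)
loss = 1.0 - torch.nn.functional.cosine_similarity(pred, targets, dim=-1).mean()
print(float(loss))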

@SoshyHayami

SoshyHayami commented Nov 21, 2023

I think the GPUs provided by @hobodrifterdavid would be a great start for multilingual PL-BERT training. Before proceeding though, I need some people who speak as many languages as possible (hopefully with some knowledge of IPA) to help with the data preparation.

I can speak Persian, Japanese, and a little bit of Arabic (I have a friend fluent in Arabic as well). I would very much like to help you with this.
I'm also gathering labeled speech data for these languages right now (I have a little less than 100 hours for Persian and a bit for the other two). So count me in, please.

@yl4579
Owner

yl4579 commented Nov 21, 2023

@SoshyHayami Thanks for your willingness to help.

Fortunately, I think most other languages that have whitespace between words can be handled with the same logic. The only supported languages without spaces between words are Chinese, Japanese, Korean (when written with Hanja, which is rare), and Burmese. These languages probably need to be handled with their own logic. I can handle the first two, and we just need someone to handle the other two (Korean Hanja and Burmese).

@mzdk100

mzdk100 commented Nov 21, 2023

It would be great if it could support Chinese! I am a native Chinese speaker; what help can I provide?

@yl4579
Owner

yl4579 commented Nov 21, 2023

Maybe I'll create a new branch in the PL-BERT repo for multilingual processing scripts. Chinese and Japanese definitely need to be processed separately with their own logic. @mzdk100 If you have a good Chinese phonemizer (Chinese characters to pinyin), you are welcome to contribute.

@SoshyHayami

SoshyHayami commented Nov 21, 2023

In the case of Japanese, since it already has kana, which is basically an alphabet, can't we simply restrict it to that for now? (Kana and romaji should be easier to phonemize, if I'm not mistaken.)
It might be a naive idea, but I was also thinking: if we had another language model that recognized the correct pronunciations from context and converted the text accordingly (with the converted text then handed to the phonemizer), maybe that could make things a bit easier here.

Though it would probably also make inference painful on low-performance devices.

@mzdk100

mzdk100 commented Nov 22, 2023

@yl4579
There are two main libraries for handling Chinese text: jieba and pypinyin.
Jieba does Chinese word segmentation, while pypinyin converts Chinese characters to pinyin.

pip3 install jieba pypinyin

from pypinyin import lazy_pinyin, pinyin, Style
print(pinyin('朝阳')) # [['zhāo'], ['yáng']]
print(pinyin('朝阳', heteronym=True)) # [['zhāo', 'cháo'], ['yáng']]
print(lazy_pinyin('聪明的小兔子')) # ['cong', 'ming', 'de', 'xiao', 'tu', 'zi']
print(lazy_pinyin('聪明的小兔子', style=Style.TONE3)) # ['cong1', 'ming2', 'de', 'xiao3', 'tu4', 'zi']

There are many Chinese characters, and using pinyin can greatly reduce the vocabulary size and potentially make the model smaller.

import jieba
print(list(jieba.cut('你好,我是中国人'))) # ['你好', ',', '我', '是', '中国', '人']
print(list(jieba.cut_for_search('你好,我是中国人'))) # ['你好', ',', '我', '是', '中国', '人']

If we use word segmentation mode, the model can learn more natural language features, but the Chinese vocabulary is very large, so the model could become huge and the compute requirements prohibitive.
I highly recommend pinyin mode, as the converted text looks more like English and requires few changes to the training code.

print(' '.join(lazy_pinyin('聪明的小兔子', style=Style.TONE3))) # 'cong1 ming2 de xiao3 tu4 zi'

@cmp-nct
Author

cmp-nct commented Nov 22, 2023

If German ears are needed, I'd be happy to lend mine.

@nicognaW

https://github.com/rime/rime-terra-pinyin/blob/master/terra_pinyin.dict.yaml

From the industry side, this is the characters-to-pinyin dictionary that the well-known input method editor Rime uses.

@dsplog

dsplog commented Nov 23, 2023

Any help from those who speak any language in this list (or know some linguistics about these languages) is greatly appreciated

Keen to extend this to Malayalam, a Dravidian language spoken in South India. Will help with that.

@rjrobben

I hope Cantonese or Traditional Chinese is also considered when training the multilingual system; I can definitely help with this language. Is there a cooperation channel for this task?

@fakerybakery
Contributor

Multilingual speech datasets are more difficult to get than text datasets. XPhoneBERT, for example, was trained entirely on Wikipedia in 100+ languages, but getting speech data with transcriptions in 100+ languages is much harder. XTTS has multilingual support, but the data used seems to be private. I believe the creator was once interested in StyleTTS but did not integrate it into the Coqui API for some reason. It would be great if he could help with multilingual support. I will ping him to see if he is still interested.

Personally, I do not support Coqui TTS. XTTS is not open source by the OSI definition because of its ultra-restrictive license. I believe the future of TTS lies in open-source models such as StyleTTS.

@yl4579
Owner

yl4579 commented Nov 24, 2023

@rjrobben I have created a slack channel for this multilingual PL-BERT: https://join.slack.com/t/multilingualstyletts2/shared_invite/zt-2805io6cg-0ROMhjfW9Gd_ix_FJqjGmQ

@yl4579
Owner

yl4579 commented Nov 24, 2023

Also, yl4579/PL-BERT#22 may be helpful, if anyone could try it out.

@fakerybakery
Contributor

@yl4579 Thanks for making the Slack channel! Are you planning to make a Slack channel for general StyleTTS 2-related discussions as well, since GH Discussions isn't realtime?

@yl4579
Owner

yl4579 commented Nov 24, 2023

@fakerybakery I can make this channel about StyleTTS 2 in general if that's better. I can change the title to StyleTTS 2 instead.

@fakerybakery
Contributor

Great, thanks! Maybe make one chatroom just about BERT instead?

@yl4579
Owner

yl4579 commented Nov 24, 2023

Yeah, I've already done that. There's a channel for multilingual PL-BERT.

@ardha27

ardha27 commented Dec 5, 2023

Is it already pushed to the current branch? Sorry, but how can I use it?

@yl4579
Owner

yl4579 commented Dec 5, 2023

@ardha27 No, it is included in the training data for the multilingual PL-BERT model. The training hasn't started yet; I'm still waiting for the 8-GPU machine from @hobodrifterdavid.

@dsplog

dsplog commented Dec 11, 2023

For example, for the sentence "This is a test sentence", we get the six tokens [this, is, a, test, sen#, #tence] and their corresponding phonemes. In particular, the two tokens [sen#, #tence] correspond to ˈsɛnʔn̩ts. The goal is to map each phoneme representation in ˈsɛnʔn̩ts to the average contextualized BERT embedding of [sen#, #tence]. This requires running the teacher BERT model, but we can extract the contextualized BERT embeddings online (during training) and maximize the cosine similarity between the predicted embeddings of these words and those from the teacher model (multilingual BERT).

@yl4579: are the changes for the subword tokenization available?

@yl4579
Owner

yl4579 commented Dec 12, 2023

@dsplog I haven't implemented them yet. I'm done with most of the data preprocessing and just need people to fix the following languages. If there is no response for these languages before I come back from NeurIPS (Dec 18), I will proceed with training the multilingual PL-BERT. I will have to remove Thai and use the phonemizer results as-is for the other languages below (a quick spot-check sketch follows the list).

bn: Bengali (phonemizer seems less accurate than charsiuG2P)
cs: Czech (same as above)
ru: Russian (phonemizer is inaccurate for some phonemes, like tʃ/ʒ should be t͡ɕ/ʐ)
th: Thai (phonemizer totally broken)
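For anyone who wants to spot-check the espeak output for one of these languages, here is a small sketch using the phonemizer package (requires espeak-ng installed; the sample sentences are placeholders, not taken from the training data):

# Quick spot-check of espeak-backed phonemizer output for the languages above.
from phonemizer import phonemize

samples = {
    "bn": "এটি একটি পরীক্ষা",      # Bengali
    "cs": "Toto je zkouška",      # Czech
    "ru": "Это проверка",         # Russian
    "th": "นี่คือการทดสอบ",          # Thai
}

for lang, text in samples.items():
    ipa = phonemize(text, language=lang, backend="espeak", strip=True, preserve_punctuation=True)
    print(f"{lang}: {text!r} -> {ipa!r}")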

@GayatriVadaparty

I think the GPUs provided by @hobodrifterdavid would be a great start for multilingual PL-BERT training. Before proceeding though, I need some people who speak as many languages as possible (hopefully with some knowledge of IPA) to help with the data preparation.

Hey, I would love to work on this. I really like the model you've created; I'm using it in my work, trying out different TTS models and comparing voiceovers. I just learned that StyleTTS needs multilingual support. I can help with Telugu training, and I know people who speak Hindi as well. I'm from India.

@somerandomguyontheweb

Hi @yl4579, thank you for this awesome project. Just wanted to clarify if there are any plans to add support for Belarusian, my native tongue. Apparently espeak-ng supports it, but when I attempted to process Belarusian Wikipedia with preprocess.ipynb, I saw that the phonemization quality is rather poor: in particular, word stress is often wrong, and numbers are not expanded properly into numerals, even though the numerals are listed in be_list. Could you please let me know if there is anything I could help with, in order to add Belarusian to multilingual PL-BERT? (E.g. providing a dictionary of stress patterns for espeak-ng, improving numeral conversion rules, etc.)

@iamjamilkhan

Please add Hindi support as well.

@yl4579
Owner

yl4579 commented Dec 17, 2023

@somerandomguyontheweb You can join the Slack channel and make the dataset yourself if you believe the espeak output is bad. I will upload all the data I have soon.

@yl4579
Owner

yl4579 commented Dec 17, 2023

@iamjamilkhan @GayatriVadaparty Hindi and Telugu are already included in the multilingual PL-BERT training data. I will upload the dataset soon. You can check the quality and let me know if something needs to be fixed.

@GayatriVadaparty

@yl4579 Sure, I’ll do that.

@yl4579
Owner

yl4579 commented Dec 19, 2023

I have uploaded most of the data I have: https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert
Please check if there's anything missing or not ideal. To check whether the IPA is phonemized correctly for your language, you will need to decode the tokens using the https://huggingface.co/bert-base-multilingual-cased tokenizer (see the sketch below).
If something is wrong, please let me know. I will probably start multilingual PL-BERT training early next month (Jan 2024). The list of language codes can be found here: https://github.com/espeak-ng/espeak-ng/blob/master/docs/languages.md
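A hedged sketch of that check: load one language subset and decode the stored token ids back to words with the multilingual BERT tokenizer. The data_dir value and the "input_ids"/"phonemes" column names are assumptions about the dataset layout and may need adjusting.

# Sanity-check sketch for one language subset of the uploaded data.
# The data_dir value and column names below are assumptions, not confirmed.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
ds = load_dataset("styletts2-community/multilingual-pl-bert", data_dir="de", split="train")

for row in ds.select(range(5)):
    print(tokenizer.decode(row["input_ids"]))  # assumed column with token ids
    print(row["phonemes"])                     # assumed column with the IPA string
    print("---")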

@SanketDhuri

Please add Marathi support as well

@yl4579
Owner

yl4579 commented Jan 8, 2024

@SanketDhuri It is already included: https://huggingface.co/datasets/styletts2-community/multilingual-pl-bert/tree/main/mr
You may want to check the quality of this data yourself because I don't speak this language.

@acalatrava

@yl4579 Did you start the training? I may be able to help with Spanish (Spain) if needed.

@mkhennoussi

I am here to help with French if needed!

@cmp-nct
Author

cmp-nct commented Jan 24, 2024

@yl4579 Did you start the training? I may be able to help with Spanish (Spain) if needed.

My last status: training of the multilingual PL-BERT is planned to start during January (it has not started yet).
Once that is done, the model itself can be trained.

@paulovasconcellos-hotmart

Hello. I'm interested in helping train a PT-BR model. I have corporate resources to do so. Let me know how I can help.

@philpav

philpav commented Feb 12, 2024

I'd love to see support for German accents like Austrian but I guess there's no dataset available.

@agonzalezd

agonzalezd commented Feb 15, 2024

I could give linguistic support in most Iberian languages: Castilian Spanish, Basque, Catalan, Asturian and Galician.
However, since their spelling maps quite closely to pronunciation, a text-based BERT model might also be enough for synthesising these languages.

@ashaltu

ashaltu commented Feb 16, 2024

Hello! I'm also interested in adding support for the Oromo (orm) language; espeak-ng has a phonemizer for it, although it could be improved.

@SpanishHearts

Any chance of including Bulgarian?

@rlenain

rlenain commented Feb 28, 2024

Hi everyone -- I have trained a PL-BERT model on a 14-language dataset that was crowdsourced by the author of the paper. You can find the model open-sourced here: https://huggingface.co/papercup-ai/multilingual-pl-bert

Using this PL-BERT model, you can now train multilingual StyleTTS2 models. In my experiments, I found that you don't need to train from scratch to get a multilingual StyleTTS2; you can just finetune. Follow the steps outlined in the link I shared above!

Best of luck, and let me know what you make with this!

@m-toman

m-toman commented Mar 5, 2024

Hi everyone -- I have trained a PL-BERT model on a 14-language dataset that was crowdsourced by the author of the paper. You can find the model open-sourced here: https://huggingface.co/papercup-ai/multilingual-pl-bert

This is awesome.
Going to try it.

Unfortunately it seems we have no language embeddings, so we can't really train a multilingual model with cross-lingual capabilities at the moment?

@rlenain

rlenain commented Mar 5, 2024

I have actually trained a model that can speak multiple languages without the need for a language embedding. I guess the model learns implicitly, either from the phonemisation or from the reference audio, to speak with a specific accent.

@m-toman

m-toman commented Mar 5, 2024

@rlenain Interesting, yeah, I assume this would work; it's just a little uncomfortable to rely on it doing the right thing when you want one voice in multiple languages.

I thought maybe I could additively augment the style embedding with some language information, a bit like early adapter models: keep English at +0 for the existing model, and for the new training data in other languages add the output of a linear layer over a one-hot language encoding (see the sketch below). Just a rough idea without much more thought yet ;)
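A rough sketch of that idea: an additive language "adapter" on top of the style vector, initialized to zero and hard-pinned to zero for English so the pretrained English behaviour is untouched. All names and dimensions are illustrative, not StyleTTS 2 internals.

# Rough sketch: additive language offset on the style embedding.
import torch

class LanguageStyleAdapter(torch.nn.Module):
    def __init__(self, num_languages: int, style_dim: int, english_id: int = 0):
        super().__init__()
        self.english_id = english_id
        # A linear layer over a one-hot language code is just one learnable offset per language.
        self.offset = torch.nn.Embedding(num_languages, style_dim)
        torch.nn.init.zeros_(self.offset.weight)  # start every language at +0

    def forward(self, style: torch.Tensor, lang_id: torch.Tensor) -> torch.Tensor:
        delta = self.offset(lang_id)
        # Keep English exactly at +0 so the existing English model is unchanged.
        delta = torch.where((lang_id == self.english_id).unsqueeze(-1), torch.zeros_like(delta), delta)
        return style + delta

# Usage with made-up shapes: a batch of 4 style vectors of size 128.
adapter = LanguageStyleAdapter(num_languages=14, style_dim=128)
augmented = adapter(torch.randn(4, 128), torch.tensor([0, 3, 3, 7]))  # 0 = English in this sketch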

@Smithangshu

I'm done with most of the data preprocessing and just need people to fix the following languages.
bn: Bengali (phonemizer seems less accurate than charsiuG2P)

I am a native Bengali speaker from India. Please let me know what kind of help I can offer.

@Dmytro-Shvetsov

@rlenain, thank you for your awesome work!
Do I understand correctly that the multilingual PL-BERT is just a starting point for building StyleTTS2 models in languages other than English? Or should it work with other languages out of the box? If so, could you share which parts of the code should be modified for the inference pipeline (e.g. I assume the phonemizer for the target language, and maybe the style audio should come from a speaker of the target language)?

@rlenain

rlenain commented May 7, 2024

You need to further finetune or train from scratch with the new PL-BERT; it won't work in inference mode only. That's because if you swap it out, the outputs of the PL-BERT module will no longer be "aligned" with the other modules that expect the PL-BERT outputs as inputs.

This is generally true of any ML model -- if you change a module, you need to further train / finetune to get the model to work.
