About the English cmu_lts_model.c and cmu_lex_data_raw.c #35
Comments
@zshy1205 I found the same issue. Not sure why (maybe for a smaller footprint). But you can extend or create a larger cmulex dictionary by following this blog: https://boredomed.wordpress.com/2019/03/07/festvox-to-flite-tts-conversion/. There are a certain number of pronunciation errors even in the latest cmudict, and many words are not covered by the dictionary (that's why we need a letter-to-sound model to guess pronunciations). I suppose you could manually update cmudict to fix them.
@fxwderrick Thanks. I have another question: some words have two or more phoneme lists. When you train the G2P model with cmudict, how do you handle this case?
@zshy1205 If you follow the steps in the blog, you will find that the raw cmudict needs to be preprocessed first, for example by removing polyphones (alternate pronunciations). The input dictionary used for G2P training then contains only one pronunciation per word.
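For reference, a minimal sketch of that preprocessing step in Python, assuming a plain-text cmudict file in the standard format ("WORD  PH1 PH2 ..."), with comment lines starting with ";;;" (as in recent cmudict releases) and alternate pronunciations marked by a "(2)", "(3)", ... suffix on the word; only the first pronunciation of each word is kept. The filename is hypothetical:

```python
# Minimal sketch: keep only the first pronunciation of each word from a raw
# cmudict file. Assumes the standard cmudict text format: "WORD  PH1 PH2 ...",
# comments starting with ";;;", and variants written as "WORD(2)", "WORD(3)", ...
import re

def load_single_pron_dict(path):
    lexicon = {}
    with open(path, encoding="latin-1") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith(";;;"):
                continue
            word, phones = line.split(None, 1)
            # Skip alternate pronunciations such as "READ(2)"
            if re.search(r"\(\d+\)$", word):
                continue
            lexicon[word.lower()] = phones.split()
    return lexicon

if __name__ == "__main__":
    lex = load_single_pron_dict("cmudict-0.4.txt")  # hypothetical filename
    print(len(lex), "words;", "hello ->", lex.get("hello"))
```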
@fxwderrick Thank you very much, I will read the blog carefully.
@zshy1205 @fxwderrick I deleted the polyphones, but the error rate is still high.
I want to know how to use the allowables and cmudict (0.4) to reproduce the good result (the same as ./bin/t2p).
@zshy1205 I checked the error rate of the CART. On the training data the word error rate is about 60%. I also used the sklearn package to reproduce this process in Python, and the word error rate on the training data is again about 60%, so I think that word error rate is normal.
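This is not the flite/festvox training code, only a rough sketch of the kind of sklearn reproduction described above. It assumes the dictionary has already been aligned so that each letter maps to exactly one phone (or an epsilon); that alignment is done by the festvox LTS scripts and is not shown here. A single decision tree is trained over fixed-width letter context windows (the real pipeline trains one CART per letter):

```python
# Rough sketch of reproducing letter-to-sound training with a decision tree,
# assuming pre-aligned (letters, phones) pairs with len(letters) == len(phones);
# the real festvox pipeline trains one CART per letter after alignment.
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder

PAD = "#"      # padding symbol at word boundaries
WINDOW = 3     # letters of context on each side (feature window = 7 letters)

def windows(letters):
    padded = [PAD] * WINDOW + list(letters) + [PAD] * WINDOW
    return [padded[i:i + 2 * WINDOW + 1] for i in range(len(letters))]

def train_lts(aligned_pairs):
    """aligned_pairs: list of (letters, phones) with equal lengths."""
    X, y = [], []
    for letters, phones in aligned_pairs:
        X.extend(windows(letters))
        y.extend(phones)
    enc = OneHotEncoder(handle_unknown="ignore")
    clf = DecisionTreeClassifier(min_samples_leaf=3)
    clf.fit(enc.fit_transform(X), y)
    return enc, clf

def predict(enc, clf, word):
    return list(clf.predict(enc.transform(windows(word))))

if __name__ == "__main__":
    # Tiny toy alignment, for illustration only ("_" would mark an epsilon phone).
    data = [("cat", ["k", "ae", "t"]), ("cab", ["k", "ae", "b"])]
    enc, clf = train_lts(data)
    print(predict(enc, clf, "cat"))
```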
Replacing tools/make_lts.scm with the corresponding file from the flite-1.4 release tarball (http://cmuflite.org/packed/flite-1.4/flite-1.4-release.tar.bz2) before converting the LTS model to C format solves the problem.
I have a question; can you help me?
Why did you cut down cmudict so that there are only 36964 English words in cmu_lex_data_raw.c?
I know cmudict contains about 130000 English words. I tested cmu_lts_model on the 36964 words in cmu_lex_data_raw.c and it performed poorly, with about a 90% word error rate. Why does this happen? (Is cmu_lts_model trained on a cmudict with those 36964 words removed?) Can you help me? Thanks.
Forgive my poor English.
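For what it's worth, a minimal sketch of how such a word error rate could be computed: predict a pronunciation for every word in the lexicon and count a word as wrong if the predicted phone sequence differs from the reference. The predict_phones argument is a placeholder for whatever LTS front end is being tested (e.g. ./bin/t2p output or an sklearn model); it is not part of flite's API:

```python
# Minimal word-error-rate sketch: a word counts as an error if the predicted
# phone sequence is not exactly the reference pronunciation from the lexicon.
def word_error_rate(lexicon, predict_phones):
    """lexicon: dict word -> list of phones; predict_phones: word -> list of phones."""
    errors = 0
    for word, ref in lexicon.items():
        if predict_phones(word) != ref:
            errors += 1
    return errors / len(lexicon)

if __name__ == "__main__":
    # Toy example with a hypothetical predictor; replace with real LTS output.
    lex = {"cat": ["k", "ae", "t"], "dog": ["d", "ao", "g"]}
    fake_predict = lambda w: ["k", "ae", "t"]  # always predicts "cat"
    print(word_error_rate(lex, fake_predict))  # 0.5
```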