For a model trained on the LibriSpeech 100h set, the results did not change after adding an LM, and they became worse after adding an LM pre-trained on the 960h set. What is the reason for this? Could the results be affected by the two recipes using different dictionary files? How should the 960h LM be used to improve results on 100h?

"inference_lm" is the 960h LibriSpeech LM downloaded from the official website.
> after adding LM, and after adding the LM pre-trained on the 960h data set
I am not sure in this specific case, but in general the transducer already has an implicit LM effect through its label prediction network, so the benefit of an external LM is limited (or your LM may simply not be well trained).
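When an external LM is used with a transducer, it is typically combined by shallow fusion, i.e. interpolating log-probabilities. The sketch below is a toy illustration (not ESPnet's decoder); the function name and probabilities are made up. It shows how an overweighted or mismatched LM can override the acoustic evidence, which is one way adding an LM degrades WER.

```python
import math

def shallow_fusion_rank(am_logps, lm_logps, lm_weight=0.3):
    """Rank candidate tokens by log P_am(y) + lambda * log P_lm(y).

    am_logps / lm_logps: dicts mapping token -> log-probability.
    Tokens unknown to the LM get -inf, mimicking a vocabulary mismatch.
    """
    return sorted(
        am_logps,
        key=lambda tok: am_logps[tok] + lm_weight * lm_logps.get(tok, -math.inf),
        reverse=True,
    )

# Toy beam step: the acoustics slightly prefer "cat", the LM strongly
# prefers "cap".
am = {"cat": math.log(0.5), "cap": math.log(0.4), "car": math.log(0.1)}
lm = {"cat": math.log(0.1), "cap": math.log(0.8), "car": math.log(0.1)}

print(shallow_fusion_rank(am, lm, lm_weight=0.0))  # acoustics only
print(shallow_fusion_rank(am, lm, lm_weight=1.0))  # LM dominates
```

In practice this means the LM weight (`lm_weight` in the ESPnet inference config) is worth sweeping; with a transducer, small values or even 0 can be best, since the prediction network already models label context.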