Vocab size to train LM and ASR #1370

What is the best vocab size for training on a huge English corpus combined with Librispeech, and which unit is better (unigram or BPE)?

Comments
We did not observe a large improvement when we used a larger BPE size, so you can just use our default Librispeech BPE size (5,000).
We did not observe a big difference. I think either is fine.
@sw005320 I tried a vocab size of 10,000 for an English dataset mixed with Librispeech, and also 5,000 (unigram unit).
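For reference, a minimal sketch of how such a subword model could be trained with SentencePiece, assuming a plain-text transcript file and the 5,000-token default discussed above. In ESPnet recipes this step is normally driven by the recipe scripts (e.g. the nbpe/bpemode settings) rather than called directly, and the file paths and model prefix here are only illustrative.

```python
import sentencepiece as spm

# Train a subword model (illustrative paths; not the exact recipe invocation).
# vocab_size=5000 matches the default Librispeech size mentioned above;
# model_type can be "unigram" or "bpe" -- the maintainers report little difference.
spm.SentencePieceTrainer.train(
    input="data/train/transcripts.txt",   # one transcript sentence per line (assumption)
    model_prefix="train_unigram5000",
    vocab_size=5000,
    model_type="unigram",                 # or "bpe"
    character_coverage=1.0,               # full coverage is typical for English text
)

# Load the trained model and tokenize a sample utterance into subword units.
sp = spm.SentencePieceProcessor(model_file="train_unigram5000.model")
print(sp.encode("speech recognition with subword units", out_type=str))
```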
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue is closed. Please re-open if needed.
Have you observed the effect of reducing nbpe or moving towards character-based ASR?
@rajeevbaalwan I have not observed any improvement. Character-based models suffer from limited contextual information.