
Is the expected LM at word level or at token level? #13

Closed
kafan1986 opened this issue Apr 22, 2022 · 1 comment

Comments

@kafan1986

I wanted to confirm whether the LM is expected to be at word level or at token level. KenLM models are usually trained at word level, but in our case we are using a tokenizer (n=1000). Should I train the LM at token level or at word level?

@burchim
Owner

burchim commented May 15, 2022

Hi,

The LM used for rescoring should have the same encoding as the Conformer model.
We used the NVIDIA NeMo toolkit to train a token-level 6-gram for our models:
https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html

The trick is to tokenize the training corpus with the corresponding BPE tokenizer and then replace each token with a special character, creating a new corpus. This new corpus can be used to train a BPE n-gram:
https://github.com/NVIDIA/NeMo/blob/stable/scripts/asr_language_modeling/ngram_lm/train_kenlm.py
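
For illustration, here is a minimal sketch of that remapping step, assuming a SentencePiece BPE model. The file names and the offset value are illustrative (the offset is only there to keep each mapped character printable), not taken from this repo or from NeMo's script:

```python
import sentencepiece as spm

TOKEN_OFFSET = 100  # arbitrary shift so every token id maps to a printable character

sp = spm.SentencePieceProcessor(model_file="tokenizer.model")  # the Conformer's BPE model

with open("corpus.txt", encoding="utf-8") as src, \
     open("corpus_tokens.txt", "w", encoding="utf-8") as dst:
    for line in src:
        ids = sp.encode(line.strip())  # sentence -> BPE token ids
        # One unicode character per token, space-separated,
        # so KenLM treats each BPE token as a "word".
        dst.write(" ".join(chr(i + TOKEN_OFFSET) for i in ids) + "\n")
```

The remapped corpus can then be passed to KenLM's lmplz (which the NeMo script wraps) with order 6 to build the token-level 6-gram.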

I added the missing 6-gram to the shared folders if you would like to recover the paper results.
You should be able to access it here.

Best,
Maxime

@burchim burchim closed this as completed Jun 3, 2022