I hope to reproduce the results of this great paper: "Improved training of end-to-end attention models for speech recognition".
I need to train two models: 1) the seq2seq model, and 2) an external LM, and both use the same vocabulary file.
I would like to know which corpus is used to generate the vocabulary file ("trans.bpe.vocab.lm.txt"):
is it the speech transcription corpus (only ~40 MB),
or the corpus of the external LM (e.g., the LibriSpeech LM corpus file is ~4 GB)?
I hope I have expressed this clearly. Looking forward to your reply, thanks.
Thanks for the interest in our work!
For LibriSpeech, the full setup to download the data, prepare it, create the BPE codes and vocabulary, and train the seq2seq model is here (currently excluding the LM, although the LM training config is here). That should answer how to train the seq2seq model and cover everything about the BPE.
The LM is trained on the official LM data. The same BPE as in training the seq2seq model is used for the LM. (@kazuki-irie might be able to give additional details, as he trained the LM.)
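For readers unfamiliar with how such a BPE vocabulary is derived, here is a minimal, simplified sketch of the byte-pair-encoding merge loop (the classic Sennrich-style algorithm). This is only an illustration; the actual setup scripts referenced above use their own tooling, and details such as tie-breaking and end-of-word handling may differ.

```python
from collections import Counter

def learn_bpe(corpus_words, num_merges):
    """Learn BPE merge operations from a list of words (a toy sketch).

    Returns the ordered list of merges and the resulting subword vocabulary.
    """
    # Represent each word as a tuple of symbols, with an end-of-word marker.
    vocab = Counter()
    for word in corpus_words:
        vocab[tuple(word) + ('</w>',)] += 1

    merges = []
    for _ in range(num_merges):
        # Count all adjacent symbol pairs, weighted by word frequency.
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        # Merge the most frequent pair everywhere it occurs.
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab

    # The final subword vocabulary is every symbol remaining after merging.
    subwords = set()
    for symbols in vocab:
        subwords.update(symbols)
    return merges, sorted(subwords)
```

The point relevant to the question: the merges and the resulting vocabulary depend on which corpus you feed in, which is why the same learned BPE must be applied to both the seq2seq transcriptions and the LM training text.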