
Problems about HCLG.fst needed by "train_transformer_se.py" #20

Closed · SiyuanWei opened this issue Nov 29, 2019 · 2 comments

Comments

SiyuanWei commented Nov 29, 2019

Hi, I'm implementing your paper "A Transformer with Interleaved Self-Attention and Convolution for Hybrid Acoustic Models", and I've got some questions bothering me.
In the Kaldi setup, sequence training needs a phone-level language model and a denominator FST, which is called HCP in this blog post (https://desh2608.github.io/2019-05-21-chain/). In your code, I see that the script "train_transformer_se.py" needs a directory that contains HCLG.fst.
Is the HCLG.fst needed here the same thing as the HCP built from a phone-level LM?
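
To make sure I understand the two objects I am comparing, here is my understanding as a minimal sketch of how each graph is built in a stock Kaldi recipe (the data/ and exp/ paths are just hypothetical placeholders, nothing taken from your repo):

```sh
# HCLG.fst: the full decoding graph, composed from the HMM topology (H),
# context dependency (C), the lexicon (L), and a *word-level* LM (G).
utils/mkgraph.sh data/lang_test exp/tri_final exp/tri_final/graph
# -> exp/tri_final/graph/HCLG.fst

# Chain denominator FST ("HCP"): built from a *phone-level* LM instead.
# phones.*.gz hold phone sequences extracted from the training alignments.
chain-est-phone-lm "ark:gunzip -c exp/chain/phones.*.gz |" exp/chain/phone_lm.fst
chain-make-den-fst exp/chain/tree exp/chain/0.trans_mdl exp/chain/phone_lm.fst \
  exp/chain/den.fst exp/chain/normalization.fst
```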

jzlianglu (Owner) commented

Hi, you are talking about LF-MMI training in Kaldi. The SE training script for the transformer model uses a lattice-based approach. For LF-MMI training, please refer to train_chain.py in the "bin" folder. Please note that the chain model training script currently only supports a batch size of 1; I will update the recipe to support larger batch sizes in the near future.
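
To make the difference concrete: both approaches optimize an MMI-style sequence objective, roughly (standard notation, nothing repo-specific):

$$\mathcal{F}_{\mathrm{MMI}} \;=\; \sum_{u} \log \frac{p(\mathbf{X}_u \mid w_u)\, P(w_u)}{\sum_{w} p(\mathbf{X}_u \mid w)\, P(w)}$$

In lattice-based sequence training (what train_transformer_se.py does), the denominator sum is approximated over lattices produced by decoding with HCLG.fst, which is why the script asks for a graph directory. In LF-MMI (train_chain.py), the denominator is computed over the phone-level denominator graph instead, so no HCLG.fst is needed at training time.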

SiyuanWei (Author) commented

Oh, thanks!!!
