problem with interactive.py #5
When translating a sentence at inference time, we do not touch the LM at all. We first use the NMT encoder to generate hidden states and then run the NMT decoder; you should look carefully at the fairseq decoding steps. I have now tested a sentence and fixed a bug, so you can use interactive.py just like in the original fairseq, without any extra command-line parameters.
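For reference, a minimal invocation of interactive.py might look like the sketch below. The data directory, checkpoint path, language pair, and beam size are illustrative assumptions, not values taken from this issue.

```bash
# Hypothetical example: translate interactively with a trained checkpoint.
# Paths, the language pair, and the beam size are assumptions, not from this issue.
python interactive.py data-bin/iwslt14.de-en \
    --path checkpoints/checkpoint_best.pt \
    --source-lang de --target-lang en \
    --beam 5 --remove-bpe
```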
@teslacool assuming that I have the following checkpoint models: I got these checkpoints by running the following preprocessing and training commands:
Note that all these scripts succeeded.
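For readers without the original commands, a rough sketch of standard fairseq-style preprocessing and training is shown below. Paths, the language pair, and hyperparameters are assumptions for illustration only, and any LM-specific options of this repository are omitted because they are not visible in this thread.

```bash
# Hypothetical sketch of standard fairseq-style preprocessing and training;
# paths and hyperparameters are illustrative assumptions, and this repository's
# LM-specific options are intentionally not shown.
python preprocess.py --source-lang de --target-lang en \
    --trainpref data/train --validpref data/valid --testpref data/test \
    --destdir data-bin/iwslt14.de-en

python train.py data-bin/iwslt14.de-en \
    --arch transformer_iwslt_de_en \
    --optimizer adam --lr 0.0005 --max-tokens 4000 \
    --save-dir checkpoints
```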
run
you will get
if your model is for the de-en task.
After your fix, it works. Thank you!
I have a few more questions:
Yes, you need the dict to map each token to its id in the embedding matrix.
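For context, the fairseq dictionary file is a plain-text list of "token count" lines, and, as far as I recall, a token's embedding index follows the file order after a small number of reserved special symbols (bos, pad, eos, unk). The path below is an assumption used only for illustration.

```bash
# Hypothetical path; inspect the first entries of a fairseq dictionary file.
# Each line is "<token> <count>"; embedding indices follow this order after
# the reserved special symbols.
head -n 5 data-bin/iwslt14.de-en/dict.de.txt
```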
So it is correct that my output looks like this:
Actually, I get:
That does not look correct. I do not know why your sentences consist of words following "Because".
Sorry, my fault. I am closing the issue for now.
I followed the instructions to preprocess and train an engine with your code, both with and without srclm and trglm, and I succeeded. I trained two models: one with srclm and tgtlm, and one without.
Then I tried to translate with either of the two models, but in both cases I failed.
Here are the two commands I used:
What's wrong?
Which is the correct command to activate both the src and tgt LMs, and which one disables them?