I am trying to use a KenLM language model to improve my results, but every time I enable it, the decoder produces garbage output. Since alpha is supposed to weight the LM, I tried setting it lower, but even at 0 it still produces garbage. The only way I can get sensible output is by removing the LM path. Shouldn't the output be identical with and without the LM if alpha is set to 0?
Here is the decoder initialization:
decoder = CTCBeamDecoder(
    labels="".join([local_vocab.index2word[i][0] for i in range(local_vocab.num_words)]),
    model_path="test.arpa",
    alpha=0.5,
    beta=0.9,
    beam_width=100,
    blank_id=local_vocab.num_words - 1,
)
Here is the usage:
beam_results, beam_scores, timesteps, out_lens = decoder.decode(F.softmax(output, dim=-1).transpose(0, 1))
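For reference, this is how I turn the top beam back into text to inspect the output (a minimal sketch, reusing the same local_vocab mapping as in the initialization above):
# Top beam of the first batch element; beam_results is (batch, beam_width, T)
best = beam_results[0][0][:out_lens[0][0]]
decoded = "".join(local_vocab.index2word[i.item()][0] for i in best)
print(decoded)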
Is this expected behavior?
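This is the sanity check I would expect to pass, written as a sketch against the parlance ctcdecode package (not verified on this exact setup; output and local_vocab are the variables from my snippet above, and I set beta to 0 as well in case the word bonus also interacts with the LM path):
import torch
import torch.nn.functional as F
from ctcdecode import CTCBeamDecoder

labels = "".join([local_vocab.index2word[i][0] for i in range(local_vocab.num_words)])
common = dict(alpha=0, beta=0, beam_width=100, blank_id=local_vocab.num_words - 1)
# With alpha=0 the LM score is weighted by zero, so these two decoders
# should in principle return the same beams.
decoder_lm = CTCBeamDecoder(labels, model_path="test.arpa", **common)
decoder_no_lm = CTCBeamDecoder(labels, **common)

probs = F.softmax(output, dim=-1).transpose(0, 1)
res_lm, _, _, lens_lm = decoder_lm.decode(probs)
res_no_lm, _, _, lens_no_lm = decoder_no_lm.decode(probs)

# Compare the top beam of the first batch element
print(torch.equal(res_lm[0][0][:lens_lm[0][0]], res_no_lm[0][0][:lens_no_lm[0][0]]))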