When force decoding in a seq2seq model (config file similar to forced.yaml), it appears that the following line (https://github.com/neulab/xnmt/blob/master/xnmt/translator.py#L130)

trg_words = [self.trg_vocab[w] for w in output_actions[1:]]

truncates the target sequence from the first unit. Correcting it to

trg_words = [self.trg_vocab[w] for w in output_actions[0:]]

solves my problem (I am dumping attention matrices in a plain text format), but I wonder whether there is another reporting scenario in which the truncation is needed?
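
For illustration, here is a minimal sketch of the off-by-one effect. The vocabulary, IDs, and the assumption that the leading action is normally a sentence-start token are mine for the example, not taken from xnmt's actual data structures or from the issue:

# Hypothetical mapping from action IDs to target words; xnmt's real
# trg_vocab is a Vocab object, but plain indexing behaves the same way here.
trg_vocab = {0: "<s>", 1: "hello", 2: "world"}

# Open-ended decoding (assumed case): output_actions starts with the
# sentence-start token, so [1:] strips only that marker.
output_actions = [0, 1, 2]
print([trg_vocab[w] for w in output_actions[1:]])  # ['hello', 'world']

# Forced decoding: if the forced reference carries no leading <s>,
# [1:] silently drops the first real word, while [0:] keeps everything.
output_actions = [1, 2]
print([trg_vocab[w] for w in output_actions[1:]])  # ['world']
print([trg_vocab[w] for w in output_actions[0:]])  # ['hello', 'world']

If that assumption holds, the [1:] slice is correct only when the decoded sequence is guaranteed to begin with the marker, which may not be true in the forced-decoding path.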