I am dealing with variable-length sequences, so I need to mask padding tokens. However, the `_viterbi_decode_nbest` function produces the same paths when the mask contains zeros. Here is a snippet to reproduce:
```python
import torch
from torchcrf import CRF

num_tags = 5  # number of tags is 5
model = CRF(num_tags)

seq_length = 3  # maximum sequence length in a batch
batch_size = 2  # number of samples in the batch
emissions = torch.randn(seq_length, batch_size, num_tags)

# Computing log likelihood
tags = torch.tensor([[2, 3], [1, 0], [3, 4]], dtype=torch.long)  # (seq_length, batch_size)
model(emissions, tags)

mask = torch.tensor([[1, 1], [1, 1], [0, 1]])  # 0 marks a padding position

# Decoding
print(model.decode(emissions, mask=mask))  # decoding the best path
print(model.decode(emissions, mask=mask, nbest=3))  # decoding the top 3 paths
```
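For context, a mask like the one above is usually derived from per-sample sequence lengths rather than written by hand. A minimal sketch of that construction (the helper name `lengths_to_mask` is mine, not part of torchcrf):

```python
import torch

def lengths_to_mask(lengths, max_len):
    # Build a (max_len, batch_size) mask from per-sample lengths:
    # positions (max_len, 1) broadcast against lengths (1, batch_size);
    # entries before each sample's length are 1, padding positions are 0.
    positions = torch.arange(max_len).unsqueeze(1)
    return (positions < torch.tensor(lengths).unsqueeze(0)).to(torch.uint8)

mask = lengths_to_mask([2, 3], max_len=3)
print(mask.tolist())  # → [[1, 1], [1, 1], [0, 1]]
```

This reproduces the mask used in the snippet: the first sample has length 2 (one padded step), the second has length 3.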