hi @nlpyang
I have some questions.

1. Equation (15) does not seem to be used, so why do you propose it?
2. Regarding equations (13) and (14): I don't understand why you do this. Can you give some explanation? In addition, the a^z and b^z from these equations do not appear in your code:
```python
scores = self.linear_keys(key)
value = self.linear_values(value)
scores = shape(scores, 1).squeeze(-1)
value = shape(value)
# key_len = key.size(2)
# query_len = query.size(2)
#
# scores = torch.matmul(query, key.transpose(2, 3))
if mask is not None:
    mask = mask.unsqueeze(1).expand_as(scores)
    scores = scores.masked_fill(mask, -1e18)
```
3. You also don't compute the scores the way the paper describes. Why?

Best Wishes!
Equation (16) has a typo: it should use \hat{a}, not a.
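To spell that out (this rendering is reconstructed from the discussion, so treat the exact indexing as an assumption; z indexes heads, i indexes positions):

```math
\hat{a}_i^z = \frac{\exp(a_i^z)}{\sum_{i'} \exp(a_{i'}^z)} \qquad \text{(eq. 15: normalize the scalar scores over positions)}
```
```math
\mathrm{head}^z = \sum_i \hat{a}_i^z \, b_i^z \qquad \text{(eq. 16, corrected: } \hat{a} \text{, not } a\text{)}
```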
You can think of a and b as the scores and values, which do appear in the code:
hiersumm/src/abstractive/attn.py, lines 241–242 at commit 476e6bf (presumably the `scores = self.linear_keys(key)` and `value = self.linear_values(value)` lines quoted in the question above).
I can't see why this is different from the paper.
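For anyone mapping the equations to the code, here is a minimal self-contained sketch of multi-head pooling under the scores/values reading above. Only the `linear_keys` / `linear_values` names come from the quoted snippet; the class name, shapes, and everything else are illustrative, not the repo's actual implementation:

```python
from typing import Optional

import torch
import torch.nn as nn


class MultiHeadPooling(nn.Module):
    """Illustrative multi-head pooling in the spirit of eqs. (13)-(16)."""

    def __init__(self, model_dim: int, heads: int):
        super().__init__()
        assert model_dim % heads == 0
        self.heads = heads
        self.dim_per_head = model_dim // heads
        # eq. (13): a scalar score a_i^z per position i and head z
        self.linear_keys = nn.Linear(model_dim, heads)
        # eq. (14): a value vector b_i^z per position i and head z
        self.linear_values = nn.Linear(model_dim, heads * self.dim_per_head)

    def forward(self, x: torch.Tensor,
                mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        batch, seq_len, _ = x.size()
        # scores a_i^z: (batch, heads, seq_len)
        scores = self.linear_keys(x).transpose(1, 2)
        # values b_i^z: (batch, heads, seq_len, dim_per_head)
        value = (self.linear_values(x)
                 .view(batch, seq_len, self.heads, self.dim_per_head)
                 .transpose(1, 2))
        if mask is not None:
            # mask: (batch, seq_len), True at padded positions
            scores = scores.masked_fill(mask.unsqueeze(1), -1e18)
        # eq. (15): \hat{a}^z = softmax of the scores over positions
        attn = torch.softmax(scores, dim=-1)
        # eq. (16), with the \hat{a} fix: head^z = sum_i \hat{a}_i^z * b_i^z
        head = torch.matmul(attn.unsqueeze(2), value).squeeze(2)
        # concatenate the heads back into one vector per sequence
        return head.reshape(batch, self.heads * self.dim_per_head)
```

For example, `MultiHeadPooling(model_dim=256, heads=8)` pools a `(batch, seq_len, 256)` input into a single `(batch, 256)` vector, which matches the role of pooling a token sequence into one paragraph representation.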