I have a question about the MASK tokens in the input examples (Figure 3 of the paper). What role do they play in training and prediction? Unfortunately, I could not find an explanation in the paper. Specifically, what is the role of the mask tokens between the subject tokens and the relation tokens (ConceptNet) when only the object tokens need to be predicted?
Thank you very much in advance.
They let you start the generation at the same position for each example in a batch, which makes batched text generation easier. In the repo, though, generation is done example-by-example, so they wouldn't be strictly necessary there.
The method would work without them, but if you train a model with them, you can't remove them at evaluation/generation time.
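To make the alignment idea concrete, here is a minimal sketch (not the repo's actual code; the token names and the fixed subject length are assumptions for illustration) of how right-padding the subject with a MASK token lines up the relation token, and hence the generation start position, across a batch:

```python
# Illustrative sketch: pad variable-length subject token lists with a
# [MASK] token so the relation token -- and therefore the position where
# object generation begins -- is the same index for every batch example.

MASK = "[MASK]"          # hypothetical mask/padding token
MAX_SUBJECT_LEN = 5      # assumed fixed budget for subject tokens

def build_input(subject_tokens, relation_token):
    """Right-pad the subject with MASK up to MAX_SUBJECT_LEN, then
    append the relation; object tokens would be generated after this."""
    padding = [MASK] * (MAX_SUBJECT_LEN - len(subject_tokens))
    return subject_tokens + padding + [relation_token]

batch = [
    build_input(["go", "to", "mall"], "<xIntent>"),
    build_input(["person", "eats"], "<xEffect>"),
]

# Every input now has length MAX_SUBJECT_LEN + 1, so the model starts
# generating the object at the same position (index 6) for all examples.
```

This also shows why the masks can't be dropped at generation time: a model trained on inputs shaped this way expects the relation token at that fixed position.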