Issue:
We use a max position embeddings of 256 in the decoder, which creates an issue in `model.generate()`.
See stack trace:
Solution:
This can be fixed by setting it to a higher number. However, it feels like only 256 positions should be needed, corresponding to the inputs to the decoder (bos + 255 tokens) used to predict the 256 outputs.
I'm currently fixing it with this commit, and added the change directly into our model in commit ebac379, but ideally we could fix it directly in the `transformers` library.
Not sure if this has any negative impact, @patil-suraj.
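For reference, a minimal sketch of the workaround, assuming a vanilla BART-style config from `transformers` (the class choice, the buffer size, and the dummy input are placeholders, not our actual model or data):

```python
import torch
from transformers import BartConfig, BartForConditionalGeneration

# Workaround sketch: give the decoder a small buffer of extra position slots so
# generate() never indexes past the position embedding table, even though only
# 256 positions (bos + 255 tokens) should in principle be needed for 256 outputs.
config = BartConfig(max_position_embeddings=256 + 64)  # buffer size is arbitrary
model = BartForConditionalGeneration(config)

# Dummy conditioning input just to exercise generate(); not our real inputs.
input_ids = torch.tensor([[config.bos_token_id, config.eos_token_id]])

# Force a full-length decode of 256 positions, as in our use case.
output_ids = model.generate(input_ids, max_length=256, min_length=256, do_sample=True)
```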