
default model.generate() creates an issue due to position embedding layer #136

Closed
borisdayma opened this issue Feb 10, 2022 · 1 comment

borisdayma (Owner) commented Feb 10, 2022

Issue:

We set max_position_embeddings to 256 in the decoder, which causes model.generate() to fail.

See stack trace:

[stack trace screenshots]
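To make the failure mode concrete, here is a minimal standalone sketch (illustrative only, not the repo's code; the table size is the only number taken from this issue, the hidden size is made up): a 256-row position table covers positions 0..255, so a generation step that reaches position 256 indexes past the end.

```python
import numpy as np

# Illustrative sketch, not the repo's code.
max_position_embeddings = 256   # from the decoder config discussed above
d_model = 1024                  # made-up hidden size for the example

position_table = np.zeros((max_position_embeddings, d_model))

# Each decoding step t looks up the embedding for position t.
# One step too many indexes past the end of the table:
try:
    position_table[256]
except IndexError:
    print("position 256 exceeds a table with valid indices 0..255")
```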

Solution:

This can be fixed by setting it to a higher number. However, it feels like only 256 positions should be needed, corresponding to the decoder inputs (bos + 255 tokens) used to predict the 256 outputs; see the sketch below.
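As a hedged illustration of the "higher number" workaround, using a BART-style config from transformers (the value 512 is an arbitrary example, not necessarily what the repo uses), together with the position arithmetic from the paragraph above:

```python
from transformers import BartConfig

# Workaround sketch: give the decoder headroom beyond 256 positions.
# 512 is an arbitrary example value, not necessarily what the repo uses.
config = BartConfig(max_position_embeddings=512)

# The tighter bound argued above: the decoder only ever sees
# bos + 255 previously generated tokens when predicting the 256 outputs.
outputs = 256
decoder_inputs = 1 + (outputs - 1)  # bos + 255 tokens
assert decoder_inputs == 256        # positions 0..255 would suffice
```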

I'm currently fixing it with this commit, and added the change directly into our model in commit ebac379, but ideally we would fix it directly in the transformers library.

Not sure if this has any negative impact; cc @patil-suraj.

borisdayma (Owner, Author) commented:

This has been fixed with a decrease in the dimension of past_key_values, as per the commit mentioned above.
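For context, a rough sketch of the shape of that fix, based only on the one-line description above (the helper name and logic here are assumptions, not the actual commit): the cached decoding length is capped so cached positions never exceed the embedding table.

```python
# Assumed sketch, not the actual commit: cap the cached sequence length
# so position lookups during cached decoding stay within the table.
MAX_POSITION_EMBEDDINGS = 256

def capped_cache_length(requested_max_length: int) -> int:
    # The last step predicts token 256 from bos + 255 inputs, so the
    # cache never needs more rows than the position table has.
    return min(requested_max_length, MAX_POSITION_EMBEDDINGS)
```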
