How to enable tokenizer padding option in feature extraction pipeline? #9671
Hi! I think you're looking for `padding="longest"`. Your result is of length 512 because you asked the tokenizer to pad up to the model's maximum input size (512 for BERT-style models). For example:

```python
>>> text = "After stealing money from the bank vault, the bank robber was seen fishing on the Mississippi river bank."
>>> features = nlp([text, text * 2], padding="longest", truncation=True, max_length=40)
```

returns `features` of size [42, 768].
Thank you very much! This method works, and I think the "longest" padding strategy is enough for my dataset. I still wonder how to get fixed-size padded sentences, though...
Well, it seems impossible for now... I just tried it, and the error message showed:
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I am trying to use the `pipeline()` API to extract features for the tokens of a sentence.
Because my sentences are not all the same length, and I am going to feed the token features into RNN-based models, I want to pad the sentences to a fixed length so that every sentence yields features of the same size.
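The goal described above — uniform feature shapes for an RNN — can be illustrated with a plain-Python sketch (a toy example of zero-vector padding; the name `pad_batch` is mine, not part of transformers):

```python
def pad_batch(batch, target_len, dim):
    """Pad (or truncate) each sequence of feature vectors to target_len.

    batch: list of sequences; each sequence is a list of `dim`-sized vectors.
    Shorter sequences are padded with zero vectors, longer ones truncated,
    so every output has shape [target_len, dim].
    """
    padded = []
    for seq in batch:
        seq = seq[:target_len]                          # truncate if too long
        pad = [[0.0] * dim] * (target_len - len(seq))   # zero vectors to append
        padded.append(seq + pad)
    return padded

# Two "sentences" with 2 and 4 token vectors of dimension 3.
batch = [[[1.0, 1.0, 1.0]] * 2, [[2.0, 2.0, 2.0]] * 4]
out = pad_batch(batch, target_len=3, dim=3)
# out[0] keeps its 2 vectors plus one zero vector; out[1] is truncated to 3.
```

In practice a real tokenizer does this at the token-id level; the sketch only shows the shape bookkeeping an RNN consumer cares about.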
Before I knew about the convenient `pipeline()` method, I was using a more general approach to get the features; it works fine but is inconvenient, like this:
Then I also need to merge (or select) the features from the returned hidden_states myself, and finally get a [40, 768] padded feature matrix for the sentence's tokens, as I want. However, as you can see, this is very inconvenient.
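A minimal sketch of that "merge the hidden states" step (my own toy reconstruction, not the author's actual code): here `hidden_states` is modeled as nested lists of shape [layers][seq_len][dim], as you would get after converting the tensors returned by a model called with `output_hidden_states=True`.

```python
def merge_layers(hidden_states, last_n=4):
    """Element-wise average of the last `last_n` layers.

    hidden_states: [num_layers][seq_len][dim] nested lists.
    Returns a [seq_len][dim] list of merged per-token features.
    """
    layers = hidden_states[-last_n:]
    seq_len, dim = len(layers[0]), len(layers[0][0])
    return [[sum(layer[t][d] for layer in layers) / len(layers)
             for d in range(dim)]
            for t in range(seq_len)]

# Toy input: 4 layers, 2 tokens, dim 3; layer i is filled with the value i.
hs = [[[float(i)] * 3 for _ in range(2)] for i in range(4)]
merged = merge_layers(hs, last_n=2)   # average of layers 2 and 3 -> 2.5
```

Averaging the last four layers is one common choice; simply selecting `hidden_states[-1]` is another. Either way, the result still has to be padded to the fixed [40, 768] shape by hand, which is the inconvenience being described.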
Compared to that, the pipeline method works very well and easily; it needs only the following five lines of code.
Then I can directly get the token features of the original-length sentence, which are of size [22, 768].
However, how can I enable the tokenizer's padding option in the pipeline?
As I saw in #9432 and #9576, I learned that we can now pass truncation options to the pipeline object (here called nlp), so I imitated that and wrote this code:
The program did not throw an error, but it just returned a [512, 768] vector...?
So is there any method to correctly enable the padding options? Thank you!
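For what it's worth, the [512, 768] result above is consistent with how the two padding strategies behave (assuming a BERT-style model whose maximum input length is 512): with `padding="max_length"` and no explicit `max_length`, the tokenizer pads to the model maximum. A toy simulation of the strategy semantics, not the transformers implementation:

```python
def padded_length(seq_lens, strategy, max_length=None, model_max=512):
    """Return the length every sequence in a batch is padded to.

    strategy "longest":    pad to the longest sequence in the batch;
    strategy "max_length": pad to `max_length`, or to the model maximum
                           (`model_max`) when max_length is not given.
    """
    if strategy == "longest":
        return max(seq_lens)
    if strategy == "max_length":
        return max_length if max_length is not None else model_max
    raise ValueError(f"unknown strategy: {strategy}")

print(padded_length([22, 30], "longest"))                    # 30
print(padded_length([22, 30], "max_length"))                 # 512 (model default)
print(padded_length([22, 30], "max_length", max_length=40))  # 40
```

Under this reading, passing `padding="max_length"` together with an explicit `max_length` would give the fixed-size output asked about, while `padding="longest"` pads only within each batch.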