
Update quicktour docs to showcase the use of truncation #8975

Merged
1 commit merged into huggingface:master from navjotts:update-quick-tour on Dec 7, 2020

Conversation

navjotts (Contributor) commented on Dec 7, 2020

What does this PR do?

Currently, running the tokenizer batch example on https://huggingface.co/transformers/quicktour.html gives an error:

Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.

This PR fixes the above by passing the max_length param explicitly (instead of relying on the model having a predefined maximum length, which is not the case for all models).

The fix also clarifies the statement in the docs directly above this example:

If your goal is to send them through your model as a batch, you probably want to pad them all to the same length, truncate them to the maximum length the model can accept and get tensors back
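For context, a minimal sketch of the kind of batch-tokenization call the quicktour describes, with max_length passed explicitly alongside truncation. The checkpoint name, sentences, and max_length value here are illustrative placeholders, not necessarily the ones used in the quicktour page:

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; any tokenizer from the Hub is called the same way.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

batch = tokenizer(
    ["We are very happy to show you the 🤗 Transformers library.",
     "We hope you don't hate it."],
    padding=True,         # pad all sequences in the batch to the same length
    truncation=True,      # truncate sequences longer than max_length
    max_length=512,       # passed explicitly so truncation always has a target length
    return_tensors="pt",  # return PyTorch tensors
)
print(batch["input_ids"].shape)
```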

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).

Who can review?

@sgugger

sgugger (Collaborator) left a comment

Thanks, this looks good indeed!

@sgugger sgugger merged commit c108d0b into huggingface:master Dec 7, 2020
@navjotts navjotts deleted the update-quick-tour branch December 8, 2020 04:40