
Restore TokenizersBackend override for DeepSeek V3/R1 tokenizer dispatch #45681

Open

ArthurZucker wants to merge 1 commit into main from fix-deepseek-tokenizer-dispatch

Conversation

@ArthurZucker Collaborator

Fixes #45488. Commit cd5bcad reordered AutoTokenizer dispatch to prefer the tokenizer_class named in tokenizer_config.json over TokenizersBackend, bypassing the deliberate override in MODELS_WITH_INCORRECT_HUB_TOKENIZER_CLASS. As a result, DeepSeek V3/R1 encode/decode round-trips lost all spaces, because LlamaTokenizerFast's Metaspace pre-tokenizer clobbered the ByteLevel one declared in tokenizer.json. This PR honors the override again whenever the mapping resolves to TokenizersBackend.
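A quick repro of the symptom, as a sketch: the checkpoint id is an assumption (any DeepSeek V3/R1 repo hits the same path), and the broken output described is the behavior reported in #45488, not verbatim program output.

```python
from transformers import AutoTokenizer

# Assumed checkpoint for illustration; any DeepSeek V3/R1 repo applies.
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-V3")

text = "hello world , spaces matter"
ids = tok.encode(text, add_special_tokens=False)
print(tok.decode(ids))
# before this fix: spaces are dropped from the round-trip (see #45488)
# after this fix:  "hello world , spaces matter"
```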

…NIZER_CLASS

Fixes #45488. Commit cd5bcad reordered AutoTokenizer.from_pretrained
dispatch to prefer the specialized class named in tokenizer_config.json
over TokenizersBackend. This silently broke the deliberate override for
models in MODELS_WITH_INCORRECT_HUB_TOKENIZER_CLASS — notably the
DeepSeek V3/R1 family, whose tokenizer_config.json names
LlamaTokenizerFast. Those models were added to that set precisely
because LlamaTokenizerFast.__init__ overwrites the ByteLevel
pre-tokenizer declared in tokenizer.json with Metaspace, dropping all
spaces from encode/decode round-trips.
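A minimal sketch of that clobbering at the tokenizers level (the repo id is an assumption; the checkpoint's tokenizer.json declares a ByteLevel pre-tokenizer):

```python
from tokenizers import Tokenizer, pre_tokenizers

tok = Tokenizer.from_pretrained("deepseek-ai/DeepSeek-V3")  # assumed repo id

text = "hello world"
print(tok.decode(tok.encode(text).ids))  # ByteLevel: spaces survive the round-trip

# Simulate what LlamaTokenizerFast.__init__ does to these checkpoints:
tok.pre_tokenizer = pre_tokenizers.Metaspace()
print(tok.decode(tok.encode(text).ids))  # spaces no longer round-trip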

When the model_type is pinned to 'TokenizersBackend' in
TOKENIZER_MAPPING_NAMES, skip the named-class branch and use
TokenizersBackend directly. NLLB and other models whose mapping points
to a real specialized class (the case cd5bcad targeted) are unaffected.
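A self-contained sketch of the restored dispatch order; the table entries and function below are illustrative stand-ins that mirror the description above, not the transformers source:

```python
# Illustrative stand-in for the real mapping in transformers.
TOKENIZER_MAPPING_NAMES = {
    "deepseek_v3": "TokenizersBackend",  # deliberately pinned: hub class is wrong
    "nllb": "NllbTokenizerFast",         # mapping points to a real specialized class
}

def resolve_tokenizer_class(model_type: str, hub_tokenizer_class: str | None) -> str:
    mapped = TOKENIZER_MAPPING_NAMES.get(model_type)
    # Restored behavior: a TokenizersBackend pin beats the class named in
    # tokenizer_config.json, so MODELS_WITH_INCORRECT_HUB_TOKENIZER_CLASS
    # is honored again.
    if mapped == "TokenizersBackend":
        return "TokenizersBackend"
    # cd5bcad's preference is kept for every other model: trust the named class.
    if hub_tokenizer_class is not None:
        return hub_tokenizer_class
    return mapped

# DeepSeek V3 names LlamaTokenizerFast on the Hub, but the pin wins.
assert resolve_tokenizer_class("deepseek_v3", "LlamaTokenizerFast") == "TokenizersBackend"
# NLLB keeps cd5bcad's behavior: the named specialized class is used.
assert resolve_tokenizer_class("nllb", "NllbTokenizerFast") == "NllbTokenizerFast"
```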
@github-actions Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: auto

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.



Development

Successfully merging this pull request may close these issues.

LlamaTokenizer in v5 overrides tokenizer.json's ByteLevel pre-tokenizer with Metaspace, silently breaks DeepSeek V3/R1 family
