The vocab.json shipped with the Qwen3 model does not include the "added_tokens" entries, so it cannot be loaded into the Tokenizer.

<img width="420" height="647" alt="Image" src="https://github.com/user-attachments/assets/938bce80-4d52-46dd-bd90-39ab82a59095" />
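
As a possible workaround, here is a minimal sketch that merges the added-token entries back into the plain vocab mapping before loading. It assumes a standard Hugging Face checkpoint layout where `tokenizer.json` carries an `added_tokens` list of `{"id": ..., "content": ...}` objects alongside the bare `{token: id}` mapping in `vocab.json`; the output file name is illustrative:

```python
import json

# Load the plain {token: id} mapping, which lacks the added tokens.
with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)

# Pull the added-token entries from tokenizer.json (assumed layout:
# a top-level "added_tokens" list of {"id": ..., "content": ...} dicts).
with open("tokenizer.json", encoding="utf-8") as f:
    added_tokens = json.load(f).get("added_tokens", [])

# Merge each added token into the vocab, keeping existing entries intact.
for tok in added_tokens:
    vocab.setdefault(tok["content"], tok["id"])

# Write out a patched vocab that includes the added tokens.
with open("vocab_with_added_tokens.json", "w", encoding="utf-8") as f:
    json.dump(vocab, f, ensure_ascii=False)
```

This is only a sketch of one way to reconstruct the missing entries; the proper fix would be for the loader to read the added tokens directly, or for the checkpoint to ship a complete vocabulary.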