
Llama3 tokenizer decode is incorrect for ' ...' with leading space #36622

Closed

Naqu6 opened this issue Mar 9, 2025 · 1 comment
Naqu6 commented Mar 9, 2025

System Info

  • transformers version: 4.49.0
  • Platform: Linux-5.4.0-187-generic-x86_64-with-glibc2.31
  • Python version: 3.12.9
  • Huggingface_hub version: 0.29.1
  • Safetensors version: 0.5.3
  • Accelerate version: not installed
  • Accelerate config: not found
  • DeepSpeed version: not installed
  • PyTorch version (GPU?): 2.6.0+cu124 (True)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?: no
  • Using GPU in script?: yes
  • GPU type: NVIDIA L40S

Who can help?

Hi @ArthurZucker @itazap (tagging you per the instructions): when I use the Llama3 tokenizer to encode the string ' ...' and then decode the resulting tokens, I get '...' back instead of ' ...' (the leading space is missing).

I believe that decode should be the inverse of encode in this case, and it's unclear to me why it isn't.

Sorry if I'm misunderstanding something! Thanks for your time :)

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

tokenizer.decode(tokenizer.encode(" ...")[1:])  # [1:] to remove the beginning-of-sequence token

This outputs '...' (no leading space).
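One plausible explanation (an assumption on my part, not a confirmed diagnosis): by default, `tokenizer.decode` in transformers applies a `clean_up_tokenization_spaces` post-processing step that collapses a space before common punctuation (e.g. " ." becomes "."). The sketch below re-implements that heuristic in plain Python to show how it would eat the leading space of " ..." while leaving " Hello world" untouched; the function name and replacement list are illustrative, not the library's exact code.

```python
def clean_up_tokenization(out_string: str) -> str:
    # Hypothetical re-implementation of the space-before-punctuation
    # cleanup that decode() applies when clean_up_tokenization_spaces=True.
    for before, after in [(" .", "."), (" ?", "?"), (" !", "!"), (" ,", ",")]:
        out_string = out_string.replace(before, after)
    return out_string

print(clean_up_tokenization(" ..."))         # leading " ." collapses → '...'
print(clean_up_tokenization(" Hello world")) # no punctuation pattern → ' Hello world'
```

If this is indeed the cause, passing `clean_up_tokenization_spaces=False` to `decode` should preserve the leading space.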

Expected behavior

I believe that decode should be the inverse of encode.

E.g.:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")

tokenizer.decode(tokenizer.encode(" Hello world")[1:])  # [1:] to remove the beginning-of-sequence token

outputs " Hello world", as expected.

@Naqu6 Naqu6 added the bug label Mar 9, 2025
@Naqu6 Naqu6 changed the title Llama3 tokenizer decode is incorrect for '...' with leading space Llama3 tokenizer decode is incorrect for ' ...' with leading space Mar 9, 2025
Naqu6 commented Mar 9, 2025

See #35938 (comment)

@Naqu6 Naqu6 closed this as completed Mar 9, 2025