greedy_until
Some tokenizers (Llama, Alpaca) return more than one token for `"\n"` even with `add_special_tokens=False`. This causes a `ValueError` in:

lm-evaluation-harness/lm_eval/base.py, Line 420 in 72b7f0c
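For context, here is a minimal sketch (not from the original report) reproducing the behavior, assuming the Hugging Face `transformers` library; the checkpoint name is just an example:

```python
from transformers import AutoTokenizer

# Example checkpoint; any Llama-family SentencePiece tokenizer should behave similarly.
tok = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

ids = tok.encode("\n", add_special_tokens=False)
print(ids)  # likely two ids, e.g. [29871, 13]: SentencePiece adds a prefix-space token
```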
One can replace it with:

```python
primary_until = self.tok_encode(until[0])[0]
```
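For reference, a plausible reconstruction of the failing pattern at that line (an assumption, not copied verbatim from base.py):

```python
# Assumed original: single-element unpacking of the encoded stop string.
# When the tokenizer returns more than one id for "\n", this raises
# "ValueError: too many values to unpack (expected 1)".
(primary_until,) = self.tok_encode(until[0])
```

Note that the suggested replacement keeps only the first token id, so a multi-token stop sequence is matched only on its first token.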
Thanks for raising this, and apologies for the bug!

In our upcoming version release, we handle multi-token stop sequences in a more principled and unified way (see here).

I've patched this and added a warning for the hf-causal model in #628, and confirmed the issue doesn't crop up in the hf-causal-experimental case.
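For readers landing here before that release: the sketch below illustrates one way to stop on multi-token sequences by matching at the string level, using `transformers`' `StoppingCriteria`. It is an illustration under stated assumptions, not the harness's actual implementation; all names are hypothetical.

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class MultiTokenStop(StoppingCriteria):
    """Stop generation once any stop string appears in the decoded tail."""

    def __init__(self, tokenizer, stop_strings, prompt_len):
        self.tokenizer = tokenizer
        self.stop_strings = stop_strings  # e.g. ["\n"]
        self.prompt_len = prompt_len      # number of prompt tokens to skip

    def __call__(self, input_ids, scores, **kwargs):
        # Decode only the newly generated tokens and compare strings, so
        # multi-token encodings of a stop sequence are matched correctly.
        tail = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return any(s in tail for s in self.stop_strings)

# Usage (illustrative):
# model.generate(**inputs, stopping_criteria=StoppingCriteriaList(
#     [MultiTokenStop(tok, ["\n"], inputs["input_ids"].shape[1])]))
```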