GGML_ASSERT: llama.cpp:3817: unicode_cpts_from_utf8(word).size() > 0 #6132
Comments
Still an issue.

To confirm: we're also seeing the exact same issue.

Seeing this with https://huggingface.co/fblgit/UNA-ThePitbull-21.4-v1, which has the same \u0000 token. I wonder if the code needs a specific catch for it.

This issue was closed because it has been inactive for 14 days since being marked as stale.

I added some naive handling of the \u0000 token (to basically ignore it), but this wasn't sufficient, so obviously something more comprehensive is needed.
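The "naive handling" idea above can be sketched as a pre-check over the vocabulary before loading: flag any token that is empty or contains the NUL code point (`\u0000`), since such tokens are what appear to trip the `unicode_cpts_from_utf8(word).size() > 0` assertion. This is a hypothetical diagnostic helper, not part of llama.cpp; the function name and logic are illustrative assumptions.

```python
# Hypothetical helper (NOT part of llama.cpp): scan a vocabulary for tokens
# that are empty or contain the NUL code point (\u0000), i.e. the kind of
# token reported to trigger the GGML_ASSERT in the tokenizer.

def find_suspect_tokens(vocab):
    """Return (index, token) pairs for empty tokens or tokens containing \\u0000."""
    suspects = []
    for i, tok in enumerate(vocab):
        if tok == "" or "\u0000" in tok:
            suspects.append((i, tok))
    return suspects

# Example with a toy vocabulary containing one NUL token and one empty token.
vocab = ["hello", "\u0000", "world", ""]
print(find_suspect_tokens(vocab))  # [(1, '\x00'), (3, '')]
```

Simply skipping or remapping the flagged tokens is the "ignore it" approach described above; as the commenter notes, that was not sufficient on its own, so this is best treated as a diagnostic rather than a fix.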
Hi,
I am trying to convert and quantize this model: https://huggingface.co/saltlux/luxia-21.4b-alignment-v1.0/
But I get this error when I use it for inference:
I've never seen this error before and I cannot find anything remotely similar. What could cause it?