Incorrect token count with Cyrillic #7
Hello @vermorel! Thanks for reaching out! I checked with the original tiktoken library; here is its result: for the encodings r50k_base, p50k_base, and p50k_edit the number of tokens is 549, while for cl100k_base the number of tokens is 219. I double-checked in the original tiktoken source code: "gpt-35-turbo" is mapped to the "cl100k_base" encoding. Here is a proof: https://github.com/openai/tiktoken/blob/5d970c1100d3210b42497203d6b5c1e30cfda6cb/tiktoken/model.py#L10 So I would say that the website you shared uses a "..50k.." encoding for "gpt3", and that SharpToken works correctly :) Btw, I added your example to the test plan: https://github.com/dmitry-brazhenko/SharpToken/blob/main/SharpToken.Tests/data/TestPlans.txt Please let me know if I am wrong :)
Thank you very much! Looking at the tiktoken code, I finally understand what is going on: the token count depends on the encoding, not just on the model. I am using the Azure OpenAI offering.
For reference, the original report: the online OpenAI tokenizer https://platform.openai.com/tokenizer counts 549 tokens for the piece of text below; however, SharpToken counts 219 tokens, so something seemed to be wrong.