
Incorrect token count with Cyrillic #7

Closed
vermorel opened this issue Jun 24, 2023 · 2 comments

Comments

@vermorel

The online OpenAI tokenizer https://platform.openai.com/tokenizer counts 549 tokens for the piece of text below:

В цепочках поставок кейс-стадии, когда называются одна или несколько сторон, страдают от серьезных конфликтов интересов. Компании и их поддерживающие поставщики (программное обеспечение, консалтинг) имеют заинтересованность в представлении результата в положительном свете. Кроме того, фактические цепочки поставок обычно получают пользу или пострадают от случайных условий, которые никак не связаны с качеством их исполнения. Персонажи цепочки поставок - это методологический ответ на эти проблемы.

However, SharpToken counts 219 tokens. Something is going wrong.
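For context, the size of the gap is roughly what byte-level tokenization would predict. A minimal stdlib-only sketch, assuming (as discussed below in this thread) that the older r50k/p50k encodings fall back to roughly one token per UTF-8 byte for Cyrillic, while cl100k_base has dedicated multi-character Cyrillic merges:

```python
# Sketch: why byte-level fallback roughly doubles Cyrillic token counts.
# Assumption (hedged): older encodings emit ~1 token per UTF-8 byte here.
text = "цепочка поставок"  # "supply chain", a phrase from the sample above

n_chars = len(text)                   # code points
n_bytes = len(text.encode("utf-8"))  # UTF-8 bytes

# Each Cyrillic letter occupies 2 bytes in UTF-8, so a tokenizer with no
# Cyrillic merges emits about two tokens per letter -- consistent with
# 549 tokens (p50k-style) vs 219 tokens (cl100k-style) for the full text.
print(n_chars, n_bytes)
```

This is only an order-of-magnitude argument, not a tokenizer implementation; the real counts depend on the learned merge tables.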

@dmitry-brazhenko
Owner

Hello @vermorel!

Thanks for reaching out!

I checked with the original tiktoken library; here is its result:
(screenshot: tiktoken token counts per encoding, 2023-06-25)

As far as I can see, for the encodings r50k_base, p50k_base, and p50k_edit the number of tokens is 549, while for cl100k_base it is 219.

I double-checked in the original tiktoken source code: "gpt-35-turbo" is mapped to the "cl100k_base" encoding. Here is the proof: https://github.com/openai/tiktoken/blob/5d970c1100d3210b42497203d6b5c1e30cfda6cb/tiktoken/model.py#L10
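That lookup can be sketched in Python as a plain dictionary mirroring a subset of tiktoken's MODEL_TO_ENCODING table (entries taken from the linked source; this is an illustrative subset, not the full table):

```python
# Sketch of tiktoken's model -> encoding lookup (subset, for illustration).
MODEL_TO_ENCODING = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
    "gpt-35-turbo": "cl100k_base",    # Azure-style model name, same encoding
    "text-davinci-003": "p50k_base",
    "text-davinci-002": "p50k_base",
    "davinci": "r50k_base",
}

def encoding_for_model(model: str) -> str:
    """Return the encoding name for a model, per the table above."""
    return MODEL_TO_ENCODING[model]

print(encoding_for_model("gpt-35-turbo"))
```

So the same text tokenized "as gpt-3.5-turbo" goes through cl100k_base, while tokenizing it "as GPT-3 / davinci" goes through an older 50k encoding.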

So I would say that the website you shared uses a "..50k.." encoding for "gpt3".

So I would say that SharpToken works correctly :)

By the way, I added your example to the test plan: https://github.com/dmitry-brazhenko/SharpToken/blob/main/SharpToken.Tests/data/TestPlans.txt

Please let me know if I am wrong :)

@vermorel
Author

Thank you very much!

Looking at the tiktoken code, I finally understand what is going on. The encoding depends on the mode, not just the model. I am using text mode (a.k.a. completion), for which the encoding is p50k_base. If I were using chat mode instead, the encoding would be cl100k_base.

Azure OpenAI offers gpt-35-turbo in text mode (while OpenAI seemingly does not, offering only chat). This case is not listed in the tiktoken code, but it stands to reason that it must also be p50k_base.
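This mode-dependent choice can be written down as a small helper. To be clear, this function is hypothetical (it exists in neither tiktoken nor SharpToken) and encodes vermorel's reasoning above, including the inferred p50k_base for Azure's text mode:

```python
# Hypothetical helper (not a real tiktoken/SharpToken API): choose the
# encoding for gpt-35-turbo based on the API mode, per the reasoning above.
def encoding_for_gpt35_turbo(mode: str) -> str:
    if mode == "chat":
        return "cl100k_base"   # OpenAI chat mode, per tiktoken's mapping
    if mode == "completion":
        return "p50k_base"     # Azure text/completion mode (inferred above)
    raise ValueError(f"unknown mode: {mode!r}")

print(encoding_for_gpt35_turbo("completion"))
```

Picking the wrong branch here reproduces exactly the 549-vs-219 discrepancy that opened this issue.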

dmitry-brazhenko added a commit that referenced this issue Aug 30, 2023