I sent a message to gpt-4-1106-preview that is 6 tokens (per the official OpenAI tokenizer), or 19 characters, and GPT gave me a response of 71 tokens, or 273 characters. Below GPT's response it says "gpt-4-1106-preview using 2919 tokens ~= $0.030610", which is roughly a 40× increase in tokens that I can't explain even after adding together the tokens of my message and of the response. Is this a counting bug, or am I unnecessarily paying 40 times more?
Is this happening to anyone else?
Is there any way I can see the raw HTTP requests to the OpenAI endpoint?
You probably have a huge custom system prompt. I checked again and the tokens are counted correctly. If the total is more than what you typed plus the response and context, the extra tokens are coming from your system prompt.
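For reference, a rough sketch of why billed tokens can dwarf the visible message: the client sends the full request each time, i.e. the system prompt plus the new message, and the chat format itself adds a few framing tokens per message. The per-message counts and overhead values below are assumptions for illustration (the overhead figures follow OpenAI's published approximation for gpt-4-class models; a real count would use a tokenizer such as tiktoken):

```python
def estimate_prompt_tokens(messages, tokens_per_message=4):
    """Rough estimate of prompt tokens billed for one chat request.

    Each message carries ~4 tokens of chat-format framing overhead,
    and every reply is primed with ~3 extra tokens. The "tokens"
    field stands in for a real tokenizer count of the content.
    """
    total = sum(m["tokens"] + tokens_per_message for m in messages)
    return total + 3  # reply-priming overhead


# Hypothetical conversation: a large custom system prompt plus the
# 6-token question from the issue.
conversation = [
    {"role": "system", "tokens": 2800},  # assumed big custom prompt
    {"role": "user", "tokens": 6},       # the visible 6-token message
]

prompt_tokens = estimate_prompt_tokens(conversation)
print(prompt_tokens)            # 2817 prompt tokens
print(prompt_tokens + 71)       # 2888 total with the 71-token reply
```

With an assumed ~2800-token system prompt, the estimate lands near the reported 2919 total, which is consistent with the system prompt being the missing factor rather than a counting bug.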
Thanks a lot and best regards.
PS: I am using the online version (https://niek.github.io/chatgpt-web)