According to https://platform.openai.com/docs/models/gpt-3-5, `gpt-3.5-turbo` does not yet point to the 16k-context `gpt-3.5-turbo-1106`. https://openai.com/pricing no longer lists the older models, so we might switch to that as the default.
It would, however, make sense to determine the maximum token count at startup.
The maximum token count can be determined by setting `max_tokens` to a gigantic value in a test request and parsing the error response: "This model's maximum context length is 16385 tokens."
Per https://platform.openai.com/docs/models/gpt-3-5, on Dec 13, 2023 the GPT-3.5 model will accept 16k tokens. It would be better to find that out with a request than to rely on a hard-coded constant.
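A minimal sketch of the probing idea, assuming the error body contains the quoted sentence; the function name and the regex are illustrative, not part of any existing code:

```python
import re
from typing import Optional

def parse_max_context_length(error_message: str) -> Optional[int]:
    """Extract the context window size from the error OpenAI returns
    when max_tokens exceeds the model's limit, e.g.
    'This model's maximum context length is 16385 tokens.'"""
    match = re.search(r"maximum context length is (\d+) tokens", error_message)
    return int(match.group(1)) if match else None

# At startup, send a chat completion request with an absurd max_tokens
# (e.g. 10**9) and feed the error body from the API through this parser.
limit = parse_max_context_length(
    "This model's maximum context length is 16385 tokens."
)
print(limit)  # → 16385
```

The regex is deliberately loose so minor wording changes around the number still match, but it would silently return `None` if OpenAI rewrote the message entirely, so a constant fallback would still be needed.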
Also, there might be a conflict between the default value of 1000 tokens used for the sidebar and the underlying system.