
Update 16k token limit #37

Open
stoerr opened this issue Nov 7, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

stoerr commented Nov 7, 2023

Follow https://platform.openai.com/docs/models/gpt-3-5 : on Dec 13, 2023 the gpt-3.5 model will accept 16k tokens. Perhaps determine that with a request instead of using a constant...

Also, there might be a conflict between the default value of 1000 tokens used for the sidebar and the underlying system.

@stoerr stoerr added the enhancement New feature or request label Nov 7, 2023
@stoerr stoerr self-assigned this Nov 7, 2023
stoerr commented Jan 17, 2024

According to https://platform.openai.com/docs/models/gpt-3-5 the gpt-3.5-turbo alias does not yet refer to the 16k-window gpt-3.5-turbo-1106. https://openai.com/pricing doesn't mention the older models, so we might switch to that as the default.
It would, however, make sense to determine the maximum token count on startup.

stoerr commented Feb 4, 2024

Determining the maximum token count can be done by setting max_tokens to a gigantic value in a test request and parsing the error response: "This model's maximum context length is 16385 tokens."
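A minimal sketch of the parsing step described above, in Python for illustration. The error-message format ("This model's maximum context length is N tokens.") is an observed OpenAI API error string rather than a documented contract, so the regex is an assumption and may need adjusting if the wording changes; the actual probe request (sending an oversized max_tokens) is left out here.

```python
import re

# Matches the context-window size in the observed OpenAI error message,
# e.g. "This model's maximum context length is 16385 tokens."
# NOTE: this message format is not a documented contract and may change.
CONTEXT_LENGTH_PATTERN = re.compile(r"maximum context length is (\d+) tokens")

def parse_max_context_length(error_message: str):
    """Extract the model's context window size from an API error message.

    Returns the token count as an int, or None if the message does not
    contain the expected phrase.
    """
    match = CONTEXT_LENGTH_PATTERN.search(error_message)
    return int(match.group(1)) if match else None

# Example error body returned when max_tokens is set absurdly high:
msg = ("This model's maximum context length is 16385 tokens. "
       "However, you requested 1000000 tokens.")
print(parse_max_context_length(msg))  # → 16385
```

The result could be cached at startup so the limit is probed once per configured model instead of being hardcoded as a constant.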
