
This model's maximum context length is 4097 tokens, however you requested 5762 tokens (5506 in your prompt; 256 for the completion). Please reduce your prompt; or completion length. #8

Closed
jrt324 opened this issue Feb 10, 2023 · 5 comments
Labels
bug Something isn't working

Comments

@jrt324

jrt324 commented Feb 10, 2023

Input: a short Chinese article (6,000 words)

Asking any question produces the following error:
This model's maximum context length is 4097 tokens, however you requested 5762 tokens (5506 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

@mmz-001
Owner

mmz-001 commented Feb 10, 2023

Glad you pointed this out. Out of curiosity, how big was the input question?

@jrt324
Author

jrt324 commented Feb 10, 2023

> Glad you pointed this out. Out of curiosity, how big was the input question?

Just 8 to 20 words.
E.g., "Can you tell me about the main characters included in this article?"

@mmz-001
Owner

mmz-001 commented Feb 10, 2023

I'll look into this. It would be super helpful if you could give me a link to the article so that I can reproduce the error.

@mmz-001 mmz-001 added the bug Something isn't working label Feb 14, 2023
@elisa-chou

I'm having the same problem.
I'm asking: "what is this content mainly about?"

Error msg:
This model's maximum context length is 4097 tokens, however you requested 4812 tokens (4556 in your prompt; 256 for the completion). Please reduce your prompt; or completion length.

@mmz-001
Owner

mmz-001 commented Feb 20, 2023

@elisa-chou The issue seems to be that I'm splitting the document by number of characters, when it should be split by number of tokens. I'll fix this soon.
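For reference, the fix described above amounts to measuring chunk length in tokens rather than characters when splitting the document. Below is a minimal, self-contained sketch of that idea; it is not the repository's actual code. The `count_tokens` function here is a placeholder that counts whitespace-separated words, where a real implementation would use a model tokenizer (e.g. tiktoken's `encoding_for_model(...).encode`) so chunk sizes match what the API bills against the context window.

```python
def count_tokens(text: str) -> int:
    # Placeholder: whitespace-separated words stand in for model tokens.
    # Swap in a real tokenizer (such as tiktoken) for accurate counts.
    return len(text.split())


def split_into_chunks(text: str, max_tokens: int) -> list[str]:
    """Greedily pack words into chunks of at most `max_tokens` tokens."""
    words = text.split()
    chunks: list[str] = []
    current: list[str] = []
    for word in words:
        # Start a new chunk when adding this word would exceed the budget.
        if current and count_tokens(" ".join(current + [word])) > max_tokens:
            chunks.append(" ".join(current))
            current = [word]
        else:
            current.append(word)
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Splitting on token count (instead of `len(text)` characters) keeps every chunk within the model's context window regardless of language; this matters especially for Chinese text, where a single character can map to one or more tokens, so character counts badly underestimate token usage.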

mmz-001 added a commit that referenced this issue Jul 5, 2023
The new function uses tokens for the len function instead of characters
Fixes #15 and #8
@mmz-001 mmz-001 closed this as completed Jul 6, 2023
Development

No branches or pull requests

3 participants