This model's maximum context length is 8191 tokens - thrown out of the program #2337
Comments
I have been getting this too. When it tries to read from a large file, it doesn't break it down into chunks before processing, so the request is rejected for being too large. Maybe add a chunking function between reading the file and ingesting/sending it to GPT. |
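A minimal sketch of the chunking step suggested above: split a large file's text into pieces that each stay under the model's context limit before sending them to the API. The names (`chunk_text`, `MAX_TOKENS`, `CHARS_PER_TOKEN`) are hypothetical, and token counts are approximated as roughly 4 characters per token; an exact implementation would count tokens with a real tokenizer such as tiktoken.

```python
MAX_TOKENS = 8191      # model context limit from the error message
CHARS_PER_TOKEN = 4    # rough heuristic, not an exact tokenizer

def chunk_text(text: str, max_tokens: int = MAX_TOKENS) -> list[str]:
    """Split text into chunks whose estimated token count fits the limit."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        # Flush the current chunk when adding this line would exceed the budget.
        if current and len(current) + len(line) > max_chars:
            chunks.append(current)
            current = ""
        # A single oversized line is split hard at the character budget.
        while len(line) > max_chars:
            chunks.append(line[:max_chars])
            line = line[max_chars:]
        current += line
    if current:
        chunks.append(current)
    return chunks
```

Each returned chunk could then be sent as its own request (or summarized and merged), rather than passing the whole file in one prompt.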
Yes, I'm getting the same, and that is exactly what is needed. |
Closing as duplicate of #2801 |
Me too, this has happened since the first days. I'm on stable; this causes 90% of my crashes. GPT-3.5-turbo, stable version. NEXT ACTION: COMMAND = search_files ARGUMENTS = {'directory': '.', 'query': 'cassandra_installation.sh'} |
Endless crashes on GPT-3.5 when the search_files output is too large, for example 1000000 tokens instead of the maximum allowed 8191. |
GPT-3 or GPT-4
Steps to reproduce 🕹
Read a text file that contains more tokens than the 8191-token limit.
Current behavior 😯
Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8938 tokens (8938 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Expected behavior 🤔
No response
Your prompt 📝
# Paste your prompt here