
This model's maximum context length is 8191 tokens - thrown out of the program #2337

Closed
2 tasks done
Blackbrain63 opened this issue Apr 18, 2023 · 6 comments

Comments

@Blackbrain63

Blackbrain63 commented Apr 18, 2023

⚠️ Search for existing issues first ⚠️

  • I have searched the existing issues, and there is no existing issue for my problem

GPT-3 or GPT-4

  • I am using Auto-GPT with GPT-3 (GPT-3.5)

Steps to reproduce 🕹

Read a text file whose contents exceed the 8191-token limit.

Current behavior 😯

Python\Python311\Lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 8938 tokens (8938 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
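The arithmetic in the error above (prompt tokens plus completion tokens must fit within the model's context window) can be pre-checked before sending a request. A minimal sketch, using a rough 4-characters-per-token heuristic rather than a real tokenizer (the function names here are hypothetical; Auto-GPT would count with a tokenizer such as tiktoken):

```python
MAX_CONTEXT_TOKENS = 8191  # context window reported in the error above

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text.
    A real implementation would measure with tiktoken instead."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, completion_budget: int = 1000) -> bool:
    """Return True if the prompt plus a reserved completion budget
    fit within the model's context window."""
    return estimate_tokens(prompt) + completion_budget <= MAX_CONTEXT_TOKENS
```

With a check like this, an oversized prompt (for example the 8938-token one in the error) could be trimmed or chunked before the API call instead of raising InvalidRequestError.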

Expected behavior 🤔

No response

Your prompt 📝

# Paste your prompt here
@E-Labs-io

I have been getting this too. When it tries to read from a large file, it doesn't break the content down into chunks before processing, so the request is rejected for being too large. Maybe add a break-into-chunks function between the read step and the ingest/send-to-GPT step.
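The chunking step suggested above could be sketched like this. This is a hypothetical helper, not Auto-GPT's actual code, and it estimates tokens by character count for simplicity (a production fix would count with a real tokenizer such as tiktoken):

```python
def split_into_chunks(text: str,
                      max_tokens: int = 8191,
                      chars_per_token: int = 4) -> list:
    """Split text into chunks that each fit an estimated token budget.

    Hypothetical helper: estimates tokens as len(text) / chars_per_token,
    so each chunk is capped at max_tokens * chars_per_token characters.
    """
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]
```

Each chunk could then be summarized or processed individually and the results combined, instead of sending the entire file in a single request.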

@marktsears

Yes, I'm getting the same; that chunking step is what's needed.

@Pwuts
Member

Pwuts commented Apr 18, 2023

Closing as duplicate of #2801

@Pwuts closed this as not planned (duplicate) on Apr 18, 2023
@GoMightyAlgorythmGo

Me too; it has happened since the first days. I'm on the stable version with GPT-3.5-turbo, and this causes 90% of my crashes.

NEXT ACTION: COMMAND = search_files ARGUMENTS = {'directory': '.', 'query': 'cassandra_installation.sh'}
->
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 157088 tokens (157088 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
->
crash

@GoMightyAlgorythmGo

Endless crashes on GPT-3.5 when the search_files output is too large, for example 1,000,000 tokens instead of the maximum allowed 8191.

@Pwuts
Member

Pwuts commented Apr 25, 2023

@GoMightyAlgorythmGo -> #2801


6 participants