Maximum context length exceeded after execute_shell #3244
Comments
I hit the same with:
I have experienced the same issue.
FYI, I reran mine after the same kind of crash, and when prompted I told it to "decrease token size because you keep erroring out," and it worked afterwards. (I also manually accepted each prompt for a few cycles before giving it -n.)
Pwuts changed the title from "Token Limit" to "Maximum context length exceeded after execute_shell" on Apr 26, 2023
This was referenced Apr 26, 2023
Which Operating System are you using?
Linux
Which version of Auto-GPT are you using?
Latest Release
GPT-3 or GPT-4?
GPT-3.5
Steps to reproduce 🕹
This error occurred during the installation of a library by the Auto-GPT process while it was running:
NEXT ACTION: COMMAND = execute_shell ARGUMENTS = {'command_line': 'pip install en_core_web_sm'}
Executing command 'pip install en_core_web_sm' in working directory '/home/appuser/auto_gpt_workspace'
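A verbose command like `pip install` produces long output, which gets fed back into the agent's next prompt and can push it past the model's context window. A minimal sketch of one possible safeguard, capping shell output before it re-enters the prompt (the helper name and character budget below are assumptions for illustration, not part of Auto-GPT):

```python
MAX_OUTPUT_CHARS = 2000  # assumed budget, not an actual Auto-GPT setting


def truncate_output(output: str, limit: int = MAX_OUTPUT_CHARS) -> str:
    """Keep the head and tail of long command output, dropping the middle.

    The start usually shows what command ran and the end shows whether it
    succeeded, so those are the most useful parts to preserve.
    """
    if len(output) <= limit:
        return output
    half = limit // 2
    return output[:half] + "\n... [output truncated] ...\n" + output[-half:]
```

Applied to the `execute_shell` result before it is appended to the message history, this keeps a single noisy command from consuming the whole token budget.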
Current behavior 😯
openai.error.InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 9956 tokens (9956 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
And the program terminates.
Expected behavior 🤔
It should automatically reduce the prompt's token length instead of terminating.
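The expected behavior above can be sketched as a trimming loop: drop the oldest history messages until the prompt fits the context budget, rather than raising `InvalidRequestError`. This is a hypothetical illustration, not Auto-GPT's actual code; the word-count token estimate is a crude stand-in for a real tokenizer such as tiktoken:

```python
def count_tokens(text: str) -> int:
    # Rough estimate only; a real implementation would use a tokenizer.
    return len(text.split())


def trim_to_fit(messages: list[str], max_tokens: int) -> list[str]:
    """Drop the oldest messages (keeping messages[0], e.g. the system
    prompt) until the total estimated token count fits the budget."""
    trimmed = list(messages)
    while len(trimmed) > 1 and sum(map(count_tokens, trimmed)) > max_tokens:
        trimmed.pop(1)  # discard the oldest non-system message
    return trimmed
```

With an 8191-token model, passing `max_tokens` somewhat below that limit would leave headroom for the completion the error message says was requested with 0 tokens.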
Your prompt 📝
# Paste your prompt here
Your Logs 📒