Build in Rate-Limit Error Handling #22
Comments
Thanks for submitting this, Nathan. It is indeed possible and a good idea; adding it to the to-do list!
I think I found the fix in chat.py: basically just switching the except clause from openai.RateLimitError to openai.error.RateLimitError.
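A minimal sketch of what the corrected handling might look like, with a simple retry-and-backoff loop around the API call. The RateLimitError class below is a stand-in so the snippet runs without the openai package installed; in chat.py the real openai.error.RateLimitError would be caught instead, and the create_chat_completion/flaky names are hypothetical:

```python
import time

# Stand-in for openai.error.RateLimitError so this sketch is self-contained;
# in chat.py you would catch the real exception class instead.
class RateLimitError(Exception):
    pass

def create_chat_completion(send, retries=3, backoff=0.0):
    """Call send() and retry on RateLimitError with exponential backoff."""
    for attempt in range(retries):
        try:
            return send()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(backoff * (2 ** attempt))

# Example: a flaky sender that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("overloaded")
    return "ok"

print(create_chat_completion(flaky))  # prints "ok" after two retries
```

The backoff is set to 0.0 here only so the example runs instantly; a real handler would use a nonzero base delay.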
Thank you very much :)
Signed-off-by: Merwane Hamadi <merwanehamadi@gmail.com>
Below is a readout from an example error.
"I hit a rate limit error working with the OpenAI API:
File "D:\projects\auto-gpt\auto-gpt-env\lib\site-packages\openai\api_requestor.py", line 682, in _interpret_response_line
raise self.handle_error_response(
openai.error.RateLimitError: That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists."
Traceback (most recent call last):
File "D:\projects\auto-gpt\auto-gpt\scripts\main.py", line 199, in
assistant_reply = chat.chat_with_ai(
File "D:\projects\auto-gpt\auto-gpt\scripts\chat.py", line 80, in chat_with_ai
except openai.RateLimitError:
AttributeError: module 'openai' has no attribute 'RateLimitError'
Would it be possible to throttle the script depending on the model being executed?
https://platform.openai.com/docs/guides/rate-limits/overview
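Per-model throttling is feasible because the rate-limits guide linked above publishes different requests-per-minute caps per model. A minimal sketch of a sliding-window throttle keyed by model name; the class, the cap of 2 requests/minute, and the injectable clock are all illustrative assumptions, not real OpenAI quotas or API:

```python
import time
from collections import defaultdict, deque

class ModelThrottle:
    """Cap requests per model within a sliding one-minute window.

    The limits passed in are illustrative placeholders, not real
    OpenAI quotas; clock/sleep are injectable so tests run instantly.
    """
    def __init__(self, limits, clock=time.monotonic, sleep=time.sleep):
        self.limits = limits              # model name -> max requests/minute
        self.history = defaultdict(deque)  # model name -> recent timestamps
        self.clock = clock
        self.sleep = sleep

    def acquire(self, model):
        window = self.history[model]
        now = self.clock()
        # Drop timestamps older than the 60-second window.
        while window and now - window[0] >= 60:
            window.popleft()
        if len(window) >= self.limits[model]:
            # Wait until the oldest request ages out of the window.
            self.sleep(60 - (now - window[0]))
            now = self.clock()
            while window and now - window[0] >= 60:
                window.popleft()
        window.append(now)

# Usage with a fake clock so the example runs instantly.
t = {"now": 0.0}
throttle = ModelThrottle(
    {"gpt-4": 2},  # hypothetical cap: 2 requests per minute
    clock=lambda: t["now"],
    sleep=lambda s: t.__setitem__("now", t["now"] + s),
)
for _ in range(3):
    throttle.acquire("gpt-4")
print(t["now"])  # the third call had to wait out the window: 60.0
```

Calling `acquire(model)` before each API request blocks just long enough to stay under that model's cap, which would avoid tripping the server-side rate limit in the first place.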