
no graceful recovery or helpful info on 500 internal server errors #598

Closed
VictorBargains opened this issue May 8, 2024 · 2 comments
Labels: bug, fixed

Comments

@VictorBargains

Issue

A few times in the past day, working with v0.31.1 and v0.32.0, I have gotten unexplained 500 errors from OpenAI's API, and each one crashes the whole aider process. Since the API's own error message suggests retrying the request, this seems like an error aider should handle itself with a retry loop (including a backoff and a retry limit). I will check out version 0.33.0 and see if that improves things.
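The retry loop described above (backoff plus a limit) can be sketched roughly as follows. This is a minimal illustration, not aider's actual code: `ServerError` and `call_with_retries` are hypothetical names standing in for litellm's `APIError` and aider's `send_with_retries`.

```python
import random
import time


class ServerError(Exception):
    """Stand-in for a 5xx API error (e.g. litellm's APIError)."""

    def __init__(self, status_code=500):
        super().__init__(f"Error code: {status_code}")
        self.status_code = status_code


def call_with_retries(fn, max_retries=5, base_delay=0.5, sleep=time.sleep):
    """Call fn(), retrying transient server (5xx) errors.

    Uses exponential backoff with jitter and gives up after
    max_retries attempts, re-raising the last error.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except ServerError as err:
            # Client errors (<500) and the final attempt are not retried.
            if err.status_code < 500 or attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            sleep(delay)
```

Exponential backoff with jitter is the standard recipe here: it spaces retries out so a struggling server isn't hammered, while the attempt cap guarantees the loop eventually surfaces the error to the user instead of spinning forever.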

Applied edit to main.py
Traceback (most recent call last):
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\llms\openai.py", line 417, in completion
    raise e
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\llms\openai.py", line 376, in completion
    response = openai_client.chat.completions.create(**data, timeout=timeout)  # type: ignore
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_utils\_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\resources\chat\completions.py", line 579, in create
    return self._post(
           ^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1240, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 921, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1005, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1005, in _request
    return self._retry_request(
           ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1053, in _retry_request
    return self._request(
           ^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\openai\_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID req_aba7d180aba6a34863994439d7cfb521 in your email.)', 'type': None, 'param': None, 'code': None}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\main.py", line 1052, in completion
    raise e
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\main.py", line 1025, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\llms\openai.py", line 423, in completion
    raise OpenAIError(status_code=e.status_code, message=str(e))
litellm.llms.openai.OpenAIError: Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID req_aba7d180aba6a34863994439d7cfb521 in your email.)', 'type': None, 'param': None, 'code': None}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Scripts\aider.exe\__main__.py", line 7, in <module>
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\main.py", line 402, in main
    coder.run()
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\coders\base_coder.py", line 480, in run
    list(self.send_new_user_message(new_user_message))
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\coders\base_coder.py", line 712, in send_new_user_message
    saved_message = self.auto_commit(edited)
                    ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\coders\base_coder.py", line 1156, in auto_commit
    res = self.repo.commit(fnames=edited, context=context, prefix="aider: ")
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\repo.py", line 73, in commit
    commit_message = self.get_commit_message(diffs, context)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\repo.py", line 124, in get_commit_message
    commit_message = simple_send_with_retries(model.name, messages)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\sendchat.py", line 74, in simple_send_with_retries
    _hash, response = send_with_retries(
                      ^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\backoff\_sync.py", line 105, in retry
    ret = target(*args, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\aider\sendchat.py", line 64, in send_with_retries
    res = litellm.completion(**kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\utils.py", line 3222, in wrapper
    raise e
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\utils.py", line 3116, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\main.py", line 2224, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\utils.py", line 9220, in exception_type
    raise e
  File "C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Lib\site-packages\litellm\utils.py", line 8045, in exception_type
    raise APIError(
litellm.exceptions.APIError: OpenAIException - Error code: 500 - {'error': {'message': 'The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID req_aba7d180aba6a34863994439d7cfb521 in your email.)', 'type': None, 'param': None, 'code': None}}

Version and model info

aider --yes main.py
Newer version v0.33.0 is available. To upgrade, run:
C:\Users\vandersen\source\repos\SMWParser\design_tools\docx_splitter\env\Scripts\python.exe -m pip install --upgrade aider-chat
Aider v0.32.0
Models: gpt-4-1106-preview with udiff edit format, weak model gpt-3.5-turbo
Git repo: .git with 1 files
Repo-map: using 1024 tokens
Added main.py to the chat.
VSCode terminal detected, pretty output has been disabled.
Use /help to see in-chat commands, run with --help to see cmd line args

@paul-gauthier (Owner)

Thanks for trying aider and filing this issue.

This is a bug introduced with the recent integration of litellm. Thanks for reporting it!

I've just pushed a fix. The change is available in the main branch. You can get it by installing the latest version from github:

python -m pip install --upgrade git+https://github.com/paul-gauthier/aider.git

If you have a chance to try it, let me know if it works better for you.
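For context on what a fix of this shape looks like: the traceback shows that aider's `send_with_retries` already wraps the call with the `backoff` library, so the gap was presumably that the exception types litellm raises (such as `litellm.exceptions.APIError`) were not in the retried set. Below is a minimal stdlib sketch of that decorator pattern; `retry_on` is a hypothetical stand-in for `backoff.on_exception`, and the exact exception list in aider's real fix may differ.

```python
import functools
import time


def retry_on(exc_types, max_tries=4, base_delay=0.25):
    """Minimal stand-in for backoff.on_exception(backoff.expo, ...).

    Retries the wrapped function on the given exception types,
    doubling the delay each attempt, then re-raises.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return fn(*args, **kwargs)
                except exc_types:
                    if attempt == max_tries - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator
```

The key point is that a decorator like this only retries exceptions it is explicitly told about; a newly introduced wrapper exception (here, litellm's) silently bypasses the retry logic until it is added to the list.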

paul-gauthier added the bug and fixed labels on May 9, 2024
@paul-gauthier (Owner)

I'm going to close this issue for now, but feel free to add a comment here and I will re-open or file a new issue any time.
