
[bug] Handle failed OpenAI API requests #167

Closed
matejm opened this issue May 25, 2023 · 4 comments
Labels
bug Something isn't working

Comments


matejm commented May 25, 2023

Describe the bug
Currently, when using the OpenAI chat completion API and the API request fails (possible failure reasons: invalid API key, too many requests, network error, etc.), the library throws the following error:

Traceback (most recent call last):
  File "/Users/matejm/dev/main.py", line 58, in <module>
    specs = call_openai(f.read())
            ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/main.py", line 19, in call_openai
    _, validated_output = guard(
                          ^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/guardrails/guard.py", line 166, in __call__
    guard_history = runner(prompt_params=prompt_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/guardrails/run.py", line 90, in __call__
    validated_output, reasks = self.step(
                               ^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/guardrails/run.py", line 144, in step
    output, output_as_dict = self.call(index, instructions, prompt, api, output)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/guardrails/run.py", line 236, in call
    output = api(prompt.source)
             ^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/tenacity/__init__.py", line 313, in iter
    if not (is_explicit_retry or self.retry(retry_state)):
                                 ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/tenacity/retry.py", line 76, in __call__
    return self.predicate(exception)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/matejm/dev/venv/lib/python3.11/site-packages/tenacity/retry.py", line 92, in <lambda>
    super().__init__(lambda e: isinstance(e, exception_types))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union     

The error is far from descriptive; it actually hides the real problem from the developer.
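For context, the underlying TypeError can be reproduced in isolation, independent of guardrails or tenacity: `isinstance()` raises exactly this message whenever its second argument is an exception *instance* rather than an exception class. A minimal sketch (the names here are illustrative, not from the library):

```python
# Minimal illustration of the TypeError in the traceback above:
# isinstance() requires a type (or tuple/union of types) as its second
# argument, so passing an exception *instance* fails with the same message.
def matches(exc, exception_types):
    return isinstance(exc, exception_types)

try:
    # ValueError("not a type") is an instance, not a class
    matches(ValueError("boom"), ValueError("not a type"))
except TypeError as err:
    print(err)  # isinstance() arg 2 must be a type ...
```

This suggests an exception instance (e.g. the wrapped API's error object) ended up where tenacity expected an exception class, which is why the original failure was masked.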

To Reproduce
Steps to reproduce the behavior:

  1. RAIL spec
<rail version="0.1">
<output>
    <string name="test" />
</output>
<prompt>
Fill test string
@complete_json_suffix_v2
</prompt>
</rail>
  2. Runtime arguments

Simply run the request without a valid API key.

import openai
import guardrails as gd

guard = gd.Guard.from_rail('demo.rail')
guard(
    openai.ChatCompletion.create,
    prompt_params={},
    model="gpt-3.5-turbo",
    max_tokens=1024,
    temperature=0.3,
)

Expected behavior
Return an invalid response or throw an informative error that lets developers figure out what went wrong.
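As a stopgap on 0.1.6, the call site can surface the real failure itself. A minimal sketch (the helper name `call_with_context` is hypothetical, not part of guardrails):

```python
# Hypothetical helper: re-raise any failure from the wrapped call with the
# original exception preserved as the cause, so the real error (invalid API
# key, rate limit, network error) stays visible instead of being swallowed.
def call_with_context(fn, *args, **kwargs):
    try:
        return fn(*args, **kwargs)
    except Exception as exc:
        raise RuntimeError(f"OpenAI request failed: {exc}") from exc
```

Usage would look like `call_with_context(guard, openai.ChatCompletion.create, ...)`; the chained `from exc` keeps the original traceback attached.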

Library version:
Version 0.1.6

@matejm matejm added the bug Something isn't working label May 25, 2023

matejm commented May 25, 2023

Looks like the latest version, 0.1.7, returns a readable error.
PyPI still lists 0.1.6 as the latest version (https://pypi.org/project/guardrails-ai/); it would be great to update that.

Feel free to close this issue.


stchau4work commented Jun 4, 2023

@matejm can you check if you are able to call the API with the above config without it being wrapped by the guard object?


krrishdholakia commented Jun 21, 2023

Hey @matejm @ShreyaR @stchau4work

Would recommend wrapping the base OpenAI call with reliableGPT - it'll handle retries, model switching, etc. when OpenAI throws errors:

from reliablegpt import reliableGPT
openai.ChatCompletion.create = reliableGPT(openai.ChatCompletion.create,...)

Source: https://github.com/BerriAI/reliableGPT
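The general pattern being suggested - retrying the wrapped call on failure - can be sketched generically as well. This is an illustrative stand-in, not reliableGPT's actual implementation, and the names are assumptions:

```python
import time

def with_retries(fn, retries=3, base_delay=0.0, exceptions=(Exception,)):
    # Wrap fn so transient failures are retried with exponential backoff;
    # the last exception is re-raised if every attempt fails.
    def wrapped(*args, **kwargs):
        for attempt in range(retries):
            try:
                return fn(*args, **kwargs)
            except exceptions:
                if attempt == retries - 1:
                    raise
                time.sleep(base_delay * (2 ** attempt))
    return wrapped
```

One could rebind the API entry point the same way as above, e.g. `openai.ChatCompletion.create = with_retries(openai.ChatCompletion.create)`, though for production use a maintained library (tenacity, reliableGPT) is the safer choice.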

--
@ShreyaR happy to make any changes you need to make this useful for guardrails.


irgolic commented Jun 26, 2023

Thanks for the issue @matejm.

@stchau4work I'm not sure what you mean; please open a separate issue.

@irgolic irgolic closed this as completed Jun 26, 2023