Retry 503 OpenAI errors #4745
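
This pull request adds HTTP 503 to the list of OpenAI API status codes that trigger an automatic retry with exponential backoff, alongside the existing 429 and 502.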

Merged 1 commit on Jun 19, 2023

Changes from all commits
autogpt/llm/utils/__init__.py · 4 changes: 3 additions & 1 deletion

@@ -36,8 +36,8 @@
                response.usage.completion_tokens if "completion_tokens" in usage else 0,
                response.model,
            )
        except Exception as err:
            logger.warn(f"Failed to update API costs: {err.__class__.__name__}: {err}")

    def metering_wrapper(*args, **kwargs):
        openai_obj = openai_obj_processor(*args, **kwargs)

@@ -89,20 +89,22 @@

                except RateLimitError:
                    if attempt == num_attempts:
                        raise

                    logger.debug(retry_limit_msg)
                    if not user_warned:
                        logger.double_check(api_key_error_msg)
                        user_warned = True

                except APIError as e:
-                    if (e.http_status not in [502, 429]) or (attempt == num_attempts):
+                    if (e.http_status not in [429, 502, 503]) or (
+                        attempt == num_attempts
+                    ):
                        raise

                backoff = backoff_base ** (attempt + 2)
                logger.debug(backoff_msg.format(backoff=backoff))
                time.sleep(backoff)

        return _wrapped
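
For reference, a minimal sketch of the retry schedule this wrapper produces, assuming backoff_base = 2 and ten retries (both defaults live outside this diff):

    # Illustrative only: reproduces the backoff formula from the hunk above.
    backoff_base = 2  # assumed default; defined outside this diff
    num_attempts = 10  # assumed; defined outside this diff
    for attempt in range(1, num_attempts + 1):
        backoff = backoff_base ** (attempt + 2)  # attempt 1 -> 8s, 2 -> 16s, 3 -> 32s
        print(f"attempt {attempt}: would sleep {backoff}s after a 429/502/503 error")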

@@ -131,13 +133,13 @@
        str: The response from the function
    """
    if model is None:
        model = config.smart_llm_model
    # For each arg, if any are None, convert to "None":
    args = [str(arg) if arg is not None else "None" for arg in args]
    # parse args to comma separated string
    arg_str: str = ", ".join(args)

    prompt = ChatSequence.for_model(
        model,
        [
            Message(
@@ -148,7 +150,7 @@
Message("user", arg_str),
],
)
return create_chat_completion(prompt=prompt, temperature=0)


@metered
@@ -159,25 +161,25 @@
    temperature: Optional[float],
    max_output_tokens: Optional[int],
) -> str:
    cfg = Config()
    if model is None:
        model = cfg.fast_llm_model
    if temperature is None:
        temperature = cfg.temperature

    if cfg.use_azure:
        kwargs = {"deployment_id": cfg.get_azure_deployment_id_for_model(model)}
    else:
        kwargs = {"model": model}

    response = openai.Completion.create(
        **kwargs,
        prompt=prompt,
        temperature=temperature,
        max_tokens=max_output_tokens,
        api_key=cfg.openai_api_key,
    )
    return response.choices[0].text
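
For illustration, a hypothetical call to the completion helper above; the parameter names come from the signature in this diff, while the argument values are made up:

    # Sketch: passing None triggers the cfg.fast_llm_model / cfg.temperature fallbacks above.
    text = create_text_completion(
        prompt="Say hello.",
        model=None,
        temperature=None,
        max_output_tokens=100,
    )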


# Overly simple abstraction until we create something better
@@ -219,19 +221,19 @@
            temperature=temperature,
            max_tokens=max_tokens,
        ):
            message = plugin.handle_chat_completion(
                messages=prompt.raw(),
                model=model,
                temperature=temperature,
                max_tokens=max_tokens,
            )
            if message is not None:
                return message
    api_manager = ApiManager()
    response = None

    if cfg.use_azure:
        kwargs = {"deployment_id": cfg.get_azure_deployment_id_for_model(model)}
    else:
        kwargs = {"model": model}

@@ -245,8 +247,8 @@
    resp = response.choices[0].message.content
    for plugin in cfg.plugins:
        if not plugin.can_handle_on_response():
            continue
        resp = plugin.on_response(resp)
    return resp
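
As a sketch of the response hook exercised in the loop above, a hypothetical plugin (the real plugin base class is defined elsewhere in the project):

    class UppercasePlugin:
        """Hypothetical plugin: shows the can_handle_on_response/on_response pair used above."""

        def can_handle_on_response(self) -> bool:
            return True

        def on_response(self, resp: str) -> str:
            # Each handling plugin receives the previous plugin's output.
            return resp.upper()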

