feat(agent/core): Add Anthropic Claude 3 support #7085
Conversation
- Fix type of `AssistantChatMessage.role` to match `ChatMessage.role` (str -> `ChatMessage.Role`)
- Simplify `ModelProviderUsage`
  - Remove attribute `total_tokens` as it is always equal to `prompt_tokens + completion_tokens`
  - Modify signature of `update_usage(..)`; no longer requires a full `ModelResponse` object as input
- Improve `ModelProviderBudget`
  - Change type of attribute `usage` to `defaultdict[str, ModelProviderUsage]` -> allow per-model usage tracking
  - Modify signature of `update_usage_and_cost(..)`; no longer requires a full `ModelResponse` object as input

Also:
- Remove unused `OpenAIChatParser` typedef in openai.py
- Remove redundant `budget` attribute definition on `OpenAISettings`
- Remove unnecessary `usage` in `OpenAIProvider` > `default_settings` > `budget`
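To make the per-model usage tracking concrete, here is a minimal sketch. The `ModelProviderUsage` fields and the `defaultdict` type follow the list above; the exact `update_usage(..)` signature is an assumption (the summary only says it no longer takes a full `ModelResponse`):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ModelProviderUsage:  # sketch; fields per the summary above
    prompt_tokens: int = 0
    completion_tokens: int = 0

    # Assumed signature: plain token counts instead of a full ModelResponse
    def update_usage(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens

# Per-model tracking: usage is keyed by model name
usage: defaultdict[str, ModelProviderUsage] = defaultdict(ModelProviderUsage)
usage["claude-3-opus-20240229"].update_usage(1200, 300)
usage["gpt-4-turbo"].update_usage(800, 150)

# total_tokens was removed because it is always derivable:
u = usage["claude-3-opus-20240229"]
print(u.prompt_tokens + u.completion_tokens)  # 1500
```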
yee haw!

Also:
- Add `ToolResultMessage` to `model_providers.schema`
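A rough sketch of the shape `ToolResultMessage` implies. The field names come from how the class is consumed in the code quoted further down this page (`tool_call_id`, `content`, `is_error`); the base class and any other details are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ToolResultMessage:  # illustrative stand-in, not the PR's definition
    tool_call_id: str     # id of the tool_use block this result answers
    content: str          # textual result of the tool execution
    is_error: bool = False
```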
✅ Deploy Preview for auto-gpt-docs canceled.
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master    #7085      +/-   ##
==========================================
+ Coverage   44.31%   44.65%    +0.33%
==========================================
  Files         130      133        +3
  Lines        6061     6306      +245
  Branches      779      822       +43
==========================================
+ Hits         2686     2816      +130
- Misses       3270     3379      +109
- Partials      105      111        +6
```

Flags with carried forward coverage won't be shown. Click here to find out more. ☔ View full report in Codecov by Sentry.
…ompletion` interface
Force-pushed from f71249f to 02986d8
…emove `ApiManager`
…ntiation

Also:
- straighten out related model definitions
- remove now-redundant `service=` arguments for `ChatModelInfo`/`EmbeddingModelInfo` usages
- use `defaultdict(ModelProviderBudget)` in agent_protocol_server.py to simplify budget tracking setup
…Provider` base class
…t_completion_call` in `processing/text.py:_process_text`
…ovider`
- Add `MultiProvider`
- Replace all references to / uses of `OpenAIProvider` with `MultiProvider`
- Change type of `Config.smart_llm` and `Config.fast_llm` from `str` to `ModelName`
Force-pushed from 02986d8 to 2aa4ca5
… to not-our-fault errors

So e.g. don't retry on 400 Bad Request errors, or anything else in the 4xx range.
10 was too much; it caused multi-minute timeouts between retries.
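For intuition on why 10 retries was too much: with tenacity's default `wait_exponential()` the wait roughly doubles each attempt (assuming multiplier 1, base 2, and no `max=` cap), so the later gaps alone last minutes:

```python
# Approximate uncapped exponential backoff: wait ~= 2**attempt seconds
waits = [2**n for n in range(1, 10)]  # the 9 gaps between 10 attempts
print(waits)                          # [2, 4, 8, 16, 32, 64, 128, 256, 512]
print(f"{sum(waits) / 60:.1f} min")   # 17.0 min of pure waiting, worst case
```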
This pull request has conflicts with the base branch; please resolve them so we can evaluate the pull request.
Conflicts have been resolved! 🎉 A maintainer will review the pull request shortly.
Force-pushed from 68b12da to 7acf1c7
Force-pushed from 7acf1c7 to a60854e
PR Description updated to latest commit (338986d)
Persistent review updated to latest commit 338986d
```python
def get_api_access_kwargs(self) -> dict[str, str]:
    return {
        k: (v.get_secret_value() if type(v) is SecretStr else v)
        for k, v in {
            "api_key": self.api_key,
            "base_url": self.api_base,
        }.items()
        if v is not None
    }
```
Suggestion: The method `get_api_access_kwargs` in the `AnthropicCredentials` class uses a dictionary comprehension that checks for `None` values after accessing the `get_secret_value` method. This could lead to a `NoneType` error if `self.api_key` or `self.api_base` is `None`. It's safer to check for `None` before attempting to call `get_secret_value`. [bug]
Suggested change:

```python
def get_api_access_kwargs(self) -> dict[str, str]:
    return {
        k: (v.get_secret_value() if v is not None and isinstance(v, SecretStr) else v)
        for k, v in {
            "api_key": self.api_key,
            "base_url": self.api_base,
        }.items()
    }
```
```python
            # Merge prefill into generated response
            if prefill_response:
                first_text_block = next(
                    b for b in _assistant_msg.content if b.type == "text"
                )
                first_text_block.text = prefill_response + first_text_block.text

            assistant_msg = AssistantChatMessage(
                content="\n\n".join(
                    b.text for b in _assistant_msg.content if b.type == "text"
                ),
                tool_calls=self._parse_assistant_tool_calls(_assistant_msg),
            )

            # If parsing the response fails, append the error to the prompt,
            # and let the LLM fix its mistake(s).
            attempts += 1
            tool_call_errors = []
            try:
                # Validate tool calls
                if assistant_msg.tool_calls and functions:
                    tool_call_errors = validate_tool_calls(
                        assistant_msg.tool_calls, functions
                    )
                    if tool_call_errors:
                        raise ValueError(
                            "Invalid tool use(s):\n"
                            + "\n".join(str(e) for e in tool_call_errors)
                        )

                parsed_result = completion_parser(assistant_msg)
                break
            except Exception as e:
                self._logger.debug(
                    f"Parsing failed on response: '''{_assistant_msg}'''"
                )
                self._logger.warning(f"Parsing attempt #{attempts} failed: {e}")
                sentry_sdk.capture_exception(
                    error=e,
                    extras={"assistant_msg": _assistant_msg, "i_attempt": attempts},
                )
                if attempts < self._configuration.fix_failed_parse_tries:
                    anthropic_messages.append(
                        _assistant_msg.dict(include={"role", "content"})
                    )
                    anthropic_messages.append(
                        {
                            "role": "user",
                            "content": [
                                *(
                                    # tool_result is required if last assistant
                                    # message had tool_use block(s)
                                    {
                                        "type": "tool_result",
                                        "tool_use_id": tc.id,
                                        "is_error": True,
                                        "content": [
                                            {
                                                "type": "text",
                                                "text": "Not executed because parsing "
                                                "of your last message failed"
                                                if not tool_call_errors
                                                else str(e)
                                                if (
                                                    e := next(
                                                        (
                                                            tce
                                                            for tce in tool_call_errors
                                                            if tce.name
                                                            == tc.function.name
                                                        ),
                                                        None,
                                                    )
                                                )
                                                else "Not executed because validation "
                                                "of tool input failed",
                                            }
                                        ],
                                    }
                                    for tc in assistant_msg.tool_calls or []
                                ),
                                {
                                    "type": "text",
                                    "text": (
                                        "ERROR PARSING YOUR RESPONSE:\n\n"
                                        f"{e.__class__.__name__}: {e}"
                                    ),
                                },
                            ],
                        }
                    )
                else:
                    raise

        if attempts > 1:
            self._logger.debug(
                f"Total cost for {attempts} attempts: ${round(total_cost, 5)}"
            )

        return ChatModelResponse(
```
Suggestion: The method `create_chat_completion` in the `AnthropicProvider` class has a `while` loop that could potentially become an infinite loop if the conditions inside the loop do not change the state to break out. It is recommended to add a maximum number of retries to avoid this. [possible issue]
Existing code:

```python
async def create_chat_completion(
    self,
    model_prompt: list[ChatMessage],
    model_name: AnthropicModelName,
    completion_parser: Callable[[AssistantChatMessage], _T] = lambda _: None,
    functions: Optional[list[CompletionModelFunction]] = None,
    max_output_tokens: Optional[int] = None,
    prefill_response: str = "",
    **kwargs,
) -> ChatModelResponse[_T]:
    """Create a completion using the Anthropic API."""
    anthropic_messages, completion_kwargs = self._get_chat_completion_args(
        prompt_messages=model_prompt,
        model=model_name,
        functions=functions,
        max_output_tokens=max_output_tokens,
        prefill_response=prefill_response,
        **kwargs,
    )
    total_cost = 0.0
    attempts = 0
    while True:
        completion_kwargs["messages"] = anthropic_messages
        (
            _assistant_msg,
            cost,
            t_input,
            t_output,
        ) = await self._create_chat_completion(completion_kwargs)
        total_cost += cost
        self._logger.debug(
            f"Completion usage: {t_input} input, {t_output} output "
            f"- ${round(cost, 5)}"
        )
        ...  # rest of the loop body as quoted in the comment context above

    return ChatModelResponse(...)
```

Suggested change:

```python
async def create_chat_completion(
    self,
    model_prompt: list[ChatMessage],
    model_name: AnthropicModelName,
    completion_parser: Callable[[AssistantChatMessage], _T] = lambda _: None,
    functions: Optional[list[CompletionModelFunction]] = None,
    max_output_tokens: Optional[int] = None,
    prefill_response: str = "",
    **kwargs,
) -> ChatModelResponse[_T]:
    # Method body
    max_retries = 10  # define a reasonable maximum number of retries
    while attempts < max_retries:
        ...  # loop body
        if attempts < self._configuration.fix_failed_parse_tries:
            continue  # retry: feed the parse error back to the LLM
        else:
            raise
```
```python
def _get_chat_completion_args(
    self,
    prompt_messages: list[ChatMessage],
    model: AnthropicModelName,
    functions: Optional[list[CompletionModelFunction]] = None,
    max_output_tokens: Optional[int] = None,
    prefill_response: str = "",
    **kwargs,
) -> tuple[list[MessageParam], MessageCreateParams]:
    """Prepare arguments for message completion API call.

    Args:
        prompt_messages: List of ChatMessages.
        model: The model to use.
        functions: Optional list of functions available to the LLM.
        kwargs: Additional keyword arguments.

    Returns:
        list[MessageParam]: Prompt messages for the Anthropic call
        dict[str, Any]: Any other kwargs for the Anthropic call
    """
    kwargs["model"] = model

    if functions:
        kwargs["tools"] = [
            {
                "name": f.name,
                "description": f.description,
                "input_schema": {
                    "type": "object",
                    "properties": {
                        name: param.to_dict()
                        for name, param in f.parameters.items()
                    },
                    "required": [
                        name
                        for name, param in f.parameters.items()
                        if param.required
                    ],
                },
            }
            for f in functions
        ]
```
Suggestion: The method `_get_chat_completion_args` in the `AnthropicProvider` class uses a complex nested list comprehension inside a dictionary, which makes the code hard to read and maintain. It is recommended to refactor this into separate, simpler statements or helper functions. [maintainability]
Suggested change:

```python
def _get_chat_completion_args(
    self,
    prompt_messages: list[ChatMessage],
    model: AnthropicModelName,
    functions: Optional[list[CompletionModelFunction]] = None,
    max_output_tokens: Optional[int] = None,
    prefill_response: str = "",
    **kwargs,
) -> tuple[list[MessageParam], MessageCreateParams]:
    # Method body
    if functions:
        kwargs["tools"] = [self._build_tool_schema(f) for f in functions]

def _build_tool_schema(self, function: CompletionModelFunction) -> dict:
    properties = {name: param.to_dict() for name, param in function.parameters.items()}
    required = [name for name, param in function.parameters.items() if param.required]
    return {
        "name": function.name,
        "description": function.description,
        "input_schema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }
```
```python
def _retry_api_request(self, func: Callable[_P, _T]) -> Callable[_P, _T]:
    return tenacity.retry(
        retry=(
            tenacity.retry_if_exception_type(APIConnectionError)
            | tenacity.retry_if_exception(
                lambda e: isinstance(e, APIStatusError) and e.status_code >= 500
            )
        ),
        wait=tenacity.wait_exponential(),
        stop=tenacity.stop_after_attempt(self._configuration.retries_per_request),
        after=tenacity.after_log(self._logger, logging.DEBUG),
    )(func)
```
Suggestion: The method `_retry_api_request` in the `AnthropicProvider` class uses a complex lambda function inside `tenacity.retry_if_exception`, which makes the code hard to read and maintain. It is recommended to refactor this into a separate, simpler function. [maintainability]
Suggested change:

```python
def _retry_api_request(self, func: Callable[_P, _T]) -> Callable[_P, _T]:
    return tenacity.retry(
        retry=(
            tenacity.retry_if_exception_type(APIConnectionError)
            | tenacity.retry_if_exception(self._is_retryable_status_error)
        ),
        wait=tenacity.wait_exponential(),
        stop=tenacity.stop_after_attempt(self._configuration.retries_per_request),
        after=tenacity.after_log(self._logger, logging.DEBUG),
    )(func)

def _is_retryable_status_error(self, e):
    return isinstance(e, APIStatusError) and e.status_code >= 500
```
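A runnable miniature of the same tenacity pattern, with a plain predicate function standing in for the provider internals; the fake `ServerError` and call counter are illustrative only:

```python
import logging
import tenacity

class ServerError(Exception):
    status_code = 503

def is_retryable(e: BaseException) -> bool:
    # Mirror of the suggested predicate: only retry server-side (5xx) failures
    return isinstance(e, ServerError) and e.status_code >= 500

calls = {"n": 0}

@tenacity.retry(
    retry=tenacity.retry_if_exception(is_retryable),
    wait=tenacity.wait_exponential(max=2),
    stop=tenacity.stop_after_attempt(4),
    after=tenacity.after_log(logging.getLogger(__name__), logging.DEBUG),
)
def flaky_request() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise ServerError("503 Service Unavailable")
    return "ok"

print(flaky_request())  # succeeds on the third attempt
```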
```python
        model: The model to use.
        functions: Optional list of functions available to the LLM.
        kwargs: Additional keyword arguments.

    Returns:
        list[MessageParam]: Prompt messages for the Anthropic call
        dict[str, Any]: Any other kwargs for the Anthropic call
    """
    kwargs["model"] = model

    if functions:
        kwargs["tools"] = [
            {
                "name": f.name,
                "description": f.description,
                "input_schema": {
                    "type": "object",
                    "properties": {
                        name: param.to_dict()
                        for name, param in f.parameters.items()
                    },
                    "required": [
                        name
                        for name, param in f.parameters.items()
                        if param.required
                    ],
                },
            }
            for f in functions
        ]

    kwargs["max_tokens"] = max_output_tokens or 4096

    if extra_headers := self._configuration.extra_request_headers:
        kwargs["extra_headers"] = kwargs.get("extra_headers", {})
        kwargs["extra_headers"].update(extra_headers.copy())

    system_messages = [
        m for m in prompt_messages if m.role == ChatMessage.Role.SYSTEM
    ]
    if (_n := len(system_messages)) > 1:
        self._logger.warning(
            f"Prompt has {_n} system messages; Anthropic supports only 1. "
            "They will be merged, and removed from the rest of the prompt."
        )
    kwargs["system"] = "\n\n".join(sm.content for sm in system_messages)

    messages: list[MessageParam] = []
    for message in prompt_messages:
        if message.role == ChatMessage.Role.SYSTEM:
            continue
        elif message.role == ChatMessage.Role.USER:
            messages.append({"role": "user", "content": message.content})
            # TODO: add support for image blocks
        elif message.role == ChatMessage.Role.ASSISTANT:
            if isinstance(message, AssistantChatMessage) and message.tool_calls:
                messages.append(
                    {
                        "role": "assistant",
                        "content": [
                            *(
                                [{"type": "text", "text": message.content}]
                                if message.content
                                else []
                            ),
                            *(
                                {
                                    "type": "tool_use",
                                    "id": tc.id,
                                    "name": tc.function.name,
                                    "input": tc.function.arguments,
                                }
                                for tc in message.tool_calls
                            ),
                        ],
                    }
                )
            elif message.content:
                messages.append(
                    {
                        "role": "assistant",
                        "content": message.content,
                    }
                )
        elif isinstance(message, ToolResultMessage):
            messages.append(
                {
                    "role": "user",
                    "content": [
                        {
                            "type": "tool_result",
                            "tool_use_id": message.tool_call_id,
                            "content": [{"type": "text", "text": message.content}],
                            "is_error": message.is_error,
                        }
                    ],
                }
            )

    if prefill_response:
```
Suggestion: The method `create_chat_completion` in the `AnthropicProvider` class uses a hardcoded string for error handling, which could lead to issues if the error message needs to be localized or changed frequently. It is recommended to use a centralized error message repository or configuration. [enhancement]
Existing code:

```python
async def create_chat_completion(...):
    ...  # full bodies of create_chat_completion and
    ...  # _get_chat_completion_args as quoted above
    return ChatModelResponse(
        response=assistant_msg,
        parsed_result=parsed_result,
        model_info=ANTHROPIC_CHAT_MODELS[model_name],
        prompt_tokens_used=t_input,
        completion_tokens_used=t_output,
    )
```

Suggested change:

```python
async def create_chat_completion(
    self,
    model_prompt: list[ChatMessage],
    model_name: AnthropicModelName,
    completion_parser: Callable[[AssistantChatMessage], _T] = lambda _: None,
    functions: Optional[list[CompletionModelFunction]] = None,
    max_output_tokens: Optional[int] = None,
    prefill_response: str = "",
    **kwargs,
) -> ChatModelResponse[_T]:
    # Method body
    if prefill_response:
        messages.append(
            {"role": "assistant", "content": self._get_prefill_message(prefill_response)}
        )

def _get_prefill_message(self, prefill_response: str) -> str:
    # This method can be extended to fetch messages from a centralized
    # repository or configuration
    return prefill_response
```
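Background on the prefill mechanics both versions rely on: Anthropic's Messages API treats a trailing assistant message as the forced start of the model's reply, and the reply text comes back without that prefix, which is why the provider re-attaches it. A minimal sketch, independent of this PR's classes:

```python
def apply_prefill(messages: list[dict], prefill: str) -> list[dict]:
    # A trailing assistant turn makes the model continue from `prefill`
    return [*messages, {"role": "assistant", "content": prefill}]

def merge_prefill(prefill: str, reply_text: str) -> str:
    # The API reply omits the prefill, so prepend it to get the full message
    return prefill + reply_text

msgs = apply_prefill([{"role": "user", "content": "Answer in JSON."}], '{"answer":')
print(merge_prefill('{"answer":', " 42}"))  # {"answer": 42}
```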
```python
def _configure_llm_provider(config: Config) -> MultiProvider:
    multi_provider = MultiProvider()
    for model in [config.smart_llm, config.fast_llm]:
        # Ensure model providers for configured LLMs are available
        multi_provider.get_model_provider(model)
    return multi_provider
```
Suggestion: The function `_configure_llm_provider` should handle the case where the configuration does not specify any models, to avoid runtime errors when accessing `config.smart_llm` or `config.fast_llm`. [enhancement]
Suggested change:

```python
def _configure_llm_provider(config: Config) -> MultiProvider:
    multi_provider = MultiProvider()
    models = [model for model in [config.smart_llm, config.fast_llm] if model]
    for model in models:
        # Ensure model providers for configured LLMs are available
        multi_provider.get_model_provider(model)
    return multi_provider
```
```python
def validate_tool_calls(
    tool_calls: list[AssistantToolCall], functions: list[CompletionModelFunction]
) -> list[InvalidFunctionCallError]:
    """
    Validates a list of tool calls against a list of functions.

    1. Tries to find a function matching each tool call
    2. If a matching function is found, validates the tool call's arguments,
       reporting any resulting errors
    3. If no matching function is found, an error "Unknown function X" is reported
    4. A list of all errors encountered during validation is returned

    Params:
        tool_calls: A list of tool calls to validate.
        functions: A list of functions to validate against.

    Returns:
        list[InvalidFunctionCallError]: All errors encountered during validation.
    """
    errors: list[InvalidFunctionCallError] = []
    for tool_call in tool_calls:
        function_call = tool_call.function

        if function := next(
            (f for f in functions if f.name == function_call.name),
            None,
        ):
            is_valid, validation_errors = function.validate_call(function_call)
            if not is_valid:
                fmt_errors = [
                    f"{'.'.join(str(p) for p in f.path)}: {f.message}"
                    if f.path
                    else f.message
                    for f in validation_errors
                ]
                errors.append(
                    InvalidFunctionCallError(
                        name=function_call.name,
                        arguments=function_call.arguments,
                        message=(
                            "The set of arguments supplied is invalid:\n"
                            + "\n".join(fmt_errors)
                        ),
                    )
                )
        else:
            errors.append(
                InvalidFunctionCallError(
                    name=function_call.name,
                    arguments=function_call.arguments,
                    message=f"Unknown function {function_call.name}",
                )
            )

    return errors
```
Suggestion: In the `validate_tool_calls` function, consider using a more efficient method for finding a matching function; the current approach, using `next` with a generator expression inside a loop, can be inefficient for large lists. [performance]
Suggested change:

```python
def validate_tool_calls(
    tool_calls: list[AssistantToolCall], functions: list[CompletionModelFunction]
) -> list[InvalidFunctionCallError]:
    errors: list[InvalidFunctionCallError] = []
    function_dict = {f.name: f for f in functions}
    for tool_call in tool_calls:
        function_call = tool_call.function
        if function := function_dict.get(function_call.name):
            is_valid, validation_errors = function.validate_call(function_call)
            if not is_valid:
                fmt_errors = [
                    f"{'.'.join(str(p) for p in f.path)}: {f.message}"
                    if f.path
                    else f.message
                    for f in validation_errors
                ]
                errors.append(
                    InvalidFunctionCallError(
                        name=function_call.name,
                        arguments=function_call.arguments,
                        message=(
                            "The set of arguments supplied is invalid:\n"
                            + "\n".join(fmt_errors)
                        ),
                    )
                )
        else:
            errors.append(
                InvalidFunctionCallError(
                    name=function_call.name,
                    arguments=function_call.arguments,
                    message=f"Unknown function {function_call.name}",
                )
            )
    return errors
```
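The core of this suggestion in isolation: building a name-indexed dict once turns each per-tool-call lookup from a linear scan into a constant-time hit. The toy `Function` type is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Function:  # hypothetical stand-in
    name: str

functions = [Function("read_file"), Function("write_file"), Function("list_dir")]

# O(len(functions)) per lookup: rescans the list for every tool call
match = next((f for f in functions if f.name == "write_file"), None)

# O(1) per lookup after building the index once
function_dict = {f.name: f for f in functions}
match = function_dict.get("write_file")
print(match)  # Function(name='write_file')
```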
```python
def _configure_llm_provider(config: Config) -> MultiProvider:
    multi_provider = MultiProvider()
    for model in [config.smart_llm, config.fast_llm]:
        # Ensure model providers for configured LLMs are available
        multi_provider.get_model_provider(model)
    return multi_provider
```
Suggestion: The method `_configure_llm_provider` should include error handling or logging to provide feedback when a model provider is not available, which would improve maintainability and debugging. [maintainability]
Suggested change:

```python
def _configure_llm_provider(config: Config) -> MultiProvider:
    multi_provider = MultiProvider()
    for model in [config.smart_llm, config.fast_llm]:
        try:
            # Ensure model providers for configured LLMs are available
            multi_provider.get_model_provider(model)
        except Exception as e:
            logging.getLogger(__name__).error(
                f"Failed to configure model provider for {model}: {str(e)}"
            )
            continue
    return multi_provider
```
```diff
         settings = self.llm_provider._settings.copy()
         settings.budget = task_llm_budget
         settings.configuration = task_llm_provider_config
         task_llm_provider = self.llm_provider.__class__(
             settings=settings,
             logger=logger.getChild(
                 f"Task-{task.task_id}_{self.llm_provider.__class__.__name__}"
             ),
         )
         self._task_budgets[task.task_id] = task_llm_provider._budget  # type: ignore

-        return task_llm_provider or self.llm_provider
+        return task_llm_provider
```
Suggestion: Refactor the method `_get_task_llm_provider` to separate concerns, improving readability and maintainability by extracting the settings configuration into a separate method. [maintainability]
Suggested change:

```python
def _configure_task_settings(self, task: Task, logger: logging.Logger) -> ModelProviderSettings:
    settings = self.llm_provider._settings.copy()
    settings.budget = task_llm_budget
    settings.configuration = task_llm_provider_config
    return settings

def _get_task_llm_provider(self, task: Task, logger: logging.Logger) -> ModelProvider:
    if task.additional_input and (user_id := task.additional_input.get("user_id")):
        _extra_request_headers["AutoGPT-UserID"] = user_id
    settings = self._configure_task_settings(task, logger)
    task_llm_provider = self.llm_provider.__class__(
        settings=settings,
        logger=logger.getChild(
            f"Task-{task.task_id}_{self.llm_provider.__class__.__name__}"
        ),
    )
    self._task_budgets[task.task_id] = task_llm_provider._budget  # type: ignore
    return task_llm_provider
```
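The budget bookkeeping behind `self._task_budgets` is described elsewhere in this PR as a `defaultdict(ModelProviderBudget)`; in miniature, with a hypothetical `Budget` class standing in:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Budget:  # hypothetical stand-in for ModelProviderBudget
    total_cost: float = 0.0

    def add(self, cost: float) -> None:
        self.total_cost += cost

# Zero-argument instantiation (added in this PR) makes defaultdict setup trivial
task_budgets: defaultdict[str, Budget] = defaultdict(Budget)
task_budgets["task-1"].add(0.012)
task_budgets["task-1"].add(0.005)
print(round(task_budgets["task-1"].total_cost, 3))  # 0.017
```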
```python
def validate_tool_calls(
    tool_calls: list[AssistantToolCall], functions: list[CompletionModelFunction]
) -> list[InvalidFunctionCallError]:
    ...  # same function as quoted in full above
```
Suggestion: Improve error handling in `validate_tool_calls` by adding more specific error messages and handling potential exceptions that may occur during the validation process. [error handling]
Suggested change:

```python
def validate_tool_calls(
    tool_calls: list[AssistantToolCall], functions: list[CompletionModelFunction]
) -> list[InvalidFunctionCallError]:
    errors: list[InvalidFunctionCallError] = []
    function_dict = {f.name: f for f in functions}
    for tool_call in tool_calls:
        function_call = tool_call.function
        try:
            if function := function_dict.get(function_call.name):
                is_valid, validation_errors = function.validate_call(function_call)
                if not is_valid:
                    fmt_errors = [
                        f"{'.'.join(str(p) for p in f.path)}: {f.message}"
                        if f.path
                        else f.message
                        for f in validation_errors
                    ]
                    errors.append(
                        InvalidFunctionCallError(
                            name=function_call.name,
                            arguments=function_call.arguments,
                            message=(
                                "The set of arguments supplied is invalid:\n"
                                + "\n".join(fmt_errors)
                            ),
                        )
                    )
            else:
                raise ValueError(f"Unknown function {function_call.name}")
        except Exception as e:
            errors.append(
                InvalidFunctionCallError(
                    name=function_call.name,
                    arguments=function_call.arguments,
                    message=f"Error processing function {function_call.name}: {str(e)}",
                )
            )
    return errors
```
feat(agent/core): Add `AnthropicProvider`

- Add `ANTHROPIC_API_KEY` to .env.template and docs

Notable differences in logic compared to `OpenAIProvider`:

- … `AnthropicProvider._get_chat_completion_args`
- … `system` parameter in `AnthropicProvider._get_chat_completion_args`

Prompt changes to improve compatibility with `AnthropicProvider`:

Anthropic has a slightly different API compared to OpenAI, and has much stricter input validation. E.g. Anthropic only supports a single `system` prompt, where OpenAI allows multiple `system` messages. Anthropic also forbids sequences of multiple `user` or `assistant` messages and requires that messages alternate between roles (a minimal sketch of such merging follows this description).

- … `OneShot` generated prompt

refactor(agent/core): Tweak `model_providers.schema`

- Simplify `ModelProviderUsage`
  - Remove attribute `total_tokens` as it is always equal to `prompt_tokens + completion_tokens`
  - Modify signature of `update_usage(..)`; no longer requires a full `ModelResponse` object as input
- Improve `ModelProviderBudget`
  - Change type of attribute `usage` to `defaultdict[str, ModelProviderUsage]` -> allow per-model usage tracking
  - Modify signature of `update_usage_and_cost(..)`; no longer requires a full `ModelResponse` object as input
- Allow `ModelProviderBudget` zero-argument instantiation
- Fix type of `AssistantChatMessage.role` to match `ChatMessage.role` (str -> `ChatMessage.Role`)
- … `ModelProvider` base class
- Add `max_output_tokens` parameter to `create_chat_completion` interface
- Add `prefill_response` field to `ChatPrompt` model
- Add `prefill_response` parameter to `create_chat_completion` interface
- Add `ChatModelProvider.get_available_models()` and remove `ApiManager`
- Remove unused `OpenAIChatParser` typedef in openai.py
- Remove redundant `budget` attribute definition on `OpenAISettings`
- Remove unnecessary `usage` in `OpenAIProvider` > `default_settings` > `budget`

feat(agent): Allow use of any available LLM provider through `MultiProvider`

- Add `MultiProvider` (`model_providers.multi`)
- Replace all references to / uses of `OpenAIProvider` with `MultiProvider`
- Change type of `Config.smart_llm` and `Config.fast_llm` from `str` to `ModelName`

feat(agent/core): Validate function call arguments in `create_chat_completion`

- Add `validate_call` method to `CompletionModelFunction` in `model_providers.schema`
- Add `validate_tool_calls` utility function in `model_providers.utils`
- … `create_chat_completion` in `OpenAIProvider` and `AnthropicProvider`

refactor(agent): Rename `get_openai_command_specs` to `function_specs_from_commands`

Known issues

- … `create_agent` when prompted with the current `AgentProfileGenerator`. It takes 2 or 3 tries to get it to do so, increasing total latency by a lot and also increasing cost.

Type

enhancement, bug_fix

Description

- … `MultiProvider`.
- … `AnthropicProvider` to handle specific configurations and interactions with the Anthropic API.
Changes walkthrough (12 files)

benchmarks.py: Simplify Agent Configuration and Update Provider Function
(autogpts/autogpt/agbenchmark_config/benchmarks.py)
- … `_configure_openai_provider` to `_configure_llm_provider`
- … `agent_prompt_config` … configurations

agent.py: Refactor Agent Module and Update Function Imports
(autogpts/autogpt/autogpt/agents/agent.py)
- … `function_specs_from_commands` … execution … `Anthropic` provider

base.py: Update Model References and Enable Functions API by Default
(autogpts/autogpt/autogpt/agents/base.py)
- … `ModelName` … `use_functions_api` by default

one_shot.py: Enhance Prompt Strategy Handling and Configuration
(autogpts/autogpt/autogpt/agents/prompt_strategies/one_shot.py)
- … configuration

agent_protocol_server.py: Refactor Task LLM Provider Setup and Budget Tracking
(autogpts/autogpt/autogpt/app/agent_protocol_server.py)

configurator.py: Update Model Checking to Use MultiProvider
(autogpts/autogpt/autogpt/app/configurator.py)
- … `MultiProvider` … `ModelName`

main.py: Update LLM Provider Configuration to Use MultiProvider
(autogpts/autogpt/autogpt/app/main.py)
- … multi-provider setup

system.py: Enhance System Command Output Formatting
(autogpts/autogpt/autogpt/commands/system.py)

config.py: Update Config Model References and Defaults
(autogpts/autogpt/autogpt/config/config.py)
- … usage

anthropic.py: Add Anthropic Provider Class
(autogpts/autogpt/autogpt/core/resource/model_providers/anthropic.py)
- Added `AnthropicProvider` with detailed setup for handling API interactions and model configurations

multi.py: Introduce MultiProvider Class for Model Management
(autogpts/autogpt/autogpt/core/resource/model_providers/multi.py)
- Introduced `MultiProvider` class to handle multiple model providers

openai.py: Cleanup and Adjust OpenAI Provider Implementation
(autogpts/autogpt/autogpt/core/resource/model_providers/openai.py)