
Unable to run sample notebook using Azure OpenAI. #281

Closed
satfl opened this issue Oct 18, 2023 · 2 comments

Comments


satfl commented Oct 18, 2023

Hello. I am trying to run agentchat_auto_feedback_from_code_execution.ipynb. I have a gpt-4 model deployed on Azure under a custom name. Locally, I have created an OAI_CONFIG_LIST file where I specify the key, the model name, and the API base.

[
    {
        "model": "custom_model_name",
        "api_key": "",
        "api_base": "https://custom-azure-open-ai-name.openai.azure.com",
        "api_version": "2023-07-01-preview"
    }
]

The file seems to be read properly, and I have adjusted the filter_dict accordingly. However, when I run the first cell of the example task from the notebook, I get InvalidRequestError: Resource not found.
I can also confirm that the same model with the same key is perfectly responsive through the regular openai Python library.
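(For reference, `autogen.config_list_from_json` with a `filter_dict` behaves roughly like the minimal stand-in below. This is a sketch, not AutoGen's actual implementation; the `load_config_list` helper is illustrative only.)

```python
import json

def load_config_list(config_text, filter_dict=None):
    """Minimal stand-in for autogen.config_list_from_json + filter_dict:
    parse the JSON list and keep only entries whose fields match the filter,
    where each filter value is a list of acceptable values."""
    configs = json.loads(config_text)
    if filter_dict:
        configs = [c for c in configs
                   if all(c.get(k) in v for k, v in filter_dict.items())]
    return configs

oai_config_list = """
[
    {
        "model": "custom_model_name",
        "api_key": "",
        "api_type": "azure",
        "api_base": "https://custom-azure-open-ai-name.openai.azure.com",
        "api_version": "2023-07-01-preview"
    }
]
"""

# Keep only the entry for the custom Azure deployment.
configs = load_config_list(oai_config_list,
                           filter_dict={"model": ["custom_model_name"]})
```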

Here is the full error stack trace:

---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
\AutoGen\autogen-main\notebook\agentchat_auto_feedback_from_code_execution.ipynb Cell 11 line 22
     11 user_proxy = autogen.UserProxyAgent(
     12     name="user_proxy",
     13     human_input_mode="NEVER",
   (...)
     19     },
     20 )
     21 # the assistant receives a message from the user_proxy, which contains the task description
---> 22 user_proxy.initiate_chat(
     23     assistant,
     24     message="""What date is today? Compare the year-to-date gain for META and TESLA.""",
     25 )

File \AutoGen\.venv\Lib\site-packages\autogen\agentchat\conversable_agent.py:531, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
    517 """Initiate a chat with the recipient agent.
    518 
    519 Reset the consecutive auto reply counter.
   (...)
    528         "message" needs to be provided if the `generate_init_message` method is not overridden.
    529 """
    530 self._prepare_chat(recipient, clear_history)
--> 531 self.send(self.generate_init_message(**context), recipient, silent=silent)

File \AutoGen\.venv\Lib\site-packages\autogen\agentchat\conversable_agent.py:334, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    332 valid = self._append_oai_message(message, "assistant", recipient)
    333 if valid:
--> 334     recipient.receive(message, self, request_reply, silent)
    335 else:
    336     raise ValueError(
    337         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    338     )

File \AutoGen\.venv\Lib\site-packages\autogen\agentchat\conversable_agent.py:462, in ConversableAgent.receive(self, message, sender, request_reply, silent)
    460 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    461     return
--> 462 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    463 if reply is not None:
    464     self.send(reply, sender, silent=silent)

File \AutoGen\.venv\Lib\site-packages\autogen\agentchat\conversable_agent.py:779, in ConversableAgent.generate_reply(self, messages, sender, exclude)
    777     continue
    778 if self._match_trigger(reply_func_tuple["trigger"], sender):
--> 779     final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
    780     if final:
    781         return reply

File \AutoGen\.venv\Lib\site-packages\autogen\agentchat\conversable_agent.py:606, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
    603     messages = self._oai_messages[sender]
    605 # TODO: #1143 handle token limit exceeded error
--> 606 response = oai.ChatCompletion.create(
    607     context=messages[-1].pop("context", None), messages=self._oai_system_message + messages, **llm_config
    608 )
    609 return True, oai.ChatCompletion.extract_text_or_function_call(response)[0]

File \AutoGen\.venv\Lib\site-packages\autogen\oai\completion.py:789, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config)
    787     base_config["max_retry_period"] = 0
    788 try:
--> 789     response = cls.create(
    790         context,
    791         use_cache,
    792         raise_on_ratelimit_or_timeout=i < last or raise_on_ratelimit_or_timeout,
    793         **base_config,
    794     )
    795     if response == -1:
    796         return response

File \AutoGen\.venv\Lib\site-packages\autogen\oai\completion.py:820, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config)
    818 with diskcache.Cache(cls.cache_path) as cls._cache:
    819     cls.set_cache(seed)
--> 820     return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)

File \AutoGen\.venv\Lib\site-packages\autogen\oai\completion.py:210, in Completion._get_response(cls, config, raise_on_ratelimit_or_timeout, use_cache)
    208         response = openai_completion.create(**config)
    209     else:
--> 210         response = openai_completion.create(request_timeout=request_timeout, **config)
    211 except (
    212     ServiceUnavailableError,
    213     APIConnectionError,
    214 ):
    215     # transient error
    216     logger.info(f"retrying in {retry_wait_time} seconds...", exc_info=1)

File \AutoGen\.venv\Lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
     23 while True:
     24     try:
---> 25         return super().create(*args, **kwargs)
     26     except TryAgain as e:
     27         if timeout is not None and time.time() > start + timeout:

File \AutoGen\.venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:155, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    129 @classmethod
    130 def create(
    131     cls,
   (...)
    138     **params,
    139 ):
    140     (
    141         deployment_id,
    142         engine,
   (...)
    152         api_key, api_base, api_type, api_version, organization, **params
    153     )
--> 155     response, _, api_key = requestor.request(
    156         "post",
    157         url,
    158         params=params,
    159         headers=headers,
    160         stream=stream,
    161         request_id=request_id,
    162         request_timeout=request_timeout,
    163     )
    165     if stream:
    166         # must be an iterator
    167         assert not isinstance(response, OpenAIResponse)

File \AutoGen\.venv\Lib\site-packages\openai\api_requestor.py:299, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
    278 def request(
    279     self,
    280     method,
   (...)
    287     request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
    289     result = self.request_raw(
    290         method.lower(),
    291         url,
   (...)
    297         request_timeout=request_timeout,
    298     )
--> 299     resp, got_stream = self._interpret_response(result, stream)
    300     return resp, got_stream, self.api_key

File \AutoGen\.venv\Lib\site-packages\openai\api_requestor.py:710, in APIRequestor._interpret_response(self, result, stream)
    702     return (
    703         self._interpret_response_line(
    704             line, result.status_code, result.headers, stream=True
    705         )
    706         for line in parse_stream(result.iter_lines())
    707     ), True
    708 else:
    709     return (
--> 710         self._interpret_response_line(
    711             result.content.decode("utf-8"),
    712             result.status_code,
    713             result.headers,
    714             stream=False,
    715         ),
    716         False,
    717     )

File \AutoGen\.venv\Lib\site-packages\openai\api_requestor.py:775, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    773 stream_error = stream and "error" in resp.data
    774 if stream_error or not 200 <= rcode < 300:
--> 775     raise self.handle_error_response(
    776         rbody, rcode, resp.data, rheaders, stream_error=stream_error
    777     )
    778 return resp

InvalidRequestError: Resource not found
gagb (Collaborator) commented Oct 18, 2023

api_type is missing?

satfl (Author) commented Oct 19, 2023

api_type is missing?

Thank you, it was indeed the api_type that was missing.
The correct config list looks as follows:

[
    {
        "model": "custom_model_name",
        "api_key": "",
        "api_type": "azure",
        "api_base": "https://custom-azure-open-ai-name.openai.azure.com",
        "api_version": "2023-07-01-preview"
    }
]
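The api_type matters because the openai 0.x SDK chooses the request path from it: with the default "open_ai" type it posts to {api_base}/v1/chat/completions, a route that does not exist on an Azure resource, hence the 404 "Resource not found". A rough sketch of the two URL shapes (the chat_completions_url helper is illustrative, not the SDK's actual internals):

```python
def chat_completions_url(api_base, api_type="open_ai", deployment="", api_version=""):
    """Illustrative sketch of the URL shapes used by the openai 0.x SDK."""
    if api_type == "azure":
        # Azure routes the call through a named deployment plus an api-version.
        return (f"{api_base}/openai/deployments/{deployment}"
                f"/chat/completions?api-version={api_version}")
    # Default OpenAI route; an Azure endpoint has no such resource -> 404.
    return f"{api_base}/v1/chat/completions"

base = "https://custom-azure-open-ai-name.openai.azure.com"
wrong = chat_completions_url(base)  # what a config without api_type produces
right = chat_completions_url(base, "azure", "custom_model_name", "2023-07-01-preview")
```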

@satfl satfl closed this as completed Oct 19, 2023