help: Client error '404 Resource Not Found' for url 'https://openai-xxx.openai.azure.com/chat/completions' #1373
Unanswered · huanghe1986 asked this question in Q&A · 0 replies
I've confirmed that my LLM configuration parameters are correct: I can call the endpoint successfully in Postman. In OpenDevin, however, those parameters are not included in the URL path, which results in a 404.
Could some kindhearted person help me figure out what the problem is?
config.toml of my instance:

```toml
LLM_BASE_URL="https://openai-xxx.openai.azure.com/"
LLM_API_KEY="xxxxxxxxx"
LLM_MODEL="azure/gpt35"
```
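To illustrate what I mean by "parameters not included in the URL path": Azure OpenAI expects the deployment name in the path and an api-version query parameter, neither of which appears in the failing URL. A minimal sketch of the difference (the api-version value here is an assumption; substitute whichever version your resource supports):

```python
# Sketch of the URL shape Azure OpenAI expects for chat completions.
# "gpt35" is the deployment name from LLM_MODEL; the api-version value is
# an assumption, not taken from my setup.
base = "https://openai-xxx.openai.azure.com"
deployment = "gpt35"
api_version = "2024-02-15-preview"

expected_url = (
    f"{base}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
print(expected_url)

# What the log below shows OpenDevin actually requesting -- no deployment
# segment and no api-version, which Azure answers with 404:
actual_url = f"{base}/chat/completions"
print(actual_url)
```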
log:
```
INFO: 127.0.0.1:33112 - "DELETE /api/messages HTTP/1.1" 200 OK
==============
STEP 0
14:39:28 - PLAN
xxxxxx ?
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/home/llm01/.cache/pypoetry/virtualenvs/opendevin-Krb0104y-py3.11/lib/python3.11/site-packages/certifi/cacert.pem'
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/home/llm01/.cache/pypoetry/virtualenvs/opendevin-Krb0104y-py3.11/lib/python3.11/site-packages/certifi/cacert.pem'
DEBUG:httpx:load_ssl_context verify=True cert=None trust_env=True http2=False
DEBUG:httpx:load_verify_locations cafile='/home/llm01/.cache/pypoetry/virtualenvs/opendevin-Krb0104y-py3.11/lib/python3.11/site-packages/certifi/cacert.pem'
DEBUG:openai._base_client:Request options: {'method': 'post', 'url': '/chat/completions', 'timeout': 600.0, 'files': None, 'json_data': {'messages': [{'content': '\nYou're a thoughtful robot. Your main task is this:\nxxxxxx ?\n\nDon't expand the scope of your task--just complete it as written.\n\nThis is your internal monologue, in JSON format:\n\n[]\n\n\nYour most recent thought is at the bottom of that monologue. Continue your train of thought.\nWhat is your next thought or action? Your response must be in JSON format.\nIt must be an object, and it must contain two fields:\n* `action`, which is one of the actions below\n* `args`, which is a map of key-value pairs, specifying the arguments for that action\n\nHere are the possible actions:\n* `read` - reads the content of a file. Arguments:\n  * `path` - the path of the file to read\n* `write` - writes the content to a file. Arguments:\n  * `path` - the path of the file to write\n  * `content` - the content to write to the file\n* `run` - runs a command. Arguments:\n  * `command` - the command to run\n  * `background` - if true, run the command in the background, so that other commands can be run concurrently. Useful for e.g. starting a server. You won't be able to see the logs. You don't need to end the command with `&`, just set this to true.\n* `kill` - kills a background command\n  * `id` - the ID of the background command to kill\n* `browse` - opens a web page. Arguments:\n  * `url` - the URL to open\n* `recall` - recalls a past memory. Arguments:\n  * `query` - the query to search for\n* `think` - make a plan, set a goal, or record your thoughts. Arguments:\n  * `thought` - the thought to record\n* `finish` - if you're absolutely certain that you've completed your task and have tested your work, use the finish action to stop working.\n\n\n\nYou MUST take time to think in between read, write, run, browse, and recall actions.\nYou should never act twice in a row without thinking. But if your last several\nactions are all "think" actions, you should consider taking a different action.\n\nNotes:\n* your environment is Debian Linux. You can install software with `apt`\n* your working directory will not change, even if you run `cd`. All commands will be run in the `/workspace` directory.\n* don't run interactive commands, or commands that don't return (e.g. `node server.js`). You may run commands in the background (e.g. `node server.js &`)\n\nWhat is your next thought or action? Again, you must reply with JSON, and only with JSON.\n\n\n', 'role': 'user'}], 'model': 'gpt-3.5-turbo'}, 'extra_json': {}}
DEBUG:httpcore.connection:connect_tcp.started host='xxxxxx.com' port=8080 local_address=None timeout=600.0 socket_options=None
DEBUG:httpcore.connection:connect_tcp.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f9a19d31110>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'CONNECT']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 200, b'Connection established', [(b'Proxy-agent', b'netentsec')])
DEBUG:httpcore.proxy:start_tls.started ssl_context=<ssl.SSLContext object at 0x7f9b2c6ac830> server_hostname='openai-xxxxxx.openai.azure.com' timeout=600.0
DEBUG:httpcore.proxy:start_tls.complete return_value=<httpcore._backends.sync.SyncStream object at 0x7f9a1c88fc10>
DEBUG:httpcore.http11:send_request_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_headers.complete
DEBUG:httpcore.http11:send_request_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:send_request_body.complete
DEBUG:httpcore.http11:receive_response_headers.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_headers.complete return_value=(b'HTTP/1.1', 404, b'Resource Not Found', [(b'Content-Length', b'56'), (b'Content-Type', b'application/json'), (b'apim-request-id', b'841136c1-1346-4c8d'), (b'Strict-Transport-Security', b'max-age=31536000; includeSubDomains; preload'), (b'x-content-type-options', b'nosniff'), (b'Date', b'Thu, 25 Apr 2024 06:39:24 GMT')])
INFO:httpx:HTTP Request: POST https://openai-xxxxxx.azure.com/chat/completions "HTTP/1.1 404 Resource Not Found"
DEBUG:httpcore.http11:receive_response_body.started request=<Request [b'POST']>
DEBUG:httpcore.http11:receive_response_body.complete
DEBUG:httpcore.http11:response_closed.started
DEBUG:httpcore.http11:response_closed.complete
DEBUG:openai._base_client:HTTP Request: POST https://openai-xxxxxx.openai.azure.com/chat/completions "404 Resource Not Found"
DEBUG:openai._base_client:Encountered httpx.HTTPStatusError
Traceback (most recent call last):
  File "/home/llm01/.cache/pypoetry/virtualenvs/opendevin-Krb0104y-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 991, in _request
    response.raise_for_status()
  File "/home/llm01/.cache/pypoetry/virtualenvs/opendevin-Krb0104y-py3.11/lib/python3.11/site-packages/httpx/_models.py", line 761, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '404 Resource Not Found' for url 'https://openai-xxxxxx.openai.azure.com/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
DEBUG:openai._base_client:Not retrying
DEBUG:openai._base_client:Re-raising status error
```
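One thing I noticed in the log: the request body carries 'model': 'gpt-3.5-turbo' and a bare /chat/completions path, which looks like the plain OpenAI code path rather than litellm's Azure route. Per the litellm docs, the Azure route also needs an api-version. A sketch of the config I believe should work — whether OpenDevin reads an LLM_API_VERSION key is an assumption on my part, so please check the docs:

```toml
LLM_BASE_URL="https://openai-xxx.openai.azure.com/"
LLM_API_KEY="xxxxxxxxx"
# "azure/<deployment>" tells litellm to use the Azure request format
LLM_MODEL="azure/gpt35"
# Assumption: key name and value -- Azure rejects requests without an api-version
LLM_API_VERSION="2024-02-15-preview"
```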