opendevin:INFO: agent_controller.py:135 OBSERVATION Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} #1187
Comments
@NancyEdelMary The LLM_DEPLOYMENT_NAME value needs to be an embeddings deployment you have in your Azure account. Please take a look at the picture in the linked comment.
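For anyone landing here: the two settings are distinct. LLM_MODEL selects the chat deployment (with LiteLLM's `azure/` prefix), while LLM_DEPLOYMENT_NAME must name an *embeddings* deployment. A hedged `config.toml` sketch, using the deployment names mentioned later in this thread (yours will differ):

```toml
# Chat completions: "azure/" + the *deployment* name from your Azure account
LLM_MODEL = "azure/gpt35exploration"
# Embeddings: the name of an *embeddings* deployment, not the chat deployment
LLM_DEPLOYMENT_NAME = "text-emb-ada-002"
LLM_EMBEDDING_MODEL = "azureopenai"
```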
The issue is not fixed even after changing the embedding name.
Also, how exactly are you starting opendevin? What is the command, in full?
`make run` is the command.
Just as a stopgap, so we can see if the rest is running fine, can you please change LLM_EMBEDDING_MODEL to "local"? Does opendevin work for you with it set to "local"?
@NancyEdelMary I don't think your model name is valid. The valid ones are here: https://docs.litellm.ai/docs/providers/azure
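As an aside, the `azure/` prefix matters: LiteLLM documents Azure model strings as `azure/<your-deployment-name>`, where the part after the slash is your *deployment* name, not necessarily the base model name. A toy illustration of that naming convention (this is not LiteLLM's internal code):

```python
# Illustrative only: the "provider/deployment" naming convention that
# LiteLLM documents for Azure ("azure/<your-deployment-name>").
# split_model_string is a toy helper, not LiteLLM's actual parsing code.

def split_model_string(model: str) -> tuple[str, str]:
    """Split 'azure/gpt35exploration' into ('azure', 'gpt35exploration')."""
    provider, _, deployment = model.partition("/")
    return provider, deployment

print(split_model_string("azure/gpt35exploration"))  # ('azure', 'gpt35exploration')
```

So for a deployment named `gpt35exploration`, the value is `azure/gpt35exploration` even if the underlying model is gpt-3.5-turbo.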
I tried updating it to LLM_MODEL='azure/gpt-3.5-turbo', but am now facing this error:
```
==============
21:21:50 - PLAN
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
21:21:51 - opendevin:ERROR: agent_controller.py:175 - OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
==============
21:21:51 - PLAN
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
21:21:52 - opendevin:ERROR: agent_controller.py:175 - OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
```
Where did that string come from?
@rbren she maybe mistakenly deleted the dot.

```python
import tomllib as toml
from litellm import completion
from datetime import datetime

file_path = r'config.toml'
with open(file_path, 'rb') as f:
    config = toml.load(f)

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]
dt = datetime.now()
response = completion(model=config['LLM_MODEL'],
                      api_key=config['LLM_API_KEY'],
                      base_url=config.get('LLM_BASE_URL'),
                      messages=messages)
print(response.choices[0].message.content)
dt2 = datetime.now()
print('Used model:', config['LLM_MODEL'])
print(f"Time taken: {(dt2-dt).total_seconds():.1f}s")
```
@SmartManoj It works fine. I'm able to get a response.
@rbren Seems
@SmartManoj FYR (Azure OpenAI). Set like this: (screenshot)
@SmartManoj It does not work. Let me know exactly what to modify in the config.toml file. If any bug exists, let me know that as well; I'll refrain from proceeding further.
Seems,
Good point, I'll definitely be fixing the cause of a lot of this confusion. But it shouldn't affect you, except that it confuses us. The other thing is the API version... @NancyEdelMary Please set LLM_DEPLOYMENT_NAME to 'text-emb-ada-002' and LLM_MODEL='azure/gpt35exploration'. If you get an error, what exactly is the error you see in the console or in logs/opendevin.log, if possible?
I think it doesn't, the code doesn't use that, it uses LLM_DEPLOYMENT_NAME. But I think we should rename it, because it only refers to embeddings. It really is the "embedding deployment name". That comment is helpful because it shows the reason why we need to look at deployments on Azure and maybe version, but it also suggests a lot of edits to the code that aren't necessary afaict, and will be a maintenance burden as people upgrade. |
@enyst Is passing AZURE_API_VERSION to the LLM call not needed?
Oh, thanks for taking this up! You're fast ❤️ I'm just thinking that if we add another "embedding deployment" var, then the var we have becomes unused. 😅 To me that suggests we should rename the existing one, doesn't it?
Thanks to your test script in the comments above, I see it wasn't necessary in the completion call, isn't that right? The llm.py completion call worked as is. This is the documentation on Azure that we have, and it uses the comment you linked as a guideline for what works. You may want to see follow-ups on it here: #1033 (comment) The next comment suggests renaming the deployment var, and IMHO that is a really good idea. 😅
Or maybe Nancy did have it in the environment, as our guide does state that. Either way, it's not a strict necessity to send it explicitly in the completion call. |
I think so.

```python
import tomllib as toml
import os
from litellm import completion
from datetime import datetime

file_path = r'config.toml'
with open(file_path, 'rb') as f:
    config = toml.load(f)

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]
dt = datetime.now()
response = completion(model=config['LLM_MODEL'],
                      api_key=config['LLM_API_KEY'],
                      base_url=config.get('LLM_BASE_URL'),
                      messages=messages)
content = response.choices[0].message.content
print(content)
if '8' in content:
    print('There are still 10 books in the room; reading them does not reduce the count. Consider exploring more accurate models for better results.')
dt2 = datetime.now()
print('Used model:', config['LLM_MODEL'])
print('AZURE_API_VERSION', os.environ.get('AZURE_API_VERSION'))
print(f"Time taken: {(dt2-dt).total_seconds():.1f}s")
```
Same issue here. The env vars look like this:
And it is still showing "Resource not found". After debugging a bit further and printing the value of the model received in the LiteLLM initialization, I found it is not correctly reading the value of LLM_MODEL and assigning it to the right variable.
Found the error: `OpenDevin/opendevin/server/agent/agent.py`, line 138 in 0e572c3.

This line executes `model = self.get_arg_or_default(args, ConfigType.LLM_MODEL)`, which takes the default value from the args. It should instead be `model = config.get(ConfigType.LLM_MODEL)`.
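To make the described bug concrete, here is a minimal, hypothetical sketch of the precedence problem (the names `DEFAULT_ARGS` and `get_arg_or_default` are illustrative, not the actual OpenDevin code): reading the model from the request args falls back to a baked-in default, so the user's configured LLM_MODEL never reaches LiteLLM.

```python
# Hypothetical sketch of the precedence bug described above; names are
# illustrative, not OpenDevin's real code.

DEFAULT_ARGS = {"model": "gpt-3.5-turbo"}          # default baked into the args
config = {"LLM_MODEL": "azure/gpt35exploration"}   # what the user configured

def get_arg_or_default(args: dict, key: str) -> str:
    # Buggy lookup: falls back to the baked-in default, ignoring config
    return args.get(key, DEFAULT_ARGS[key])

model = get_arg_or_default({}, "model")
print(model)  # 'gpt-3.5-turbo' -- the user's Azure deployment is ignored

# Proposed fix: read from the user's config instead
model = config.get("LLM_MODEL", DEFAULT_ARGS["model"])
print(model)  # 'azure/gpt35exploration'
```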
I created this PR to address this: |
The changes that you made
Where is it hardcoded? |
When using Azure, and when we change the LLM_MODEL environment variable to replace the LLM version, you will notice that the highlighted piece of code uses the args default value. This is why it is throwing a 404 "Resource not found" error. My proposed change will take the actual value from the environment variable, rather than the default written in OpenDevin. I spent a couple of hours researching this issue, reproducing it, and finally implementing the fix.
Btw, it is hardcoded here, when launching the monologue agent.
So, without the
I got OpenDevin to work locally with the suggested change. The API version is required in the environment, and the change did not cover this. As it is, with the line change of the PR, it works as expected.
I tried many times; the setting below works. Use LLM_MODEL = "azure/<deployment name>", not "azure/<model name>".
With the help of a colleague, I have solved the same problem. Here is what I did and the config I set.

1. Test your connection with this script:

```python
from litellm import completion
import os

# set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

response = completion(
    model="azure/gpt-4",  # your model's deployment name
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
print(response)
```

If your config is wrong, the 404 error code will show up again.

2. Choose your model from the webpage. This comment points out where the default comes from, which really helped me a lot. Choose your model, and it should be fine.

Hope this helps. @SmartManoj I have some questions:
@ayanjiushishuai You are absolutely correct about what you need to set and how. The choices in the UI now always overwrite the plain toml, if you run with a UI at all, so you need to set it there. I just tried, too, setting a value that is not necessarily in the list, since it's account-specific, and it worked. Thank you for sharing!
@enyst No thanks~ |
@ayanjiushishuai Added this detail here: https://github.com/OpenDevin/OpenDevin/pull/1386/files Please feel free to review these docs, PRs most welcome with any other misses! |
This hits an old issue (you can find old topics here if you wish), and we have tried the approach "toml/env overrides UI, except for that case, wait, the other case too". It worked, then it broke, then it worked, then it broke again. Our features became heavier on it, too. So we changed it, or should I say, we recently changed the first things about it and were in the process of changing the rest. I'm really sorry about the time it took you, @ayanjiushishuai and @NancyEdelMary . The way it should work now is:
@NancyEdelMary please upgrade to 0.4.0, start the app, and set "azure/gpt35exploration" in the UI settings. You can set it, even though it's not in the list, just type it and save it. |
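The precedence described above ("choices in the UI overwrite the plain toml") can be sketched like this; `effective_settings` is a hypothetical helper for illustration, not the real OpenDevin code:

```python
# Hypothetical sketch of the settings precedence described above:
# values chosen in the UI overwrite values loaded from config.toml.

def effective_settings(toml_settings: dict, ui_settings: dict) -> dict:
    merged = dict(toml_settings)
    # UI values win; unset (None) UI fields fall back to the toml value
    merged.update({k: v for k, v in ui_settings.items() if v is not None})
    return merged

toml_settings = {"LLM_MODEL": "gpt-3.5-turbo", "LLM_API_VERSION": "2023-03-15-preview"}
ui_settings = {"LLM_MODEL": "azure/gpt35exploration"}

print(effective_settings(toml_settings, ui_settings)["LLM_MODEL"])
# azure/gpt35exploration
```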
```toml
LLM_MODEL="azure/gpt35exploration"
LLM_API_KEY="0b1dxxxxxxxxxxxxxf2ae9c23"
LLM_EMBEDDING_MODEL="azureopenai"
LLM_BASE_URL="https://explorxxxxxxx.azure.com"
LLM_DEPLOYMENT_NAME="gpt35exploration"
LLM_API_VERSION="2023-03-15-preview"
WORKSPACE_DIR="/Users/nancyedelmary/Desktop/devin/OpenDevin"
SANDBOX_TYPE="exec"
```
Error:

```
21:44:56 - opendevin:INFO: agent_controller.py:86
PLAN
hi
Traceback (most recent call last):
  File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/opendevin/controller/agent_controller.py", line 101, in step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 204, in step
    self._initialize(state.plan.main_goal)
  File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 191, in _initialize
    self._add_event(action.to_dict())
  File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 122, in _add_event
    self.memory.add_event(event)
  File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/utils/memory.py", line 88, in add_event
    self.index.insert(doc)
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 242, in insert
    self.insert_nodes(nodes, **insert_kwargs)
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 330, in insert_nodes
    self._insert(nodes, **insert_kwargs)
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 312, in _insert
    self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 233, in _add_nodes_to_index
    nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 141, in _get_node_with_embedding
    id_to_embed_map = embed_nodes(
                      ^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/utils.py", line 138, in embed_nodes
    new_embeddings = embed_model.get_text_embedding_batch(
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/base/embeddings/base.py", line 326, in get_text_embedding_batch
    embeddings = self._get_text_embeddings(cur_batch)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai/base.py", line 427, in _get_text_embeddings
    return get_embeddings(
           ^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 325, in iter
    raise retry_exc.reraise()
          ^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 158, in reraise
    raise self.last_attempt.result()
          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai/base.py", line 180, in get_embeddings
    data = client.embeddings.create(input=list_of_text, model=engine, **kwargs).data
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/resources/embeddings.py", line 113, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1233, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 922, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:45:12 - opendevin:ERROR: agent_controller.py:108 - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:45:12 - opendevin:INFO: agent_controller.py:135
OBSERVATION
Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
```