opendevin:INFO: agent_controller.py:135 OBSERVATION Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} #1187

Open
NancyEdelMary opened this issue Apr 17, 2024 · 39 comments
Labels
bug (Something isn't working), severity:low (Minor issues, code cleanup, etc)

Comments

@NancyEdelMary

LLM_MODEL="azure/gpt35exploration"
LLM_API_KEY="0b1dxxxxxxxxxxxxxf2ae9c23"
LLM_EMBEDDING_MODEL="azureopenai"
LLM_BASE_URL="https://explorxxxxxxx.azure.com"
LLM_DEPLOYMENT_NAME="gpt35exploration"
LLM_API_VERSION="2023-03-15-preview"
WORKSPACE_DIR="/Users/nancyedelmary/Desktop/devin/OpenDevin"
SANDBOX_TYPE="exec"


Error:
21:44:56 - opendevin:INFO: agent_controller.py:86
PLAN
hi
Traceback (most recent call last):
File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/opendevin/controller/agent_controller.py", line 101, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 204, in step
self._initialize(state.plan.main_goal)
File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 191, in _initialize
self._add_event(action.to_dict())
File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/agent.py", line 122, in _add_event
self.memory.add_event(event)
File "/Users/nancyedelmary/Desktop/devin/OpenDevin-main/agenthub/monologue_agent/utils/memory.py", line 88, in add_event
self.index.insert(doc)
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/base.py", line 242, in insert
self.insert_nodes(nodes, **insert_kwargs)
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 330, in insert_nodes
self._insert(nodes, **insert_kwargs)
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 312, in _insert
self._add_nodes_to_index(self._index_struct, nodes, **insert_kwargs)
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 233, in _add_nodes_to_index
nodes_batch = self._get_node_with_embedding(nodes_batch, show_progress)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/vector_store/base.py", line 141, in _get_node_with_embedding
id_to_embed_map = embed_nodes(
^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/indices/utils.py", line 138, in embed_nodes
new_embeddings = embed_model.get_text_embedding_batch(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/instrumentation/dispatcher.py", line 274, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/core/base/embeddings/base.py", line 326, in get_text_embedding_batch
embeddings = self._get_text_embeddings(cur_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai/base.py", line 427, in _get_text_embeddings
return get_embeddings(
^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/init.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/init.py", line 379, in call
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/init.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/init.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/anaconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/tenacity/init.py", line 382, in call
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/llama_index/embeddings/openai/base.py", line 180, in get_embeddings
data = client.embeddings.create(input=list_of_text, model=engine, **kwargs).data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/resources/embeddings.py", line 113, in create
return self._post(
^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1233, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 922, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/nancyedelmary/Library/Caches/pypoetry/virtualenvs/opendevin-ZOAIK5Vs-py3.11/lib/python3.11/site-packages/openai/_base_client.py", line 1013, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:45:12 - opendevin:ERROR: agent_controller.py:108 - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:45:12 - opendevin:INFO: agent_controller.py:135
OBSERVATION
Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

NancyEdelMary added the bug (Something isn't working) label Apr 17, 2024
@enyst
Collaborator

enyst commented Apr 17, 2024

@NancyEdelMary The LLM_DEPLOYMENT_NAME value needs to be an embeddings deployment you have in your Azure account. It's not a chat model like GPT; it's the name of an embedding model deployment. Please take a look at the picture in this comment: #1033 (comment)
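
For illustration, a minimal sketch of that split, with placeholder deployment names:

LLM_MODEL="azure/<chat-deployment-name>"
LLM_DEPLOYMENT_NAME="<embedding-deployment-name>"

Here <embedding-deployment-name> would be a deployment running an embedding model such as text-embedding-ada-002, not a GPT deployment.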

@NancyEdelMary
Author

NancyEdelMary commented Apr 18, 2024

The issue is not fixed even after changing the embedding name.
Updated details:
LLM_MODEL="azure/gpt35exploration"
LLM_API_KEY="0b1dxxxxxxxxxxxxxxxxxae9c23"
LLM_EMBEDDING_MODEL="azureopenai"
LLM_BASE_URL="https://explxxxx.openai.azure.com"
LLM_DEPLOYMENT_NAME="text-exx-xxx-xxx" ===>> Modified to "text-xxx-xxx-xxx" as suggested
LLM_API_VERSION="2023-03-15-preview"
WORKSPACE_DIR="/Users/nancyedelmary/Desktop/devin/OpenDevin"
SANDBOX_TYPE="exec"

[screenshots: opendevin errors]

@enyst
Collaborator

enyst commented Apr 18, 2024

Can you please run git log -n1 in the console, and tell us what you get?

Also, how exactly are you starting opendevin? What is the command, in full?

@NancyEdelMary
Author

make run is the command

@enyst
Collaborator

enyst commented Apr 18, 2024

Just as a stopgap, so we can see if the rest is running fine, can you please change LLM_EMBEDDING_MODEL to "local"? Does opendevin work for you with this set to "local"?

@rbren
Collaborator

rbren commented Apr 18, 2024

@NancyEdelMary I don't think your LLM_MODEL is a valid one.

The valid ones are here: https://docs.litellm.ai/docs/providers/azure

I would expect to see LLM_MODEL=azure/gpt-4 or LLM_MODEL=azure/gpt-3.5-turbo-16k. Where did you see gpt35exploration?

@NancyEdelMary
Author

NancyEdelMary commented Apr 18, 2024

I tried updating it to LLM_MODEL='azure/gpt-3.5-turbo', but am now facing:
Oops. Something went wrong: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=gpt-35-turbo Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

@NancyEdelMary
Author

NancyEdelMary commented Apr 18, 2024

> Does opendevin work for you with this as "local"?

It does not work for me with "local".
I am facing the error below:

==============
STEP 16

21:21:50 - PLAN
hi

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

21:21:51 - opendevin:ERROR: agent_controller.py:175 - OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:21:51 - OBSERVATION
OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

==============
STEP 17

21:21:51 - PLAN
hi

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

21:21:52 - opendevin:ERROR: agent_controller.py:175 - OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}
21:21:52 - OBSERVATION
OpenAIException - Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

@rbren
Collaborator

rbren commented Apr 18, 2024

Where did the string gpt-35-turbo come from? Why is that in the logs?

@SmartManoj
Collaborator

> I tried updating it to LLM_MODEL='azure/gpt-3.5-turbo',

@rbren she may have mistakenly deleted the dot.

--
@NancyEdelMary
Run this to check whether the LLM is working properly.

import tomllib as toml
from datetime import datetime
from litellm import completion

file_path = r'config.toml'
with open(file_path, 'rb') as f:
    config = toml.load(f)

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]
dt = datetime.now()
response = completion(
    model=config['LLM_MODEL'],
    api_key=config['LLM_API_KEY'],
    base_url=config.get('LLM_BASE_URL'),
    messages=messages,
)

print(response.choices[0].message.content)

dt2 = datetime.now()
print('Used model:', config['LLM_MODEL'])
print(f"Time taken: {(dt2 - dt).total_seconds():.1f}s")

@NancyEdelMary
Author

> @NancyEdelMary Run this to check whether the LLM is working properly. (test script quoted from the previous comment)

@SmartManoj Works fine. Able to get a response.
[screenshot]
Config.toml:
[screenshot]

@SmartManoj
Collaborator

SmartManoj commented Apr 19, 2024

@rbren It seems azure/gpt35exploration is a valid one.
@NancyEdelMary Where did you get this model name from?

@NancyEdelMary
Author

NancyEdelMary commented Apr 19, 2024

> @NancyEdelMary Where did you get this model name from?

@SmartManoj FYR (Azure OpenAI):
[screenshot]

@SmartManoj
Collaborator

SmartManoj commented Apr 19, 2024

Set it like this:

LLM_MODEL="azure/gpt-35-turbo"
LLM_DEPLOYMENT_NAME="gpt35exploration"

@NancyEdelMary
Author

@SmartManoj It does not work.

@enyst
[screenshot]

Let me know exactly what to modify in the config.toml file. If any bug exists, let me know that as well; I'll refrain from proceeding further.

@SmartManoj
Collaborator

It seems LLM_EMBEDDING_DEPLOYMENT_NAME needs to be set.
Detailed info in #1027 (comment)

@enyst
Collaborator

enyst commented Apr 19, 2024

> Let me know exactly what to modify in the config.toml file. If any bug exists, let me know that as well; I'll refrain from proceeding further.

Good point, I'll definitely be fixing the reason for a lot of this confusion. But it shouldn't affect you, except that it confuses us. The other thing is the api version...

@NancyEdelMary Please set LLM_DEPLOYMENT_NAME='text-emb-ada-002' and LLM_MODEL='azure/gpt35exploration', then:
make build
make run

If you get an error, what exactly is the error you get in the console or in logs/opendevin.log, if possible?
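
For reference, assembling the values discussed in this thread into one sketch (placeholders where values are account-specific; nothing here is verified against a live account):

LLM_MODEL="azure/gpt35exploration"
LLM_API_KEY="<your-api-key>"
LLM_BASE_URL="https://<your-resource>.openai.azure.com"
LLM_EMBEDDING_MODEL="azureopenai"
LLM_DEPLOYMENT_NAME="text-emb-ada-002"
LLM_API_VERSION="2023-03-15-preview"
WORKSPACE_DIR="<path-to-your-workspace>"
SANDBOX_TYPE="exec"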

@enyst
Collaborator

enyst commented Apr 19, 2024

> It seems LLM_EMBEDDING_DEPLOYMENT_NAME needs to be set. Detailed info in #1027 (comment)

I think it doesn't; the code doesn't use that, it uses LLM_DEPLOYMENT_NAME. But I think we should rename it, because it only refers to embeddings. It really is the "embedding deployment name".

That comment is helpful because it shows why we need to look at deployments on Azure, and maybe the version, but it also suggests a lot of edits to the code that aren't necessary as far as I can tell, and would be a maintenance burden as people upgrade.

@SmartManoj
Collaborator

SmartManoj commented Apr 19, 2024

@enyst Is passing AZURE_API_VERSION to the LLM call not needed?

@enyst
Collaborator

enyst commented Apr 19, 2024

Oh, thanks for taking this up! You're fast ❤️

I'm just thinking that if we add another "embedding deployment" var, then the var we have becomes unused. 😅 To me that suggests we should rename the existing one, doesn't it?

> @enyst Is passing AZURE_API_VERSION to the LLM call not needed?

Thanks to your test script in the comments above, I see it wasn't necessary in the completion call, isn't that right? The llm.py completion call worked as is.
To be sure, I assumed it might be necessary, too! But so far we have suggested passing an env var instead, and litellm reads it if it exists. I don't think Nancy had it set, though... so the minimum necessary for a completion call on Azure doesn't actually include it. Am I missing something?

This is the documentation on Azure that we have, and it uses the comment you linked as a guideline for what works. You may want to see the follow-ups on that here: #1033 (comment)

The next comment suggests renaming the deployment var, and IMHO that is a really good idea. 😅

@enyst
Collaborator

enyst commented Apr 19, 2024

> Thanks to your test script in the comments above, I see it wasn't necessary in the completion call, isn't that right? [...]

Or maybe Nancy did have it in the environment, as our guide does state that. Either way, it's not a strict necessity to send it explicitly in the completion call.
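
To make the two paths concrete, here is a minimal sketch, assuming a litellm version that accepts api_version for Azure calls (the deployment and resource names are placeholders):

import os
from litellm import completion

messages = [{"content": "Hello, how are you?", "role": "user"}]

# Path 1: set the env var and let litellm pick it up.
os.environ["AZURE_API_VERSION"] = "2023-03-15-preview"
response = completion(
    model="azure/<your-chat-deployment>",            # placeholder deployment name
    api_key=os.environ["AZURE_API_KEY"],             # assumes the key is set in the env
    base_url="https://<resource>.openai.azure.com",  # placeholder resource URL
    messages=messages,
)

# Path 2: pass the version explicitly in the call.
response = completion(
    model="azure/<your-chat-deployment>",
    api_key=os.environ["AZURE_API_KEY"],
    base_url="https://<resource>.openai.azure.com",
    api_version="2023-03-15-preview",
    messages=messages,
)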

@SmartManoj
Collaborator

> Or maybe Nancy did have it in the environment, as our guide does state that

I think so.
@NancyEdelMary, could you confirm by running this?

import os
import tomllib as toml
from datetime import datetime
from litellm import completion

file_path = r'config.toml'
with open(file_path, 'rb') as f:
    config = toml.load(f)

messages = [{"content": "If there are 10 books in a room and I read 2, how many books are still in the room?", "role": "user"}]
dt = datetime.now()
response = completion(
    model=config['LLM_MODEL'],
    api_key=config['LLM_API_KEY'],
    base_url=config.get('LLM_BASE_URL'),
    messages=messages,
)

content = response.choices[0].message.content
print(content)

if '8' in content:
    print('There are still 10 books in the room; reading them does not reduce the count. Consider exploring more accurate models for better results.')

dt2 = datetime.now()
print('Used model:', config['LLM_MODEL'])
print('AZURE_API_VERSION', os.environ.get('AZURE_API_VERSION'))
print(f"Time taken: {(dt2 - dt).total_seconds():.1f}s")

@Tibiritabara

Same issue here. The env vars look like this:

LLM_MODEL=azure/gpt-4

And it is still showing resource not found. After debugging a bit further and printing the value of the model received in the LiteLLM initialization (opendevin-lXH3Xh61-py3.11/lib/python3.11/site-packages/litellm/main.py, line 981), I noticed that it is taking a hardcoded value, gpt-3.5-turbo, despite no mention of this model in my env vars.

It is not correctly reading the value of LLM_MODEL and assigning it to the right var.

@Tibiritabara

Found the error:

model = self.get_arg_or_default(args, ConfigType.LLM_MODEL)

This line takes the default value from the args. It should instead be:

model = config.get(ConfigType.LLM_MODEL)
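
To illustrate the precedence being debated, a minimal sketch with hypothetical stand-ins (get_arg_or_default here is paraphrased for illustration, not copied from the OpenDevin source):

from types import SimpleNamespace

config = {"LLM_MODEL": "azure/gpt35exploration"}  # value from config.toml / env
args = SimpleNamespace(llm_model=None)            # no CLI flag passed

def get_arg_or_default(args, key):
    """A CLI argument wins; otherwise fall back to the config value."""
    value = getattr(args, key.lower(), None)
    return value if value is not None else config.get(key)

# If the fallback were a hardcoded literal such as "gpt-3.5-turbo" instead of
# config.get(key), the configured model would be silently ignored, which is
# the behavior reported above.
print(get_arg_or_default(args, "LLM_MODEL"))  # prints azure/gpt35exploration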

@Tibiritabara

I created this PR to address this:

#1271

@SmartManoj
Collaborator

SmartManoj commented Apr 22, 2024

> It is taking the default value from the args.

The change that you made, config.get(ConfigType.LLM_MODEL), is already the default value for the args.

> It is taking a hardcoded value gpt-3.5-turbo

Where is it hardcoded?

@Tibiritabara

Tibiritabara commented Apr 22, 2024

When using Azure, and when we change the LLM_MODEL environment variable to replace the LLM version, you will notice that the highlighted piece of code uses the args default value, which currently is gpt-3.5-turbo, rather than the env var.

This is why it is throwing a resource not found exception. Not every model deployment on Azure follows a standard name, as in @NancyEdelMary's example or my own setup.

My proposed change takes the actual value from the environment variable rather than the default written in OpenDevin. I spent a couple of hours researching this issue, reproducing it, and finally implementing the fix.

@Tibiritabara

Btw, it is hardcoded here, when launching the monologue agent:

LLM_MODEL: "gpt-3.5-turbo",

@NancyEdelMary
Author

@SmartManoj
[screenshot]

@SmartManoj
Collaborator

So, without the api_version argument, it failed, right?

@Tibiritabara

I got OpenDevin to work locally with the suggested change. The API version is required in the environment, and the change did not cover this. As it is, with the line change of the PR, it works as expected.

@huqianghui

I tried many times; the command below works:

docker run \
    -e LLM_API_KEY=XXXXX \
    -e LLM_BASE_URL=https://XXX-west-us.openai.azure.com/ \
    -e LLM_MODEL=azure/gpt-4-turbo \
    -e LLM_DEPLOYMENT_NAME=text-embedding-ada-002 \
    -e LLM_API_VERSION=2024-02-15-preview \
    -e OPENAI_API_VERSION=2024-02-15-preview \
    -e LLM_EMBEDDING_MODEL=azureopenai \
    -e LLM_EMBEDDING_DEPLOYMENT_NAME=text-embedding-ada-002 \
    -e WORKSPACE_MOUNT_PATH=/Users/huqianghui/Downloads/OpenDevin-ws \
    -v /Users/huqianghui/Downloads/OpenDevin-ws:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal=host-gateway \
    ghcr.io/opendevin/opendevin:0.3.1

Note: LLM_MODEL is azure/<deployment name>, not azure/<model name>.

@ayanjiushishuai

With the help of a colleague, I have solved the same problem. Here is what I did and the config I set.
1. Make sure your config is right
According to the liteLLM usage docs, you should make sure your config is correct. (For me, I was not so sure my LLM_API_VERSION was right, and it turned out my original setting was wrong.)

import os
from litellm import completion

# set ENV variables
os.environ["AZURE_API_KEY"] = ""
os.environ["AZURE_API_BASE"] = ""
os.environ["AZURE_API_VERSION"] = ""

response = completion(
    model="azure/gpt-4",  # your model's deployment name
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
print(response)

If your config is wrong, the Error code 404 will show up again.


2. Choose your model from the web page

> Btw, it is hardcoded here, when launching the monologue agent:
> LLM_MODEL: "gpt-3.5-turbo",

This comment points out where the default comes from, which really helped me a lot.
But I still know nothing about vite and ts. Thanks to my colleague, who found out that gpt-3.5-turbo is a default parameter loaded on every page refresh.
So you need to set your model from the page, like this:
[screenshot]
And you can find your setting here:
[screenshot]

Choose your model, and it should be fine.

Hope it can help you.


@SmartManoj I have some questions:

  1. From the picture above, it seems that I can see some other user's model settings from my local OpenDevin?
  2. Some configs in make setup-config really confuse me, like this:
    [screenshot]
    When the Azure endpoint URL overwrites LLM_BASE_URL, it raises an error:
  File "/root/.virtualenvs/env_opendevin/lib/python3.11/site-packages/toml/decoder.py", line 514, in loads
    raise TomlDecodeError(str(err), original, pos)
toml.decoder.TomlDecodeError: Duplicate keys! (line 5 column 1 char 171)

[screenshot]

@SmartManoj
Collaborator

SmartManoj commented Apr 26, 2024

  1. No.
  2. Remove the first LLM_BASE_URL in your config.toml file.
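
As a quick way to reproduce that check outside OpenDevin: Python 3.11's tomllib also rejects duplicate keys, so a short sketch like this (assuming config.toml sits in the current directory) surfaces the problem before starting the app:

import tomllib

try:
    with open("config.toml", "rb") as f:
        tomllib.load(f)
    print("config.toml parses cleanly")
except tomllib.TOMLDecodeError as err:
    # A duplicate LLM_BASE_URL key shows up here as a decode error.
    print("config.toml is invalid:", err)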

@enyst
Collaborator

enyst commented Apr 26, 2024

@ayanjiushishuai You are absolutely correct about what you need to set and how. The choices in the UI now always overwrite the plain toml, if you run with a UI at all, so you need to set it there. I just tried, too, setting a value that is not necessarily in the list, since it's account-specific, and it worked. Thank you for sharing!

@ayanjiushishuai

@enyst No need for thanks!
I believe it would be much friendlier for anyone who doesn't know much about liteLLM and OpenDevin if we added more details to Readme.md/development.md.

@enyst
Collaborator

enyst commented Apr 26, 2024

@ayanjiushishuai Added this detail here: https://github.com/OpenDevin/OpenDevin/pull/1386/files

Please feel free to review these docs; PRs are most welcome for anything else we missed!

@enyst
Collaborator

enyst commented Apr 26, 2024

> I got OpenDevin to work locally with the suggested change. The API version is required in the environment, and the change did not cover this. As it is, with the line change of the PR, it works as expected.

This hits an old issue (you can find old topics here if you wish), and we have tried the approach "toml/env overrides UI, except for that case, wait, the other case too". It worked, then it broke, then it worked, then it broke again. Our features became heavier on it, too. So we changed it, or should I say, we recently changed the first things about it and were in the process of changing the rest. I'm really sorry about the time it took you, @ayanjiushishuai and @NancyEdelMary .

The way it should work now is:

  • if you run with the UI, then the UI rules. The UI settings are applied, whatever they are from (you set it now, or you set it before and it was saved in local storage, or you didn't set but there's a fallback value the UI sends).
  • if a setting is not in the UI (some aren't yet but will be, some from env will not be I believe), then config/env is applied.
  • and if you run headless, of course config/env is applied.

@NancyEdelMary please upgrade to 0.4.0, start the app, and set "azure/gpt35exploration" in the UI settings. You can set it, even though it's not in the list, just type it and save it.

rbren added the severity:low (Minor issues, code cleanup, etc) label May 2, 2024
Labels: bug (Something isn't working), severity:low (Minor issues, code cleanup, etc)
Projects: None yet
Development: No branches or pull requests
7 participants