[Bug]: litellm.exceptions.APIConnectionError #1380
Comments
Using curl http://192.168.0.93:1234/v1 connects normally.
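For reference, the same reachability check can be done from Python against LM Studio's OpenAI-compatible endpoint. This is a minimal sketch; the host, port, and the /v1/models path are taken from the setup described in this thread, not from OpenDevin itself:

```python
# Quick reachability check for an OpenAI-compatible server such as LM Studio.
# Assumes the server from this thread is listening at http://192.168.0.93:1234/v1.
import json
import urllib.request

BASE_URL = "http://192.168.0.93:1234/v1"

with urllib.request.urlopen(f"{BASE_URL}/models", timeout=5) as resp:
    data = json.load(resp)

# OpenAI-compatible servers list the available models under the "data" key.
print([m["id"] for m in data.get("data", [])])
```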
set
log-----------------------------------
==============
04:10:41 - PLAN
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
You can edit the model field. Add
Thanks for your reply. I tried removing the variable setting and instead passing it with docker's -e option at startup, but the result was the same as before, with the same error message.
export LLM_API_KEY="lm-studio"
docker run
Try running without Docker. After step 3, run make build
Done. They still seem to be the same error.
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ poetry run python opendevin/main.py -d ./workspace -t "write bash script to print 5"
==============
17:21:50 - PLAN
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
17:21:50 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
17:21:53 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
17:21:54 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
17:21:56 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Provider List: https://docs.litellm.ai/docs/providers
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
17:22:01 - opendevin:ERROR: llm.py:64 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
ERROR:root:<class 'litellm.exceptions.APIConnectionError'>: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
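For context, the "LLM Provider NOT provided" message comes from litellm: it decides which backend to call from a provider prefix on the model name, so a bare name like MaziyarPanahi/WizardLM-2-7B-GGUF cannot be routed anywhere. Below is a minimal sketch of a direct litellm call against an OpenAI-compatible server; the openai/ prefix, base URL, and placeholder API key follow the LM Studio setup discussed in this thread and are illustrative rather than OpenDevin defaults:

```python
# Hypothetical direct call to an LM Studio server through litellm.
# The "openai/" prefix tells litellm to use its OpenAI-compatible client;
# without it, litellm cannot infer a provider and raises the error shown above.
import litellm

response = litellm.completion(
    model="openai/MaziyarPanahi/WizardLM-2-7B-GGUF",  # provider prefix + served model name
    api_base="http://192.168.0.93:1234/v1",           # LM Studio endpoint from this thread
    api_key="lm-studio",                              # LM Studio ignores the key; any placeholder works
    messages=[{"role": "user", "content": "write bash script to print 5"}],
)
print(response.choices[0].message.content)
```

In OpenDevin the same effect is achieved by supplying the model name with the openai/ prefix (via LLM_MODEL or the web UI) together with LLM_BASE_URL, which is what the later comments in this thread converge on.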
In
Wow, thank you for your reply. It's working, but only up to step 4.
error log-------------------------------------
==============
19:50:02 - PLAN
==============
19:50:11 - PLAN
==============
19:50:18 - PLAN
==============
19:50:23 - PLAN
==============
19:50:30 - PLAN
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
ERROR:root:<class 'TypeError'>: unhashable type: 'dict'
Following the setup idea above, I started it using Docker. Unfortunately, it still gives a 401 error. As soon as I enter a task description in the web UI, it happens no matter how I set up the model.
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ docker run -e LLM_API_KEY -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -e LLM_MODEL="openai/lm-studio" -e SANDBOX_TYPE=exec -v $WORKSPACE_BASE:/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal=host-gateway ghcr.io/opendevin/opendevin:0.4.0
==============
12:08:55 - PLAN
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Does the model need to be set in the web UI?
19:49:59 - opendevin:INFO: llm.py:52 - Initializing LLM with model: openai/MaziyarPanahi/WizardLM-2-7B-GGUF
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
You need to pass
@zhonggegege your last attempt worked, as far as the APIConnectionError is concerned. It connected successfully and started the task, executing several steps. So please note that this is the way to make it work. (Yes, the web UI needs the model.) It encountered a different error later, one about JSON; that's not the same thing. The LLM quality matters, and unfortunately the LLM you're using didn't seem to obey instructions and probably sent something it shouldn't have. I think there's also a bug on the OpenDevin side in the behavior you're seeing now; we will fix that. Please note, though, that some tasks might not complete as you wish with various LLMs anyway. Try again, or try other LLMs too; you can set them up in a similar way.
Thanks for your reply, I understand. However, in the successful attempt above I did not set the model in the web UI. In many previous attempts the customized model address I filled in was sent to the terminal and enabled successfully, but the model setting was never displayed properly in the web UI model field. Right now I am eager to connect to the LLM server; I will keep trying other models and feed back any useful information. Thank you, lovely people. ^^
Ah, I know what you mean, and you are absolutely right; I just noticed that too. But when I tried, it worked with the model I saved, even if it doesn't show it later: it saved the model, it just didn't display it. I'm sure we will fix that; it is unexpected. Can you please tell me, was the successful attempt this one?
This is wrong; as @SmartManoj directed me to try, it works properly when set in the parameters of "docker run":
Thanks for the feedback! Were you running
He did both.
Yes, I am using the web UI now.
You can set
Here, @rbren still added the label to a solved issue?
Ah, if this is solved I'll close :)
Is there an existing issue for the same bug?
Describe the bug
(opendev) agent@DESKTOP-OJHF2BM:~/OpenDevin$ docker run -e LLM_API_KEY -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE -e LLM_BASE_URL="http://192.168.0.93:1234/v1" -e SANDBOX_TYPE=exec -v $WORKSPACE_BASE:/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal=host-gateway ghcr.io/opendevin/opendevin:0.4.0
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 172.17.0.1:39496 - "GET /index.html HTTP/1.1" 304 Not Modified
INFO: ('172.17.0.1', 39502) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiI2YzNhZmY0OC1mZDIwLTRmNjAtYmZhOS0yYmY3OTk3NDJlNDQifQ.dr-5Izu4B2Ziz0plH-KU7DCSNHL2sue7FU-x77iOEJk" [accepted]
INFO: connection open
Starting loop_recv for sid: 6c3aff48-fd20-4f60-bfa9-2bf799742e44
INFO: 172.17.0.1:39496 - "GET /locales/zh/translation.json HTTP/1.1" 404 Not Found
INFO: 172.17.0.1:39496 - "GET /api/refresh-files HTTP/1.1" 200 OK
07:55:26 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
07:55:26 - opendevin:INFO: llm.py:51 - Initializing LLM with model: shenzhi-wang/Llama3-8B-Chinese-Chat-GGUF-fp16/Llama3-8B-Chinese-Chat-f16.gguf
07:55:27 - opendevin:INFO: exec_box.py:221 - Container stopped
07:55:27 - opendevin:INFO: exec_box.py:239 - Container started
INFO: 172.17.0.1:39496 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO: 172.17.0.1:39500 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 172.17.0.1:39496 - "GET /api/agents HTTP/1.1" 200 OK
07:55:32 - opendevin:INFO: agent.py:144 - Creating agent MonologueAgent using LLM MaziyarPanahi/WizardLM-2-7B-GGUF
07:55:32 - opendevin:INFO: llm.py:51 - Initializing LLM with model: MaziyarPanahi/WizardLM-2-7B-GGUF
07:55:43 - opendevin:INFO: exec_box.py:221 - Container stopped
07:55:43 - opendevin:INFO: exec_box.py:239 - Container started
==============
STEP 0
07:55:53 - PLAN
Use python to write a snake game
07:55:54 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #1 | You can customize these settings in the configuration.
07:55:55 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #2 | You can customize these settings in the configuration.
07:55:56 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #3 | You can customize these settings in the configuration.
07:55:58 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #4 | You can customize these settings in the configuration.
07:56:05 - opendevin:ERROR: llm.py:63 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers. Attempt #5 | You can customize these settings in the configuration.
07:56:05 - opendevin:ERROR: agent_controller.py:102 - Error in loop
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 662, in completion
model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5944, in get_llm_provider
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 5931, in get_llm_provider
raise litellm.exceptions.BadRequestError( # type: ignore
litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/opendevin/controller/agent_controller.py", line 98, in _run
finished = await self.step(i)
^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 211, in step
action = self.agent.step(self.state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/agenthub/monologue_agent/agent.py", line 218, in step
resp = self.llm.completion(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 325, in iter
raise retry_exc.reraise()
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 158, in reraise
raise self.last_attempt.result()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/llm/llm.py", line 78, in wrapper
resp = completion_unwrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2977, in wrapper
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2875, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2137, in completion
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8665, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8633, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=MaziyarPanahi/WizardLM-2-7B-GGUF
Pass model as E.g. For 'Huggingface' inference endpoints pass in
completion(model='huggingface/starcoder',..)
Learn more: https://docs.litellm.ai/docs/providers
Current Version
Installation and Configuration
Model and Agent
lm-studio:MaziyarPanahi/WizardLM-2-7B-GGUF
Reproduction Steps
export LLM_API_KEY="lm-studio"
export WORKSPACE_BASE=/home/agent/OpenDevin/workspace
docker run \
-e LLM_API_KEY \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-e LLM_BASE_URL="http://192.168.0.93:1234/v1" \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal=host-gateway \
ghcr.io/opendevin/opendevin:0.4.0
In the web UI:
1. Set the model to lm-studio:MaziyarPanahi/WizardLM-2-7B-GGUF (or MaziyarPanahi/WizardLM-2-7B-GGUF/WizardLM-2-7B.Q6_K.gguf); see the routing sketch after these steps.
2. Enter the task "Use python to write a snake game".
Logs, Errors, Screenshots, and Additional Context
After using 0.4.0, "Error creating controller. Please check Docker is running using docker ps" appears, and reinstallation has no effect. Following https://github.com/OpenDevin/OpenDevin/issues/1156#issuecomment-2064549427, I used the "-e SANDBOX_TYPE=exec" method.
But the problem still exists after starting and running.
It is worth noting that 0.3.1 started normally in the same way and did not have this problem.
Windows 10+WSL+Ubuntu-20.04+Docker(win)