Ollama: 'NoneType' object has no attribute 'request' #1208
Comments
@evrenyal OpenAI and Claude seem to be much more stable than Ollama.

Thank you for the reply, @rbren. I'm not talking about performance; Ollama doesn't work at all. I keep getting these errors.

According to this doc, the model name needs to be the full model name, as seen in:

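For reference, a quick way to see the exact local model names Ollama knows about is its `/api/tags` listing endpoint. This is a minimal sketch assuming the default endpoint at `http://localhost:11434`; litellm then wants each name prefixed with `ollama/`:

```python
import json
from urllib.request import urlopen

# Assumes Ollama's default endpoint; /api/tags lists locally pulled models.
with urlopen("http://localhost:11434/api/tags") as resp:
    tags = json.load(resp)

# litellm expects the exact local name prefixed with "ollama/",
# e.g. "ollama/gemma:2b".
for model in tags.get("models", []):
    print("ollama/" + model["name"])
```
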
Unfortunately, I tried again but it still didn't work, @enyst.

Can you please paste the errors now? I'm not sure where the problem is if the settings are taken into account, but I wonder first whether they were applied; you may need to set the model in the UI. We've been changing this behavior lately, AFAIK.

@evrenyal That's strange, I can't reproduce your error. These are the settings I used:

```bash
export LLM_MODEL="ollama/gemma:2b"
export LLM_API_KEY="ollama"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_DIR="./workspace"
export LLM_BASE_URL="http://localhost:11434"
```

```
01:38:51 - opendevin:INFO: llm.py:25 - Initializing LLM with model: ollama/gemma:2b
01:38:52 - opendevin:INFO: ssh_box.py:271 - Container stopped
01:38:52 - opendevin:WARNING: ssh_box.py:283 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
01:38:52 - opendevin:INFO: ssh_box.py:309 - Container started
01:38:53 - opendevin:INFO: ssh_box.py:326 - waiting for container to start: 1, container status: running
01:38:55 - opendevin:INFO: agent_controller.py:154 - STEP 0
01:38:55 - opendevin:INFO: agent_controller.py:155 - write a bash script that prints hello
01:43:06 - opendevin:INFO: llm.py:25 - Initializing LLM with model: ollama/gemma:2b
01:43:06 - opendevin:INFO: ssh_box.py:271 - Container stopped
01:43:06 - opendevin:WARNING: ssh_box.py:283 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
01:43:07 - opendevin:INFO: ssh_box.py:309 - Container started
01:43:08 - opendevin:INFO: ssh_box.py:326 - waiting for container to start: 1, container status: running
01:43:09 - opendevin:INFO: agent_controller.py:154 - STEP 0
01:43:09 - opendevin:INFO: agent_controller.py:155 - write a bash script that prints hello
01:46:34 - opendevin:ERROR: agent_controller.py:175 - opendevin.action.agent.AgentThinkAction() argument after ** must be a mapping, not str
01:46:34 - opendevin:INFO: agent_controller.py:202 - opendevin.action.agent.AgentThinkAction() argument after ** must be a mapping, not str
01:46:34 - opendevin:INFO: agent_controller.py:154 - STEP 1
01:46:34 - opendevin:INFO: agent_controller.py:155 - write a bash script that prints hello
01:47:50 - opendevin:INFO: agent_controller.py:172 - AgentThinkAction(thought="It seems like there might be an existing project here. I should probably start by running `ls` to see what's here.", action=<ActionType.THINK: 'think'>)
01:47:50 - opendevin:INFO: agent_controller.py:154 - STEP 2
01:47:50 - opendevin:INFO: agent_controller.py:155 - write a bash script that prints hello
01:49:03 - opendevin:INFO: agent_controller.py:172 - AgentThinkAction(thought="It seems like there might be an existing project here. I should probably start by running `ls` to see what's here.", action=<ActionType.THINK: 'think'>)
01:49:03 - opendevin:INFO: agent_controller.py:154 - STEP 3
01:49:03 - opendevin:INFO: agent_controller.py:155 - write a bash script that prints hello
```

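A note on the `argument after ** must be a mapping, not str` error above: that is the TypeError Python raises when `**` unpacking is applied to a string, which suggests the model's reply was parsed into a string where a dict of keyword arguments was expected. Below is a minimal sketch of that failure mode (the `ThinkAction` class and `parse_action` helper are illustrative stand-ins, not OpenDevin's actual code):

```python
import json
from dataclasses import dataclass

@dataclass
class ThinkAction:
    # Hypothetical stand-in for opendevin's AgentThinkAction.
    thought: str

def parse_action(raw: str) -> ThinkAction:
    payload = json.loads(raw)
    args = payload.get("args", {})
    return ThinkAction(**args)  # TypeError if "args" is a str, not a dict

# Well-formed reply: "args" is a JSON object, so ** unpacking works.
print(parse_action('{"action": "think", "args": {"thought": "run ls"}}'))

# A weak model may emit "args" as a bare string instead:
try:
    parse_action('{"action": "think", "args": "run ls"}')
except TypeError as e:
    print(e)  # ...ThinkAction() argument after ** must be a mapping, not str
```

If that reading is right, it points at the model producing malformed output rather than at the Ollama connection itself.
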
Run this to check whether the LLM is working properly:

```python
import tomllib  # stdlib in Python 3.11+
from datetime import datetime

from litellm import completion

# Read the flat OpenDevin-style keys from config.toml.
with open("config.toml", "rb") as f:
    config = tomllib.load(f)

messages = [{
    "role": "user",
    "content": "If there are 10 books in a room and I read 2, how many books are still in the room?",
}]

# Time only the completion call itself.
start = datetime.now()
response = completion(
    model=config["LLM_MODEL"],
    api_key=config["LLM_API_KEY"],
    base_url=config.get("LLM_BASE_URL"),
    messages=messages,
)
elapsed = (datetime.now() - start).total_seconds()

print(response.choices[0].message.content)
print("Used model:", config["LLM_MODEL"])
print(f"Time taken: {elapsed:.1f}s")
```

I'm having the same issue here. I've noticed it doesn't always pick up the env vars correctly, but it always prints the same error. Also, when trying to run from Docker, it says:

Need the full traceback.

Exact same issue as you, @evrenyal: step 99 in 1 second X_X

Similar to what you said, I got a result like this in Docker, @SmartManoj. When I try it through the UI, it appears like this; it doesn't seem to be working properly.

I'm pretty sure Ollama on its own is fine for inference, @evrenyal.

Guessing it's an issue connecting to Ollama from inside Docker: LLM_BASE_URL might need to be host.docker.internal...

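One way to test that guess is to run the same reachability probe twice: once on the host and once inside the app container (e.g. via `docker exec`, assuming the container has Python). A sketch; `/api/tags` is Ollama's model-listing route:

```python
from urllib.request import urlopen

# On the host, "localhost" should work; inside a container, localhost is
# the container itself, so Docker Desktop's host alias is usually needed.
candidates = [
    "http://localhost:11434",
    "http://host.docker.internal:11434",
]

for base in candidates:
    try:
        with urlopen(base + "/api/tags", timeout=3) as resp:
            print(f"{base}: reachable (HTTP {resp.status})")
    except OSError as exc:
        print(f"{base}: not reachable ({exc})")
```

Whichever URL answers from inside the container is the one LLM_BASE_URL should point at.
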
I think the problem here is not a connection issue. From @evrenyal's log:

I've tried something like this too; 99 steps in 1 second as well.

I had an issue with OpenDevin not reading my config.toml variables because the browser's local storage had old settings in it that would override whatever I tried to set in the config. That also seemed to cause the `opendevin:ERROR: agent_controller.py:175 - LLM Provider NOT provided. Pass in the LLM provider you are trying to call` error for me.

I was able to overcome it by stopping the app, rechecking my LLM settings, and then clearing the local storage in my browser; some combination of those resolved the error. In Chrome: right click -> Inspect -> Application tab -> Storage section -> expand Local storage, then right-click the entries to clear them.

I think it's a bug (or a feature) that the LLM_MODEL setting is ignored in favor of whatever is in the browser's local storage, since you can also set the model in the browser with the gear icon.

@spoonbobo, could you please provide the logs?

The model in the UI is set to ollama/codeqwen:chat. The docker run log shows the litellm error banner:

```
------------------------
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
```

Hi,

I got errors like:

My observations:

If there is something more stable than Llama, I can try that too.