ollama: 'llama2' not found, try pulling it first #311

Closed
R3verseIN opened this issue Mar 28, 2024 · 16 comments
Labels
bug Something isn't working


@R3verseIN

STEP 3

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    AGENT ERROR:
    OpenAIException - Traceback (most recent call last):
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
        raise e
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
        openai_client = OpenAI(
      File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

Traceback (most recent call last):
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
openai_client = OpenAI(
File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in init
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 989, in completion
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 962, in completion
response = openai_chat_completions.completion(
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 384, in completion
raise OpenAIError(status_code=500, message=traceback.format_exc())
litellm.llms.openai.OpenAIError: Traceback (most recent call last):
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
openai_client = OpenAI(
File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in init
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/r3versein/OpenDevin/opendevin/controller/init.py", line 85, in step
action = self.agent.step(state)
File "/home/r3versein/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 172, in step
resp = self.llm.completion(messages=messages)
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2796, in wrapper
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2693, in wrapper
result = original_function(*args, **kwargs)
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 2093, in completion
raise exception_type(
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8283, in exception_type
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 7069, in exception_type
raise AuthenticationError(
litellm.exceptions.AuthenticationError: OpenAIException - Traceback (most recent call last):
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
openai_client = OpenAI(
File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

    OBSERVATION:
    OpenAIException - Traceback (most recent call last):
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 376, in completion
        raise e
      File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/openai.py", line 312, in completion
        openai_client = OpenAI(
      File "/home/r3versein/.local/lib/python3.10/site-packages/openai/_client.py", line 98, in __init__
        raise OpenAIError(
    openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

==============
STEP 4

(STEP 4 produced the same AuthenticationError and traceback as STEP 3, repeated verbatim.)

The executed commands are:
export LLM_EMBEDDING_MODEL="llama2"
export LLM_BASE_URL="http://localhost:11434"
export LLM_API_KEY=""
export WORKSPACE_DIR="/home/r3versein/work/"
uvicorn opendevin.server.listen:app --port 3000

R3verseIN added the bug label on Mar 28, 2024
@rbren
Collaborator

rbren commented Mar 28, 2024

It's defaulting to using OpenAI for the core model. Can you set LLM_MODEL="ollama/llama2" and see if that fixes it?
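
A minimal sketch of the full environment under that suggestion (the port, paths, and model tag are the ones from the logs above, not verified on your machine):

export LLM_MODEL="ollama/llama2"       # the ollama/ prefix tells litellm to use its ollama route
export LLM_EMBEDDING_MODEL="llama2"
export LLM_BASE_URL="http://localhost:11434"
export LLM_API_KEY=""
export WORKSPACE_DIR="/home/r3versein/work/"
uvicorn opendevin.server.listen:app --port 3000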

@R3verseIN
Author

> It's defaulting to using OpenAI for the core model. Can you set LLM_MODEL="ollama/llama2" and see if that fixes it?

STEP 99

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

    AGENT ERROR:
    {"error":"model 'llama2' not found, try pulling it first"}

Traceback (most recent call last):
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 1878, in completion
generator = ollama.get_ollama_response(
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/llms/ollama.py", line 198, in get_ollama_response
raise OllamaError(status_code=response.status_code, message=response.text)
litellm.llms.ollama.OllamaError: {"error":"model 'llama2' not found, try pulling it first"}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/home/r3versein/OpenDevin/opendevin/controller/init.py", line 85, in step
action = self.agent.step(state)
File "/home/r3versein/OpenDevin/agenthub/langchains_agent/langchains_agent.py", line 172, in step
resp = self.llm.completion(messages=messages)
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2796, in wrapper
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 2693, in wrapper
result = original_function(*args, **kwargs)
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/main.py", line 2093, in completion
raise exception_type(
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8283, in exception_type
raise e
File "/home/r3versein/.local/lib/python3.10/site-packages/litellm/utils.py", line 8251, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: {"error":"model 'llama2' not found, try pulling it first"}

    OBSERVATION:
    {"error":"model 'llama2' not found, try pulling it first"}

Exited before finishing

But when I run `ollama list` it gives:

r3versein@DESKTOP-IL31CM9:~$ ollama list
NAME ID SIZE MODIFIED
llama2:latest 78e26419b446 3.8 GB 3 hours ago

@rbren
Collaborator

rbren commented Mar 28, 2024

Oh I bet it's related to this: #285

@rbren
Collaborator

rbren commented Mar 28, 2024

Actually according to the docs, you should be fine: https://docs.litellm.ai/docs/providers/ollama

So it seems like an ollama issue...

rbren changed the title from "litellm or api related issue" to "ollama: 'llama2' not found, try pulling it first" on Mar 28, 2024
@rbren
Collaborator

rbren commented Mar 28, 2024

It looks like litellm thinks the model name is just llama2. Did you set LLM_MODEL=llama2 or LLM_MODEL=ollama/llama2? It should be the latter.
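
One way to check this (a sketch, assuming ollama is on its default port): the name after the ollama/ prefix has to match a tag the running server actually reports.

curl http://localhost:11434/api/tags   # lists the models the running ollama server knows about
export LLM_MODEL="ollama/llama2"       # the part after the slash must match one of those tags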

@R3verseIN
Author

> It looks like litellm thinks the model name is just llama2. Did you set LLM_MODEL=llama2 or LLM_MODEL=ollama/llama2? It should be the latter.

Yes, both done. Still the same issue.

@jojeyh
Contributor

jojeyh commented Mar 29, 2024

I got it running using

export LLM_MODEL=ollama/llama2
export LLM_API_KEY=
export LLM_BASE_URL=http://localhost:11434
PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

@R3verseIN can you check your open ports to see if ollama is listening? If not, can you run `ollama serve`? It might be that you pulled the model but didn't serve it.
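
For example (a sketch, assuming ollama's default port 11434):

ss -ltn | grep 11434                   # is anything listening on ollama's default port?
curl http://localhost:11434/api/tags   # a running server should answer with the pulled models as JSON
ollama serve                           # if neither works, start the server and retry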

@R3verseIN
Author

> I got it running using
>
> export LLM_MODEL=ollama/llama2
> export LLM_API_KEY=
> export LLM_BASE_URL=http://localhost:11434
> PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
>
> @R3verseIN can you check your open ports to see if ollama is listening? If not, can you run `ollama serve`? It might be that you pulled the model but didn't serve it.

This fixed the issue.

@10htts

10htts commented Mar 29, 2024

I'm running llama2 in LMStudio and I'm running these:

$Env:LLM_API_KEY="lm-studio"
$Env:LLM_MODEL="ollama/llama2"
$Env:LLM_BASE_URL="http://localhost:1234/v1"
$Env:LLM_EMBEDDING_MODEL="local" # can be "llama2", "openai", "azureopenai", or "local"

$Env:WORKSPACE_DIR = "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\Project01"
python -m pip install -r requirements.txt
python -m uvicorn opendevin.server.listen:app --port 3000

And I get this error:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.


        AGENT ERROR:
        'response'
Traceback (most recent call last):
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\main.py", line 1878, in completion
    generator = ollama.get_ollama_response(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\llms\ollama.py", line 228, in get_ollama_response
    model_response["choices"][0]["message"]["content"] = response_json["response"]
                                                         ~~~~~~~~~~~~~^^^^^^^^^^^^
KeyError: 'response'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\controller\__init__.py", line 85, in step
    action = self.agent.step(state)
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\agenthub\langchains_agent\__init__.py", line 172, in step
    resp = self.llm.completion(messages=messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 2796, in wrapper
    raise e
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 2693, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\main.py", line 2093, in completion
  File "C:\Users\Bob\miniconda3\Lib\site-packages\litellm\utils.py", line 8258, in exception_type
    raise APIConnectionError(
litellm.exceptions.APIConnectionError: 'response'

        OBSERVATION:
        'response'

I've been hitting my head against this for the last 2 hours without any kind of progress.

I also tried to run this:
python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Traceback (most recent call last):
  File "C:\Users\Bob\Desktop\OpenDevin\OpenDevin\opendevin\main.py", line 7, in <module>
    import agenthub # noqa F401 (we import this to get the agents registered)
    ^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'agenthub'

If anyone has ideas on what to do, I'm all ears. So far, everything works until I send the command in the UI, where I get this:
Oops. Something went wrong: 'response'

And I get this in LMStudio:

 [ERROR] Unexpected endpoint or method. (POST /v1/api/generate). Returning 200 anyway
 [ERROR] Unexpected endpoint or method. (POST /v1/api/generate). Returning 200 anyway
...

@rbren
Collaborator

rbren commented Mar 29, 2024

For

 python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Try running

PYTHONPATH=`pwd`  python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"

Otherwise, it seems like your ollama server isn't behaving as expected. LiteLLM expects to get results from the endpoint /v1/api/generate, but LMStudio doesn't recognize that endpoint.

I'm not familiar with LMStudio, but my guess is you have to run ollama without LMStudio for it to work.
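
To see the mismatch concretely (a sketch, assuming the default ports; the model names are the ones from this thread):

# litellm's ollama route posts to the server's native generate endpoint:
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "hello", "stream": false}'

# LMStudio instead serves an OpenAI-style endpoint on its own port:
curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" \
  -d '{"model": "llama2", "messages": [{"role": "user", "content": "hello"}]}'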

@10htts

10htts commented Mar 29, 2024

I think I found out why, here:
https://github.com/BerriAI/litellm?tab=readme-ov-file#supported-providers-docs

Llama 2 does not support embedding?

@rbren
Collaborator

rbren commented Mar 29, 2024

Hmm...I think we're using llamaindex for the embeddings, not litellm

@10htts

10htts commented Mar 29, 2024

I get this error now:
[ERROR] Unexpected endpoint or method. (POST /v1/api/embeddings). Returning 200 anyway

Started from scratch with LMStudio (Running nitsuai/llama-2-70b-Guanaco-QLoRA-GGUF/llama-2-70b-guanaco-qlora.Q3_K_S.gguf) and these parameters for LiteLLM:
$Env:LLM_API_KEY="lm-studio"
$Env:LLM_MODEL="nitsuai/llama-2-70b-Guanaco-QLoRA-GGUF/llama-2-70b-guanaco-qlora.Q3_K_S.gguf" # Doesn't seem to change anything. Tried ollama/llama2 too.
$Env:LLM_BASE_URL="http://localhost:1234/v1"
$Env:LLM_EMBEDDING_MODEL="llama2"

I won't take any more of your time, since I don't have enough knowledge to make a constructive contribution to the project, but if you'd like to know more about my setup, don't hesitate to ask.

Thanks for the help and great project!

@ishaan-jaff

> So it seems like an ollama issue.

This is an ollama issue; it means the model has not started running on the ollama server.

@ishaan-jaff

> I'm running llama2 in LMStudio and I'm running these:

@10htts, LMStudio is already OpenAI-compatible, so pass model=openai/llama2 and litellm will route the request to the OpenAI-style /chat/completions endpoint.
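
With the PowerShell setup above, that would look roughly like this (a sketch; whether LMStudio cares about the name after openai/ depends on the loaded model):

$Env:LLM_MODEL="openai/llama2"               # openai/ prefix -> litellm's OpenAI-compatible route
$Env:LLM_BASE_URL="http://localhost:1234/v1"
$Env:LLM_API_KEY="lm-studio"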

@rbren
Collaborator

rbren commented Apr 1, 2024

Closing in favor of #417
