
Problem running local Ollama model #3348

@TueDissingWork

Plugin Type

VSCode Extension

App Version

4.111.1

Description

Hey,

I am trying to set up the Kilo Code plugin to use an Ollama model running locally. Ollama is running and its API is up.
Kilo Code can talk to the API, since the models available in Ollama are visible in the UI config section.

The problem is that when talking to the agent, I never receive a response - this is a CPU-based system, so I am not expecting great performance, but I would expect to see at least something returned.

Maybe I am using the wrong model, or am missing some configuration - all hints and ideas on how to move forward are welcome :-)

Reproduction steps

  1. pull ibm/granite4:latest with ollama
  2. configure Kilo Code to use the local API: http://127.0.0.1:11434
  3. select ibm/granite4:latest as the model
  4. type 'hi' in the chat window

Now nothing happens: the message does seem to trigger something in Ollama, but nothing ever comes back to the chat.
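
For reference, the same API can be exercised outside the extension with the standard Ollama HTTP endpoints (a minimal check, assuming the default port from step 2):

```sh
# List the models the API exposes - ibm/granite4:latest should appear here
curl http://127.0.0.1:11434/api/tags

# Request a short, non-streamed completion directly from the API
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "ibm/granite4:latest",
  "prompt": "hi",
  "stream": false
}'
```

If the second call returns text while the chat window still shows nothing, the problem presumably sits between the extension and the API rather than in Ollama itself.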

When I run the model by hand - ollama run ibm/granite4:latest - and interact with it manually, it works fine and even feels a little snappy. Where's the disconnect?
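
One way to narrow this down would be to check whether the extension's request actually reaches Ollama when the chat message is sent (a rough sketch, assuming a recent Ollama build that includes the ps subcommand, and that the server was started in a foreground terminal so its request log is visible):

```sh
# Immediately after typing 'hi' in the chat window:
ollama ps     # shows whether ibm/granite4:latest was loaded by the extension's request

# If Ollama was started manually with `ollama serve`, that terminal logs each
# incoming API call, which shows whether the request arrived and generation began.
```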

Provider

Kilo Code

Model

ibm/granite4:latest

System Information

Windows 11, using WSL2
