Feature: Support for Remote / Private Ollama Server (External LLM Backend) #28

@OxCom

Description

Summary
Allow the application to connect to a remote/private Ollama server and use it for inference instead of running models locally.

Details
In many environments (e.g., corporate laptops, low-spec machines), running local LLMs is not feasible due to:

  • Insufficient CPU/GPU resources
  • Limited RAM
  • Restricted environments (corporate policies)

At the same time, users may have access to a dedicated internal Ollama server capable of running models efficiently.

The application should:

  • Allow configuration of a remote Ollama server endpoint
  • Send inference requests to the remote server
  • Avoid downloading or running models locally when remote is configured
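The three requirements above could be sketched roughly as follows. This is a minimal illustration, not a proposed implementation: the function and config-key names (`resolve_ollama_endpoint`, `ollama_endpoint`) are hypothetical, while `OLLAMA_HOST`, the default port `11434`, and the `/api/generate` route are part of Ollama's documented HTTP API.

```python
import os

# Ollama's default local endpoint; used only when no remote server is configured.
DEFAULT_ENDPOINT = "http://localhost:11434"

def resolve_ollama_endpoint(config: dict) -> str:
    """Resolve the server to talk to: an explicitly configured remote
    endpoint wins, then the standard OLLAMA_HOST environment variable,
    then the local default."""
    return (
        config.get("ollama_endpoint")          # hypothetical config key
        or os.environ.get("OLLAMA_HOST")       # standard Ollama env var
        or DEFAULT_ENDPOINT
    )

def build_generate_request(endpoint: str, model: str, prompt: str):
    """Build an inference request against Ollama's /api/generate route.

    Only the URL and JSON payload are constructed here; sending them is
    left to whatever HTTP client the app already uses. Because the
    request targets the configured server, no model is downloaded or
    run locally when a remote endpoint is set.
    """
    url = endpoint.rstrip("/") + "/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload
```

With a config such as `{"ollama_endpoint": "http://llm.internal:11434"}`, all inference traffic would go to the internal server, satisfying the "avoid running models locally" requirement without any local model pull.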
