Description
Describe the bug
I'm running a local AI server with llama.cpp and llama-swap, but ECA fails to retrieve the models from the server.
~/.config/eca/config.json
{
  "providers": {
    "delfi": {
      "api": "openai-chat",
      "url": "http://delfi:8088/v1",
      "key": "NONE",
      "fetchModels": true
    }
  }
}
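As a possible stopgap, listing the models by hand instead of fetching them might work; I'm assuming the per-provider "models" map from the ECA custom-provider docs here, with an id copied from the curl output below, so treat this as an untested sketch:
{
  "providers": {
    "delfi": {
      "api": "openai-chat",
      "url": "http://delfi:8088/v1",
      "key": "NONE",
      "models": {
        "unsloth_Qwen3-30B-A3B-Q4_K_M": {}
      }
    }
  }
}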
logs:
[server] Starting server...
[DB] No existing DB cache found for /home/andrea/.cache/eca/db.transit.json
:db/read-cache 0ms
:db/read-cache 0ms
[DB] Loading from workspace-cache caches...
:eca/initialize 1ms
[MODELS] Provider 'anthropic': Using models.dev provider-id fallback (url 'https://api.anthropic.com' not matched)
[MODELS] Provider 'google': Using models.dev provider-id fallback (url 'https://generativelanguage.googleapis.com/v1beta/openai' not matched)
[MODELS] Provider 'github-copilot': Loaded 20 models from models.dev
[MODELS] Provider 'google': Loaded 26 models from models.dev
[MODELS] Provider 'openai': Using models.dev provider-id fallback (url 'https://api.openai.com' not matched)
[MODELS] Provider 'openai': Loaded 42 models from models.dev
[MODELS] Provider 'anthropic': Loaded 22 models from models.dev
[MODELS] [7128] Sending body: 'null', headers: '{"x-llm-application-name" "eca", "Content-Type" "application/json", "Authorization" "Beare***** NONE"}', url: 'http://delfi:8088/v1/models'
[MODELS] Provider 'delfi': Failed to fetch models from http://delfi:8088/v1/models: null
[MODELS] Fetched model catalogs from 4 providers in 5040.5ms
[LLM-API] Default LLM model 'null' decision ':no-available-model'
[LLM-API] Default LLM model 'null' decision ':no-available-model'
:eca/initialized 5221ms
The server returns the models correctly AFAICT:
~$ curl http://delfi:8088/v1/models
{"data":[{"created":1771265628,"id":"CodeLlama-34b-Python-hf.i1-Q5_K_M-test","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"DIES-Qwen3-Coder-Next-UD-Q4_K_XL","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"DIES-codellama-70b-instruct","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"Qwen3-30B-A3B-Thinking-2507-Claude-4","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"Qwen3-30B-A3B-python-coder.i1-Q5_K_M","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"glm-4.7-flash-claude-4.5-opus.q6_k","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"granite-docling-258M-f16","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"lmstudio-community_DeepSeek-R1-Distill-Qwen-32B-Q4_K_M","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"orionstar-yi-34b-chat-llama","object":"model","owned_by":"llama-swap"},{"created":1771265628,"id":"unsloth_Qwen3-30B-A3B-Q4_K_M","object":"model","owned_by":"llama-swap"}],"object":"list"}
Expected behavior
ECA should retrieve the models available on the local server.
Doctor
/doctor
ECA version: 0.101.1
Server cmd: /home/andrea/.emacs.d/eca/eca server --log-level debug
Workspaces: /home/andrea/.emacs.d
Default model:
Login providers:
anthropic: {}
azure: {}
deepseek: {}
github-copilot: {}
google: {}
openai: {}
openrouter: {}
z-ai: {}
Relevant env vars:
DEBUGINFOD_URLS=https://debuginfod.ubuntu.com
Credential files: None found
Additional context
Running on Emacs.