Problem
Clawdapus/cllama does not currently have first-class Google Gemini provider support from env seeds, even though operators may want to use native Gemini keys rather than routing Gemini through OpenRouter.
Current state in-tree:
- `cmd/claw/compose_up.go` seeds only:
  - `OPENAI_API_KEY`
  - `XAI_API_KEY`
  - `ANTHROPIC_API_KEY`
  - `OPENROUTER_API_KEY`
- `cllama/internal/provider/provider.go` knows only:
  - `openai`
  - `anthropic`
  - `openrouter`
  - `ollama`
- There is no `GEMINI_API_KEY`, `GOOGLE_API_KEY`, or `google` provider seed path.

Today the practical workaround is to route Gemini through OpenRouter with model refs like `openrouter/google/gemini-2.5-flash`, which is fine for some operators but should not be the only path.
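For context, the OpenRouter workaround looks roughly like this in a pod file. This is a sketch only: the field layout mirrors the `x-claw` example under "Expected behavior" below, and the `trader` service name is illustrative.

```yaml
# Workaround today: route Gemini through OpenRouter rather than a native key.
x-claw:
  cllama-defaults:
    env:
      OPENROUTER_API_KEY: "${OPENROUTER_API_KEY}"

services:
  trader:
    x-claw:
      models:
        primary: openrouter/google/gemini-2.5-flash
```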
Expected behavior
A pod operator should be able to declare a Gemini model directly and provide a native Gemini key through `x-claw.cllama-env`, for example:

```yaml
x-claw:
  cllama-defaults:
    env:
      GEMINI_API_KEY: "${GEMINI_API_KEY}"

services:
  trader:
    x-claw:
      models:
        primary: google/gemini-2.5-flash
```
Requested changes
- Add first-class `google` provider support to provider seeding and registry loading.
- Support `GEMINI_API_KEY` and/or `GOOGLE_API_KEY` consistently across `claw up` and cllama runtime loading.
- Add a default Google base URL suitable for the Gemini OpenAI-compatible endpoint.
- Define auth and API format defaults explicitly.
- Add docs and tests for direct `google/<model>` routing.
- Optionally add pricing entries for current Gemini models so cost telemetry stays accurate.
Notes
This is not strictly required to run Gemini models today if OpenRouter is present. But native provider support matters for operator choice, cost/accounting separation, and reducing unnecessary dependence on OpenRouter for Google-hosted models.