## Problem

Pods can now declare direct Gemini models like `google/gemini-3-flash-preview` behind cllama, but the OpenClaw driver still compiles `models.providers.google.api` as `google-generative-ai` whenever the provider prefix is `google`. That is correct for direct-to-Google runtime wiring, but it is wrong when cllama is enabled: in that case the provider base URL is rewritten to `http://cllama:8080/v1`, and the proxy only exposes Google through the OpenAI-compatible `/v1/chat/completions` surface.
## Reproduction

- Configure an OpenClaw service with `x-claw.cllama: passthrough` and `x-claw.models.primary: google/gemini-3-flash-preview`.
- Run `claw up -d`.
- Observe the generated OpenClaw config under `models.providers.google`: `baseUrl: http://cllama:8080/v1` and `api: google-generative-ai`.
- Send a message to the agent.
- The request fails with `404 page not found` from the provider path.
Direct probes against Google's OpenAI-compatible endpoint succeed with the same key and model. The failure is specific to the runner->cllama request shape.
## Root cause

`internal/driver/openclaw/config.go` calls `defaultModelAPIForProvider(provider)` even inside the cllama rewrite branch. For `google`, that returns `google-generative-ai`, which causes OpenClaw to speak Google-native request semantics to a cllama endpoint that expects OpenAI-compatible semantics.
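A minimal sketch of the corrected selection logic. The function and string values below are illustrative, not the actual identifiers in `config.go`; the point is that the cllama branch must short-circuit before any vendor-native API lookup:

```go
package main

import "fmt"

// modelAPIForProvider is a hypothetical sketch of the fixed selection logic:
// the vendor-native API is only chosen when the request goes direct to the
// vendor. Behind the cllama rewrite, every provider must compile to the
// OpenAI-compatible surface the proxy actually exposes.
func modelAPIForProvider(provider string, cllamaEnabled bool) string {
	if cllamaEnabled {
		// cllama only serves /v1/chat/completions, so the compiled api
		// must be OpenAI-compatible regardless of the upstream vendor.
		return "openai-completions"
	}
	switch provider {
	case "google":
		return "google-generative-ai" // vendor-native, direct wiring only
	default:
		return "openai-completions"
	}
}

func main() {
	fmt.Println(modelAPIForProvider("google", false)) // direct: vendor-native
	fmt.Println(modelAPIForProvider("google", true))  // behind cllama: proxy surface
}
```

The key design choice is checking the rewrite flag before the provider switch, so new vendor-native cases added later cannot reintroduce the bug.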
## Expected behavior

When OpenClaw rewrites a provider to cllama, the compiled `api` should match the proxy surface, not the upstream vendor-native surface. For `google/*` models behind cllama, that means `openai-completions`.
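Concretely, with the fix the compiled provider block for the reproduction above would come out as follows (illustrative layout, same values as the repro except for `api`):

```yaml
models:
  providers:
    google:
      baseUrl: http://cllama:8080/v1
      api: openai-completions
```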
## Scope

- Fix OpenClaw config generation for `google/*` + cllama.
- Add regression coverage in `internal/driver/openclaw/config_test.go`.
- Check whether any other driver has the same provider-API rewrite bug before closing.
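The regression coverage could be table-driven so both the cllama and direct cases stay pinned. A self-contained sketch follows; `compiledAPI` is a hypothetical stand-in for the driver's real compile path, which produces a full provider block rather than just the `api` field:

```go
package main

import "fmt"

// compiledAPI is a hypothetical stand-in for the provider-API selection
// done in internal/driver/openclaw/config.go.
func compiledAPI(provider string, cllamaEnabled bool) string {
	if cllamaEnabled {
		return "openai-completions" // proxy surface wins behind cllama
	}
	if provider == "google" {
		return "google-generative-ai" // vendor-native for direct wiring
	}
	return "openai-completions"
}

func main() {
	// Cases a regression test in config_test.go should pin down.
	cases := []struct {
		name     string
		provider string
		cllama   bool
		want     string
	}{
		{"google behind cllama uses proxy surface", "google", true, "openai-completions"},
		{"google direct keeps vendor-native api", "google", false, "google-generative-ai"},
	}
	for _, c := range cases {
		if got := compiledAPI(c.provider, c.cllama); got != c.want {
			panic(fmt.Sprintf("%s: got %s, want %s", c.name, got, c.want))
		}
	}
	fmt.Println("all cases pass")
}
```

In the real test this would assert on the generated config struct, not a helper return value, but the case table carries over directly.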