Context
PR #129 fixed the concrete google/* + cllama breakage in OpenClaw by separating the direct-provider API helper from the cllama-proxy API helper.
That fix is correct, but the regression coverage is still narrow. The current tests lock in the Google proxied case and the direct Google case, but they do not yet document the full contract between:
- defaultModelAPIForProvider()
- cllamaModelAPIForProvider()
- normalizeProviderID()
in internal/driver/openclaw/config.go.
Problem
A future edit could reintroduce broken vendor-native API modes for providers that should use the cllama proxy's OpenAI-compatible surface, or accidentally drop the Anthropic-family special case.
Examples of currently unprotected behavior:
- amazon-bedrock/* behind cllama must not compile bedrock-converse-stream
- github-copilot/* behind cllama must not compile github-copilot
- ollama/* behind cllama must not compile ollama
- Anthropic-family providers behind cllama must continue to compile anthropic-messages
- provider aliases normalized by normalizeProviderID() should still land on the correct cllama API mode
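To make the contract above concrete, here is a minimal, self-contained sketch of the two helpers' assumed shapes. The function names come from this issue, but the case lists and return strings are illustrative assumptions; the real helpers in internal/driver/openclaw/config.go may differ.

```go
package main

import "fmt"

// defaultModelAPIForProvider: assumed sketch of the direct-provider helper,
// which picks vendor-native API modes. Case list is hypothetical.
func defaultModelAPIForProvider(provider string) string {
	switch provider {
	case "amazon-bedrock":
		return "bedrock-converse-stream"
	case "github-copilot":
		return "github-copilot"
	case "ollama":
		return "ollama"
	case "anthropic":
		return "anthropic-messages"
	}
	return "openai-completions"
}

// cllamaModelAPIForProvider: assumed sketch of the cllama-proxy helper,
// where everything speaks the proxy's OpenAI-compatible surface except
// the Anthropic family.
func cllamaModelAPIForProvider(provider string) string {
	if provider == "anthropic" {
		return "anthropic-messages"
	}
	return "openai-completions"
}

func main() {
	// Contrast the two helpers for each provider the issue calls out.
	for _, p := range []string{"amazon-bedrock", "github-copilot", "ollama", "anthropic"} {
		fmt.Printf("%-15s direct=%-24s cllama=%s\n",
			p, defaultModelAPIForProvider(p), cllamaModelAPIForProvider(p))
	}
}
```

The point of the regression barrier is exactly the divergence this prints: behind cllama, the vendor-native modes on the left column must never leak into the right column.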
Expected
Add a table-driven regression barrier for OpenClaw's cllama provider API selection.
At minimum cover:
- direct helper behavior vs cllama helper behavior across the providers encoded in both switch statements
- google, github-copilot, amazon-bedrock, and ollama behind cllama => openai-completions
- Anthropic-family providers behind cllama => anthropic-messages
- at least one alias-normalization path, such as kimi-code -> kimi-coding or z.ai -> zai
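A possible shape for that table, as a self-contained sketch. The stand-in helpers below exist only so this compiles and runs on its own; the real test in internal/driver/openclaw/config_test.go would call the package's actual normalizeProviderID and cllamaModelAPIForProvider, and in a real _test.go file each row would become a t.Run subtest.

```go
package main

import (
	"fmt"
	"strings"
)

// Stand-in: alias normalization, with the two aliases named in this issue.
func normalizeProviderID(id string) string {
	switch id {
	case "kimi-code":
		return "kimi-coding"
	case "z.ai":
		return "zai"
	}
	return id
}

// Stand-in: cllama-proxy API selection (assumed Anthropic-family check).
func cllamaModelAPIForProvider(provider string) string {
	if strings.HasPrefix(provider, "anthropic") {
		return "anthropic-messages"
	}
	return "openai-completions"
}

func main() {
	// Table mirroring the minimum coverage this issue asks for.
	cases := []struct {
		provider string
		want     string
	}{
		{"google", "openai-completions"},
		{"github-copilot", "openai-completions"},
		{"amazon-bedrock", "openai-completions"},
		{"ollama", "openai-completions"},
		{"anthropic", "anthropic-messages"},
		{"kimi-code", "openai-completions"}, // exercises the alias path
	}
	for _, tc := range cases {
		got := cllamaModelAPIForProvider(normalizeProviderID(tc.provider))
		if got != tc.want {
			fmt.Printf("FAIL %s: got %s, want %s\n", tc.provider, got, tc.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

Routing every lookup through normalizeProviderID first is what gives the alias-normalization coverage for free inside the same table.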
Scope
- tests only: internal/driver/openclaw/config_test.go, unless a small focused helper test file is clearly cleaner
- do not broaden this into runtime or cross-driver behavior changes unless the tests reveal a real bug
Acceptance
- a future change that reintroduces vendor-native API modes behind cllama fails fast in unit tests
- the Anthropic-family special case is explicitly asserted
- alias normalization is covered at least once through the cllama API-selection path