fix: detect model-not-supported errors from provider response body (#208)#209
Adding CLAUDE.md with task information for AI processing. This file will be removed when the task is complete. Issue: #208
When the OpenCode provider removes or restricts a model, the API returns HTTP 401 with a body like: `{"type":"error","error":{"type":"ModelError","message":"Model X not supported"}}`. Without special handling this looks identical to an authentication failure (both are HTTP 401), making root-cause diagnosis very confusing.

Adds `SessionProcessor.isModelNotSupportedError()`, which parses the response body and detects the `ModelError` pattern from OpenCode/OpenRouter. When detected, a dedicated `log.error()` entry is emitted that:

- Labels the error as a model-availability issue, NOT an auth error
- Includes `providerID`, `modelID`, `statusCode`, and the full response body
- Suggests using `--model <provider>/<model-id>` to specify an alternative
- Links to the case study (issue #208) for further investigation

Also adds:

- 11 unit tests covering nested/flat JSON formats, real auth errors (no false positives), plain-text fallback detection, and edge cases
- Experiment script demonstrating all detection scenarios
- Case study analysis in docs/case-studies/issue-208/ with the full 1920-line log from the failing run, timeline, root cause analysis, and 5 proposed solutions

Fixes #208

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
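The detection this commit describes could be sketched as follows. This is a hypothetical illustration, not the actual implementation (which lives in `js/src/session/processor.ts`); the function name and JSON shapes come from the commit message above, while the plain-text fallback regex is an assumption.

```typescript
// Shape of the provider error bodies described in the commit message.
interface ProviderError {
  type?: string;
  error?: { type?: string; message?: string };
}

// Returns true when the response body indicates a "model not supported"
// error rather than a real authentication failure.
function isModelNotSupportedError(responseBody: string): boolean {
  // Plain-text fallback: the body may not be JSON at all (assumption).
  if (!responseBody.trim().startsWith("{")) {
    return /model .* not supported/i.test(responseBody);
  }
  try {
    const parsed = JSON.parse(responseBody) as ProviderError;
    // Nested format: {"type":"error","error":{"type":"ModelError",...}}
    if (parsed.error?.type === "ModelError") return true;
    // Flat format: {"type":"ModelError",...}
    if (parsed.type === "ModelError") return true;
    return false;
  } catch {
    // Unparseable JSON: treat as not a model error.
    return false;
  }
}
```

A real auth error body such as `{"type":"error","error":{"type":"AuthError",...}}` would return `false` here, which is what avoids the false positives the commit's tests cover.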
This reverts commit 0f06f23.
🤖 Solution Draft Log

This log file contains the complete execution trace of the AI solution draft process. 💰 Cost estimation:
The working session has now ended; feel free to review and add any feedback on the solution draft.
🔄 Auto-restart triggered (attempt 1)

Reason: CI failures detected. Starting a new session to address the issues. Auto-restart-until-mergeable mode is active and will continue until the PR becomes mergeable.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
🔄 Auto-restart-until-mergeable Log (iteration 1)

This log file contains the complete execution trace of the AI solution draft process. 💰 Cost estimation:
The working session has now ended; feel free to review and add any feedback on the solution draft.
✅ Ready to merge

This pull request is now ready to be merged.

Monitored by hive-mind with `--auto-restart-until-mergeable` flag
Also update our docs to reflect that only 3 models are free now:
🤖 AI Work Session Started

Starting automated work session at 2026-03-03T18:29:55.245Z. The PR has been converted to draft mode while work is in progress. This comment marks the beginning of an AI work session. Please wait for the session to finish, and provide your feedback.
… to big-pickle (#208)

`opencode/kimi-k2.5-free` was removed from the OpenCode Zen provider (see case study #208). Update all docs and code to reflect the current 3 free models: big-pickle, minimax-m2.5-free, gpt-5-nano.

Changes:

- FREE_MODELS.md: remove kimi-k2.5-free from active table, add to discontinued; set big-pickle as recommended default
- MODELS.md: same table updates + update default model note
- README.md, EXAMPLES.md, js/README.md, rust/README.md: replace all opencode/kimi-k2.5-free example usages with opencode/big-pickle
- js/src/provider/provider.ts: remove kimi-k2.5-free from priority arrays (both small-model selection and default model sort order)
- js/src/tool/task.ts: update hardcoded fallback default from kimi-k2.5-free to big-pickle
- docs/case-studies/issue-208/README.md: add Documentation Updates section summarising the changes made as a result of this case study

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
🤖 Solution Draft Log

This log file contains the complete execution trace of the AI solution draft process. 💰 Cost estimation:
The working session has now ended; feel free to review and add any feedback on the solution draft.
This model should be the default now.
🤖 AI Work Session Started

Starting automated work session at 2026-03-04T10:13:43.442Z. The PR has been converted to draft mode while work is in progress. This comment marks the beginning of an AI work session. Please wait for the session to finish, and provide your feedback.
#208) Change default free model from big-pickle to minimax-m2.5-free (the best of the 3 free OpenCode Zen models), and update the recommendation order to:

1. minimax-m2.5-free (new default: best quality)
2. gpt-5-nano (second)
3. big-pickle (last: worst of the 3)

Also fixes the Rust CLI default, which still had the removed kimi-k2.5-free.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
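The recommendation order in this commit can be sketched as a simple priority lookup. This is a hypothetical illustration; the real priority arrays live in `js/src/provider/provider.ts` and may differ in shape, and `pickDefaultModel` is an invented helper name.

```typescript
// Free-model preference list mirroring the recommendation order above.
const FREE_MODEL_PRIORITY: string[] = [
  "opencode/minimax-m2.5-free", // new default (best quality of the 3)
  "opencode/gpt-5-nano",        // second choice
  "opencode/big-pickle",        // last resort
  // "opencode/kimi-k2.5-free" removed: no longer offered by the provider
];

// Pick the first recommended model that the provider actually offers,
// so a stale entry in the list never becomes the default.
function pickDefaultModel(available: Set<string>): string | undefined {
  return FREE_MODEL_PRIORITY.find((m) => available.has(m));
}
```

With this shape, removing a discontinued model from the array is the whole fix: the next model in the list automatically becomes the default.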
🤖 Solution Draft Log

This log file contains the complete execution trace of the AI solution draft process. 💰 Cost estimation:
The working session has now ended; feel free to review and add any feedback on the solution draft.
Summary
Fixes #208: the Agent CLI was unable to finish its work because a model had been removed from the OpenCode provider catalog.
Root Cause
The `solve` tool was invoked with `--model kimi-k2.5-free`. The agent resolved the model from the local cache (stale data) and sent a prompt to the OpenCode API. The provider responded with HTTP 401 and a `ModelError` body.

The problem: OpenCode uses HTTP 401 for "model not supported" instead of the semantically correct 400/404. This makes the error look identical to a real authentication failure, producing confusing diagnostics.
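For illustration, both failure modes arrive with the same status code and differ only in the response body. In the sketch below, the `ModelError` body is taken from the case-study log quoted later in this PR; the auth-failure body is a hypothetical example for contrast.

```typescript
// Both failures arrive as HTTP 401; only the body distinguishes them.
const modelRemoved = {
  statusCode: 401,
  body: '{"type":"error","error":{"type":"ModelError","message":"Model kimi-k2.5-free not supported"}}',
};
// Hypothetical real authentication failure for comparison.
const badApiKey = {
  statusCode: 401,
  body: '{"type":"error","error":{"type":"AuthError","message":"Invalid API key"}}',
};

// Status codes alone cannot tell the two apart:
const sameStatus = modelRemoved.statusCode === badApiKey.statusCode; // true

// ...but the parsed error type differs:
const kind = JSON.parse(modelRemoved.body).error.type; // "ModelError"
```

This is why the fix has to inspect the response body rather than branch on the status code.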
The full root cause analysis and 1920-line log are documented in `docs/case-studies/issue-208/README.md`.

Fix
Added `SessionProcessor.isModelNotSupportedError()` in `js/src/session/processor.ts`, which:

- Detects `{"type":"ModelError",...}` patterns from OpenCode/OpenRouter
- Handles both the nested format (`error.error.type === "ModelError"`) and the flat format (`error.type === "ModelError"`)

When detected, a dedicated `log.error()` entry is emitted that:

- Labels the error as a model-availability issue, NOT an auth error
- Includes `providerID`, `modelID`, `statusCode`, and the full response body
- Suggests using `--model <provider>/<model-id>` to specify an alternative

Documentation Update
Updated all documentation and code defaults to reflect the current 3 free models on OpenCode (per opencode.ai/docs/zen/), in order of recommendation:
1. `opencode/minimax-m2.5-free`
2. `opencode/gpt-5-nano`
3. `opencode/big-pickle`

Before vs. After
Before: logs only showed a generic API error that looked like an auth failure:
```json
{"message": "process", "error": {"name": "APIError", "data": {"statusCode": 401, "isRetryable": false}}}
```

After: a dedicated diagnostic log entry makes the root cause clear:
```json
{
  "level": "error",
  "service": "session.processor",
  "message": "model not supported by provider — this is NOT an auth error",
  "hint": "The model was found in the local cache but the provider rejected it. The model may have been removed or is temporarily unavailable.",
  "providerID": "opencode",
  "modelID": "kimi-k2.5-free",
  "statusCode": 401,
  "responseBody": "{\"type\":\"error\",\"error\":{\"type\":\"ModelError\",\"message\":\"Model kimi-k2.5-free not supported\"}}",
  "suggestion": "Try a different model or check the provider status. Use --model <provider>/<model-id> to specify an alternative.",
  "issue": "https://github.com/link-assistant/agent/issues/208"
}
```

Changes
- `js/src/session/processor.ts`: Add `isModelNotSupportedError()` + diagnostic log for model-not-supported 401 errors
- `js/tests/model-not-supported.test.ts`: 11 unit tests
- `js/experiments/test-model-error-detection.ts`: Experiment script
- `js/.changeset/fix-model-not-supported-detection.md`: Changeset for patch release
- `docs/case-studies/issue-208/README.md`: Complete case study + documentation updates section
- `docs/case-studies/issue-208/solution-draft-log.txt`: Full 1920-line log
- `FREE_MODELS.md`, `MODELS.md`, `README.md`, `EXAMPLES.md`, `js/README.md`, `rust/README.md`: Updated free model lists; removed `kimi-k2.5-free`, set `minimax-m2.5-free` as the new recommended default
- `js/src/provider/provider.ts`: Removed `kimi-k2.5-free` from priority arrays; `minimax-m2.5-free` is now the top OpenCode free model
- `js/src/tool/task.ts`: Updated hardcoded fallback default from `kimi-k2.5-free` to `minimax-m2.5-free`
- `rust/src/cli.rs`: Updated default model from `kimi-k2.5-free` to `minimax-m2.5-free`

Test Plan
- 11 unit tests (`model-not-supported.test.ts`)
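The diagnostic entry shown in the Before vs. After section could be emitted with a sketch like the one below. This is a hypothetical illustration assuming a structured logger whose `error()` takes a field map; the field names mirror the example log entry, but the real code in `js/src/session/processor.ts` may differ.

```typescript
type LogFields = Record<string, unknown>;

// Emit the dedicated model-not-supported diagnostic, carrying every field
// needed to distinguish this from a real auth failure.
function logModelNotSupported(
  log: { error: (fields: LogFields) => void },
  providerID: string,
  modelID: string,
  statusCode: number,
  responseBody: string,
): void {
  log.error({
    service: "session.processor",
    message: "model not supported by provider — this is NOT an auth error",
    hint: "The model was found in the local cache but the provider rejected it. The model may have been removed or is temporarily unavailable.",
    providerID,
    modelID,
    statusCode,
    responseBody,
    suggestion: "Try a different model or check the provider status. Use --model <provider>/<model-id> to specify an alternative.",
    issue: "https://github.com/link-assistant/agent/issues/208",
  });
}
```

Keeping the raw `responseBody` in the entry means the original provider message survives in the logs even if the detection heuristic ever misclassifies a body.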