feat(providers): add deepseek responses adapter #285
Conversation
⚠️ Caution: Review failed. The pull request was closed or merged during review.

📝 Walkthrough

Adds DeepSeek as a supported LLM provider: new provider implementation, registration in the app, tests, translation from /v1/responses to DeepSeek chat completions, documentation updates, and examples.

Changes
Sequence Diagram

sequenceDiagram
actor Client
participant GoModel as GoModel
participant DeepSeek as DeepSeek API
Client->>GoModel: POST /v1/chat/completions (reasoning.{effort}=medium)
activate GoModel
Note over GoModel: Rewrite to top-level\nreasoning_effort="high"
GoModel->>DeepSeek: POST /chat/completions (reasoning_effort="high")
activate DeepSeek
DeepSeek-->>GoModel: 200 OK (chat response)
deactivate DeepSeek
GoModel-->>Client: 200 OK (ChatResponse)
deactivate GoModel
Client->>GoModel: POST /v1/responses (wire API)
activate GoModel
Note over GoModel: Translate Responses payload\n→ chat/completions + mapping rules
GoModel->>DeepSeek: POST /chat/completions (translated)
activate DeepSeek
DeepSeek-->>GoModel: 200 OK (chat response)
deactivate DeepSeek
Note over GoModel: Map chat response → ResponsesResponse
GoModel-->>Client: 200 OK (ResponsesResponse)
deactivate GoModel
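As a rough illustration of the rewrite in the diagram above, the sketch below hoists a nested `reasoning.effort` value to the top-level `reasoning_effort` field that DeepSeek reads, applying the low/medium to high upgrade. The function name and the map-based body handling are assumptions for illustration, not the adapter's actual code.

```go
package deepseek

// hoistReasoningEffort rewrites a decoded chat-completions body so that a
// nested reasoning.effort value becomes the top-level reasoning_effort field,
// upgrading low/medium to "high" as in the diagram. Illustrative only.
func hoistReasoningEffort(body map[string]any) {
	reasoning, ok := body["reasoning"].(map[string]any)
	if !ok {
		return
	}
	effort, ok := reasoning["effort"].(string)
	if !ok {
		return
	}
	switch effort {
	case "low", "medium":
		body["reasoning_effort"] = "high" // DeepSeek compatibility remap
	default:
		body["reasoning_effort"] = effort // pass other values through unchanged
	}
}
```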
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly Related PRs
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Codecov Report
❌ Patch coverage is
Greptile Summary

This PR adds a first-class DeepSeek provider.

Confidence Score: 5/5. Safe to merge; no P0 or P1 findings, all issues are minor style suggestions. The core logic is correct: the base URL (https://api.deepseek.com, without /v1) matches DeepSeek's actual HTTP endpoint, the reasoning effort normalisation mirrors DeepSeek's own documented compatibility mapping, and embeddings correctly return an error since DeepSeek exposes no embeddings endpoint. Tests cover auth, reasoning translation, Responses-to-chat translation (streaming and non-streaming), and negative capability assertions. Only two P2 items remain (a spurious `reasoning: {}` in the wire body and no comment about the thinking-toggle limitation), neither of which affects correctness. No files require special attention.

Important Files Changed
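For the first P2 item, here is a minimal sketch of one way to avoid forwarding the leftover empty object, assuming the outgoing payload is built as a map; the real adapter may use typed structs instead.

```go
package deepseek

// dropEmptyReasoning removes a now-empty reasoning object from the outgoing
// body once its effort value has been hoisted, so the wire request does not
// carry a spurious `"reasoning": {}`.
func dropEmptyReasoning(body map[string]any) {
	if r, ok := body["reasoning"].(map[string]any); ok && len(r) == 0 {
		delete(body, "reasoning")
	}
}
```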
Sequence Diagram

sequenceDiagram
participant Client
participant GoModel
participant DeepSeek
Client->>GoModel: POST /v1/responses (ResponsesRequest)
GoModel->>GoModel: providers.ResponsesViaChat()
GoModel->>GoModel: adaptChatRequest()<br/>reasoning.effort → reasoning_effort<br/>(low/medium→high, xhigh/max→max)
GoModel->>DeepSeek: POST /chat/completions<br/>(reasoning_effort: high|max)
DeepSeek-->>GoModel: ChatResponse
GoModel->>GoModel: Normalise → ResponsesResponse
GoModel-->>Client: ResponsesResponse (object:response, status:completed)
Client->>GoModel: POST /v1/responses (stream:true)
GoModel->>GoModel: StreamResponsesViaChat()
GoModel->>GoModel: adaptChatRequest() + WithStreaming()
GoModel->>DeepSeek: POST /chat/completions (stream:true)
DeepSeek-->>GoModel: SSE chat.completion.chunk stream
GoModel->>GoModel: Rewrite chunks → response.output_text.delta events
GoModel-->>Client: SSE responses event stream
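Below is a simplified sketch of the non-streaming normalisation step in the diagram, using stand-in types; the repository's actual ChatResponse and ResponsesResponse structs will differ in shape and field names.

```go
package deepseek

// Stand-in types, simplified for illustration.
type chatResponse struct {
	ID      string `json:"id"`
	Model   string `json:"model"`
	Choices []struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	} `json:"choices"`
}

type responsesResponse struct {
	ID         string `json:"id"`
	Object     string `json:"object"` // always "response"
	Status     string `json:"status"` // "completed" once the chat call returns
	Model      string `json:"model"`
	OutputText string `json:"output_text"`
}

// toResponsesResponse maps a DeepSeek chat completion onto the Responses
// shape returned to the client, as in the "Normalise" step above.
func toResponsesResponse(in chatResponse) responsesResponse {
	out := responsesResponse{
		ID:     in.ID,
		Object: "response",
		Status: "completed",
		Model:  in.Model,
	}
	if len(in.Choices) > 0 {
		out.OutputText = in.Choices[0].Message.Content
	}
	return out
}
```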
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@helm/README.md`:
- Line 3: Update the provider list string in helm/README.md by removing
"DeepSeek" from the comma-separated providers on the line that currently reads
"High-performance AI gateway for multiple LLM providers (OpenAI, Anthropic,
Gemini, DeepSeek, Groq, Z.ai, xAI, Oracle)"; edit that phrase to exclude
DeepSeek and adjust punctuation/spacing so the list remains grammatically
correct (e.g., "OpenAI, Anthropic, Gemini, Groq, Z.ai, xAI, Oracle"). Ensure no
other README lines advertise DeepSeek support.
In `@internal/providers/deepseek/deepseek_test.go`:
- Around line 230-237: The test TestEmbeddings_ReturnsUnsupported should assert
the error semantics for unsupported embeddings rather than any error; update the
assertion after calling provider.Embeddings (created via NewWithHTTPClient) to
check for the specific unsupported error (e.g., use errors.Is(err,
core.ErrUnsupported) or compare to the provider-specific sentinel like
ErrEmbeddingsNotSupported, or assert the error message contains "unsupported")
so the test fails only for unrelated transport/config errors while still
validating that embeddings are intentionally unsupported.
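A hedged sketch of the assertion this comment asks for follows. The sentinel (core.ErrUnsupported), the constructor signature, and the import paths are taken from the comment's wording or are placeholders, so they may not match the repository.

```go
package deepseek_test

import (
	"context"
	"errors"
	"net/http"
	"testing"

	// Placeholder module paths; substitute the repository's real import paths.
	"example.com/gomodel/internal/core"
	"example.com/gomodel/internal/providers/deepseek"
)

func TestEmbeddings_ReturnsUnsupported(t *testing.T) {
	p := deepseek.NewWithHTTPClient(http.DefaultClient) // constructor named in the review comment
	_, err := p.Embeddings(context.Background(), core.EmbeddingsRequest{})
	if !errors.Is(err, core.ErrUnsupported) {
		t.Fatalf("expected core.ErrUnsupported, got %v", err)
	}
}
```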
In `@internal/providers/deepseek/deepseek.go`:
- Around line 98-107: normalizeReasoningEffort currently accepts
DeepSeek-specific inputs like "xhigh" and "max"; change it so the public API
only accepts OpenAI-standard values ("low","medium","high") and maps them
internally to DeepSeek equivalents ("low" and "medium" -> "high", "high" ->
"xhigh"/"max" as required by DeepSeek). Update the function
normalizeReasoningEffort (and any caller at the API boundary) to validate the
incoming reasoning effort is one of "low","medium","high", return a clear error
or fallback for invalid values, and then perform the internal mapping to the
DeepSeek-specific token before sending to DeepSeek. Ensure callers do not pass
through "xhigh" or "max" from external inputs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 052a6826-d374-4176-8311-23fca3f7a4f3
📒 Files selected for processing (16)
- README.md
- cmd/gomodel/docs/docs.go
- cmd/gomodel/main.go
- config/config.example.yaml
- config/config_test.go
- docs/about/roadmap.mdx
- docs/adr/0001-explicit-provider-registration.md
- docs/advanced/configuration.mdx
- docs/getting-started/quickstart.mdx
- docs/guides/codex.mdx
- docs/openapi.json
- helm/Chart.yaml
- helm/README.md
- internal/providers/config_test.go
- internal/providers/deepseek/deepseek.go
- internal/providers/deepseek/deepseek_test.go
helm/README.md:

  # GoModel Helm Chart
- High-performance AI gateway for multiple LLM providers (OpenAI, Anthropic, Gemini, Groq, Z.ai, xAI, Oracle).
+ High-performance AI gateway for multiple LLM providers (OpenAI, Anthropic, Gemini, DeepSeek, Groq, Z.ai, xAI, Oracle).
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Verify whether the Helm chart actually exposes DeepSeek settings.
rg -n -C2 'deepseek|DEEPSEEK|providers\.deepseek' helm

Repository: ENTERPILOT/GoModel
Length of output: 44
Remove DeepSeek from the provider list in line 3.
The README advertises DeepSeek support, but the Helm chart does not expose any DeepSeek configuration (no providers.deepseek.* or DEEPSEEK_* settings found in helm/). Operators cannot configure this provider from Helm, so remove it from the description to match what the chart actually supports.
Document DeepSeek V4's two-level reasoning_effort surface and the low/medium -> high remap so users aren't surprised by the upgrade. Cross-link from the Codex guide and add a comment on normalizeReasoningEffort pointing to the user-facing table.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Summary
- Translate /v1/responses create and stream requests to DeepSeek /chat/completions
- Normalise reasoning_effort and update Codex/config docs

Tests
- go test ./...

Summary by CodeRabbit
New Features
Documentation
Tests