fix: make llm_enabled a user-configurable field for MCP services #314
rshoemaker merged 3 commits into main
Conversation
The control plane previously hardcoded `llm.enabled: true` in every MCP server `config.yaml`, forcing users to provide `llm_provider`, `llm_model`, and an API key even when they only needed MCP protocol access (e.g., via Claude Desktop or Cursor).

`llm_enabled` is now an optional boolean field (default: `false`) in `MCPServiceConfig`. When `false`, the `llm:` section is omitted entirely from the generated `config.yaml`. When `true`, `llm_provider`, `llm_model`, and the provider credential are required (unchanged behavior).

LLM-specific fields are rejected when `llm_enabled` is `false` to prevent silent misconfiguration. `ollama_url` is shared between LLM and embedding providers, so it is only rejected when neither needs it.

Also fixes a pre-existing gap where `embedding_provider: ollama` did not require `ollama_url`; this path is now validated.
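As a rough sketch of the validation rules just described (a minimal stand-in with assumed field names, not the actual `database/mcp_service_config.go` code):

```go
package main

import (
	"errors"
	"fmt"
)

// Minimal stand-in for the stored MCP service config; field names are
// assumptions based on the description above.
type MCPServiceConfig struct {
	LLMEnabled        *bool
	LLMProvider       *string
	LLMModel          *string
	EmbeddingProvider *string
	OllamaURL         *string
}

func validate(cfg *MCPServiceConfig) error {
	llmOn := cfg.LLMEnabled != nil && *cfg.LLMEnabled
	embOllama := cfg.EmbeddingProvider != nil && *cfg.EmbeddingProvider == "ollama"
	llmOllama := llmOn && cfg.LLMProvider != nil && *cfg.LLMProvider == "ollama"

	// LLM-specific fields are rejected when the LLM is disabled.
	if !llmOn && (cfg.LLMProvider != nil || cfg.LLMModel != nil) {
		return errors.New("llm_provider/llm_model require llm_enabled: true")
	}
	// When enabled, provider and model are mandatory (unchanged behavior).
	if llmOn && (cfg.LLMProvider == nil || cfg.LLMModel == nil) {
		return errors.New("llm_enabled: true requires llm_provider and llm_model")
	}
	// ollama_url is shared: reject it only when neither side needs it...
	if cfg.OllamaURL != nil && !llmOllama && !embOllama {
		return errors.New("ollama_url requires an ollama LLM or embedding provider")
	}
	// ...and the previously unvalidated embedding path now requires it.
	if embOllama && cfg.OllamaURL == nil {
		return errors.New("embedding_provider: ollama requires ollama_url")
	}
	return nil
}

func main() {
	provider := "openai"
	fmt.Println(validate(&MCPServiceConfig{LLMProvider: &provider}))
	// -> llm_provider/llm_model require llm_enabled: true
}
```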
🚥 Pre-merge checks: 2 passed, 1 inconclusive.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@server/internal/orchestrator/swarm/mcp_config.go`:
- Around line 113-146: Existing MCP configs with llm_provider but no llm_enabled
will fail reconciliation because mcp_config.go only builds LLM when
cfg.LLMEnabled != nil && *cfg.LLMEnabled; fix by treating an explicit
llm_provider as implying enabled (or backfill the DB): either add a DB migration
that sets llm_enabled = true for rows where llm_provider IS NOT NULL, or change
the LLM construction logic in mcp_config.go to treat a non-nil, non-empty
cfg.LLMProvider as enabling LLM when cfg.LLMEnabled == nil (i.e., if
cfg.LLMEnabled == nil && cfg.LLMProvider != nil && *cfg.LLMProvider != "" {
consider enabled }). Also ensure the
validation in database/mcp_service_config.go (the validation that rejects
LLM-only fields when llm_enabled is not true) is updated or kept consistent with
the migration so existing records are accepted during reconciliation.
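The second option above could be sketched as follows; this is a minimal stand-in assuming `LLMEnabled *bool` and `LLMProvider *string` fields as described in the comment, not the real `mcp_config.go` code:

```go
package main

import "fmt"

// Minimal stand-in for database.MCPServiceConfig; only the two fields
// relevant to the backward-compat question are modeled here.
type MCPServiceConfig struct {
	LLMEnabled  *bool
	LLMProvider *string
}

// llmIsEnabled treats a nil llm_enabled flag with a non-empty llm_provider
// as "enabled", so rows created before the flag existed still reconcile.
func llmIsEnabled(cfg *MCPServiceConfig) bool {
	if cfg.LLMEnabled != nil {
		return *cfg.LLMEnabled
	}
	return cfg.LLMProvider != nil && *cfg.LLMProvider != ""
}

func main() {
	provider := "openai"
	legacy := &MCPServiceConfig{LLMProvider: &provider} // pre-migration row
	fmt.Println(llmIsEnabled(legacy))                   // true
	fmt.Println(llmIsEnabled(&MCPServiceConfig{}))      // false
}
```

An explicit `llm_enabled: false` still wins, so the fallback only applies to rows that predate the flag.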
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: db18264f-f1d8-4c47-a52c-e65fc6ea8014
📒 Files selected for processing (6)
- e2e/service_provisioning_test.go
- server/internal/api/apiv1/validate_test.go
- server/internal/database/mcp_service_config.go
- server/internal/database/mcp_service_config_test.go
- server/internal/orchestrator/swarm/mcp_config.go
- server/internal/orchestrator/swarm/mcp_config_test.go
🧹 Nitpick comments (1)
server/internal/orchestrator/swarm/mcp_config_test.go (1)
397-434: Add explicit `llm_enabled=false` coverage for embedding-only configs.
`TestGenerateMCPConfig_EmbeddingWithoutLLM` currently exercises only the omitted (nil) flag path. Adding an explicit `LLMEnabled: false` variant would better lock in the intended parity and prevent regressions.

Suggested test addition:
```go
func TestGenerateMCPConfig_EmbeddingWithoutLLM_DisabledExplicitly(t *testing.T) {
	embProvider := "voyage"
	embModel := "voyage-3"
	embAPIKey := "pa-voyage-key"

	params := &MCPConfigParams{
		Config: &database.MCPServiceConfig{
			LLMEnabled:        utils.PointerTo(false),
			EmbeddingProvider: &embProvider,
			EmbeddingModel:    &embModel,
			EmbeddingAPIKey:   &embAPIKey,
		},
		DatabaseName:  "mydb",
		DatabaseHosts: []database.ServiceHostEntry{{Host: "db-host", Port: 5432}},
		Username:      "appuser",
		Password:      "secret",
	}

	data, err := GenerateMCPConfig(params)
	if err != nil {
		t.Fatalf("GenerateMCPConfig() error = %v", err)
	}

	cfg := parseYAML(t, data)
	if cfg.LLM != nil {
		t.Errorf("llm section should be absent when llm_enabled is false, got %+v", cfg.LLM)
	}
	if cfg.Embedding == nil {
		t.Fatal("embedding section should be present")
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/internal/orchestrator/swarm/mcp_config_test.go` around lines 397 - 434, Add a second sub-case to TestGenerateMCPConfig_EmbeddingWithoutLLM that explicitly sets LLMEnabled: false on the MCPServiceConfig inside the MCPConfigParams (rather than leaving it nil) to verify behavior when the flag is present but disabled; call GenerateMCPConfig with that params object, parse the YAML as in the existing test, and assert the same expectations (cfg.LLM is nil/absent and embedding section exists and is enabled with provider "voyage"), ensuring the test covers both the omitted-nil and explicit-false paths for LLMEnabled.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@server/internal/orchestrator/swarm/mcp_config_test.go`:
- Around line 397-434: Add a second sub-case to
TestGenerateMCPConfig_EmbeddingWithoutLLM that explicitly sets LLMEnabled: false
on the MCPServiceConfig inside the MCPConfigParams (rather than leaving it nil)
to verify behavior when the flag is present but disabled; call GenerateMCPConfig
with that params object, parse the YAML as in the existing test, and assert the
same expectations (cfg.LLM is nil/absent and embedding section exists and is
enabled with provider "voyage"), ensuring the test covers both the omitted-nil
and explicit-false paths for LLMEnabled.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 6b7b4565-cc36-43ee-99d4-0ce784f303c7
📒 Files selected for processing (1)
server/internal/orchestrator/swarm/mcp_config_test.go
Summary
Adds `llm_enabled` as an optional boolean config field (default: `false`) for MCP services, replacing the previously hardcoded `llm.enabled: true`.

When `llm_enabled` is `false` (or omitted):
- The `llm:` section is omitted entirely from the generated `config.yaml`.
- `llm_provider`, `llm_model`, API keys, `llm_temperature`, and `llm_max_tokens` must not be provided (validation error if present).

When `llm_enabled` is `true`:
- `llm_provider`, `llm_model`, and the matching provider credential are required (unchanged from prior behavior).
- The `llm:` section is written to `config.yaml` with `enabled: true`.

Additional fix: `embedding_provider: ollama` now correctly requires `ollama_url`, a pre-existing validation gap that was masked when LLM fields were always required.

Test plan
- `go test ./server/internal/database/... ./server/internal/orchestrator/swarm/...`
- `make test-e2e E2E_FIXTURE=lima`
- Create an MCP service with no LLM fields (`{}`), connect via Claude Desktop using `mcp-remote`
- Create one with `llm_enabled: true` plus provider config, verify the LLM proxy works via curl
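For illustration, a rough sketch of the two `config.yaml` shapes described above; the field names inside the `llm:` section and all placeholder values are assumptions, not taken from the actual generator:

```yaml
# llm_enabled: true -> an llm: section is written with enabled: true
llm:
  enabled: true
  provider: openai        # placeholder
  model: gpt-4o           # placeholder
  api_key: sk-placeholder # placeholder

# llm_enabled: false (or omitted) -> the llm: section above is absent
# entirely; only the remaining sections (e.g. embedding) are emitted.
```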