
MCP server improvements: docs, error handling, create_model consolidation, edit_model redesign#27

Merged
ZmeiGorynych merged 9 commits into main from egor/dev-1239-mcp-server-tweaks
Apr 13, 2026

Conversation


ZmeiGorynych (Member) commented Apr 13, 2026

Summary

A batch of MCP server improvements spanning documentation, robustness, and tool API design:

  • Docs: uvx install-free setup & YAML datasource config — MCP getting-started now leads with uvx (no install needed) and JSON config. Replaced conversational DB setup with the recommended YAML datasource config approach using ${ENV_VAR} references. Trimmed SSE details from getting-started (linked to reference instead). Added Quick Start section to MCP reference.

  • Graceful error handling for invalid datasources — resolve_env_vars() detects unresolved ${VAR} references and raises a clear ValueError. yaml_storage.get_datasource() catches parse/validation errors. MCP tools (list_datasources, datasource_summary, etc.) catch per-datasource errors so one bad config doesn't break the whole listing.

  • Merge create_model_from_query into create_model — The create_model tool now accepts an optional query parameter. When provided, it delegates to engine.create_model_from_query() and auto-introspects dimensions/measures from the query result. Mutually exclusive with sql_table/sql/dimensions/measures. Reduces tool surface without losing functionality.

  • Redesign edit_model with upsert semantics and full entity coverage — Replace the limited add-only edit_model with a comprehensive version supporting all model entities (dimensions, measures, aggregations, joins, filters). Named entities use upsert-by-name: if an entity exists, only provided fields are updated; if not, it's created. Structured remove dict keyed by entity type replaces the old ambiguous flat list. 30 tests (rewrote 9, added 21).
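The unresolved-variable check from the error-handling item can be illustrated with a small standalone sketch. The function name resolve_env_vars comes from the PR; the body here is a plausible reconstruction for illustration, not the actual implementation:

```python
import os
import re

# Matches ${VAR}-style references in a datasource config string.
_ENV_REF = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def resolve_env_vars(text: str) -> str:
    """Substitute ${VAR} references; raise a clear error if any are unset."""
    missing = sorted({name for name in _ENV_REF.findall(text) if name not in os.environ})
    if missing:
        raise ValueError(
            f"Unresolved environment variables in datasource config: {', '.join(missing)}"
        )
    return _ENV_REF.sub(lambda m: os.environ[m.group(1)], text)
```

Raising before substitution means a config with several missing variables reports all of them at once rather than failing one at a time.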

Test plan

  • Full unit test suite passes (poetry run pytest) — 467 tests
  • Ruff linter clean (poetry run ruff check slayer/ tests/)
  • Verify edit_model upsert: create a model, then call edit_model with an existing dimension name — should update, not error
  • Verify edit_model structured remove: remove={"measures": ["x"]} deletes the named measure
  • Verify create_model with query param works end-to-end against a live datasource
  • Verify malformed datasource YAML doesn't crash datasource_summary
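The structured-remove behavior being verified above can be sketched as follows. This helper and its change-message format are illustrative, not the PR's actual code; it assumes entities are dicts keyed by "name":

```python
def apply_remove(model: dict, remove: dict[str, list[str]]) -> list[str]:
    """Drop named entities per type, e.g. remove={"measures": ["x"]}."""
    changes = []
    for entity_type, names in remove.items():
        entities = model.get(entity_type, [])
        for name in names:
            before = len(entities)
            # Filter in place so the model's list object is mutated.
            entities[:] = [e for e in entities if e.get("name") != name]
            if len(entities) < before:
                changes.append(f"removed {entity_type} entry '{name}'")
            else:
                changes.append(f"{entity_type} entry '{name}' not found")
    return changes
```

Keying the remove dict by entity type ("dimensions", "measures", ...) is what disambiguates the old flat list: "x" can no longer refer to both a dimension and a measure at once.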

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Query-based model creation is unified into the main model-creation API with typed upsert/remove for model edits and explicit filter management.
  • Documentation

    • Updated getting-started, MCP reference, and examples to show the unified create_model(query=...) flow, uvx-based setup, and revised query examples.
  • Improvements

    • Datasource configs now validate env placeholders and surface unresolved vars; datasource listing/inspection and summary responses return structured JSON.
  • Tests

    • Expanded coverage for query-mode creation, upsert/remove semantics, filter handling, and datasource parsing/error cases.
  • Chores

    • Added auto-review CI config.

ZmeiGorynych and others added 4 commits April 11, 2026 15:53
…handling for invalid datasources

Docs:
- MCP getting-started now leads with uvx (no install needed) and JSON config
- Replaced conversational DB setup with the recommended path: drop a YAML
  datasource config with ${ENV_VAR} references into the storage folder
- Noted that datasource configs are hot-reloaded (no restart needed)
- Removed SSE details from getting-started (link to reference instead)
- MCP reference now has a Quick Start section with uvx examples
- Moved datasource_summary and ingest_datasource_models to Datasource
  Management section in reference

Error handling:
- resolve_env_vars() now detects unresolved ${VAR} references and raises
  a clear ValueError listing the missing variables
- yaml_storage.get_datasource() catches YAML parse errors and Pydantic
  validation errors, re-raises as descriptive ValueErrors
- MCP tools (list_datasources, datasource_summary, describe_datasource,
  list_tables) catch per-datasource errors so one bad config doesn't
  break the whole list — errors are reported inline

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The create_model tool now accepts an optional `query` parameter (a SLayer
query dict). When provided, it delegates to engine.create_model_from_query()
and auto-introspects dimensions and measures from the query result.
Mutually exclusive with sql_table/sql/dimensions/measures — clear error
if both are supplied.

This reduces the MCP tool surface from two tools to one without losing
any functionality. The engine-level create_model_from_query() method is
unchanged.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
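The mutual-exclusivity guard described in this commit can be sketched as a standalone check. Parameter names match the PR description; the helper name and error text are illustrative, and (per a later review note) empty placeholders such as sql="" or dimensions=[] are ignored rather than treated as conflicts:

```python
from typing import Optional

def check_create_model_args(
    query: Optional[dict] = None,
    sql_table: Optional[str] = None,
    sql: Optional[str] = None,
    dimensions: Optional[list] = None,
    measures: Optional[list] = None,
) -> Optional[str]:
    """Return an error string if 'query' is mixed with table params, else None."""
    if query is None:
        return None
    # Only non-empty values count as conflicts (drops None, "", []).
    table_params = {
        k: v
        for k, v in {
            "sql_table": sql_table,
            "sql": sql,
            "dimensions": dimensions,
            "measures": measures,
        }.items()
        if v
    }
    if table_params:
        return (
            f"Error: 'query' cannot be combined with {', '.join(table_params.keys())}. "
            "Use 'query' alone, or provide table details without 'query'."
        )
    return None
```

Returning an error string instead of raising matches the MCP-tool convention visible elsewhere in this PR, where tools report problems as plain-text results.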
…verage

Replace the limited add-only edit_model with a comprehensive version that
supports creating, updating, and deleting all model entities (dimensions,
measures, aggregations, joins, filters) in a single call.

Key changes:
- Upsert semantics: dimensions/measures/aggregations/joins matched by name
  (or target_model for joins); existing entities get partial updates, new
  ones are created
- Structured remove dict keyed by entity type replaces ambiguous flat list
- Filter management via add_filters/remove_filters params
- New scalar params: sql_table, sql, hidden
- Full model re-validation before save catches cross-field errors
- 30 tests (rewrote 9, added 21) covering upsert, partial update,
  aggregations, joins, filters, typed remove, and validation

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
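The upsert-by-name rule from this commit (update only the provided fields of an existing entity, otherwise create it) can be sketched generically. This helper is illustrative; the PR's actual _upsert_entity works on Pydantic entities and records changes rather than returning a status string:

```python
def upsert_by_name(entities: list[dict], spec: dict, id_field: str = "name") -> str:
    """Update the matching entity with the provided fields, or append a new one."""
    key = spec.get(id_field)
    if key is None:
        raise ValueError(f"spec is missing required field '{id_field}'")
    for entity in entities:
        if entity.get(id_field) == key:
            # Partial update: only fields present in spec are overwritten.
            entity.update(spec)
            return "updated"
    entities.append(dict(spec))
    return "created"
```

Joins would pass id_field="target_model" instead of "name", matching the commit's note that joins are keyed differently from the other entity types.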

coderabbitai bot commented Apr 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9cbb8f25-df89-49ca-825d-c78b8fe5d7ee

📥 Commits

Reviewing files that changed from the base of the PR and between bce3f82 and 9569878.

📒 Files selected for processing (1)
  • .coderabbit.yaml
✅ Files skipped from review due to trivial changes (1)
  • .coderabbit.yaml

📝 Walkthrough

Walkthrough

Consolidates query-based model creation into create_model(query=...), removes create_model_from_query, refactors edit_model for typed upsert/remove and filter management, enforces datasource env-var resolution with errors, hardens YAML parsing, updates docs/examples, and expands tests to cover these behaviors.

Changes

Cohort / File(s) Summary
MCP Server — model APIs
slayer/mcp/server.py
Merge query-based creation into create_model(query=...) (remove create_model_from_query), enforce mutual-exclusivity, route to engine query-introspection, refactor edit_model to typed upsert/remove for dimensions/measures/aggregations/joins, add add_filters/remove_filters, scalar fields (sql_table, sql, hidden), validate model before save, and improve datasource-list/describe error handling.
Datasource config & YAML storage
slayer/core/models.py, slayer/storage/yaml_storage.py
DatasourceConfig.resolve_env_vars() now collects referenced ${VAR} names and raises on unresolved placeholders. YAMLStorage.get_datasource() wraps YAML parsing and Pydantic validation with contextual ValueErrors and preserves original exception chaining.
Tests
tests/test_mcp_server.py, tests/test_storage.py
Add/adjust tests for create_model(query=...) conflict/validation, edit_model upsert/partial-update and typed removals (dimensions/measures/aggregations/joins), filter management, scalar metadata, cross-field validation, and YAML/env-var error handling and malformed-datasource reporting.
Docs & Examples
docs/interfaces/mcp.md, docs/reference/mcp.md, docs/getting-started/mcp.md, docs/concepts/models.md, docs/index.md, docs/examples/...
Replace references to create_model_from_query with create_model(query=...), document conditional inputs for create_model, switch getting-started to uvx/YAML datasource workflow, and update example queries and notebook guidance accordingly.
Repo config
.coderabbit.yaml
Add CI/review config enabling auto-review for draft PRs.

Sequence Diagram

sequenceDiagram
    actor User
    participant MCP as MCP Server
    participant Engine as Query Engine
    participant Storage as Model Storage
    participant YAML as YAML Loader/Config

    User->>MCP: create_model(query={...})
    MCP->>MCP: validate inputs (reject mixed params)
    MCP->>Engine: create_model_from_query(query)
    Engine->>Engine: execute query & infer schema
    Engine-->>MCP: inferred dimensions/measures
    MCP->>Storage: persist model definition
    Storage->>YAML: resolve datasource config (${VAR})
    YAML-->>Storage: resolved values or raise ValueError
    Storage-->>MCP: persisted confirmation
    MCP-->>User: return dimension/measure names or error

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Suggested reviewers

  • AivanF
  • whimo

"I nibble on queries, stitch fields with care,
create_model keeps what once lived elsewhere.
Upserts and removes, env-vars all checked,
YAML parsed tidy, no configs left wrecked.
Docs and tests hopped in — the rabbit's well-prepared. 🐇"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 22.03%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title directly reflects the four main themes of the changeset: MCP server improvements, docs updates, error handling hardening, and the two major tool redesigns (create_model consolidation and edit_model overhaul).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.


coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
slayer/mcp/server.py (1)

300-345: ⚠️ Potential issue | 🟠 Major

Reject data_source in query mode instead of silently dropping it.

create_model(..., query=...) still accepts data_source, but this branch neither rejects it nor forwards it to engine.create_model_from_query(...). That makes part of the public API a no-op and can break query payloads that rely on an inline source_model and expect the outer data_source to be honored.

Suggested fix
         if query is not None:
             table_params = _build_dict(
-                sql_table=sql_table, sql=sql, dimensions=dimensions, measures=measures,
+                sql_table=sql_table,
+                sql=sql,
+                data_source=data_source,
+                dimensions=dimensions,
+                measures=measures,
             )
             if table_params:
                 return (
                     f"Error: 'query' cannot be combined with {', '.join(table_params.keys())}. "
                     "Use 'query' alone to create from a query, or provide table details without 'query'."
                 )

Based on learnings: SlayerQuery.source_model accepts a model name, inline SlayerModel, or ModelExtension; create_model_from_query() saves a query as a permanent model.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 300 - 345, The create_model function
currently ignores data_source when query is provided; update create_model to
explicitly reject a non-None data_source in the branch handling query: after
entering if query is not None and before parsing the query (before calling
SlayerQuery.model_validate or engine.create_model_from_query), check if
data_source is not None and return or raise a clear error message (e.g., "Error:
'data_source' cannot be used when creating a model from 'query'") so callers are
not silently misled; keep references to SlayerQuery.model_validate and
engine.create_model_from_query and do not forward data_source to
create_model_from_query.
🧹 Nitpick comments (1)
slayer/mcp/server.py (1)

517-533: Use keyword args for the _upsert_entity(...) calls.

These new call sites are hard to audit because every argument is positional, and they also miss the repo’s multi-parameter call convention.

Suggested refactor
         for spec in dimensions or []:
-            err = _upsert_entity(model.dimensions, spec, Dimension, "name", changes, "dimension")
+            err = _upsert_entity(
+                entity_list=model.dimensions,
+                spec=spec,
+                entity_cls=Dimension,
+                id_field="name",
+                changes=changes,
+                label="dimension",
+            )
             if err:
                 return err
 
         for spec in measures or []:
-            err = _upsert_entity(model.measures, spec, Measure, "name", changes, "measure")
+            err = _upsert_entity(
+                entity_list=model.measures,
+                spec=spec,
+                entity_cls=Measure,
+                id_field="name",
+                changes=changes,
+                label="measure",
+            )
             if err:
                 return err

As per coding guidelines: **/*.py: Use keyword arguments for functions with more than 1 parameter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 517 - 533, The four calls to
_upsert_entity use only positional args which violates the repo convention and
makes reviews error-prone; update each call in the loops (the ones using
model.dimensions/model.measures/model.aggregations/model.joins and classes
Dimension/Measure/Aggregation/ModelJoin) to pass keyword arguments instead (e.g.
collection=..., spec=..., entity_cls=... or entity_class=..., key_field=... or
key=..., changes=..., kind=...), matching the _upsert_entity parameter names so
each argument is explicit and follows the multi-parameter call convention.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/reference/mcp.md`:
- Line 76: The docs line for datasource_summary incorrectly promises JSON even
when no datasources/models exist; update either the tool or the doc: either
modify the datasource_summary implementation to always return a JSON empty
structure (e.g., a predictable empty object/array and consistent schema) or
change the documentation text for `datasource_summary` to state that it returns
JSON when datasources/models exist but returns a plain-text empty-state message
when none are present; reference `datasource_summary` in your change so
consumers know which behavior changed.

In `@slayer/mcp/server.py`:
- Around line 469-474: Ensure sql_table and sql remain mutually exclusive: in
the edit handling around model, first check if both sql_table and sql are
provided in the request and reject with a 400/error response (do not apply
changes). If only sql_table is provided, set model.sql_table = sql_table, clear
model.sql (set to None/""), and append both the "set sql_table ..." and a
"cleared sql" change entry; if only sql is provided, set model.sql = sql, clear
model.sql_table, and append both the "set sql ..." and a "cleared sql_table"
change entry. Use the existing variables model, sql_table, sql, and changes to
implement this logic.

---

Outside diff comments:
In `@slayer/mcp/server.py`:
- Around line 300-345: The create_model function currently ignores data_source
when query is provided; update create_model to explicitly reject a non-None
data_source in the branch handling query: after entering if query is not None
and before parsing the query (before calling SlayerQuery.model_validate or
engine.create_model_from_query), check if data_source is not None and return or
raise a clear error message (e.g., "Error: 'data_source' cannot be used when
creating a model from 'query'") so callers are not silently misled; keep
references to SlayerQuery.model_validate and engine.create_model_from_query and
do not forward data_source to create_model_from_query.

---

Nitpick comments:
In `@slayer/mcp/server.py`:
- Around line 517-533: The four calls to _upsert_entity use only positional args
which violates the repo convention and makes reviews error-prone; update each
call in the loops (the ones using
model.dimensions/model.measures/model.aggregations/model.joins and classes
Dimension/Measure/Aggregation/ModelJoin) to pass keyword arguments instead (e.g.
collection=..., spec=..., entity_cls=... or entity_class=..., key_field=... or
key=..., changes=..., kind=...), matching the _upsert_entity parameter names so
each argument is explicit and follows the multi-parameter call convention.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: de280bf6-01fd-41d1-9102-0ee1bedf80c4

📥 Commits

Reviewing files that changed from the base of the PR and between ad59951 and 6589405.

📒 Files selected for processing (12)
  • docs/concepts/models.md
  • docs/examples/02_sql_vs_dsl/sql_vs_dsl_nb.ipynb
  • docs/examples/07_aggregations/aggregations.md
  • docs/getting-started/mcp.md
  • docs/index.md
  • docs/interfaces/mcp.md
  • docs/reference/mcp.md
  • slayer/core/models.py
  • slayer/mcp/server.py
  • slayer/storage/yaml_storage.py
  • tests/test_mcp_server.py
  • tests/test_storage.py

ZmeiGorynych and others added 3 commits April 13, 2026 10:08
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
slayer/mcp/server.py (1)

523-538: Switch the _upsert_entity() calls to keyword arguments.

These six-argument positional calls are hard to audit and violate the repo's Python calling convention.

♻️ Suggested cleanup
-            err = _upsert_entity(model.dimensions, spec, Dimension, "name", changes, "dimension")
+            err = _upsert_entity(
+                entity_list=model.dimensions,
+                spec=spec,
+                entity_cls=Dimension,
+                id_field="name",
+                changes=changes,
+                label="dimension",
+            )
             if err:
                 return err

-            err = _upsert_entity(model.measures, spec, Measure, "name", changes, "measure")
+            err = _upsert_entity(
+                entity_list=model.measures,
+                spec=spec,
+                entity_cls=Measure,
+                id_field="name",
+                changes=changes,
+                label="measure",
+            )
             if err:
                 return err

-            err = _upsert_entity(model.aggregations, spec, Aggregation, "name", changes, "aggregation")
+            err = _upsert_entity(
+                entity_list=model.aggregations,
+                spec=spec,
+                entity_cls=Aggregation,
+                id_field="name",
+                changes=changes,
+                label="aggregation",
+            )
             if err:
                 return err

-            err = _upsert_entity(model.joins, spec, ModelJoin, "target_model", changes, "join")
+            err = _upsert_entity(
+                entity_list=model.joins,
+                spec=spec,
+                entity_cls=ModelJoin,
+                id_field="target_model",
+                changes=changes,
+                label="join",
+            )
             if err:
                 return err

As per coding guidelines "Use keyword arguments for functions with more than 1 parameter."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 523 - 538, The calls to _upsert_entity in
server.py use six positional arguments which violates the repo convention;
change each call to use keyword arguments (e.g., model=<...>, spec=<...>,
entity_class=<...>, key_field=<...>, changes=<...>, kind=<...>) so it's obvious
what each value represents—update the four loops calling _upsert_entity
(referencing model.dimensions/model.measures/model.aggregations/model.joins and
the corresponding Dimension/Measure/Aggregation/ModelJoin, the
"name"/"target_model" key_field, changes, and the
"dimension"/"measure"/"aggregation"/"join" kind) to pass those values as
keywords.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@slayer/mcp/server.py`:
- Around line 214-222: The exception handler around storage.get_datasource
currently returns str(exc) to MCP clients, which can leak sensitive input;
instead, catch exceptions from storage.get_datasource (e.g., in the try/except
blocks that call storage.get_datasource and then datasources.append) and replace
the returned error text with a sanitized category like "invalid datasource" or
"validation error"; log the full exception and stack trace server-side (using
the existing logger) for debugging, and apply the same change to the other
similar handlers that call storage.get_datasource (the blocks around lines
663-668, 678-681, 719-722) so clients only receive the safe message while full
details remain in server logs.

---

Nitpick comments:
In `@slayer/mcp/server.py`:
- Around line 523-538: The calls to _upsert_entity in server.py use six
positional arguments which violates the repo convention; change each call to use
keyword arguments (e.g., model=<...>, spec=<...>, entity_class=<...>,
key_field=<...>, changes=<...>, kind=<...>) so it's obvious what each value
represents—update the four loops calling _upsert_entity (referencing
model.dimensions/model.measures/model.aggregations/model.joins and the
corresponding Dimension/Measure/Aggregation/ModelJoin, the "name"/"target_model"
key_field, changes, and the "dimension"/"measure"/"aggregation"/"join" kind) to
pass those values as keywords.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 522acff0-4cad-48b5-a896-01a1fab0c505

📥 Commits

Reviewing files that changed from the base of the PR and between 6589405 and 78fa66b.

📒 Files selected for processing (1)
  • slayer/mcp/server.py


coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
slayer/mcp/server.py (1)

232-239: ⚠️ Potential issue | 🟡 Minor

Keep datasource_summary()'s JSON envelope stable.

The new empty-workspace branch returns "datasources": [], but a model-only workspace still omits that key because the top-level dict is built conditionally. Clients parsing this as structured JSON now have to handle two response shapes for the same tool.

📦 Suggested fix
-        result = {}
-        if datasources:
-            result["datasources"] = datasources
-        result["models"] = models
-        result["model_count"] = len(models)
+        result = {
+            "datasources": datasources,
+            "models": models,
+            "model_count": len(models),
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 232 - 239, The JSON envelope returned by
datasource_summary() is unstable because "datasources" is only set when
datasources is truthy; always include the "datasources" key (even as an empty
list) so clients see a consistent shape. Modify the construction of result (the
dict built from datasources, models, and model_count) to always set
result["datasources"] = datasources (or [] when None/falsey), and keep
result["models"] = models and result["model_count"] = len(models) as before so
the response shape is stable for all workspaces.
♻️ Duplicate comments (1)
slayer/mcp/server.py (1)

214-222: ⚠️ Potential issue | 🟠 Major

Stop returning raw datasource load errors to MCP clients.

slayer/storage/yaml_storage.py:55-71 already wraps bad datasource files with the datasource path and low-level parser/validation detail. Echoing str(exc) here leaks that internal config/debug data through datasource_summary, list_datasources, describe_datasource, and list_tables. Log the full exception server-side and return a fixed client-safe message instead.

🛡️ Suggested sanitization pattern
-        except (ValueError, Exception) as exc:
-            datasources.append({"name": name, "error": str(exc)})
+        except Exception as exc:
+            logger.warning("Failed to load datasource %s", name, exc_info=exc)
+            datasources.append({"name": name, "error": "invalid datasource config"})
-        except (ValueError, Exception) as exc:
-            return f"Datasource '{name}' has an invalid config: {exc}"
+        except Exception as exc:
+            logger.warning("Failed to load datasource %s", name, exc_info=exc)
+            return f"Datasource '{name}' has an invalid config."

Also applies to: 663-668, 678-681, 719-722

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 214 - 222, The current try/except around
storage.get_datasource in server.py (used by the datasource summary/listing
endpoints such as datasource_summary, list_datasources, describe_datasource, and
list_tables) is returning str(exc) to clients which leaks internal
parser/validation details; instead, log the full exception server-side
(including stack trace) and append a fixed, client-safe error entry like
{"name": name, "error": "failed to load datasource"} (or similar) to
datasources; update the identical patterns in the other handlers that call
storage.get_datasource so they all log the exception and return the safe message
rather than the raw exception string.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@slayer/mcp/server.py`:
- Around line 332-340: The check for mixed query/table parameters wrongly treats
empty placeholders (e.g., sql="", dimensions=[], measures=[]) as present; after
creating table_params (via _build_dict) filter out empty/falsy values (empty
strings, empty lists/dicts) before deciding there is a conflict. Update the
logic around table_params in server.py so that when query is not None you ignore
keys whose values are empty (use a dict comprehension or extend _build_dict to
drop empty containers/strings) and only raise the error if any remaining keys
exist; reference variables/functions: query, table_params, and _build_dict.

---

Outside diff comments:
In `@slayer/mcp/server.py`:
- Around line 232-239: The JSON envelope returned by datasource_summary() is
unstable because "datasources" is only set when datasources is truthy; always
include the "datasources" key (even as an empty list) so clients see a
consistent shape. Modify the construction of result (the dict built from
datasources, models, and model_count) to always set result["datasources"] =
datasources (or [] when None/falsey), and keep result["models"] = models and
result["model_count"] = len(models) as before so the response shape is stable
for all workspaces.

---

Duplicate comments:
In `@slayer/mcp/server.py`:
- Around line 214-222: The current try/except around storage.get_datasource in
server.py (used by the datasource summary/listing endpoints such as
datasource_summary, list_datasources, describe_datasource, and list_tables) is
returning str(exc) to clients which leaks internal parser/validation details;
instead, log the full exception server-side (including stack trace) and append a
fixed, client-safe error entry like {"name": name, "error": "failed to load
datasource"} (or similar) to datasources; update the identical patterns in the
other handlers that call storage.get_datasource so they all log the exception
and return the safe message rather than the raw exception string.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6869cb3b-494b-4e10-b180-7be8988692ca

📥 Commits

Reviewing files that changed from the base of the PR and between 78fa66b and c7390d1.

📒 Files selected for processing (2)
  • slayer/mcp/server.py
  • tests/test_mcp_server.py


coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
slayer/mcp/server.py (2)

833-835: ⚠️ Potential issue | 🟡 Minor

Wrap storage.get_datasource() in try/except for consistency.

Same issue as edit_datasource: if the datasource config is malformed, the exception will propagate and may expose error details.

🛡️ Suggested fix
     @mcp.tool()
     def ingest_datasource_models(datasource_name: str, include_tables: str = "", schema_name: str = "") -> str:
         ...
         from slayer.engine.ingestion import ingest_datasource as _ingest
 
-        ds = storage.get_datasource(datasource_name)
+        try:
+            ds = storage.get_datasource(datasource_name)
+        except Exception as exc:
+            logger.warning("Failed to load datasource '%s': %s", datasource_name, exc)
+            return f"Datasource '{datasource_name}' has an invalid config."
         if ds is None:
             return f"Datasource '{datasource_name}' not found."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@slayer/mcp/server.py` around lines 833 - 835, Wrap the call to
storage.get_datasource(datasource_name) in a try/except like edit_datasource
does: catch exceptions raised while loading/parsing the datasource config, log
the internal exception details via the module logger (or process logger) and
return a sanitized message such as "Datasource '<name>' not found or invalid" to
the caller; ensure you still return the existing "Datasource '<datasource_name>'
not found." when ds is None, but prevent uncaught exceptions from propagating by
handling errors from storage.get_datasource.

782-789: ⚠️ Potential issue | 🟡 Minor

Wrap storage.get_datasource() in try/except for consistency.

Unlike list_datasources, describe_datasource, and list_tables, this handler doesn't catch exceptions from storage.get_datasource(). If the datasource config is malformed (invalid YAML, unresolved env vars), the exception will propagate up and potentially expose raw error details.

🛡️ Suggested fix
     @mcp.tool()
     def edit_datasource(
         name: str,
         description: Optional[str] = None,
     ) -> str:
         """Update a datasource's metadata.
 
         Args:
             name: Datasource name to update.
             description: New description for the datasource.
         """
-        ds = storage.get_datasource(name)
+        try:
+            ds = storage.get_datasource(name)
+        except Exception as exc:
+            logger.warning("Failed to load datasource '%s': %s", name, exc)
+            return f"Datasource '{name}' has an invalid config."
         if ds is None:
             return f"Datasource '{name}' not found."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `slayer/mcp/server.py` around lines 782-789, wrap the call to
storage.get_datasource(name) in a try/except like the other handlers
(list_datasources, describe_datasource, list_tables) so malformed datasource
configs don't raise raw exceptions; catch Exception around
storage.get_datasource(name), log the exception internally (not returned to
caller) and return a safe error message (e.g., "Datasource '<name>' not found or
invalid") if an error occurs, then proceed to the existing ds is None check and
the subsequent description update and storage.save_datasource(ds).
🧹 Nitpick comments (2)
tests/test_mcp_server.py (2)

79-80: Remove redundant import.

DatasourceConfig is already imported at the top of the file (line 14). This local import is unnecessary.

♻️ Suggested fix
     def test_includes_datasource(self, mcp_server, storage: YAMLStorage) -> None:
-        from slayer.core.models import DatasourceConfig
         storage.save_datasource(DatasourceConfig(
             name="mydb", type="postgres", host="localhost", description="Production DB",
         ))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tests/test_mcp_server.py` around lines 79-80, in test_includes_datasource
within tests/test_mcp_server.py remove the redundant local import statement
"from slayer.core.models import DatasourceConfig" (DatasourceConfig is already
imported at the top of the file); simply delete that line so the
test_includes_datasource function uses the existing top-level DatasourceConfig
import.

186-187: Consider clarifying the assertion logic.

The assertion "Error:" not in result or "Datasource" in result is intended to verify routing to the query path (per the comment), but the logic is subtle: "Error:" (colon immediately after "Error") won't match "Error creating model from query:" because the colon appears later in that string. A more explicit check would improve readability.

♻️ Suggested clarification
         # Should route to query path (fails on missing datasource), not mixed-param error
-        assert "Error:" not in result or "Datasource" in result
+        # Mixed-param error: "Error: 'query' cannot..." vs query-path error: "Error creating model..."
+        assert "cannot be combined" not in result  # Ensure we didn't hit mixed-param check
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `tests/test_mcp_server.py` around lines 186-187, the existing assertion uses
the subtle expression `"Error:" not in result or "Datasource" in result`; update
it to an explicit boolean check to make intent clear — for example replace with
`assert not ("Error" in result and "Datasource" not in result)` (or, if you want
to match the specific error text, use `assert not ("Error creating model from
query:" in result and "Datasource" not in result)`). Change the assertion
around the `result` variable in the test so it clearly asserts that either no
"Error" appears or the string "Datasource" is present.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c606b523-33cd-4ad0-bf66-8b23e5540045

📥 Commits

Reviewing files that changed from the base of the PR and between c7390d1 and bce3f82.

📒 Files selected for processing (2)
  • slayer/mcp/server.py
  • tests/test_mcp_server.py

@ZmeiGorynych ZmeiGorynych requested a review from AivanF April 13, 2026 10:37
@ZmeiGorynych ZmeiGorynych merged commit 68a4203 into main Apr 13, 2026
3 checks passed