
fix(mcp): replace uuid with url and changed_on_humanized in default list columns#38566

Merged
Antonio-RiveroMartnez merged 2 commits into apache:master from aminghadersohi:amin/fix-mcp-default-columns-remove-uuid on Mar 13, 2026

Conversation

@aminghadersohi
Contributor

@aminghadersohi aminghadersohi commented Mar 10, 2026

User description

Summary

When users ask the copilot chatbot "list my dashboards", the LLM displays all default columns from the MCP tool response — including uuid. Despite system prompt instructions to hide raw technical fields, LLMs reliably display whatever data is in the tool response. Prompt-level instructions are suggestions; data-level changes are deterministic.

This PR removes uuid from the default columns in all three list tools and replaces it with user-friendly fields:

  • Dashboards: uuid → url, changed_on_humanized
  • Charts: uuid → url, changed_on_humanized
  • Datasets: uuid → changed_on_humanized (no url since datasets lack a direct user-facing URL)

Changes applied to:

  • list_dashboards.py, list_charts.py, list_datasets.py (tool default columns)
  • schema_discovery.py (schema metadata defaults + is_default flags on extra columns)
  • Tool docstrings updated to reflect new defaults
  • All affected unit tests updated

uuid remains available via select_columns parameter and stays in search_columns — just no longer returned by default.
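The fallback described above can be sketched as follows. This is a minimal illustration of the intended behavior, not the actual ModelListCore code; resolve_columns and the column lists here are hypothetical stand-ins:

```python
# User-friendly defaults after this PR; uuid is intentionally absent.
DEFAULT_DASHBOARD_COLUMNS = [
    "id", "dashboard_title", "slug", "url", "changed_on_humanized",
]

def resolve_columns(select_columns=None):
    # No explicit selection: fall back to the defaults.
    if not select_columns:
        return list(DEFAULT_DASHBOARD_COLUMNS)
    # Explicit selection: uuid stays available on demand.
    return list(select_columns)

assert "uuid" not in resolve_columns()
assert resolve_columns(["id", "uuid"]) == ["id", "uuid"]
```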

BEFORE/AFTER

Before: list_dashboards default response includes:

id, dashboard_title, slug, uuid

After: list_dashboards default response includes:

id, dashboard_title, slug, url, changed_on_humanized

TESTING INSTRUCTIONS

  1. Call list_dashboards, list_charts, list_datasets with no select_columns
  2. Verify responses contain url/changed_on_humanized instead of uuid
  3. Call with select_columns=["id", "uuid"] and verify uuid is still available on demand
  4. Call get_schema for each model type and verify default_select reflects new columns

ADDITIONAL INFORMATION

  • Has associated issue
  • Required feature flags
  • Changes UI
  • Includes DB Migration
  • Changes API

🤖 Generated with Claude Code


CodeAnt-AI Description

Show URL and humanized last-modified time instead of UUID in default lists

What Changed

  • Dashboard and chart list results now return a user-facing URL and a humanized "last modified" time by default instead of the raw UUID.
  • Dataset list results now include a humanized "last modified" time by default (datasets have no default URL).
  • The UUID field remains available when explicitly requested; schema discovery and the get_schema tool now report the new default columns.
  • Unit tests and tool docstrings updated to reflect the new default columns.

Impact

✅ Clearer Copilot list responses
✅ Fewer raw UUIDs shown to users
✅ Visible last-modified time in dashboard/chart/dataset lists

💡 Usage Guide

Checking Your Pull Request

Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.

Talking to CodeAnt AI

Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:

@codeant-ai ask: Your question here

This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.

Example

@codeant-ai ask: Can you suggest a safer alternative to storing this secret?

Preserve Org Learnings with CodeAnt

You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:

@codeant-ai: Your feedback here

This helps CodeAnt AI learn and adapt to your team's coding style and standards.

Example

@codeant-ai: Do not flag unused imports.

Retrigger review

Ask CodeAnt AI to review the PR again, by typing:

@codeant-ai: review

Check Your Repository Health

To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.

@bito-code-review
Contributor

bito-code-review bot commented Mar 10, 2026

Code Review Agent Run #2f23b6

Actionable Suggestions - 0
Review Details
  • Files reviewed - 8 · Commit Range: 5abff06..5abff06
    • superset/mcp_service/chart/tool/list_charts.py
    • superset/mcp_service/common/schema_discovery.py
    • superset/mcp_service/dashboard/tool/list_dashboards.py
    • superset/mcp_service/dataset/tool/list_datasets.py
    • tests/unit_tests/mcp_service/chart/tool/test_list_charts.py
    • tests/unit_tests/mcp_service/dashboard/tool/test_dashboard_tools.py
    • tests/unit_tests/mcp_service/dataset/tool/test_dataset_tools.py
    • tests/unit_tests/mcp_service/system/tool/test_get_schema.py
  • Files skipped - 0
  • Tools
    • Whispers (Secret Scanner) - ✔︎ Successful
    • Detect-secrets (Secret Scanner) - ✔︎ Successful
    • MyPy (Static Code Analysis) - ✔︎ Successful
    • Astral Ruff (Static Code Analysis) - ✔︎ Successful

Bito Usage Guide

Commands

Type the following command in the pull request comment and save the comment.

  • /review - Manually triggers a full AI review.

  • /pause - Pauses automatic reviews on this pull request.

  • /resume - Resumes automatic reviews.

  • /resolve - Marks all Bito-posted review comments as resolved.

  • /abort - Cancels all in-progress reviews.

Refer to the documentation for additional commands.

Configuration

This repository uses Superset. You can customize the agent settings or contact your Bito workspace admin at evan@preset.io.

Documentation & Help

AI Code Review powered by Bito

@codecov

codecov bot commented Mar 11, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 64.40%. Comparing base (95f61bd) to head (d49aacb).
⚠️ Report is 5 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #38566      +/-   ##
==========================================
- Coverage   65.01%   64.40%   -0.62%     
==========================================
  Files        1817     2529     +712     
  Lines       72318   128947   +56629     
  Branches    23032    29718    +6686     
==========================================
+ Hits        47016    83042   +36026     
- Misses      25302    44460   +19158     
- Partials        0     1445    +1445     
Flag Coverage Δ
hive 40.76% <100.00%> (?)
mysql 61.89% <100.00%> (?)
postgres 61.96% <100.00%> (?)
presto 40.77% <100.00%> (?)
python 63.59% <100.00%> (?)
sqlite 61.59% <100.00%> (?)
unit 100.00% <ø> (?)

Flags with carried forward coverage won't be shown.


aminghadersohi and others added 2 commits March 12, 2026 16:15
…ist columns

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@aminghadersohi aminghadersohi force-pushed the amin/fix-mcp-default-columns-remove-uuid branch from 5308c5e to d49aacb on March 12, 2026 20:15
@codeant-ai-for-open-source codeant-ai-for-open-source bot added the size:L label (This PR changes 100-499 lines, ignoring generated files) on Mar 12, 2026
@codeant-ai-for-open-source
Contributor

Sequence Diagram

This PR changes the default selected columns for list tools so responses return user-friendly fields like url and changed_on_humanized instead of uuid. It also updates the get_schema metadata so clients see the same new defaults in default_select.

sequenceDiagram
    participant User
    participant ListTool
    participant SchemaConfig
    participant DataStore
    participant SchemaTool

    User->>ListTool: Request list dashboards with no select columns
    ListTool->>SchemaConfig: Resolve default columns for dashboard
    SchemaConfig-->>ListTool: id dashboard_title slug url changed_on_humanized
    ListTool->>DataStore: Fetch dashboard rows with resolved columns
    DataStore-->>ListTool: Dashboard records
    ListTool-->>User: Return list without uuid by default

    User->>SchemaTool: Request schema for dashboard
    SchemaTool->>SchemaConfig: Build schema info
    SchemaConfig-->>SchemaTool: default_select uses url and changed_on_humanized
    SchemaTool-->>User: Return schema metadata with updated defaults

Generated by CodeAnt AI

Comment on lines +50 to +51
"url",
"changed_on_humanized",
Contributor

Suggestion: changed_on_humanized is a computed property, not a queryable SQLAlchemy column. In this flow, ModelListCore passes default columns to BaseDAO.list, which does column-based queries and returns row objects; computed fields are not loaded there, so changed_on_humanized will be None for every row. Use a queryable timestamp column in defaults (or compute humanized text during serialization) to avoid returning empty values. [logic error]

Severity Level: Major ⚠️
- ⚠️ list_charts default returns null changed_on_humanized values.
- ⚠️ MCP chart summaries lose human-readable modification context.
- ❌ PR's user-friendly default-column goal is partially broken.
Suggested change:
- "url",
- "changed_on_humanized",
+ "url",
+ "changed_on",
Steps of Reproduction ✅
1. Invoke MCP tool `list_charts` without `select_columns` (default workflow documented in
`superset/mcp_service/app.py:62` and request default behavior in
`tests/unit_tests/mcp_service/chart/tool/test_list_charts.py:70-78`).

2. Execution enters `list_charts()` at
`superset/mcp_service/chart/tool/list_charts.py:67`, where `ModelListCore` is configured
with `default_columns=DEFAULT_CHART_COLUMNS` (`list_charts.py:109-115`), including
`"changed_on_humanized"` (`list_charts.py:46-52`).

3. `ModelListCore.run_tool()` (`superset/mcp_service/mcp_core.py:135`) uses defaults when
`select_columns` is empty (`mcp_core.py:41-43`) and calls `ChartDAO.list(...,
columns=columns_to_load)` (`mcp_core.py:46-55`).

4. In `BaseDAO.list()` (`superset/daos/base.py:605`), non-queryable attributes are
explicitly ignored (`base.py:38` comment). Since `changed_on_humanized` is a computed
`@property` (`superset/models/helpers.py:644-646`), it is not included in SQL-selected
columns (`base.py:29-47`), and rows are returned without that field (`base.py:84-86`).

5. `serialize_chart_object()` reads `changed_on_humanized` via `getattr(chart,
"changed_on_humanized", None)` (`superset/mcp_service/chart/schemas.py:268`), so default
list responses contain null/empty modification-humanized values even though
`columns_requested` includes that field.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/mcp_service/chart/tool/list_charts.py
**Line:** 50:51
**Comment:**
	*Logic Error: `changed_on_humanized` is a computed property, not a queryable SQLAlchemy column. In this flow, `ModelListCore` passes default columns to `BaseDAO.list`, which does column-based queries and returns row objects; computed fields are not loaded there, so `changed_on_humanized` will be `None` for every row. Use a queryable timestamp column in defaults (or compute humanized text during serialization) to avoid returning empty values.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
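The failure mode this reviewer describes can be reproduced outside Superset with a minimal stand-in. Dashboard and column_query below are illustrative stand-ins, not the actual model or DAO code: a column-only query copies stored attribute values into plain row objects, so a Python @property like changed_on_humanized never reaches the row:

```python
from datetime import datetime, timezone
from types import SimpleNamespace

class Dashboard:
    """Stand-in for a SQLAlchemy model with a computed property."""
    def __init__(self, changed_on):
        self.changed_on = changed_on

    @property
    def changed_on_humanized(self):
        # Computed in Python; never a SELECT-able column.
        days = (datetime.now(timezone.utc) - self.changed_on).days
        return f"{days} days ago"

def column_query(objs, columns):
    # Mimics a column-only SELECT: only values stored on the instance
    # (i.e. real columns) are copied into the row; @property values
    # are not in __dict__ and come back as None.
    return [
        SimpleNamespace(**{c: obj.__dict__.get(c) for c in columns})
        for obj in objs
    ]

dash = Dashboard(datetime(2026, 3, 1, tzinfo=timezone.utc))
rows = column_query([dash], ["changed_on", "changed_on_humanized"])

# The model instance computes the humanized value; the row does not.
assert dash.changed_on_humanized.endswith("days ago")
assert getattr(rows[0], "changed_on_humanized", None) is None
```

The same mechanism explains the three sibling comments below about url and changed_on_humanized on dashboards and datasets.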

"dashboard_title",
"slug",
"uuid",
"url",
Contributor

Suggestion: url is a computed @property on the dashboard model, not a queryable SQLAlchemy column. In list mode, the DAO builds a column-only query from default_columns, so url is skipped during SELECT and the serializer returns null for this field. Use a real persisted column in defaults to avoid returning empty values. [logic error]

Severity Level: Critical 🚨
- ❌ list_dashboards default response returns null URL values.
- ⚠️ MCP clients cannot open dashboards from list output.
- ⚠️ Schema default_select promises field not actually populated.
Suggested change:
- "url",
+ "uuid",
Steps of Reproduction ✅
1. Start MCP service and invoke `list_dashboards` without `select_columns` (tool is
registered via `superset/mcp_service/app.py:402-407`, and exposed in MCP instructions at
`superset/mcp_service/app.py:52`).

2. Request enters `list_dashboards()` at
`superset/mcp_service/dashboard/tool/list_dashboards.py:68`, which calls
`tool.run_tool(...)` at `:130` using `DEFAULT_DASHBOARD_COLUMNS` from `:48-54` (includes
`"url"`).

3. `ModelListCore.run_tool()` sets `columns_to_load = self.default_columns` when no
`select_columns` are provided (`superset/mcp_service/mcp_core.py:161`) and passes them to
DAO list as `columns=columns_to_load` (`:165-173`).

4. `BaseDAO.list()` only keeps SQLAlchemy `ColumnProperty`/`RelationshipProperty` fields
(`superset/daos/base.py:633-636`) and explicitly ignores non-queryable properties
(`:637`), so `"url"` is skipped and query is built from remaining real columns (`:645`).

5. Serializer `serialize_dashboard_object()` reads `url=getattr(dashboard, "url", None)`
(`superset/mcp_service/dashboard/schemas.py:533`), but row objects from column query have
no `url`, so default response returns `url: null`.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/mcp_service/dashboard/tool/list_dashboards.py
**Line:** 52:52
**Comment:**
	*Logic Error: `url` is a computed `@property` on the dashboard model, not a queryable SQLAlchemy column. In list mode, the DAO builds a column-only query from `default_columns`, so `url` is skipped during SELECT and the serializer returns `null` for this field. Use a real persisted column in defaults to avoid returning empty values.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

"slug",
"uuid",
"url",
"changed_on_humanized",
Contributor

Suggestion: changed_on_humanized is also a computed property, so it is ignored by the DAO's column selection path and will not be loaded from the database in list queries. This makes the default response include an empty field instead of the expected last-modified value; use the persisted timestamp column for reliable output. [logic error]

Severity Level: Major ⚠️
- ❌ Default dashboard list misses humanized last-modified value.
- ⚠️ Users lose quick recency context in MCP responses.
- ⚠️ Prompted field appears but carries empty payload.
Suggested change:
- "changed_on_humanized",
+ "changed_on",
Steps of Reproduction ✅
1. Call MCP `list_dashboards` with default parameters (tool imported/registered in
`superset/mcp_service/app.py:402-407`; default usage documented at `app.py:52`).

2. `list_dashboards()` uses `DEFAULT_DASHBOARD_COLUMNS` containing
`"changed_on_humanized"` (`superset/mcp_service/dashboard/tool/list_dashboards.py:48-54`)
and executes `tool.run_tool(...)` (`:130`).

3. `ModelListCore.run_tool()` forwards default columns directly to DAO
(`superset/mcp_service/mcp_core.py:161,165-173`).

4. In `BaseDAO.list()`, non-queryable properties are ignored
(`superset/daos/base.py:637`), so computed `"changed_on_humanized"` is not selected; only
real DB columns are queried (`:633-645`).

5. `serialize_dashboard_object()` tries `changed_on_humanized=getattr(dashboard,
"changed_on_humanized", None)` (`superset/mcp_service/dashboard/schemas.py:537`),
producing `null` in the returned dashboard list metadata.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/mcp_service/dashboard/tool/list_dashboards.py
**Line:** 53:53
**Comment:**
	*Logic Error: `changed_on_humanized` is also a computed property, so it is ignored by the DAO's column selection path and will not be loaded from the database in list queries. This makes the default response include an empty field instead of the expected last-modified value; use the persisted timestamp column for reliable output.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.

"table_name",
"schema",
"uuid",
"changed_on_humanized",
Contributor

Suggestion: changed_on_humanized is a computed property, not a SQLAlchemy column, so BaseDAO.list(columns=...) does not load it. With this default, the query returns row objects without that attribute and the serializer emits null for the field. Use changed_on in defaults (or otherwise force full model loading) so the returned default field is actually populated. [logic error]

Severity Level: Major ⚠️
- ⚠️ list_datasets default modified-time field often returns null.
- ⚠️ MCP responses degrade quality for dataset discovery.
- ⚠️ Copilot summaries lose reliable recency information.
Suggested change:
- "changed_on_humanized",
+ "changed_on",
Steps of Reproduction ✅
1. Start MCP service where dataset tools are registered via
`superset/mcp_service/app.py:408-410` (`list_datasets` imported into app startup).

2. Call MCP tool `list_datasets` without `select_columns` (same entry path used in tests
at `tests/unit_tests/mcp_service/dataset/tool/test_dataset_tools.py:187-190` via
`client.call_tool("list_datasets", ...)`).

3. `list_datasets` uses default columns containing `"changed_on_humanized"` at
`superset/mcp_service/dataset/tool/list_datasets.py:48-53`, then `ModelListCore.run_tool`
sets `columns_to_load=self.default_columns` (`superset/mcp_service/mcp_core.py:161`) and
passes `columns=columns_to_load` to DAO (`superset/mcp_service/mcp_core.py:165-173`).

4. `BaseDAO.list` only includes SQLAlchemy `ColumnProperty` fields
(`superset/daos/base.py:633`) and explicitly ignores non-queryable properties
(`superset/daos/base.py:637`), while `changed_on_humanized` is a Python `@property`
(`superset/models/helpers.py:644-645`), so it is dropped from SQL selection
(`superset/daos/base.py:645`).

5. Serializer reads `changed_on_humanized` with `getattr(dataset, "changed_on_humanized",
None)` (`superset/mcp_service/dataset/schemas.py:329`); for row results missing that
attribute, output becomes `null` for this default field.
Prompt for AI Agent 🤖
This is a comment left during a code review.

**Path:** superset/mcp_service/dataset/tool/list_datasets.py
**Line:** 52:52
**Comment:**
	*Logic Error: `changed_on_humanized` is a computed property, not a SQLAlchemy column, so `BaseDAO.list(columns=...)` does not load it. With this default, the query returns row objects without that attribute and the serializer emits `null` for the field. Use `changed_on` in defaults (or otherwise force full model loading) so the returned default field is actually populated.

Validate the correctness of the flagged issue. If correct, How can I resolve this? If you propose a fix, implement it and please make it concise.
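The reviewers' alternative — select the persisted changed_on column and compute the humanized text during serialization — could look roughly like the sketch below. humanize_delta and serialize_dataset_row are hypothetical helpers for illustration, not Superset code:

```python
from datetime import datetime, timezone

def humanize_delta(changed_on, now=None):
    # Rough humanizer, enough to illustrate deriving the display
    # string from the persisted changed_on timestamp.
    now = now or datetime.now(timezone.utc)
    seconds = (now - changed_on).total_seconds()
    if seconds < 3600:
        return f"{int(seconds // 60)} minutes ago"
    if seconds < 86400:
        return f"{int(seconds // 3600)} hours ago"
    return f"{int(seconds // 86400)} days ago"

def serialize_dataset_row(row, now=None):
    # row holds only queryable columns; the humanized field is derived
    # here instead of being read off a model @property.
    return {
        "id": row["id"],
        "table_name": row["table_name"],
        "changed_on_humanized": humanize_delta(row["changed_on"], now),
    }
```

Computing the field at serialization time keeps the default column list restricted to real database columns while still returning the human-readable value.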

@bito-code-review
Contributor

bito-code-review bot commented Mar 12, 2026

Code Review Agent Run #35dab2

Actionable Suggestions - 0
Review Details
  • Files reviewed - 8 · Commit Range: 0abd4d8..d49aacb
    • superset/mcp_service/chart/tool/list_charts.py
    • superset/mcp_service/common/schema_discovery.py
    • superset/mcp_service/dashboard/tool/list_dashboards.py
    • superset/mcp_service/dataset/tool/list_datasets.py
    • tests/unit_tests/mcp_service/chart/tool/test_list_charts.py
    • tests/unit_tests/mcp_service/dashboard/tool/test_dashboard_tools.py
    • tests/unit_tests/mcp_service/dataset/tool/test_dataset_tools.py
    • tests/unit_tests/mcp_service/system/tool/test_get_schema.py
  • Files skipped - 0
  • Tools
    • Whispers (Secret Scanner) - ✔︎ Successful
    • Detect-secrets (Secret Scanner) - ✔︎ Successful
    • MyPy (Static Code Analysis) - ✔︎ Successful
    • Astral Ruff (Static Code Analysis) - ✔︎ Successful


@Antonio-RiveroMartnez Antonio-RiveroMartnez merged commit fc156d0 into apache:master Mar 13, 2026
84 checks passed
michael-s-molina pushed a commit that referenced this pull request Mar 17, 2026
…ist columns (#38566)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
(cherry picked from commit fc156d0)
aminghadersohi added a commit to aminghadersohi/superset that referenced this pull request Mar 17, 2026
…ist columns (apache#38566)

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
(cherry picked from commit fc156d0)

Labels

size:L (This PR changes 100-499 lines, ignoring generated files)