fix(reports): validate nativeFilters on report create/update and deactivate on dashboard filter deletion #38715
Conversation
Sequence Diagram

This PR adds native filter validation when creating or updating report schedules, preventing invalid dashboard filter payloads from being saved. It also deactivates existing reports and notifies owners when referenced dashboard native filters are removed.

```mermaid
sequenceDiagram
    participant User
    participant ReportAPI
    participant ReportCommand
    participant DashboardAPI
    participant ReportDAO
    participant EmailService
    User->>ReportAPI: Create or update report with nativeFilters
    ReportAPI->>ReportCommand: Validate nativeFilters against dashboard filters
    alt Invalid nativeFilters
        ReportCommand-->>ReportAPI: Validation error
        ReportAPI-->>User: Reject request with validation message
    else Valid nativeFilters
        ReportCommand-->>ReportAPI: Validation passed
        ReportAPI-->>User: Save report schedule
    end
    User->>DashboardAPI: Update dashboard and remove native filter
    DashboardAPI->>ReportDAO: Find reports using deleted filter ids
    DashboardAPI->>ReportDAO: Deactivate matched reports
    DashboardAPI->>EmailService: Send deactivation emails to report owners
```
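The deactivation half of the diagram can be sketched in a simplified, in-memory form. This is a hypothetical illustration only: per the PR description the real logic lives in `UpdateDashboardCommand.process_native_filter_diff()` and `ReportScheduleDAO.find_by_native_filter_id()`, and their actual signatures may differ.

```python
from dataclasses import dataclass


@dataclass
class Report:
    name: str
    native_filter_id: str
    owner_email: str
    active: bool = True


def process_native_filter_diff(old_ids, new_ids, reports, send_email):
    """Deactivate reports whose referenced native filter was removed."""
    removed = set(old_ids) - set(new_ids)
    for report in reports:
        if report.active and report.native_filter_id in removed:
            report.active = False
            send_email(report.owner_email, f"Report '{report.name}' was deactivated")


# Filter "f2" is removed from the dashboard; only the report referencing it is affected.
sent = []
reports = [Report("weekly", "f1", "a@x.io"), Report("daily", "f2", "b@x.io")]
process_native_filter_diff({"f1", "f2"}, {"f1"}, reports, lambda to, msg: sent.append(to))
print([r.active for r in reports], sent)  # [True, False] ['b@x.io']
```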
Generated by CodeAnt AI
Code Review Agent Run #bee052

Actionable Suggestions - 1
- superset/commands/report/base.py - 1
  - Invalid error message formatting · Line 113

Filtered by Review Rules
Bito filtered these suggestions based on rules created automatically for your feedback. Manage rules.
- superset/commands/dashboard/update.py - 1
  - Inconsistent deactivation message · Line 231
Review Details
- Files reviewed - 6 · Commit Range: 1c3be78..1c3be78
  - superset/commands/dashboard/update.py
  - superset/commands/report/base.py
  - superset/commands/report/create.py
  - superset/commands/report/update.py
  - superset/daos/report.py
  - tests/integration_tests/reports/api_tests.py
- Files skipped - 0
- Tools
  - Whispers (Secret Scanner) - ✔︎ Successful
  - Detect-secrets (Secret Scanner) - ✔︎ Successful
  - MyPy (Static Code Analysis) - ✔︎ Successful
  - Astral Ruff (Static Code Analysis) - ✔︎ Successful
Bito Usage Guide

Commands
Type the following command in a pull request comment and save the comment.
- /review - Manually triggers a full AI review.
- /pause - Pauses automatic reviews on this pull request.
- /resume - Resumes automatic reviews.
- /resolve - Marks all Bito-posted review comments as resolved.
- /abort - Cancels all in-progress reviews.
Refer to the documentation for additional commands.

Configuration
This repository uses the Superset configuration. You can customize the agent settings or contact your Bito workspace admin at evan@preset.io.
Documentation & Help
Code Review Agent Run #cd4909

Actionable Suggestions - 0
msyavuz
left a comment
Looks good to me, testing should probably focus on previously valid update requests for reports
```diff
  current_exec = next(schedule)

- for _ in range(iterations):
+ for _i in range(iterations):
```
I renamed the loop variable to avoid Ruff flagging it as shadowed
✅ Deploy Preview for superset-docs-preview ready!
```python
for f in json_metadata.get("native_filter_configuration", [])
if "id" in f
```

Suggestion: `native_filter_configuration` is allowed to be null in dashboard metadata, but iterating directly over `json_metadata.get("native_filter_configuration", [])` will raise `TypeError` when it is `None`. Coerce it to an empty list before iterating. [possible bug]

Severity Level: Major ⚠️
- ❌ Report create/update can fail with a server exception.
- ⚠️ Native filter validation breaks on nullable dashboard metadata.

Suggested change:
```diff
- for f in json_metadata.get("native_filter_configuration", [])
- if "id" in f
+ for f in (json_metadata.get("native_filter_configuration") or [])
+ if isinstance(f, dict) and "id" in f
```
Steps of Reproduction ✅
1. Store dashboard metadata with `"native_filter_configuration": null`; this is
schema-valid because `DashboardJSONMetadataSchema` sets `allow_none=True`
(`superset/dashboards/schemas.py:142`).
2. Create or update report via `/api/v1/report/` or `/api/v1/report/<id>`
(`superset/reports/api.py`), with `extra.dashboard.nativeFilters` present.
3. Validation enters `_validate_native_filters()`
(`superset/commands/report/base.py:134-224`) from create/update validate flows
(`create.py:140-141`, `update.py:133-134`).
4. Set comprehension iterates `json_metadata.get("native_filter_configuration", [])` (`base.py:209`); when the value is `None`, iteration raises `TypeError`, causing a request failure instead of a clean 422 validation error.

Prompt for AI Agent 🤖
This is a comment left during a code review.
**Path:** superset/commands/report/base.py
**Line:** 209:210
**Comment:**
Possible Bug: `native_filter_configuration` is allowed to be `null` in dashboard metadata, but iterating directly over `json_metadata.get("native_filter_configuration", [])` will raise `TypeError` when it is `None`. Coerce it to an empty list before iterating.
Validate the correctness of the flagged issue. If correct, how can I resolve it? If you propose a fix, implement it and keep it concise.
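The nullable-metadata pitfall can be reproduced standalone. The key point is that `dict.get(key, default)` returns the stored `None`, not the default, when the key exists:

```python
json_metadata = {"native_filter_configuration": None}

# The default never kicks in: the key exists, so .get() returns None.
try:
    {f["id"] for f in json_metadata.get("native_filter_configuration", [])}
except TypeError as exc:
    print(exc)  # 'NoneType' object is not iterable

# Coercing with `or []` (and guarding for non-dict items) iterates safely.
filter_ids = {
    f["id"]
    for f in (json_metadata.get("native_filter_configuration") or [])
    if isinstance(f, dict) and "id" in f
}
print(filter_ids)  # set()
```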
Code Review Agent Run #3e4446

Actionable Suggestions - 0
User description
SUMMARY
Reports scheduled from dashboards can include native filter state (`nativeFilters`) in their `extra.dashboard` payload. Previously, this field was never validated: any arbitrary data was accepted at creation/update time and only crashed at execution, when the scheduler tried to read required keys from the filter objects.
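As a simplified illustration of the pre-fix failure mode (hypothetical code, not the scheduler's actual implementation), reading required keys from an unvalidated filter object only blows up at execution time:

```python
def apply_filter(filter_obj: dict) -> tuple:
    # Execution-time code assumes these keys exist; unvalidated input may lack them.
    return filter_obj["nativeFilterId"], filter_obj["columnName"], filter_obj["filterValues"]


try:
    apply_filter({"garbage": True})  # accepted at create time before this PR
except KeyError as exc:
    print("crashed only at execution:", exc)  # 'nativeFilterId'
```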
Changes:
- Added `_validate_native_filters()` to `BaseReportScheduleCommand`, validating that each `nativeFilters` item has all required keys (`nativeFilterId`, `filterType`, `columnName`, `filterValues`), that `filterValues` is a list, and that `nativeFilterId` references a filter that actually exists on the dashboard
- Moved `_validate_report_extra()` to `BaseReportScheduleCommand` so validation runs on both create and update (previously it only ran on create)
- Added `process_native_filter_diff()` to `UpdateDashboardCommand`: when a native filter is removed from a dashboard, any report referencing that filter ID is deactivated and each report owner receives an email notification
- Added `ReportScheduleDAO.find_by_native_filter_id()` for filter ID lookups
- Added a `_send_deactivated_report_email()` helper to avoid duplicating the HTML email template between the tab and filter deactivation flows
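A minimal sketch of the validation described above, under the assumption that it collects human-readable errors per item. The actual `_validate_native_filters()` in `superset/commands/report/base.py` may differ in structure and error types.

```python
REQUIRED_KEYS = {"nativeFilterId", "filterType", "columnName", "filterValues"}


def validate_native_filters(native_filters, dashboard_filter_ids):
    """Return a list of human-readable errors for a nativeFilters payload."""
    errors = []
    for idx, item in enumerate(native_filters):
        if not isinstance(item, dict):
            errors.append(f"nativeFilters[{idx}] must be an object")
            continue
        missing = REQUIRED_KEYS - item.keys()
        if missing:
            errors.append(f"nativeFilters[{idx}] missing keys: {sorted(missing)}")
            continue
        if not isinstance(item["filterValues"], list):
            errors.append(f"nativeFilters[{idx}].filterValues must be a list")
        if item["nativeFilterId"] not in dashboard_filter_ids:
            errors.append(
                f"nativeFilters[{idx}] references unknown filter "
                f"{item['nativeFilterId']!r}"
            )
    return errors


print(validate_native_filters([{"garbage": True}], {"f1"}))
# ["nativeFilters[0] missing keys: ['columnName', 'filterType', 'filterValues', 'nativeFilterId']"]
```

This mirrors the testing instructions below: a garbage item yields errors (a 422), while a well-formed item whose `nativeFilterId` exists on the dashboard yields none (a 201).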
BEFORE/AFTER SCREENSHOTS OR ANIMATED GIF
TESTING INSTRUCTIONS
- POST `/api/v1/report/` with `extra: {"dashboard": {"nativeFilters": [{"garbage": true}]}}` → expect 422
- Use a `nativeFilterId` that doesn't exist on the dashboard → expect 422
- Use a `nativeFilterId` that exists on the dashboard and `filterValues: []` → expect 201
- Remove a native filter from the dashboard via PUT `/api/v1/dashboard/:id` → the report is deactivated and its owner receives an email
ADDITIONAL INFORMATION
CodeAnt-AI Description
Validate native dashboard filters on report create/update and deactivate reports when dashboard filters are removed
What Changed
Impact
✅ Fewer report creation/update errors slipping through to execution
✅ Clearer deactivation emails when dashboard filters are removed
✅ Fewer failing scheduled reports after dashboard changes

💡 Usage Guide
Checking Your Pull Request
Every time you make a pull request, our system automatically looks through it. We check for security issues, mistakes in how you're setting up your infrastructure, and common code problems. We do this to make sure your changes are solid and won't cause any trouble later.
Talking to CodeAnt AI
Got a question or need a hand with something in your pull request? You can easily get in touch with CodeAnt AI right here. Just type the following in a comment on your pull request, and replace "Your question here" with whatever you want to ask:
This lets you have a chat with CodeAnt AI about your pull request, making it easier to understand and improve your code.
Example
Preserve Org Learnings with CodeAnt
You can record team preferences so CodeAnt AI applies them in future reviews. Reply directly to the specific CodeAnt AI suggestion (in the same thread) and replace "Your feedback here" with your input:
This helps CodeAnt AI learn and adapt to your team's coding style and standards.
Example
Retrigger review
Ask CodeAnt AI to review the PR again, by typing:
Check Your Repository Health
To analyze the health of your code repository, visit our dashboard at https://app.codeant.ai. This tool helps you identify potential issues and areas for improvement in your codebase, ensuring your repository maintains high standards of code health.