feat: Deploy & Run History follow-ups — polling, metrics, deep-links, DB columns, server-side filters#64
abhizipstack wants to merge 16 commits into main from
Conversation
The Quick Deploy success toast now includes a clickable "View in Run History →" link that navigates to /project/job/history?task=<id>, preselecting the job. On arrival, the Run History page auto-expands the most recent run (first row) in addition to any FAILURE rows, so the user immediately sees the deploy they just triggered. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When no job covers the current model, clicking "Go to Scheduler" now navigates to /project/job/list?create=1&project=<pid>&model=<name>. The Jobs List reads these params: auto-opens the create drawer, and JobDeploy pre-enables the specified model in Model Configuration with the config panel auto-expanded. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously stored only in kwargs JSON, making server-side filtering impossible. Now first-class nullable CharField columns with DB indexes, written by trigger_scheduled_run alongside kwargs. - Migration 0002 adds trigger (scheduled/manual) and scope (job/model) columns with defaults matching existing behavior. - celery_tasks.py writes both the columns and kwargs (backward compat). - Frontend getRunTriggerScope prefers top-level row.trigger / row.scope (from serializer) and falls back to kwargs for pre-migration rows. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
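The column-first, kwargs-fallback resolution described above can be sketched as follows. This is a Python illustration of the logic only — the real `getRunTriggerScope` helper is frontend JavaScript, and the row shape here is assumed from the serializer description. The `scheduled`/`job` defaults mirror the migration defaults.

```python
import json


def get_run_trigger_scope(row: dict) -> tuple:
    """Resolve (trigger, scope) for a run row.

    Prefers the first-class columns added in migration 0002; falls
    back to the legacy kwargs JSON blob for pre-migration rows.
    """
    trigger = row.get("trigger")
    scope = row.get("scope")
    if trigger and scope:
        return trigger, scope
    # Pre-migration rows carry these values only inside kwargs.
    kwargs = row.get("kwargs") or "{}"
    if isinstance(kwargs, str):
        kwargs = json.loads(kwargs)
    return (
        trigger or kwargs.get("trigger", "scheduled"),
        scope or kwargs.get("scope", "job"),
    )
```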
Replaces client-side filtering with server-side query params on the task_run_history endpoint. Filter changes now trigger a fresh API call with ?trigger=manual&scope=model&status=FAILURE, so results are accurate across all pages (previously client-side filtering only worked on the visible page). Backend accepts optional trigger, scope, status query params and applies them as Django ORM filters against the new DB columns from the previous migration. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
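The "optional query params applied as ORM filters" step reduces to whitelisting the three supported keys and dropping empty values — a minimal sketch (the helper name is illustrative, not the PR's code; in a Django view the result would be splatted into `QuerySet.filter(**...)`):

```python
def build_run_history_filters(params: dict) -> dict:
    """Translate optional query params into ORM filter kwargs.

    Only whitelisted params are honoured; unknown keys and empty
    values are ignored, so the endpoint stays backward compatible.
    """
    allowed = ("trigger", "scope", "status")
    return {key: params[key] for key in allowed if params.get(key)}
```

Because absent params simply produce an empty dict, an unfiltered request behaves exactly as before the change.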
After dispatching a deploy, the Quick Deploy button flips to "Deploying…" with a spinner and polls the latest run status every 5s. On terminal state (SUCCESS/FAILURE/REVOKED): - Clears the polling interval - Shows a completion toast with status + deep-link to Run History - Refreshes the explorer (status badges) and recent-runs cache Polling auto-cleans on component unmount. The button returns to its normal state when the run finishes or the component unmounts. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
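The poll-until-terminal loop can be sketched language-agnostically as below — a Python stand-in for the frontend's `setInterval` logic, with the terminal states taken from the description above (the `fetch_status` callable and `max_polls` bound are illustrative assumptions, not the PR's code):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}


def poll_until_terminal(fetch_status, interval_s=5, max_polls=360):
    """Poll fetch_status() until it returns a terminal state.

    fetch_status is a zero-arg callable returning the latest run's
    status string. Returns the terminal status, or None if
    max_polls is exhausted without reaching one.
    """
    for _ in range(max_polls):
        status = fetch_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(interval_s)
    return None
```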
After DAG execution (success or failure), trigger_scheduled_run now serializes BASE_RESULT into run.result as JSON with per-model status/end_status and aggregate passed/failed counts. Frontend insights panel renders a metrics bar when result is present: "N models attempted · X passed · Y failed" plus per-model breakdown. Falls back gracefully to scope/models display for older runs without result data. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
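The aggregation behind "N models attempted · X passed · Y failed" can be sketched as a pure function over per-model end statuses. The `end_status` key and `"OK"` success value are assumptions drawn from the descriptions in this PR, not verified against `celery_tasks.py`:

```python
def summarize_run_result(base_result: dict) -> dict:
    """Aggregate per-model end statuses into the metrics payload.

    base_result maps node name -> {"end_status": ...}; the summary
    carries a total so the UI can guard on result.total > 0.
    """
    models = {name: node.get("end_status") for name, node in base_result.items()}
    passed = sum(1 for status in models.values() if status == "OK")
    return {
        "total": len(models),
        "passed": passed,
        "failed": len(models) - passed,
        "models": models,
    }
```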
The notify service defaults to renderMarkdown: true, which wraps description in ReactMarkdown. When description is JSX (our <a> link), ReactMarkdown stringifies it via JSON.stringify, rendering as raw text instead of a clickable link. Added renderMarkdown: false to both the dispatch toast and the polling-completion toast. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Old runs have result as {} or with total=0. Guard with record.result?.total > 0 so the metrics bar only renders when there's actual execution data.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
BASE_RESULT.node_name stores str(cls) which renders as <class 'project.models.mdoela.Mdoela'>. Extract the module name (second-to-last dotted segment) so metrics show "mdoela" instead of the full class repr. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A model file can define multiple classes (e.g. SourceMdoela + Mdoela) in the same module. Using [-2] (module name) made them indistinguishable. Switch to [-1] (class name) so the metrics display shows "SourceMdoela (OK), Mdoela (OK)" instead of "mdoela (OK), mdoela (OK)". Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
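After these two fixes, the success-path name cleaning looks like the sketch below (a minimal standalone version; the real helper lives in `celery_tasks.py`). The inline comment marks the `[-1]` vs `[-2]` distinction this commit is about:

```python
def node_class_name(raw: str) -> str:
    """Extract the class name from a str(cls) repr.

    "<class 'project.models.mdoela.Mdoela'>" -> "Mdoela"
    """
    if "'" in raw:
        # [-1] keeps two classes in the same module distinguishable;
        # [-2] would collapse both to the module name "mdoela".
        return raw.split("'")[1].split(".")[-1]
    return raw
```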
No-code models generate a *Source class (e.g. MdoelaSource) for DAG dependency resolution alongside the user's actual model class. Both execute as DAG nodes and appear in BASE_RESULT, but users only care about their own models. Filter out classes ending with "Source" from the metrics serialization so the count and per-model list reflect user-created models only. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
SourceMdoela, DevPaymentsSource — the sample projects use both conventions. The generated no-code models use the prefix pattern (SourceX). Changed endswith to startswith to match. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
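With the convention fix, the filter reduces to a prefix check — a minimal sketch (the helper name is illustrative; in the PR the filtering happens during metrics serialization in `celery_tasks.py`):

```python
def user_model_names(node_names):
    """Drop generated no-code source classes from the metrics list.

    The no-code generator emits companion classes with the Source
    prefix (e.g. SourceMdoela); only user-authored models are kept.
    """
    return [name for name in node_names if not name.startswith("Source")]
```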
| Filename | Overview |
|---|---|
| frontend/src/ide/run-history/Runhistory.jsx | Adds server-side filtering, deep-link auto-expand, and metrics bar; introduces a double-fetch race condition on job change. |
| backend/backend/core/scheduler/celery_tasks.py | Adds BASE_RESULT metrics capture with snapshotting before clear; introduces duplicate _clean_name and _clean inner functions. |
| backend/backend/core/scheduler/migrations/0002_taskrunhistory_trigger_scope.py | Adds trigger/scope CharField columns with indexes; default=scheduled/job means old manual runs will be labelled scheduled in DB. |
| backend/backend/core/scheduler/models.py | Adds trigger and scope CharField fields + indexes matching the migration; clean and straightforward. |
| backend/backend/core/scheduler/views.py | Adds server-side trigger/scope/status filtering to task_run_history endpoint; additive and non-breaking. |
| frontend/src/ide/editor/no-code-model/no-code-model.jsx | Adds polling loop for live deploy status, JSX toast with Run History deep-link, and goToScheduler pre-fill params; cleanup on unmount is correct. |
| frontend/src/ide/scheduler/JobDeploy.jsx | Adds prefillModel and prefillProject props with useEffect handlers; straightforward and well-guarded. |
| frontend/src/ide/scheduler/JobList.jsx | Reads create/model/project URL params, opens JobDeploy drawer with pre-fill, clears params via replace. |
| frontend/src/ide/scheduler/service.js | Adds getLatestRunStatus fetching page 1 limit 1; correctly extracts the first run from the response. |
Sequence Diagram
```mermaid
sequenceDiagram
    participant U as User
    participant NCM as NoCodeModel
    participant SVC as service.js
    participant API as Django API
    participant RH as RunHistory
    U->>NCM: Click Quick Deploy
    NCM->>API: POST runTask / runTaskForModel
    API-->>NCM: 200 OK dispatched
    NCM->>U: Toast Deploy Triggered + View in Run History link
    NCM->>NCM: startDeployPolling(taskId)
    loop Every 5s while non-terminal
        NCM->>SVC: getLatestRunStatus(projectId, taskId)
        SVC->>API: GET /run-history/{taskId}?page=1&limit=1
        API-->>SVC: run status object
        SVC-->>NCM: run object
    end
    NCM->>U: Toast Deploy Completed/Failed + View in Run History link
    NCM->>NCM: clearInterval + refresh caches
    U->>RH: Navigate to /project/job/history?task=taskId
    RH->>API: GET /run-history/{taskId}?page=1&limit=10
    API-->>RH: runs with trigger/scope columns
    RH->>RH: Auto-expand most recent run
    U->>RH: Change filter
    RH->>API: GET /run-history/{taskId}?trigger=X&scope=Y
    API-->>RH: Server-side filtered runs
```
Prompt To Fix All With AI
This is a comment left during a code review.
Path: frontend/src/ide/run-history/Runhistory.jsx
Line: 195-209
Comment:
**Double-fetch race condition on job change**
Adding `filterQueries.job` to this effect's dep array means the effect fires whenever the job changes — but `handleJobChange` (line 229) and `getJobList` (line 181) already call `getRunHistoryList` directly after calling `setFilterQuery`. This produces two concurrent requests on every job switch. The direct call uses `currentPage` (which may be > 1 from the previous job), while the effect always passes `page=1`. Whichever response resolves last wins; if the stale-page response lands second it overwrites the correct page-1 data.
The fix is to remove the direct `getRunHistoryList(value)` calls from `handleJobChange` and `getJobList` and let this effect be the sole fetch trigger.
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: frontend/src/ide/run-history/Runhistory.jsx
Line: 212-224
Comment:
**Deep-link auto-expand fires on every data refresh, not just initial arrival**
`searchParams.has("task")` stays `true` for the lifetime of the page. Because this effect depends on `backUpData`, it re-runs after every API response — page changes, filter changes, refreshes — and auto-expands `backUpData[0].id` each time. A user who navigates to page 2 will find the first item of that page unexpectedly expanded.
A `useRef` one-shot flag would limit the behaviour to arrival only:
```js
const deepLinkExpandedRef = useRef(false);
useEffect(() => {
  const ids = [];
  const fromDeepLink = !deepLinkExpandedRef.current && searchParams.has("task");
  if (fromDeepLink && backUpData.length > 0) {
    ids.push(backUpData[0].id);
    deepLinkExpandedRef.current = true;
  }
  (backUpData || [])
    .filter((r) => r.status === "FAILURE" && r.error_message)
    .forEach((r) => { if (!ids.includes(r.id)) ids.push(r.id); });
  setExpandedRowKeys(ids);
}, [backUpData, searchParams]);
```
How can I resolve this? If you propose a fix, please make it concise.
---
This is a comment left during a code review.
Path: backend/backend/core/scheduler/celery_tasks.py
Line: 348-353
Comment:
**Duplicate name-cleaning logic defined twice**
`_clean_name` (success path) and `_clean` (`_mark_failure`) are byte-for-byte identical. A future edit to one will silently diverge from the other. Extract to a single module-level helper:
```python
def _clean_node_name(raw: str) -> str:
    if "'" in raw:
        return raw.split("'")[1].split(".")[-1]
    return raw
```
How can I resolve this? If you propose a fix, please make it concise.
…e bugs
- Pass active filters through handlePagination and handleRefresh (P1)
- Snapshot-then-clear BASE_RESULT to prevent stale metrics across worker reuse (P1)
- Fix handleRefresh stale closure deps (P2)
- Forward project URL param from goToScheduler to JobDeploy (P2)
- Prefer DB columns over kwargs for trigger/scope in list_recent_runs_for_model (P2)
- Sanitize taskId with encodeURIComponent in toast deep-links (CodeQL)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
All 6 issues addressed in 3c44484:
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A few items to consider:

1. Same `_clean_name` duplication:

```python
def _clean_name(raw):
    if "'" in raw:
        return raw.split("'")[1].split(".")[-1]
```

PR #59 already fixed this with …

2. …

3. Polling has no max duration / backoff — no-code-model.jsx:370. Hardcoded 5s interval = 360 backend hits per 30-min deploy. Add a max-poll-count or exponential backoff. Also no upper-bound timeout — if backend never returns terminal status, polling runs until unmount.

4. …

5. …

6. …
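The max-duration-plus-backoff suggestion in item 3 could look like the sketch below — a capped exponential-backoff schedule that bounds both request rate and total poll time (names and defaults are illustrative, not code from this PR):

```python
def backoff_schedule(base_s=5, factor=2.0, cap_s=60, max_total_s=1800):
    """Yield sleep intervals: 5s, 10s, 20s, 40s, then 60s steps,
    stopping once the cumulative wait would exceed max_total_s
    (30 min by default) — so polling can never run unbounded.
    """
    elapsed = 0.0
    delay = base_s
    while elapsed + delay <= max_total_s:
        yield delay
        elapsed += delay
        delay = min(delay * factor, cap_s)
```

The poller would sleep for each yielded interval and give up (surfacing a timeout toast) when the schedule is exhausted.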
P1: _mark_failure was called after _clear_base_result(), so failure metrics were always empty. Swapped order: capture metrics first via _mark_failure, then clear the global.
P1: getRunHistoryList had currentPage/pageSize in useCallback deps, causing infinite re-creation on pagination. Removed — they're passed as explicit arguments, not captured from closure.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
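The snapshot-then-clear ordering fix can be illustrated with a stand-in for the module-level accumulator (`BASE_RESULT` here is a hypothetical mock of the `celery_tasks.py` global, not the real object):

```python
# Module-level accumulator shared across DAG node executions within
# a long-lived Celery worker (stand-in for BASE_RESULT).
BASE_RESULT = {}


def snapshot_and_clear():
    """Copy the accumulated metrics, then reset the global.

    Clearing before reading (the original bug ordering) loses this
    run's metrics; snapshotting first preserves them while still
    leaving an empty dict for the next run on a reused worker.
    """
    snapshot = dict(BASE_RESULT)
    BASE_RESULT.clear()
    return snapshot
```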
tahierhussain left a comment
@abhizipstack Please add screenshots for the UI changes addressed in this PR.
envInfo.id updates only after getRunHistoryList completes, creating a race window where pagination could fetch data for the previously selected job if the user switches jobs and changes page before the new data arrives. filterQueries.job updates immediately on job selection, so pagination always targets the correct job. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```js
useEffect(() => {
  let filtered = backUpData;
  if (filterQueries.status) {
    filtered = filtered.filter((el) => el.status === filterQueries.status);
  }
  if (filterQueries.trigger) {
    filtered = filtered.filter(
      (el) => getRunTriggerScope(el).trigger === filterQueries.trigger
    );
  }
  if (filterQueries.scope) {
    filtered = filtered.filter(
      (el) => getRunTriggerScope(el).scope === filterQueries.scope
    );
  }
  setJobHistoryData(filtered);
  if (!filterQueries.job) return;
  getRunHistoryList(filterQueries.job, 1, pageSize, {
    status: filterQueries.status,
    trigger: filterQueries.trigger,
    scope: filterQueries.scope,
  });
}, [
  filterQueries.status,
  filterQueries.trigger,
  filterQueries.scope,
  backUpData,
  filterQueries.job,
  getRunHistoryList,
  pageSize,
]);
```
Double-fetch race condition on job change
Adding filterQueries.job to this effect's dep array means the effect fires whenever the job changes — but handleJobChange (line 229) and getJobList (line 181) already call getRunHistoryList directly after calling setFilterQuery. This produces two concurrent requests on every job switch. The direct call uses currentPage (which may be > 1 from the previous job), while the effect always passes page=1. Whichever response resolves last wins; if the stale-page response lands second it overwrites the correct page-1 data.
The fix is to remove the direct getRunHistoryList(value) calls from handleJobChange and getJobList and let this effect be the sole fetch trigger.
What
Implements 6 of 7 items from the deferred follow-ups ticket, plus bug fixes found during testing.
Deep-link toast → Run History (#5)
- Quick Deploy success toast includes a clickable "View in Run History →" link (`renderMarkdown: false` so JSX renders correctly).

Pre-fill create-job from 0-candidates (#6)
- "Go to Scheduler" navigates to `/project/job/list?create=1&project=<pid>&model=<name>`.

First-class trigger + scope DB columns (#4)
- `trigger` (scheduled/manual) and `scope` (job/model) as real `CharField` columns on `TaskRunHistory` with DB indexes.
- Migration `0002_taskrunhistory_trigger_scope`.
- `trigger_scheduled_run` writes both columns and kwargs (backward compat).
- `getRunTriggerScope` prefers top-level fields, falls back to kwargs for pre-migration rows.

Server-side Run History filtering (#7)
- `task_run_history` endpoint accepts `?trigger=`, `?scope=`, `?status=` query params.

Live deploy progress polling (#3)
- Polls the latest run status via `getLatestRunStatus`.

Runtime metrics in Run History (#1)
- `trigger_scheduled_run` serializes `BASE_RESULT` into `TaskRunHistory.result` as JSON.
- Generated source classes (e.g. `SourceMdoela`) filtered out — only user-created models appear.
- Metrics bar renders only when `result.total > 0`.

Deferred to separate tickets:
Why
These follow-ups close gaps identified during the Quick Deploy and Run History work: no way to navigate from a toast to the specific run, no way to create a job from the 0-candidates flow, filters were client-side only (broke pagination), no live feedback during deploy, and no execution metrics on completed runs.
How
- Backend: filter params on `task_run_history`, `BASE_RESULT` serialization in `trigger_scheduled_run` + `_mark_failure`.
- Frontend: `renderMarkdown: false` for JSX toasts, `useSearchParams` for deep-link + pre-fill, polling via `setInterval` + `getLatestRunStatus`, metrics bar in insights panel.

Can this PR break any existing features?
Low risk:
- Migration defaults (`scheduled`/`job`) so existing rows are valid post-migration. Serializer uses `fields = "__all__"` so new columns auto-expose.
- `result` field was already on the model (never written); now populated. Frontend guards with `result?.total > 0`.
- Source-class filtering relies on the `startswith("Source")` convention from the no-code model generator.

Database Migrations
`backend/backend/core/scheduler/migrations/0002_taskrunhistory_trigger_scope.py` — adds `trigger`, `scope` columns + indexes.
None.
Relevant Docs
None.
Related Issues or PRs
Dependencies Versions
No changes.
Notes on Testing
Tested locally (gunicorn + Celery worker + React dev server):
Checklist
I have read and understood the Contribution Guidelines.
🤖 Generated with Claude Code