feat(phase6b): planning workspace UX overhaul + role-dispatch task loop #25

screenleon merged 2 commits into main
Conversation
## Planning workspace

Two-panel layout: 240px requirement sidebar + flex main workspace. Empty project shows a centered onboarding view (WorkspaceOnboardingPanel); non-empty projects show the two-panel layout with inline requirement intake.

What's Next flow:
- Empty-state onboarding panel surfaces a "Run What's Next →" button when a planning provider is configured.
- Auto-select fallback: when no user requirements exist, the hook selects the most recent analysis requirement so What's Next results persist across page reloads.
- Run results (PlanningRunList + CandidateReviewPanel) render below the onboarding panel in the empty state when an analysis requirement is selected.

Candidate review UX simplification:
- Action bar reduced from six buttons to three: Apply, Skip, Save edits (conditional). Approve, Return to Draft, Reject, Reset, and the Status dropdown are all removed.
- Apply auto-saves dirty title/description edits before applying.
- Skip marks the candidate as rejected and auto-advances to the next non-rejected candidate in rank order.
- "N skipped ▾" collapsed section at the bottom of the candidate list.
- canApplySelectedCandidate no longer requires status=approved; the backend now blocks only rejected candidates (not draft).

Requirement lifecycle:
- Discard (hard delete) vs Archive (soft delete), chosen based on task lineage.
- DELETE /api/requirements/:id with two backend guards: HasAppliedTasks (409) and HasActiveRun (409).
- RequirementQueue shows an inline confirmation overlay for Discard.
- Collapsed "N archived ▾" section at the bottom of the sidebar.

Connector status badge:
- PlanningLauncher connector badge shows "● Running job…" (amber) when dispatch_status=leased and "⏳ Queued…" (blue) when queued.

Backend fixes:
- adapter_type validation no longer overwrites 'whatsnext' with the provider ID.
- local_connector is prepended (not appended) to available_execution_modes so it is the default when a connector is online.
- The CLI config path sends connector_id + cli_config_id instead of the legacy account_binding_id.

## Role-dispatch task execution loop (Phase 6b)

- Migration 029 adds dispatch_status + execution_result columns to tasks. Tasks whose source starts with role_dispatch: are queued automatically.
- Two new connector-authenticated endpoints:
  - POST /connector/claim-next-task — atomic claim via BEGIN IMMEDIATE
  - POST /connector/tasks/:id/execution-result — submit result
- The connector service integrates RunOnceTask into the existing polling loop.
- The frontend DispatchStatusBadge renders inline next to the task title; the completed state is expandable to show file paths.

Tests: 143 frontend tests pass (23 files); go build ./... clean.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Pull request overview
Implements Phase 6b by closing the “role_dispatch” execution loop end-to-end (DB → server endpoints → connector polling/execution → UI status/result display) while also delivering a substantial Planning workspace UX redesign (two-panel layout, onboarding/empty states, candidate review simplification, requirement discard vs archive, and connector/CLI selection updates).
Changes:
- Adds task dispatch lifecycle (dispatch_status, execution_result) + connector claim/submit endpoints and integrates task execution into the connector polling loop.
- Overhauls Planning workspace UX (two-panel layout, improved empty/onboarding flows, candidate Apply/Skip UX, requirement Discard/Archive mechanics).
- Migrates Planning “local connector” selection from legacy bindings to per-connector CLI configs and updates related UI/tests/docs.
Reviewed changes
Copilot reviewed 37 out of 37 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| go.work | Adds Go workspace config pointing at ./backend. |
| frontend/src/types/index.ts | Extends Task type with Phase 6b dispatch fields. |
| frontend/src/pages/ProjectDetail/planning/hooks/usePlanningWorkspaceData.ts | Updates planning hook: CLI config discovery, requirement discard handling, candidate skip/apply behavior, active run dispatch status. |
| frontend/src/pages/ProjectDetail/planning/WorkspaceOnboardingPanel.tsx | Redesigns onboarding panel copy/layout and wires optional “What’s Next” entry point. |
| frontend/src/pages/ProjectDetail/planning/WorkspaceOnboardingPanel.test.tsx | Updates onboarding tests to match redesigned UI/props. |
| frontend/src/pages/ProjectDetail/planning/RequirementQueue.tsx | Adds compact mode, archived collapse, and Discard/Archive action selection with inline confirm. |
| frontend/src/pages/ProjectDetail/planning/RequirementQueue.test.tsx | Adjusts assertions for archived UI changes. |
| frontend/src/pages/ProjectDetail/planning/RequirementIntake.tsx | Adds variant support and refactors shared form JSX. |
| frontend/src/pages/ProjectDetail/planning/PlanningLauncher.tsx | Switches to CLI configs, updates connector badge to reflect dispatch status, adjusts advanced UI. |
| frontend/src/pages/ProjectDetail/planning/PlanningLauncher.test.tsx | Updates tests for CLI config selection and new copy/links. |
| frontend/src/pages/ProjectDetail/planning/CandidateReviewPanel.tsx | Simplifies review actions (Apply/Skip/Save edits), adds skipped collapse, enables role_dispatch radio gating. |
| frontend/src/pages/ProjectDetail/planning/CandidateReviewPanel.test.tsx | Updates tests for new Apply/Skip UX and role_dispatch enable/disable behavior. |
| frontend/src/pages/ProjectDetail/TasksTab.tsx | Adds DispatchStatusBadge UI for dispatch lifecycle + result/error display. |
| frontend/src/pages/ProjectDetail/TasksTab.test.tsx | Adds coverage for dispatch badge states and expandable completed results. |
| frontend/src/pages/ProjectDetail/PlanningTab.tsx | Implements two-panel layout, new empty-state handling, sidebar counts, onboarding flow integration. |
| frontend/src/index.css | Adds styling for the new two-panel Planning layout and onboarding view. |
| frontend/src/api/client.ts | Adds deleteRequirement API helper. |
| docs/phase6b-plan.md | Adds Phase 6b plan document (design + DoD). |
| docs/data-model.md | Documents new task fields and dispatch lifecycle semantics. |
| docs/api-surface.md | Documents new connector task claim/submit endpoints and behaviors. |
| backend/internal/store/task_store.go | Adds dispatch-aware scanning/columns + claim/update dispatch lifecycle methods. |
| backend/internal/store/requirement_store.go | Adds applied-lineage and active-run guards + permanent requirement delete helper. |
| backend/internal/store/backlog_candidate_store.go | Allows applying draft candidates directly; sets dispatch_status queued on role_dispatch task creation; expands task scan. |
| backend/internal/router/router.go | Wires new connector dispatch routes and requirement DELETE route. |
| backend/internal/models/task.go | Adds dispatch fields and constants to Task model. |
| backend/internal/handlers/requirements.go | Adds DELETE /requirements/:id with 409 guards. |
| backend/internal/handlers/planning_runs.go | Fixes adapter_type semantics + reorders execution modes to prefer local_connector when available. |
| backend/internal/handlers/local_connectors.go | Adds optional TaskStore wiring for Phase 6b dispatch endpoints. |
| backend/internal/handlers/connector_dispatch_test.go | Adds handler tests for claim-next-task and task execution-result submission. |
| backend/internal/handlers/connector_dispatch.go | Implements connector task claim + execution-result endpoints and project context shaping. |
| backend/internal/connector/service.go | Integrates task dispatch polling (RunOnceTask) into connector main loop and implements role prompt execution path. |
| backend/internal/connector/dispatch.go | Adds connector-side wire types for dispatch endpoints. |
| backend/internal/connector/client.go | Adds connector client calls for claim-next-task and submit-task-result. |
| backend/db/migrations/029_task_dispatch.sql | Adds task dispatch columns + index. |
| backend/db/migrations/029_task_dispatch.down.sql | Adds dev-only rollback companion for migration 029. |
| backend/cmd/server/main.go | Switches TaskStore construction to dialect-aware and wires it into LocalConnectorHandler. |
| DECISIONS.md | Records new Phase 6b + Planning UX decisions and constraints. |
```ts
const errMsg: string = (() => {
  try {
    const raw = task.execution_result as Record<string, unknown> | null | undefined
    if (raw && typeof raw['error_message'] === 'string') return raw['error_message']
```
DispatchStatusBadge looks for execution_result.error_message, but the backend currently stores failure results as { "error": "..." } (TaskStore.FailDispatchTask) and the connector submits { error_message } separately. As-is, failed tasks will often show no error text. Fix by checking both keys on the frontend, or (preferably) having the backend persist a consistent error_message field in execution_result.
Suggested change:

```diff
 if (raw && typeof raw['error_message'] === 'string') return raw['error_message']
+if (raw && typeof raw['error'] === 'string') return raw['error']
```
```ts
queued: '待執行',
running: '執行中…',
completed: '已完成',
failed: '失敗',
```
The dispatch status labels here are in Chinese (待執行, 執行中…, etc.) while the rest of TasksTab UI strings are English. Unless the app is intentionally bilingual, consider switching these labels to English (or routing them through the project’s i18n/localization approach) for consistency.
Suggested change:

```diff
-queued: '待執行',
-running: '執行中…',
-completed: '已完成',
-failed: '失敗',
+queued: 'Queued',
+running: 'Running…',
+completed: 'Completed',
+failed: 'Failed',
```
```go
	writeError(w, http.StatusInternalServerError, "failed to delete requirement")
	return
}
w.WriteHeader(http.StatusNoContent)
```
This handler returns 204 No Content, but the frontend request() helper expects to parse JSON for every response. Either return a JSON envelope (e.g. 200 with {data:null}) or ensure the frontend client handles 204 responses; otherwise requirement discard will surface as a client-side error despite succeeding server-side.
Suggested change:

```diff
-w.WriteHeader(http.StatusNoContent)
+writeSuccess(w, http.StatusOK, nil, nil)
```
```go
// FailDispatchTask marks a task as failed and records an error message in the
// result JSON.
func (s *TaskStore) FailDispatchTask(taskID, connectorUserID, errorMsg string) error {
	errJSON, _ := json.Marshal(map[string]string{"error": errorMsg})
```
FailDispatchTask serializes the error as { "error": "..." }, but the connector/request types and frontend UI look for error_message. To keep the execution_result shape consistent across success/failure (and make the UI reliable), consider storing { "error_message": ..., "error_kind": ... } (or similar) instead.
Suggested change:

```diff
-errJSON, _ := json.Marshal(map[string]string{"error": errorMsg})
+errJSON, _ := json.Marshal(map[string]string{
+	"error_message": errorMsg,
+	"error_kind":    "dispatch_failed",
+})
```
```diff
@@ -0,0 +1,3 @@
+go 1.25.9
```
go.work declares go 1.25.9 while backend/go.mod declares go 1.25.0. If the workspace isn’t intentionally requiring a newer patch level, consider aligning these so contributors/CI don’t need a different Go patch version just to use the workspace.
```ts
const [discardingRequirementId, setDiscardingRequirementId] = useState<string | null>(null)
const [requirementIdsWithAppliedTasks, setRequirementIdsWithAppliedTasks] = useState<Set<string>>(new Set())
```
requirementIdsWithAppliedTasks is initialized to an empty Set(), but RequirementQueue treats undefined as “lineage not loaded yet → show Archive only”. With the current initialization, the UI can show the Discard action before loadAppliedLineageMeta() finishes, and the catch block comment (“safe fallback = Archive only”) is not true. Consider initializing this state to undefined (and leaving it undefined on fetch failure) or adding an explicit “loaded” flag so Discard only appears when lineage has actually been fetched.
```ts
async function loadAppliedLineageMeta() {
  try {
    const resp = await listProjectTaskLineage(projectId)
    const ids = new Set(resp.data.map((e: { requirement_id?: string }) => e.requirement_id).filter((id): id is string => Boolean(id)))
    setRequirementIdsWithAppliedTasks(ids)
  } catch {
    // non-fatal, default to empty set (safe fallback = Archive only)
  }
```
In loadAppliedLineageMeta(), the catch comment says the fallback is “Archive only”, but since requirementIdsWithAppliedTasks is always a Set, the fallback behavior is currently “no lineage → allow Discard”. If you intend Archive-only on failure/not-loaded, don’t set a default empty set (or represent “not loaded” separately).
```tsx
</button>
) : (
  <p style={{ margin: 0, color: 'var(--text-muted)', fontSize: '0.85rem' }}>
    <Link to="/settings/models">Configure a planning provider</Link> or connect a local connector to enable health-check runs.
```
This link points to /settings/models, but the rest of the UI routes users to the Model Settings Hub at /settings/models-hub (and /settings/models appears to be the admin-only shared-key page). Using the admin route here will confuse non-admin users and may lead to an authorization dead-end.
Suggested change:

```diff
-<Link to="/settings/models">Configure a planning provider</Link> or connect a local connector to enable health-check runs.
+<Link to="/settings/models-hub">Configure a planning provider</Link> or connect a local connector to enable health-check runs.
```
```go
// belonging to a project the connector's user owns, marks it as "running",
// and returns the task together with its requirement.
//
// Ownership check: the task's project_id must appear in a project where
// user_id = connectorUserID (checked via the projects table direct ownership
// column — this project assumes a single-owner model per project).
//
// Returns (nil, nil, nil) when the queue is empty for this connector's user.
func (s *TaskStore) ClaimNextDispatchTask(connectorID, connectorUserID string) (*models.Task, *models.Requirement, error) {
	// Use BEGIN IMMEDIATE on SQLite to get a write lock immediately so no two
	// connectors can race to claim the same task. On Postgres the FOR UPDATE
	// inside the transaction achieves the same.
```
The comment describing the ownership check and locking strategy is out of sync with the implementation: the query enforces ownership via project_members (not projects.user_id), and the SQLite path doesn’t actually issue BEGIN IMMEDIATE (it begins a normal tx, then forces a write lock via a dummy UPDATE). Please update the comment to match the real behavior to avoid future regressions.
Suggested change:

```diff
-// belonging to a project the connector's user owns, marks it as "running",
-// and returns the task together with its requirement.
-//
-// Ownership check: the task's project_id must appear in a project where
-// user_id = connectorUserID (checked via the projects table direct ownership
-// column — this project assumes a single-owner model per project).
-//
-// Returns (nil, nil, nil) when the queue is empty for this connector's user.
-func (s *TaskStore) ClaimNextDispatchTask(connectorID, connectorUserID string) (*models.Task, *models.Requirement, error) {
-	// Use BEGIN IMMEDIATE on SQLite to get a write lock immediately so no two
-	// connectors can race to claim the same task. On Postgres the FOR UPDATE
-	// inside the transaction achieves the same.
+// belonging to a project the connector's user is a member of, marks it as
+// "running", and returns the task together with its requirement.
+//
+// Ownership check: the task's project_id must be associated with the
+// connector user via project_members rather than a direct projects.user_id
+// ownership column.
+//
+// Returns (nil, nil, nil) when the queue is empty for this connector's user.
+func (s *TaskStore) ClaimNextDispatchTask(connectorID, connectorUserID string) (*models.Task, *models.Requirement, error) {
+	// On SQLite, begin a normal transaction and then force early write-lock
+	// acquisition with a no-op UPDATE so no two connectors can race to claim
+	// the same task. On Postgres, FOR UPDATE inside the transaction provides
+	// the corresponding row-level locking.
```
| if roleID == "" { | ||
| fmt.Fprintf(s.Stderr, "task %s has invalid source %q — missing role_id\n", task.ID, task.Source) | ||
| _ = s.Client.SubmitTaskResult(ctx, task.ID, SubmitTaskResultRequest{ | ||
| Success: false, | ||
| ErrorMessage: fmt.Sprintf("invalid task source %q: missing role_id", task.Source), | ||
| ErrorKind: "unknown", | ||
| }) | ||
| return true, nil | ||
| } | ||
|
|
||
| // Catalog enforcement: role must exist in the embedded prompt library. | ||
| if !prompts.Exists("roles/" + roleID) { | ||
| fmt.Fprintf(s.Stderr, "task %s: role %q not found in catalog\n", task.ID, roleID) | ||
| _ = s.Client.SubmitTaskResult(ctx, task.ID, SubmitTaskResultRequest{ | ||
| Success: false, | ||
| ErrorMessage: fmt.Sprintf("role %q not found in catalog", roleID), | ||
| ErrorKind: "unknown", | ||
| }) | ||
| return true, nil |
Most failure paths ignore errors from SubmitTaskResult (using _ = ...). If submission fails (network/server error), the task will remain stuck in dispatch_status=running and the connector loop will silently move on. Consider returning/logging the submit error (like RunOnce does for planning runs) so operators can see the failure and you can decide whether to retry or mark the task failed locally.
CI fixes:
- planning_runs.go: validate adapter_type is backlog/whatsnext, reject
cli: provider IDs with 400 (restores TestCreatePlanningRun_CliConfigAdapterTypeMismatch)
- handlers_test.go: update TestPlanningRunValidationAndConflict to test
rejected (not draft) for 400; draft+duplicate still gets 409
- backlog_candidate_store_test.go: same adjustment for
TestBacklogCandidateStoreApplyToTaskRejectsDraftAndDuplicate
Copilot review corrections:
- requirements.go: DELETE returns 200+JSON instead of 204 so frontend
request() helper does not throw on empty body
- task_store.go: FailDispatchTask stores {error_message, error_kind}
consistent with frontend DispatchStatusBadge; fix ClaimNextDispatchTask
comment to reflect project_members ownership (not projects.user_id)
- connector/service.go: log SubmitTaskResult errors instead of ignoring
them with _ = ...
- TasksTab.tsx: dispatch status labels English (was Chinese); add
raw['error'] fallback in DispatchStatusBadge error extraction
- TasksTab.test.tsx: update assertions to match English labels
- PlanningTab.tsx: fix /settings/models link to /settings/models-hub
- usePlanningWorkspaceData.ts: requirementIdsWithAppliedTasks initialized
to undefined (not empty Set) so RequirementQueue shows Archive-only
before lineage is loaded; catch block leaves it undefined on failure
- go.work: align go directive to 1.25.0 matching backend/go.mod
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Summary
- `local_connector` is the default execution mode when a connector is online; `adapter_type` validation fixed
- Draft candidates can be applied directly
- `DELETE /requirements/:id` with two 409 guards; inline confirmation overlay
- Connector badge reflects `dispatch_status`
- Adds `dispatch_status` + `execution_result` to tasks; two new connector endpoints for atomic claim + result submit; connector polling integrates `RunOnceTask`; `DispatchStatusBadge` in TasksTab

Test plan
- `go build ./...` clean

🤖 Generated with Claude Code