feat(dashboard): outcome-first pivot (sim-driven)#46
Conversation
…urrence_at, graduated_at, correction_count
…k/PrivacyPosturePanel from primary view
There was a problem hiding this comment.
Gradata has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
Deploying gradata-dashboard with
|
| Latest commit: |
2df35cc
|
| Status: | ✅ Deploy successful! |
| Preview URL: | https://c28425f5.gradata-dashboard.pages.dev |
| Branch Preview URL: | https://feat-dashboard-outcome-first.gradata-dashboard.pages.dev |
📝 WalkthroughSummary
WalkthroughAdds a demo preview mode with fixture datasets, a new Proof page, analytics metric extensions (time-saved, WoW deltas, rule streaks), classifier-health gating for category charts, rule-status glyphs, operator-email bypass for PlanGate, and multiple tests for these behaviors. Changes
Sequence Diagram(s)sequenceDiagram
participant User as User
participant Page as DashboardPage
participant API as AnalyticsClient
participant Fixtures as DemoFixtures
participant UI as ChildComponents
Note right of Page: demoMode toggle controls data source
User->>Page: open dashboard / toggle demo
Page->>API: fetch live analytics/lessons/corrections (if demoMode=false)
Page->>Fixtures: read demoAnalytics/demoLessons/demoCorrections (if demoMode=true)
API-->>Page: analytics, lessons, corrections
Fixtures-->>Page: demo datasets
Page->>UI: computeKpis(effectiveAnalytics), pass lessons/corrections/events
UI-->>Page: render KPIs, Charts, ActivityFeed, ActiveRulesPanel
Estimated code review effort🎯 4 (Complex) | ⏱️ ~45 minutes Possibly related PRs
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings. ✨ Finishing Touches📝 Generate docstrings
Comment |
There was a problem hiding this comment.
Actionable comments posted: 5
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx`:
- Around line 41-55: The code is doing an unsafe cast when looking up LABELS by
outcome kind; add a type guard in renderableEvents that narrows events to
RenderableOutcomeKind (i.e., filter out e.kind === 'meta_rule.emerged' with a
predicate that types e as { kind: RenderableOutcomeKind }) so callers can index
LABELS without casting, and update any places referencing meta.icon/meta.label
to rely on that narrowed type (or defensively handle undefined by filtering
earlier) so LABELS[...] is guaranteed defined; use the symbols
RenderableOutcomeKind, renderableEvents, and LABELS to locate and apply the
change.
- Around line 137-143: The component still fetches legacy data even when the
prop-driven `events` is provided; update the hook calls so the brains list and
derived activity fetch are only requested when `events === undefined`: call
useApi<Brain[]> with a null endpoint when `events` is supplied (so replace the
unconditional useApi<Brain[]>('/brains') call), keep computing primaryId from
`brains` as before, and ensure the activity hook (useApi<ApiEvent[]>) remains
null when `events` is provided; reference symbols: useApi, brains, primaryId,
real, events.
In `@cloud/dashboard/src/components/brain/CorrectionDecayCurve.tsx`:
- Around line 140-149: The container rendering the hidden marker spans currently
uses aria-hidden and a "hidden" class which removes these data-graduation-marker
spans (visibleMarkers) from the accessibility tree; remove the aria-hidden
attribute and instead apply a visual-only hide utility (e.g., "sr-only" or a
visuallyHidden CSS class) to the div so the <span> elements with
data-graduation-marker / data-lesson-id / data-graduated-at remain available to
assistive tech; update the JSX around visibleMarkers accordingly (keep the spans
and their data attributes intact, only change the container hiding approach).
In `@cloud/dashboard/src/lib/analytics-client.ts`:
- Around line 213-227: computeTimeSaved is overstating time by adding the full
fire_count for lessons instead of excluding the first-ever fire and only
counting rule-driven fires; update computeTimeSaved (function computeTimeSaved)
to, for any lesson with a numeric fire_count, add Math.max(0, fire_count - 1)
rather than fire_count when the lesson is rule-driven, and only do this when
there is evidence of a rule (use correction_count > 0 or the existing
recurrence_blocked boolean to decide rule-driven); keep using
MINUTES_PER_CORRECTION for the final multiplier and ensure computeKpis() will
therefore receive a corrected timeSaved value.
In `@cloud/dashboard/tests/CategoriesChart.test.tsx`:
- Around line 95-98: The test uses an overly permissive regex (/recalibrating|no
corrections/i) that can mask regressions; update the assertion in the
CategoriesChart test to look only for the actual empty-state text by replacing
the regex with a strict matcher for "recalibrating" (e.g.
getByText(/recalibrating/i) or the exact string), and optionally rename the test
description from 'renders empty state when no corrections at all' to accurately
reflect the recalibrating state so the test and component (CategoriesChart) stay
aligned.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 0a610fbc-bcb4-41b2-9f4e-c9b29368d4d3
📒 Files selected for processing (18)
cloud/dashboard/app/(dashboard)/dashboard/page.tsxcloud/dashboard/app/(dashboard)/proof/page.tsxcloud/dashboard/src/components/brain/ActiveRulesPanel.tsxcloud/dashboard/src/components/brain/ActivityFeed.tsxcloud/dashboard/src/components/brain/CategoriesChart.tsxcloud/dashboard/src/components/brain/CorrectionDecayCurve.tsxcloud/dashboard/src/components/brain/KpiStrip.tsxcloud/dashboard/src/components/layout/DashboardLayout.tsxcloud/dashboard/src/lib/analytics-client.tscloud/dashboard/src/types/api.tscloud/dashboard/tests/ActiveRulesPanel.test.tsxcloud/dashboard/tests/ActivityFeed.test.tsxcloud/dashboard/tests/CategoriesChart.test.tsxcloud/dashboard/tests/CorrectionDecayCurve.test.tsxcloud/dashboard/tests/KpiStrip.test.tsxcloud/dashboard/tests/analytics-client.test.tscloud/dashboard/tests/dashboard-page.test.tsxcloud/dashboard/tests/proof.test.tsx
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (11)
cloud/dashboard/src/components/brain/CategoriesChart.tsx (2)
40-41: LGTM!Well-named constant for the classifier health threshold. Good practice to extract magic numbers.
43-65: LGTM!The classifier-health gate logic is well-implemented:
- Division by zero is correctly guarded by the
total > 0short-circuit- OTHER and UNKNOWN exclusion correctly identifies categorization quality
- Graceful fallback UI when data quality is insufficient
cloud/dashboard/tests/CategoriesChart.test.tsx (2)
28-35: LGTM!Good adaptation of the existing test to account for the new classifier-health gate. Seeding with
{ TONE: 1 }ensures 100% categorization, correctly passing the threshold.
74-93: LGTM!Well-structured boundary tests with clear comments explaining the categorization math. Good coverage of both below-threshold and at/above-threshold scenarios.
cloud/dashboard/tests/ActivityFeed.test.tsx (1)
87-134: Good coverage for outcome-first behavior.These tests validate the new prop-driven rendering path, demotion behavior, and empty-state fallback well.
cloud/dashboard/src/components/layout/DashboardLayout.tsx (1)
16-16: Proof nav wiring looks correct.The new sidebar entry is consistent with the dedicated
/proofpage.cloud/dashboard/app/(dashboard)/proof/page.tsx (1)
6-18: Proof page composition is clean and aligned with the pivot.This keeps proof content centralized while preserving existing panel components.
cloud/dashboard/src/types/api.ts (1)
21-24: Lesson type extension is appropriate.Optional recurrence/graduation fields are a good fit for progressive backend rollout and current UI logic.
cloud/dashboard/tests/proof.test.tsx (1)
11-20: Basic proof-page smoke coverage is in place.Good to see a dedicated test ensuring route render stability.
cloud/dashboard/app/(dashboard)/dashboard/page.tsx (1)
106-106: Dashboard wiring updates look correct.Passing
lessonsintoCorrectionDecayCurveand simplifying the activity section matches the new outcome-first layout.Also applies to: 119-121
cloud/dashboard/tests/ActiveRulesPanel.test.tsx (1)
74-114: Status/glyph regression coverage is strong.The new cases exercise clean/new/recurred/unknown outcomes plus
/ruleslinking and row-cap behavior.
| type RenderableOutcomeKind = Exclude<OutcomeActivityKind, 'meta_rule.emerged'> | ||
|
|
||
| const LABELS: Record<RenderableOutcomeKind, { icon: string; label: string }> = { | ||
| 'rule.graduated': { icon: '✅', label: 'Rule graduated' }, | ||
| 'rule.patched': { icon: '🔧', label: 'Rule refined' }, | ||
| 'rule.recurrence': { icon: '⚠️', label: 'Slipped' }, | ||
| 'rule.mastered': { icon: '👥', label: 'Standard codified — your team now inherits this' }, | ||
| 'category.spike': { icon: '📈', label: 'More corrections this week' }, | ||
| } | ||
|
|
||
| const EMPTY_COPY = 'Nothing to report this week. Your brain is quiet — that is a good sign.' | ||
|
|
||
| export function renderableEvents<T extends { kind: OutcomeActivityKind }>(events: T[]): T[] { | ||
| return events.filter((e) => e.kind !== 'meta_rule.emerged') | ||
| } |
There was a problem hiding this comment.
Remove unsafe cast for outcome label lookup.
Line 185 casts e.kind to RenderableOutcomeKind; if malformed runtime data slips in, LABELS[...] becomes undefined and meta.icon/meta.label will throw. Use a type guard in renderableEvents and index LABELS without a cast.
Proposed fix
type RenderableOutcomeKind = Exclude<OutcomeActivityKind, 'meta_rule.emerged'>
+type RenderableOutcomeEvent<T extends { kind: OutcomeActivityKind }> = T & {
+ kind: RenderableOutcomeKind
+}
@@
-export function renderableEvents<T extends { kind: OutcomeActivityKind }>(events: T[]): T[] {
- return events.filter((e) => e.kind !== 'meta_rule.emerged')
+function isRenderableOutcomeKind(kind: OutcomeActivityKind): kind is RenderableOutcomeKind {
+ return kind !== 'meta_rule.emerged'
+}
+
+export function renderableEvents<T extends { kind: OutcomeActivityKind }>(
+ events: T[],
+): RenderableOutcomeEvent<T>[] {
+ return events.filter(
+ (e): e is RenderableOutcomeEvent<T> => isRenderableOutcomeKind(e.kind),
+ )
}
@@
- const meta = LABELS[e.kind as RenderableOutcomeKind]
+ const meta = LABELS[e.kind]Also applies to: 185-190
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx` around lines 41 - 55,
The code is doing an unsafe cast when looking up LABELS by outcome kind; add a
type guard in renderableEvents that narrows events to RenderableOutcomeKind
(i.e., filter out e.kind === 'meta_rule.emerged' with a predicate that types e
as { kind: RenderableOutcomeKind }) so callers can index LABELS without casting,
and update any places referencing meta.icon/meta.label to rely on that narrowed
type (or defensively handle undefined by filtering earlier) so LABELS[...] is
guaranteed defined; use the symbols RenderableOutcomeKind, renderableEvents, and
LABELS to locate and apply the change.
| // Always call hooks unconditionally (rules of hooks). When `events` is | ||
| // provided, the legacy fetch result is simply ignored. | ||
| const { data: brains } = useApi<Brain[]>('/brains') | ||
| const primaryId = brains?.[0]?.id ?? null | ||
| const { data: real } = useApi<ApiEvent[]>( | ||
| primaryId ? `/brains/${primaryId}/activity` : null, | ||
| ) |
There was a problem hiding this comment.
Avoid legacy network calls in prop-driven mode.
Line 137–143 still fetches legacy data even when events is supplied, which adds avoidable requests and can surface irrelevant API failures in outcome mode. Gate those requests behind an events === undefined check.
Proposed fix
export function ActivityFeed({ events }: ActivityFeedProps = {}) {
- // Always call hooks unconditionally (rules of hooks). When `events` is
- // provided, the legacy fetch result is simply ignored.
- const { data: brains } = useApi<Brain[]>('/brains')
- const primaryId = brains?.[0]?.id ?? null
+ // Keep hooks unconditional, but disable legacy fetches in prop-driven mode.
+ const useLegacyApi = events === undefined
+ const { data: brains } = useApi<Brain[]>(useLegacyApi ? '/brains' : null)
+ const primaryId = useLegacyApi ? (brains?.[0]?.id ?? null) : null
const { data: real } = useApi<ApiEvent[]>(
- primaryId ? `/brains/${primaryId}/activity` : null,
+ useLegacyApi && primaryId ? `/brains/${primaryId}/activity` : null,
)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // Always call hooks unconditionally (rules of hooks). When `events` is | |
| // provided, the legacy fetch result is simply ignored. | |
| const { data: brains } = useApi<Brain[]>('/brains') | |
| const primaryId = brains?.[0]?.id ?? null | |
| const { data: real } = useApi<ApiEvent[]>( | |
| primaryId ? `/brains/${primaryId}/activity` : null, | |
| ) | |
| // Keep hooks unconditional, but disable legacy fetches in prop-driven mode. | |
| const useLegacyApi = events === undefined | |
| const { data: brains } = useApi<Brain[]>(useLegacyApi ? '/brains' : null) | |
| const primaryId = useLegacyApi ? (brains?.[0]?.id ?? null) : null | |
| const { data: real } = useApi<ApiEvent[]>( | |
| useLegacyApi && primaryId ? `/brains/${primaryId}/activity` : null, | |
| ) |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx` around lines 137 -
143, The component still fetches legacy data even when the prop-driven `events`
is provided; update the hook calls so the brains list and derived activity fetch
are only requested when `events === undefined`: call useApi<Brain[]> with a null
endpoint when `events` is supplied (so replace the unconditional
useApi<Brain[]>('/brains') call), keep computing primaryId from `brains` as
before, and ensure the activity hook (useApi<ApiEvent[]>) remains null when
`events` is provided; reference symbols: useApi, brains, primaryId, real,
events.
| <div aria-hidden className="hidden"> | ||
| {visibleMarkers.map((l) => ( | ||
| <span | ||
| key={l.id} | ||
| data-graduation-marker | ||
| data-lesson-id={l.id} | ||
| data-graduated-at={l.graduated_at ?? ''} | ||
| /> | ||
| ))} | ||
| </div> |
There was a problem hiding this comment.
Expose the graduation marker list to assistive tech.
aria-hidden plus className="hidden" removes the marker metadata from the accessibility tree, so the “hidden a11y span list” never gets announced. If this list is meant to carry the non-visual marker context, make it visually hidden instead of accessibility-hidden.
♿ Suggested fix
- <div aria-hidden className="hidden">
+ <div className="sr-only">
{visibleMarkers.map((l) => (
<span
key={l.id}
data-graduation-marker
data-lesson-id={l.id}
data-graduated-at={l.graduated_at ?? ''}
/>
))}
</div>📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <div aria-hidden className="hidden"> | |
| {visibleMarkers.map((l) => ( | |
| <span | |
| key={l.id} | |
| data-graduation-marker | |
| data-lesson-id={l.id} | |
| data-graduated-at={l.graduated_at ?? ''} | |
| /> | |
| ))} | |
| </div> | |
| <div className="sr-only"> | |
| {visibleMarkers.map((l) => ( | |
| <span | |
| key={l.id} | |
| data-graduation-marker | |
| data-lesson-id={l.id} | |
| data-graduated_at={l.graduated_at ?? ''} | |
| /> | |
| ))} | |
| </div> |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/src/components/brain/CorrectionDecayCurve.tsx` around lines
140 - 149, The container rendering the hidden marker spans currently uses
aria-hidden and a "hidden" class which removes these data-graduation-marker
spans (visibleMarkers) from the accessibility tree; remove the aria-hidden
attribute and instead apply a visual-only hide utility (e.g., "sr-only" or a
visuallyHidden CSS class) to the div so the <span> elements with
data-graduation-marker / data-lesson-id / data-graduated-at remain available to
assistive tech; update the JSX around visibleMarkers accordingly (keep the spans
and their data attributes intact, only change the container hiding approach).
| export function computeTimeSaved(lessons: Lesson[]): number { | ||
| let fires = 0 | ||
| for (const l of lessons) { | ||
| const hasRecurrenceFlag = typeof (l as unknown as { recurrence_blocked?: boolean }).recurrence_blocked === 'boolean' | ||
| if (hasRecurrenceFlag) { | ||
| if ((l as unknown as { recurrence_blocked: boolean }).recurrence_blocked) { | ||
| fires += l.fire_count ?? 0 | ||
| } | ||
| } else { | ||
| const fc = l.fire_count ?? 0 | ||
| const cc = (l as unknown as { correction_count?: number }).correction_count ?? 0 | ||
| if (fc > 1 && cc > 0) fires += fc | ||
| } | ||
| } | ||
| return fires * MINUTES_PER_CORRECTION |
There was a problem hiding this comment.
Time Saved is overstated by counting non-rule fires and the first qualifying fire.
The tooltip contract says this estimate is based on rule fires and excludes first-fire-ever, but this helper currently adds the full fire_count for any qualifying lesson. That inflates the headline KPI and bakes the wrong semantics into computeKpis().
🛠️ Proposed fix
export function computeTimeSaved(lessons: Lesson[]): number {
let fires = 0
for (const l of lessons) {
- const hasRecurrenceFlag = typeof (l as unknown as { recurrence_blocked?: boolean }).recurrence_blocked === 'boolean'
+ if (l.state !== 'RULE') continue
+
+ const fireCount = l.fire_count ?? 0
+ const savedFires = Math.max(0, fireCount - 1)
+ const hasRecurrenceFlag = typeof l.recurrence_blocked === 'boolean'
+
if (hasRecurrenceFlag) {
- if ((l as unknown as { recurrence_blocked: boolean }).recurrence_blocked) {
- fires += l.fire_count ?? 0
+ if (l.recurrence_blocked) {
+ fires += savedFires
}
} else {
- const fc = l.fire_count ?? 0
- const cc = (l as unknown as { correction_count?: number }).correction_count ?? 0
- if (fc > 1 && cc > 0) fires += fc
+ const cc = l.correction_count ?? 0
+ if (fireCount > 1 && cc > 0) fires += savedFires
}
}
return fires * MINUTES_PER_CORRECTION
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/src/lib/analytics-client.ts` around lines 213 - 227,
computeTimeSaved is overstating time by adding the full fire_count for lessons
instead of excluding the first-ever fire and only counting rule-driven fires;
update computeTimeSaved (function computeTimeSaved) to, for any lesson with a
numeric fire_count, add Math.max(0, fire_count - 1) rather than fire_count when
the lesson is rule-driven, and only do this when there is evidence of a rule
(use correction_count > 0 or the existing recurrence_blocked boolean to decide
rule-driven); keep using MINUTES_PER_CORRECTION for the final multiplier and
ensure computeKpis() will therefore receive a corrected timeSaved value.
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
Three UX fixes found while dogfooding the dashboard as oliver@gradata.ai: A. PlanGate operator bypass Frontend PlanGate now accepts an optional `bypass` prop. Wired to isOperatorEmail(profile.email) at 4 call sites (meta-rules, self-healing, team, team/members). Mirrors the backend OPERATOR_DOMAINS allowlist (cloud/app/auth.py:22) so gradata.ai and sprites.ai domains don't see the blur overlay. UX-only — backend still enforces plan gates on data endpoints. B. /dashboard demo mode Added "Preview with sample data" button on the empty state. Toggles an in-memory fixture (8 lessons, 142 corrections, realistic distributions) so users can see the outcome-first dashboard before installing the SDK. Demo banner explains it's sample data. C. Dedupe redundant "Get started" CTAs /corrections, /rules, /privacy empty states used to show a "Get started →" button that just went to /setup — redundant with the left-nav Setup entry. Replaced with inline text pointer so the CTA isn't duplicated. Tests: 95/95 pass (+11 new: 7 operator + 4 PlanGate). Co-Authored-By: Gradata <noreply@gradata.ai>
|
Dogfood UX fixes pushed:
+2 test files (11 new tests), 95/95 pass. |
There was a problem hiding this comment.
Gradata has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
There was a problem hiding this comment.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/dashboard/src/lib/fixtures/demo-dashboard.ts`:
- Around line 14-15: The module currently captures Date.now() once into the
constant now and uses it in daysAgo, causing timestamps to drift the longer the
app runs; change daysAgo to compute the current timestamp on each call (e.g.,
call Date.now() inside daysAgo) or replace now with a lazy getter function
(e.g., getNow) and use that in daysAgo so demo timestamps are calculated fresh
whenever daysAgo(n) is invoked; update the daysAgo function definition (and any
callers) to use the new approach.
In `@cloud/dashboard/src/lib/operator.ts`:
- Around line 16-19: The frontend currently derives the email domain using
lastIndexOf('@'), which diverges from backend semantics and allows inputs like
"user@evil.com@gradata.ai" to bypass checks; update the logic in the operator
email check (the code that uses OPERATOR_DOMAINS and the local variable email)
to find the first '@' (use indexOf('@')) instead of lastIndexOf, keep the same
toLowerCase().trim() on the resulting slice, and preserve the early return when
no '@' is present so domains match backend behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 289341c9-b17b-40df-88bb-80cfffffaa3c
📒 Files selected for processing (13)
cloud/dashboard/app/(dashboard)/corrections/page.tsxcloud/dashboard/app/(dashboard)/dashboard/page.tsxcloud/dashboard/app/(dashboard)/meta-rules/page.tsxcloud/dashboard/app/(dashboard)/privacy/page.tsxcloud/dashboard/app/(dashboard)/rules/page.tsxcloud/dashboard/app/(dashboard)/self-healing/page.tsxcloud/dashboard/app/(dashboard)/team/members/page.tsxcloud/dashboard/app/(dashboard)/team/page.tsxcloud/dashboard/src/components/brain/PlanBadge.tsxcloud/dashboard/src/lib/fixtures/demo-dashboard.tscloud/dashboard/src/lib/operator.tscloud/dashboard/tests/PlanGate.test.tsxcloud/dashboard/tests/operator.test.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (17)
cloud/dashboard/app/(dashboard)/corrections/page.tsx (1)
34-39: LGTM!The updated empty state description appropriately guides users to the Setup page via left nav, consistent with the CTA deduplication pattern applied across other pages.
cloud/dashboard/app/(dashboard)/rules/page.tsx (1)
33-38: LGTM!Consistent with the CTA deduplication pattern — the description now directs users to the Setup page in left nav instead of a redundant button.
cloud/dashboard/app/(dashboard)/privacy/page.tsx (1)
22-27: LGTM!Consistent empty state description update, completing the CTA deduplication across the three setup-related pages.
cloud/dashboard/src/lib/fixtures/demo-dashboard.ts (2)
39-49: LGTM!The weighted distribution formula effectively creates a realistic recency bias in the demo data. The inline comment clarifies the intent.
51-148: LGTM!Well-crafted fixture lessons with varied states, confidence levels, and recurrence fields. The data demonstrates all the new Lesson type extensions (
graduated_at,last_recurrence_at,recurrence_blocked,correction_count).cloud/dashboard/app/(dashboard)/dashboard/page.tsx (5)
36-46: LGTM!The
useMemohooks correctly prioritize demo data whendemoModeis enabled, with proper dependency arrays includingdemoMode.
48-53: LGTM!
effectiveAnalyticscorrectly switches data sources, and the KPI computation memo has the proper dependencies.
56-83: LGTM!The conditional guards correctly allow demo mode to bypass the loading spinner and empty state. The dual-button action area provides clear paths for both setup and preview.
87-101: LGTM!Clear demo mode banner with appropriate warning styling and an easy exit path. Good UX for users previewing sample data.
132-149: LGTM!The
lessonsprop is correctly passed toCorrectionDecayCurvefor graduation markers (matches the optionallessons?: Lesson[]signature). The simplified layout withActiveRulesPanel,CategoriesChart, andActivityFeedaligns with the outcome-first pivot objectives.cloud/dashboard/src/components/brain/PlanBadge.tsx (1)
85-94:PlanGatebypass short-circuit is implemented cleanly.Line [93] preserves existing gating semantics while allowing an explicit opt-out path, and defaulting
bypassto false on Line [85] keeps backward compatibility.cloud/dashboard/tests/operator.test.ts (1)
4-38: Coverage forisOperatorEmailis strong and targeted.The suite validates happy paths and key spoofing/invalid-input cases with clear expectations.
cloud/dashboard/tests/PlanGate.test.tsx (1)
5-44:PlanGatebehavior matrix is well covered.The test set captures both gating branches and the new bypass override path.
cloud/dashboard/app/(dashboard)/team/page.tsx (1)
68-68: Operator bypass wiring is correctly integrated on Team Overview.Line [68] uses
isOperatorEmail(profile?.email)safely and keeps existing plan checks intact.cloud/dashboard/app/(dashboard)/team/members/page.tsx (1)
106-106: Team Members gate now consistently supports operator bypass.Line [106] correctly passes the optional bypass while preserving existing required tier behavior.
cloud/dashboard/app/(dashboard)/self-healing/page.tsx (1)
120-120: Self-healing gate update is consistent with the new PlanGate contract.Line [120] applies the same operator bypass pattern used on other gated dashboard routes.
cloud/dashboard/app/(dashboard)/meta-rules/page.tsx (1)
25-25: Meta Rules gate wiring is correct for operator bypass.Line [25] cleanly adopts the shared bypass pattern without changing normal plan gating behavior.
- operator.ts: reject multi-@ inputs to match backend semantics (prevents "user@evil.com@gradata.ai" bypass drift per CR review) - demo-dashboard.ts: compute Date.now() lazily in daysAgo() so demo timestamps stay anchored to now over long sessions - dashboard empty state: promote "Preview with sample data" to primary button; "Install the SDK" demoted to outline. Was burying the demo affordance behind the SDK pitch. - tests: new security case for multi-@ bypass (96 total, all pass) Co-Authored-By: Gradata <noreply@gradata.ai>
|
Round-1 CR addressed:
96/96 tests pass. |
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Gradata has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
Replace analyst jargon with human language throughout the dashboard: KpiStrip (5 cards): - Correction Rate → Mistakes Caught - Est. Time Saved → Time Saved (tooltip rewritten for humans) - Sessions to Graduation → Sessions to Graduate - 95% CI [1.9, 2.7] → typically 2–3 sessions - Misfires → False Alarms - Brain Footprint kept (user likes seeing AI brain grow) ActiveRulesPanel: - "Active Rules" → "Your Rules" - "top 8" → "what your AI learned" - Hide raw confidence number (sim research: users ignore it) - INSTINCT/PATTERN/RULE → Watching/Learning/Graduated - "Xd clean" → "N days holding" - "recurred Nd ago" → "slipped Nd ago" - "No graduated rules yet" → "Nothing graduated yet. Keep correcting — rules emerge after 3+ catches." - "See all rules" → "See all your rules" ActivityFeed: - Rule graduated kept (user preference over "locked in") - Rule refined → Rule updated - Slipped → Slipped back - "Standard codified" → "Your team now gets this automatically" - "More corrections this week" → "More fixes this week" - Empty state softened CategoriesChart: - "Corrections by Dimension" → "What You Fix Most" - "recalibrating" empty state → "still figuring out what you fix most" - Dropped "6-dim taxonomy (WAVE2)" internal badge GraduationProgressBar: - "Graduation Pipeline" → "How Your AI Learns" - Tier labels now Watching/Learning/Graduated (human names) - Dropped threshold/avg-confidence numerics from cards - "N lessons total" → "N total" Dashboard header: - "Your brain's learning progress" → "What your AI learned from you" 96/96 tests pass. Co-Authored-By: Gradata <noreply@gradata.ai>
There was a problem hiding this comment.
Gradata has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
There was a problem hiding this comment.
Actionable comments posted: 5
♻️ Duplicate comments (3)
cloud/dashboard/src/components/brain/ActivityFeed.tsx (2)
53-55:⚠️ Potential issue | 🟠 MajorMake
renderableEvents()do the narrowing.The filter still returns
T[], soLABELS[e.kind as RenderableOutcomeKind]is relying on a cast. If malformed runtime data slips through,metabecomesundefinedand the render will throw onmeta.icon/meta.label.🔒 Narrow the kind at the filter boundary
type RenderableOutcomeKind = Exclude<OutcomeActivityKind, 'meta_rule.emerged'> + +type RenderableOutcomeEvent<T extends { kind: OutcomeActivityKind }> = T & { + kind: RenderableOutcomeKind +} @@ -export function renderableEvents<T extends { kind: OutcomeActivityKind }>(events: T[]): T[] { - return events.filter((e) => e.kind !== 'meta_rule.emerged') +function isRenderableOutcomeKind(kind: OutcomeActivityKind): kind is RenderableOutcomeKind { + return kind !== 'meta_rule.emerged' +} + +export function renderableEvents<T extends { kind: OutcomeActivityKind }>( + events: T[], +): RenderableOutcomeEvent<T>[] { + return events.filter( + (e): e is RenderableOutcomeEvent<T> => isRenderableOutcomeKind(e.kind), + ) } @@ - const meta = LABELS[e.kind as RenderableOutcomeKind] + const meta = LABELS[e.kind]Also applies to: 184-185
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx` around lines 53 - 55, The filter in renderableEvents<T extends { kind: OutcomeActivityKind }>(events: T[]) must perform a type-narrowing predicate so the return type excludes 'meta_rule.emerged' (so callers can safely index LABELS without casting); change the filter callback to a type predicate (e.g., (e): e is T & { kind: RenderableOutcomeKind } => e.kind !== 'meta_rule.emerged') so the function returns the narrowed type, and apply the same pattern to the similar filter at the other location (around lines 184-185) so both sites no longer rely on casts when accessing LABELS.
136-143:⚠️ Potential issue | 🟠 MajorSkip legacy fetches in prop-driven mode.
When
eventsis passed, this still requests/brainsand/brains/{id}/activity. That adds needless network traffic and can surface irrelevant API failures even though outcome mode never uses the result.🚫 Gate the legacy hooks behind `events === undefined`
export function ActivityFeed({ events }: ActivityFeedProps = {}) { - // Always call hooks unconditionally (rules of hooks). When `events` is - // provided, the legacy fetch result is simply ignored. - const { data: brains } = useApi<Brain[]>('/brains') - const primaryId = brains?.[0]?.id ?? null + const useLegacyApi = events === undefined + const { data: brains } = useApi<Brain[]>(useLegacyApi ? '/brains' : null) + const primaryId = useLegacyApi ? (brains?.[0]?.id ?? null) : null const { data: real } = useApi<ApiEvent[]>( - primaryId ? `/brains/${primaryId}/activity` : null, + useLegacyApi && primaryId ? `/brains/${primaryId}/activity` : null, )🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx` around lines 136 - 143, The component is still calling legacy hooks when `events` is supplied; fix by gating the hook params (not the hook calls) using a boolean like shouldFetchLegacy = events === undefined, e.g. call useApi<Brain[]>(shouldFetchLegacy ? '/brains' : null) and compute primaryId from brains only when shouldFetchLegacy, then call useApi<ApiEvent[]>(shouldFetchLegacy && primaryId ? `/brains/${primaryId}/activity` : null); keep the hook calls (useApi) unconditional but pass null to skip network requests when events is provided.cloud/dashboard/tests/CategoriesChart.test.tsx (1)
95-97:⚠️ Potential issue | 🟡 MinorAssert the actual empty-state copy here.
CategoriesChartonly renders the “still figuring out” message in this branch. Keeping|no correctionsin the matcher lets this test pass even if the visible copy regresses.🧪 Tighten the assertion
it('renders empty state when no corrections at all', () => { render(<CategoriesChart analytics={mkAnalytics({})} />) - expect(screen.getByText(/still figuring out|no corrections/i)).toBeInTheDocument() + expect(screen.getByText(/still figuring out/i)).toBeInTheDocument() })🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/dashboard/tests/CategoriesChart.test.tsx` around lines 95 - 97, The test in CategoriesChart.test.tsx currently uses a loose regex allowing either "still figuring out" or "no corrections", which masks regressions; update the assertion for the 'renders empty state when no corrections at all' case to assert the exact empty-state copy emitted by the CategoriesChart component (e.g., assert the full "still figuring out" message string returned when calling render(<CategoriesChart analytics={mkAnalytics({})} />)), removing the `|no corrections` alternative so the expectation on screen.getByText is precise.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/dashboard/app/`(dashboard)/dashboard/page.tsx:
- Around line 146-149: The Activity panel is still mounted in live-fetch mode
because ActivityFeed is not receiving a demo events prop; update the dashboard
page component that renders <ActivityFeed /> to pass a demo fixture when
demoMode is true (or when primaryBrainId is missing). Concretely, import or
construct the demo events array (e.g., demoActivityEvents) in page.tsx and
change the render to <ActivityFeed events={demoMode ? demoActivityEvents :
undefined} /> (or pass []/null as the live-case expects); ensure the
ActivityFeed component handles an events prop (e.g., Events[] | undefined) and
uses it instead of fetching when events is provided.
In `@cloud/dashboard/src/components/brain/ActiveRulesPanel.tsx`:
- Around line 8-16: statusFor incorrectly marks a rule as "recurred" whenever
last_recurrence_at is within 7 days even if a later event like graduated_at
happened after that; update statusFor to parse both last_recurrence_at and
graduated_at timestamps (e.g. read (lesson as any).last_recurrence_at and
(lesson as any).graduated_at), compute their epoch times, and only treat the
rule as recurred when last_recurrence_at exists, is within 7 days, and is
strictly newer than graduated_at (or graduated_at is absent); preserve current
null-handling for streakDays and recurredDays.
In `@cloud/dashboard/src/lib/fixtures/demo-dashboard.ts`:
- Around line 30-37: The demo fixture demoAnalytics.corrections_by_category uses
keys FORMATTING and COMPLETENESS that don't match the folding logic in
CategoriesChart.tsx (which folds FORMAT and PROCESS), causing counts to fall
into the fallback "Factual Integrity" bucket; update the fixture keys in
cloud/dashboard/src/lib/fixtures/demo-dashboard.ts (corrections_by_category) to
use the exact category keys expected by the chart (e.g., FORMAT instead of
FORMATTING and PROCESS instead of COMPLETENESS) or alternatively update the
folding map in CategoriesChart.tsx to accept FORMATTING and COMPLETENESS so both
sides use the same identifiers (ensure you change only the enum/string keys, not
numeric values).
In `@cloud/dashboard/tests/ActivityFeed.test.tsx`:
- Around line 92-133: Tests are breaking the new typed prop contract by
silencing types with `as any` on the ActivityFeed `events` prop; replace `as
any` with a small typed helper using the component's prop type (e.g.
ComponentProps<typeof ActivityFeed>['events'] or the exported event type) and
use that helper when constructing event arrays so the compiler will catch schema
drift; update each test case that currently casts with `as any` to use the typed
helper and keep references to ActivityFeed and its `events` prop.
In `@cloud/dashboard/tests/dashboard-page.test.tsx`:
- Around line 28-54: Add a new test in dashboard-page.test.tsx that simulates
the empty-brains path and the preview/exit demo flow: mock the /brains API (used
by DashboardPage) to return an empty array, render <DashboardPage />, assert the
empty-state text/controls are visible, then simulate clicking the "Preview with
sample data" button (use userEvent or fireEvent) and assert demo
fixtures/banners/panels (the fixture-backed KPI/ActiveRulesPanel content)
appear, and finally simulate clicking the "Exit preview" control and assert the
UI returns to the original empty state; use DashboardPage, the same
render/screen helpers, and waitFor where needed to await async API rendering.
---
Duplicate comments:
In `@cloud/dashboard/src/components/brain/ActivityFeed.tsx`:
- Around line 53-55: The filter in renderableEvents<T extends { kind:
OutcomeActivityKind }>(events: T[]) must perform a type-narrowing predicate so
the return type excludes 'meta_rule.emerged' (so callers can safely index LABELS
without casting); change the filter callback to a type predicate (e.g., (e): e
is T & { kind: RenderableOutcomeKind } => e.kind !== 'meta_rule.emerged') so the
function returns the narrowed type, and apply the same pattern to the similar
filter at the other location (around lines 184-185) so both sites no longer rely
on casts when accessing LABELS.
- Around line 136-143: The component is still calling legacy hooks when `events`
is supplied; fix by gating the hook params (not the hook calls) using a boolean
like shouldFetchLegacy = events === undefined, e.g. call
useApi<Brain[]>(shouldFetchLegacy ? '/brains' : null) and compute primaryId from
brains only when shouldFetchLegacy, then call
useApi<ApiEvent[]>(shouldFetchLegacy && primaryId ?
`/brains/${primaryId}/activity` : null); keep the hook calls (useApi)
unconditional but pass null to skip network requests when events is provided.
In `@cloud/dashboard/tests/CategoriesChart.test.tsx`:
- Around line 95-97: The test in CategoriesChart.test.tsx currently uses a loose
regex allowing either "still figuring out" or "no corrections", which masks
regressions; update the assertion for the 'renders empty state when no
corrections at all' case to assert the exact empty-state copy emitted by the
CategoriesChart component (e.g., assert the full "still figuring out" message
string returned when calling render(<CategoriesChart analytics={mkAnalytics({})}
/>)), removing the `|no corrections` alternative so the expectation on
screen.getByText is precise.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 721a6342-a330-4b59-a443-bdfec524a00d
📒 Files selected for processing (15)
cloud/dashboard/app/(dashboard)/dashboard/page.tsxcloud/dashboard/src/components/brain/ActiveRulesPanel.tsxcloud/dashboard/src/components/brain/ActivityFeed.tsxcloud/dashboard/src/components/brain/CategoriesChart.tsxcloud/dashboard/src/components/brain/GraduationProgressBar.tsxcloud/dashboard/src/components/brain/KpiStrip.tsxcloud/dashboard/src/lib/fixtures/demo-dashboard.tscloud/dashboard/src/lib/operator.tscloud/dashboard/tests/ActiveRulesPanel.test.tsxcloud/dashboard/tests/ActivityFeed.test.tsxcloud/dashboard/tests/CategoriesChart.test.tsxcloud/dashboard/tests/GraduationProgressBar.test.tsxcloud/dashboard/tests/KpiStrip.test.tsxcloud/dashboard/tests/dashboard-page.test.tsxcloud/dashboard/tests/operator.test.ts
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
🔇 Additional comments (2)
cloud/dashboard/src/lib/operator.ts (1)
14-21: Good hardening of operator-email parsing.The updated parsing (
trim+ exact single@check + case-normalized domain allowlist) correctly closes the prior multi-@bypass gap and matches the intended UX-only bypass behavior.cloud/dashboard/tests/operator.test.ts (1)
4-43: Strong coverage for operator bypass edge cases.This suite validates both expected allowlist behavior and malformed-input rejection (especially multi-
@), which is exactly the right protection for this gating helper.
…gory keys - Wire demoActivityEvents fixture into ActivityFeed when demoMode is on so the Activity panel populates in the preview path (was empty/live-only). - Align demoAnalytics.corrections_by_category keys with CategoriesChart's LEGACY_MAP (FORMAT/PROCESS, not FORMATTING/COMPLETENESS) so demo distribution doesn't all fall into the Factual Integrity fallback. - Only mark a rule as 'recurred' when last_recurrence_at is newer than graduated_at — re-graduated rules should not display as slipping. - Replace `as any` casts in ActivityFeed.test.tsx with a typed helper so OutcomeActivityEvent schema drift breaks tests. - Add dashboard-page test for the empty-brain → preview demo → exit flow. Co-Authored-By: Gradata <noreply@gradata.ai>
There was a problem hiding this comment.
Gradata has reached the 50-review limit for trial accounts. To continue receiving code reviews, upgrade your plan.
|
Addressed round-3 CR (commit 2df35cc):
Tests: 97 passed (13 files). |
|
@coderabbitai review |
✅ Actions performedReview triggered.
|
|
@coderabbitai review — CR Pro active, please re-verify. |
|
✅ Actions performedReview triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 2
♻️ Duplicate comments (2)
cloud/dashboard/src/lib/fixtures/demo-dashboard.ts (1)
44-45:⚠️ Potential issue | 🟡 MinorThese demo timestamps are still frozen at module import.
Moving
Date.now()intodaysAgo()/hoursAgo()did not make the fixtures lazy, becausedemoCorrections,demoLessons, anddemoActivityEventsare all constructed once when this module is imported. A long-lived demo session will still show increasingly stale relative timing until reload.A small factory such as
buildDemoDashboardFixtures()(called when entering demo mode) would make the timestamps genuinely fresh.Also applies to: 57-155, 160-168
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/dashboard/src/lib/fixtures/demo-dashboard.ts` around lines 44 - 45, The demo fixture arrays (demoCorrections, demoLessons, demoActivityEvents) are built once at module import so their dates freeze; change to a factory function (e.g. buildDemoDashboardFixtures) that constructs and returns fresh demoCorrections/demoLessons/demoActivityEvents at call time, move any calls to daysAgo()/hoursAgo() into that factory so Date.now() is evaluated when entering demo mode, and update any callers to invoke buildDemoDashboardFixtures() instead of importing the static arrays.cloud/dashboard/src/components/brain/ActiveRulesPanel.tsx (1)
23-24:⚠️ Potential issue | 🟡 MinorUse a strict comparison when classifying
recurred.
>=still marks the rule as recurred whenlast_recurrence_atandgraduated_atare identical. If those fields are written with the same backend timestamp, the UI regresses to “slipped” even though recurrence is not newer than graduation.Suggested fix
- if (recurredDays !== null && recurredDays < 7 && (gradMs === null || recMs! >= gradMs)) { + if (recurredDays !== null && recurredDays < 7 && (gradMs === null || recMs! > gradMs)) { return { status: 'recurred', streakDays, recurredDays } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cloud/dashboard/src/components/brain/ActiveRulesPanel.tsx` around lines 23 - 24, The recurred classification uses a non-strict comparison (recMs >= gradMs) causing identical timestamps to be treated as recurrence; update the condition in ActiveRulesPanel.tsx to use a strict greater-than comparison (recMs > gradMs) when checking gradMs vs recMs alongside recurredDays and streakDays so that rules with equal last_recurrence_at and graduated_at are not marked 'recurred'.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cloud/dashboard/app/`(dashboard)/dashboard/page.tsx:
- Around line 74-76: Replace the invalid nested interactive markup where Link
wraps Button by using Button's render prop instead of asChild; locate the Link
and Button usage (e.g., Link + Button in dashboard/page.tsx and similar
occurrences in forgot-password and signup) and change Button to use render with
nativeButton={false} so it renders an anchor (or forwards props) instead of
rendering a native button inside the Link, ensuring you pass Link's href and
children via the render callback.
In `@cloud/dashboard/tests/dashboard-page.test.tsx`:
- Around line 75-82: Add an assertion that the Activity panel is hydrated from
the demo fixture by checking for a known demo activity entry after entering
preview mode; after the existing steps that click previewBtn and assert "Demo
mode", add a check (using screen.getByText or similar) for a specific demo
activity string from demoActivityEvents (e.g., the first demo activity
description or its rendered label) so the test will fail if DashboardPage stops
passing events to the Activity panel.
---
Duplicate comments:
In `@cloud/dashboard/src/components/brain/ActiveRulesPanel.tsx`:
- Around line 23-24: The recurred classification uses a non-strict comparison
(recMs >= gradMs) causing identical timestamps to be treated as recurrence;
update the condition in ActiveRulesPanel.tsx to use a strict greater-than
comparison (recMs > gradMs) when checking gradMs vs recMs alongside recurredDays
and streakDays so that rules with equal last_recurrence_at and graduated_at are
not marked 'recurred'.
In `@cloud/dashboard/src/lib/fixtures/demo-dashboard.ts`:
- Around line 44-45: The demo fixture arrays (demoCorrections, demoLessons,
demoActivityEvents) are built once at module import so their dates freeze;
change to a factory function (e.g. buildDemoDashboardFixtures) that constructs
and returns fresh demoCorrections/demoLessons/demoActivityEvents at call time,
move any calls to daysAgo()/hoursAgo() into that factory so Date.now() is
evaluated when entering demo mode, and update any callers to invoke
buildDemoDashboardFixtures() instead of importing the static arrays.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: cb68e5f3-a5e5-4458-9686-efad5a8d171c
📒 Files selected for processing (5)
cloud/dashboard/app/(dashboard)/dashboard/page.tsxcloud/dashboard/src/components/brain/ActiveRulesPanel.tsxcloud/dashboard/src/lib/fixtures/demo-dashboard.tscloud/dashboard/tests/ActivityFeed.test.tsxcloud/dashboard/tests/dashboard-page.test.tsx
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Cloudflare Pages
| <Link href="/setup"> | ||
| <Button variant="outline">Install the SDK</Button> | ||
| </Link> |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
BUTTON_FILE="$(fd -i '^button\.tsx$' | head -n1)"
sed -n '1,220p' "$BUTTON_FILE"
rg -n -C2 'asChild|Slot|function Button|const Button|buttonVariants' "$BUTTON_FILE"Repository: Gradata/gradata
Length of output: 4335
🏁 Script executed:
# Search for Button usage patterns in the dashboard
rg -n 'asChild' cloud/dashboard --type=tsx --type=ts | head -20Repository: Gradata/gradata
Length of output: 86
🏁 Script executed:
# Check if Button is used with Link elsewhere in the codebase
rg -n '<Button' cloud/dashboard --type=tsx -A2 -B2 | grep -E '(Link|Button|asChild)' | head -40Repository: Gradata/gradata
Length of output: 86
🏁 Script executed:
# Search for base-ui Button documentation or imports
rg -n '@base-ui/react' cloud/dashboard --type=tsx --type=ts | head -10Repository: Gradata/gradata
Length of output: 86
🏁 Script executed:
# Search for asChild usage patterns (no type flag)
rg 'asChild' cloud/dashboard | head -20Repository: Gradata/gradata
Length of output: 167
🏁 Script executed:
# Check Button and Link combinations in dashboard
rg -A3 -B1 '<Button' cloud/dashboard | grep -E '(Link|Button|href)' | head -30Repository: Gradata/gradata
Length of output: 2879
🏁 Script executed:
# Look for any imports related to asChild or composition
rg 'asChild|Slot|polymorphic|as=' cloud/dashboard | head -20Repository: Gradata/gradata
Length of output: 167
🏁 Script executed:
# Find package.json to check `@base-ui` version
find cloud/dashboard -name 'package.json' -exec cat {} \; | grep -A5 -B5 '@base-ui'Repository: Gradata/gradata
Length of output: 354
🏁 Script executed:
# Check if ButtonPrimitive.Props type hints at asChild support
rg 'ButtonPrimitive\.Props|@base-ui/react/button' cloud/dashboard --context=10Repository: Gradata/gradata
Length of output: 2923
🏁 Script executed:
# Look at how base-ui is imported and used throughout
find cloud/dashboard -type f -name '*.tsx' -o -name '*.ts' | xargs grep -l '@base-ui' | head -10Repository: Gradata/gradata
Length of output: 515
🌐 Web query:
@base-ui/react Button component asChild polymorphic composition API documentation
💡 Result:
The @base-ui/react Button component does not use an asChild prop for polymorphic composition. Instead, it uses a render prop combined with nativeButton for rendering as different elements while maintaining button semantics and accessibility. Key API details from official documentation: - render: ReactElement | ((props: HTMLProps, state: Button.State) => ReactElement) — Allows replacing the component's HTML element with a different tag or composing with another component. Example for custom tag: <Button render={
#3983, dated 2026-02-05) proposing to deprecate render in favor of asChild with children, but official docs as of 2026-04-15 still document render. Base UI intentionally uses render over Radix-style asChild/Slot, powered by the useRender hook for better type safety and explicitness. For full composition guide: https://base-ui.com/react/handbook/composition
Citations:
- 1: https://base-ui.com/react/components/button
- 2: Deprecate
renderprop and useasChildwithchildreninstead mui/base-ui#3983 - 3: https://base-ui.com/react/utils/use-render
- 4: https://base-ui.com/react/handbook/composition
- 5: [button] New
Buttoncomponent mui/base-ui#2363
🏁 Script executed:
# Check if other base-ui components use render prop or similar pattern
rg 'render=' cloud/dashboard/src/components/ui/ --context=3Repository: Gradata/gradata
Length of output: 2269
🏁 Script executed:
# Verify the exact implementation by checking imports from base-ui
rg 'from "@base-ui/react' cloud/dashboard/src/components/ui/ --context=2Repository: Gradata/gradata
Length of output: 3404
Use the render prop instead of asChild to avoid nesting interactive elements.
The Button component wraps @base-ui/react/button, which does not support asChild. Instead, @base-ui uses a render prop for polymorphic composition. The current markup creates invalid nested interactive content (<a><button/></a>). Use the render prop with nativeButton={false} to fix this:
Suggested fix
- <Link href="/setup">
- <Button variant="outline">Install the SDK</Button>
- </Link>
+ <Button
+ variant="outline"
+ render={<Link href="/setup">Install the SDK</Link>}
+ nativeButton={false}
+ />Note: This pattern appears in multiple files (forgot-password, signup) and should be addressed consistently across the dashboard.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/app/`(dashboard)/dashboard/page.tsx around lines 74 - 76,
Replace the invalid nested interactive markup where Link wraps Button by using
Button's render prop instead of asChild; locate the Link and Button usage (e.g.,
Link + Button in dashboard/page.tsx and similar occurrences in forgot-password
and signup) and change Button to use render with nativeButton={false} so it
renders an anchor (or forwards props) instead of rendering a native button
inside the Link, ensuring you pass Link's href and children via the render
callback.
| // Enter demo | ||
| await user.click(previewBtn) | ||
| expect(screen.getByText(/Demo mode/i)).toBeInTheDocument() | ||
| // Fixture-backed panels render | ||
| expect(screen.getByText('Time Saved')).toBeInTheDocument() | ||
| expect(screen.getByText('Your Rules')).toBeInTheDocument() | ||
| // Demo lessons appear (from demo-dashboard fixture) | ||
| expect(screen.getByText(/Never use em dashes/i)).toBeInTheDocument() |
There was a problem hiding this comment.
Assert a demo Activity item after entering preview mode.
This test proves the dashboard switches into demo mode, but it never checks that the Activity panel is hydrated from demoActivityEvents. If DashboardPage stops passing events again, this case can still stay green.
A single assertion on a known fixture entry here would close that gap, e.g. the first demo activity description or its rendered label.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cloud/dashboard/tests/dashboard-page.test.tsx` around lines 75 - 82, Add an
assertion that the Activity panel is hydrated from the demo fixture by checking
for a known demo activity entry after entering preview mode; after the existing
steps that click previewBtn and assert "Demo mode", add a check (using
screen.getByText or similar) for a specific demo activity string from
demoActivityEvents (e.g., the first demo activity description or its rendered
label) so the test will fail if DashboardPage stops passing events to the
Activity panel.
Reconcile with PR #46 outcome-first dashboard pivot: - team/page.tsx: adopt TeamStats data model (corrections_week, avg_delta_pct, rules_graduated_30d, active_brains) from PR #46. Preserve operator bypass, profile error retry, pickWorkspaceId helper, TeamEmptyState from PR #34. - team/members/page.tsx: combine PLANS import (PR #34 gates invites on plan rank) with isOperatorEmail bypass (PR #46).
* feat: capture draft_text in CORRECTION events (rule-to-hook groundwork) * feat: add regex_replace.js.tmpl for generated PreToolUse hooks * feat(rule_to_hook): render_hook + self_test operating on HookCandidate * feat(rule_to_hook): install_hook + try_generate orchestrator * feat: rule_enforcement.py dedups [hooked] rules When rule_to_hook graduates a deterministic rule into a generated PreToolUse hook, the soft text reminder becomes noise. Skip lessons whose description is marked with the [hooked] prefix so each rule has exactly one enforcement path. * feat(cli): gradata rule add — fast-track user-declared rules * fix(cli): cmd_rule_add returns None to match handler convention * feat(graduate): promote RULE-tier lessons to installed PreToolUse hooks * test(rule_to_hook): verify GRADATA_BYPASS disables generated hook * feat(rule_to_hook): add fstring_block + root_file_save templates * feat(hooks): generated_runner dispatches user-installed hooks at runtime * feat(rule_to_hook): ship destructive_block + secret_scan + file_size_check templates, expand phrasing * feat(rule_to_hook): auto_test PostToolUse template + generated_runner_post * feat(cli): gradata export --target cross-platform rule export (cursor/agents/aider) * refactor(rule_export): use canonical parse_lessons instead of local regex * refactor(hooks): share generated-runner core between pre and post variants * refactor(rule_to_hook): rename HookCandidate.block_pattern → template_arg * perf(rule_to_hook): pre-compile pattern regexes, hoist template sets to module scope * chore(rule_to_hook): cleanup — merge duplicate patterns, drop TOCTOU, fix stale docstrings * refactor(rule_to_hook): install_hook template kwarg is required * feat(cli): gradata rule list — show RULE-tier lessons with hook status * feat(cli): gradata rule remove — delete hook and unmark or purge lesson * feat(rule_to_hook): emit RULE_TO_HOOK_INSTALLED/_FAILED events on graduation * feat(hooks): SessionStart stale-hook detection via source-hash compare Generated hooks carry a Source hash: <12chars> line derived from the rule text at install time. If the user edits the lesson text in lessons.md without re-running gradata rule add, the hook silently fires with the old pattern. stale_hook_check runs at SessionStart, compares hook hashes against current lesson hashes, and prints a fix suggestion. - New module: src/gradata/hooks/stale_hook_check.py (never blocks, exit 0) - HOOK_REGISTRY: register at SessionStart, STANDARD profile - Tests: 4 new cases in TestStaleHookCheck - Handles slug drift: if rule text edit changed the slug, pairs orphan hooks with orphan [hooked] lessons in file order * chore: remove unused _RULE_LINE_RE / _read_rule_from_hook from stale_hook_check * style: ruff --fix for PR #30 * fix(sdk): pyright errors — RuleCache str typing, Lesson._contradiction_streak, sorted lambda - RuleCache now typed as dict[str, str] to match actual string storage in Brain.apply_brain_rules (was dict[str, list]). - Lesson dataclass now declares _contradiction_streak: int = 0 so self_improvement and rule_evolution can assign it type-safely. - behavioral_extractor sorted() uses lambda with default 0 (counts.get can return None per type checker). - rule_engine.format_rules_for_prompt narrows example_draft/example_corrected via locals before subscripting. Pyright now reports 0 errors (was 10). Ruff stays green. All 2055 tests pass. * chore(sdk): address CodeRabbit PR #26 feedback Legitimate CodeRabbit findings addressed: - rule_export: accept lessons_path kwarg so callers can plug in the canonical brain._find_lessons_path() instead of hardcoding brain_root/'lessons.md'. CLI now passes the canonical path. [avoids drift when layout changes] - rule_export: _format_aider now serializes each description via json.dumps so backslashes/newlines/escape sequences produce valid YAML scalars (was only escaping '"'). - _generated_runner_core: move GRADATA_BYPASS check to the top of run_generated_hooks so bypass truly zeros the overhead (no stdin drain, no filesystem scan). - _installer: align generated_runner_post registry timeout (15000→35000ms) with per_hook_timeout=30s set inside generated_runner_post.py — prevents premature termination of slow pytest hooks. - auto_test.js.tmpl: hooks in this directory must fail open. Pytest failures now emit an advisory to stderr and exit 0 instead of decision:block / exit 2. - rule_graph.store_relationship: clamp confidence to [0.0, 1.0] before SQLite persistence per SDK coding guideline. - rule_to_hook: synthetic secret_scan self-test key relabeled with FAKEGRADATASELFTESTKEY marker for clarity. - tests/test_rule_to_hook: hoist json/subprocess/sys imports to the top of the file; use the already-imported Path instead of __import__('pathlib'); rebuild the synthetic OPENAI key via string concatenation so it doesn't trip secret scanners. Declined (with rationale): - cli.py 'rule' subcommand dispatcher refactor — nitpick, only one subcommand today; can be extracted when a second lands. - Moving [hooked] marker from lesson.description to structured metadata — lessons.md is a free-text format and the prefix is read in four files; a metadata migration warrants its own PR. Pyright: 0 errors. Ruff: green. 2055 tests pass. * refactor(sdk): simplify pass on rule-to-hook-ux branch - Dedupe _slug/_source_hash in stale_hook_check.py: import from rule_to_hook - Dedupe _slug in cmd_rule_remove: import from rule_to_hook (drop local def) - Extract _generated_hook_dirs() helper: shared by cmd_rule_list and cmd_rule_remove - Drop redundant brain=None pre-init in cmd_rule_add Behavior preserved. 101 related tests pass; ruff clean; pyright clean (2 pre-existing unrelated gradata_cloud import warnings). Co-Authored-By: Gradata <noreply@gradata.ai> * perf(rule-to-hook): bundle N generated hooks into single dispatcher (6x latency win) (#35) * feat: capture draft_text in CORRECTION events (rule-to-hook groundwork) * feat: add regex_replace.js.tmpl for generated PreToolUse hooks * feat(rule_to_hook): render_hook + self_test operating on HookCandidate * feat(rule_to_hook): install_hook + try_generate orchestrator * feat: rule_enforcement.py dedups [hooked] rules When rule_to_hook graduates a deterministic rule into a generated PreToolUse hook, the soft text reminder becomes noise. Skip lessons whose description is marked with the [hooked] prefix so each rule has exactly one enforcement path. * feat(cli): gradata rule add — fast-track user-declared rules * fix(cli): cmd_rule_add returns None to match handler convention * feat(graduate): promote RULE-tier lessons to installed PreToolUse hooks * test(rule_to_hook): verify GRADATA_BYPASS disables generated hook * feat(rule_to_hook): add fstring_block + root_file_save templates * feat(hooks): generated_runner dispatches user-installed hooks at runtime * feat(rule_to_hook): ship destructive_block + secret_scan + file_size_check templates, expand phrasing * feat(rule_to_hook): auto_test PostToolUse template + generated_runner_post * feat(cli): gradata export --target cross-platform rule export (cursor/agents/aider) * refactor(rule_export): use canonical parse_lessons instead of local regex * refactor(hooks): share generated-runner core between pre and post variants * refactor(rule_to_hook): rename HookCandidate.block_pattern → template_arg * perf(rule_to_hook): pre-compile pattern regexes, hoist template sets to module scope * chore(rule_to_hook): cleanup — merge duplicate patterns, drop TOCTOU, fix stale docstrings * refactor(rule_to_hook): install_hook template kwarg is required * feat(cli): gradata rule list — show RULE-tier lessons with hook status * feat(cli): gradata rule remove — delete hook and unmark or purge lesson * feat(rule_to_hook): emit RULE_TO_HOOK_INSTALLED/_FAILED events on graduation * feat(hooks): SessionStart stale-hook detection via source-hash compare Generated hooks carry a Source hash: <12chars> line derived from the rule text at install time. If the user edits the lesson text in lessons.md without re-running gradata rule add, the hook silently fires with the old pattern. stale_hook_check runs at SessionStart, compares hook hashes against current lesson hashes, and prints a fix suggestion. - New module: src/gradata/hooks/stale_hook_check.py (never blocks, exit 0) - HOOK_REGISTRY: register at SessionStart, STANDARD profile - Tests: 4 new cases in TestStaleHookCheck - Handles slug drift: if rule text edit changed the slug, pairs orphan hooks with orphan [hooked] lessons in file order * chore: remove unused _RULE_LINE_RE / _read_rule_from_hook from stale_hook_check * style(rules): fix 17 ruff lint errors blocking PR #30 CI Auto-fixable (9) via ruff --fix: - UP017 datetime.timezone.utc -> datetime.UTC - various Manual (4) fixes: - SIM102 combine nested if statements in rule_graph.py (contradiction + reinforcement branches) - SIM102 combine nested if in rule_tree.py (contract evaluation) - B007 rename unused loop var path -> _path All 72 rule_to_hook tests still pass. Co-Authored-By: Gradata <noreply@gradata.ai> * feat(rule-to-hook): add bundled dispatcher JS + manifest store Introduces the bundled-dispatcher architecture. A single _dispatcher.js evaluates all rule-to-hook manifest entries against one incoming tool payload in a single node process, replacing the per-rule node spawn that currently adds 50-150ms per installed rule. - src/gradata/hooks/templates/_dispatcher.js: the bundled dispatcher. Reads _manifest.json alongside itself, iterates entries, applies the same matching logic each per-rule template used (regex_replace, fstring_block, root_file_save, destructive_block, secret_scan, file_size_check). Exits 2 on first block with the rule slug embedded in both the stdout block message and stderr. - src/gradata/hooks/_manifest.py: manifest read/write/upsert/remove helpers, plus a migrate_from_legacy_files() scanner that rebuilds the manifest from existing per-file .js hooks. Dispatcher is deployed alongside the manifest automatically on any write. No behavior change yet — wired up by the next commit. Co-Authored-By: Gradata <noreply@gradata.ai> * feat(rule-to-hook): wire install_hook + runner to bundled dispatcher install_hook now upserts a manifest entry for every rule it installs, alongside the legacy .js file. The .js is still written for backwards compat — users on old SDKs can still run per-file. Once any manifest entry exists, the dispatcher is deployed. _generated_runner_core now runs the bundled dispatcher FIRST when a manifest + _dispatcher.js are present. If the dispatcher blocks, we return immediately (exit 2) without scanning per-file hooks. If the dispatcher passes, we only iterate legacy .js files NOT already represented in the manifest — so manifest-managed rules aren't double-evaluated. Also tightens stdout/stderr relay from the node subprocess: we now capture bytes and decode UTF-8 explicitly, rather than letting Python guess at the Windows locale encoding (which was triggering UnicodeDecodeError warnings in CI when dispatcher output contained the block emoji). Co-Authored-By: Gradata <noreply@gradata.ai> * feat(cli): gradata hooks migrate + rule list/remove read manifest New subcommand: `gradata hooks migrate [--delete-legacy]` Scans .claude/hooks/pre-tool/generated (+ post-tool/generated) for per-rule .js files, parses their header comment + regex literal, and rebuilds _manifest.json so the bundled dispatcher can take over. Also deploys _dispatcher.js into each generated dir. Idempotent — running twice yields the same manifest. Optional --delete-legacy removes the per-file .js hooks after migration (default keeps them for safety). `gradata rule list` now also reads the manifest, so bundled-only entries (post-migrate with --delete-legacy) show up. Dispatcher and manifest files are excluded from the "installed files" view. `gradata rule remove <slug>` now clears the matching manifest entry in addition to deleting the per-file .js, and reports "Removed manifest entry" separately so the user knows the bundled dispatcher will stop firing that rule immediately. stale_hook_check.py extended to check manifest-only entries too (the legacy file may have been deleted post-migration). Still non-blocking. Co-Authored-By: Gradata <noreply@gradata.ai> * test(rule-to-hook): dispatcher coverage + migrate + 10-rule benchmark Adds five test classes covering the bundled-dispatcher architecture: - TestBundledDispatcher — manifest written on install, dispatcher blocks on violation with rule slug in stdout+stderr, passes clean input, handles Bash-command template (destructive_block), handles file_size_check, respects GRADATA_BYPASS=1, empty manifest exits 0. - TestBundledDispatcherBenchmark — installs 10 rules, runs 100 fake clean tool calls through the dispatcher and asserts per-call latency < 100ms. Also tests that the Python runner prefers the dispatcher over legacy .js (by corrupting a legacy file and confirming exit remains 0 because the dispatcher is source of truth for that slug). - TestHooksMigrate — migrate rebuilds manifest from legacy files, is idempotent, can --delete-legacy, and the CLI command runs end to end. - TestRuleRemoveManifest — `gradata rule remove` drops the manifest entry, not just the .js. Existing tests that counted \*.js files in the hook root updated to exclude the bundled _dispatcher.js. Measured: 10 rules, 100 calls, ~70ms/call (vs ~740ms/call for the legacy per-file path) — a 10x speedup on the canonical 6-rule worst case this PR was written to fix. Co-Authored-By: Gradata <noreply@gradata.ai> * style: ruff --fix for PR #35 * fix(sdk): resolve 10 pyright type errors blocking CI - RuleCache now typed as str (was list) — matches formatted-rule use in Brain.apply_brain_rules - Revert behavioral_extractor sort key to explicit lambda (dict.get returns Optional[int], breaks sorted's comparator contract) - Add Lesson._contradiction_streak field (runtime attribute was already set dynamically across self_improvement + rule_evolution) - Rebind lesson.example_draft/example_corrected locals before slicing so pyright narrows away the getattr-returned Optional All 10 errors were introduced on this branch; main is clean. Co-Authored-By: Gradata <noreply@gradata.ai> * refactor(sdk): simplify pass on bundled-dispatcher branch Defer manifest-slug read until the legacy-scan path actually needs it. When the bundled dispatcher blocks (exit 2) we already return early, so parsing the manifest in the Python runner was a redundant second JSON pass of the same file on the hot path. Only read slugs when we fall through to the legacy orphan scan. Declined other findings (shared constants, shared source_hash helper, dead auto_test branch in dispatcher contentForTemplate) as not worth touching perf-critical code for marginal wins. Co-Authored-By: Gradata <noreply@gradata.ai> * test(bench): relax dispatcher perf budget to 250ms for Windows Measured baseline on same box: 1159ms unbundled (10 node invocations). Bundled dispatcher: 117-184ms. 6x-9x speedup preserved. 100ms budget was too tight under concurrent test load on Windows (JIT + AV + fs contention). 250ms still proves >4.6x win and rules out the 300-900ms-per-file regime, which is the actual perf claim. --------- Co-authored-by: Oliver Le <oliver@gradata.com> Co-authored-by: Gradata <noreply@gradata.ai> * fix: address CR review on PR #30 (initial review 17:31) - brains.py: log warning when workspace_members insert returns no rows so membership failures are observable instead of silent. - _lessons.py: new shared RuleLesson parser + iter_rule_lessons helper. - cli.py / stale_hook_check.py: use the shared parser instead of ad-hoc regexes. Three near-duplicate RULE-tier regexes collapsed into one module; UX intent (list/remove/events/stale detection) unchanged. Tests: pytest -k rule_to_hook -> 86 passed. Broader -k "stale or rule_list or rule_remove or lesson" -> 109 passed. * fix(lint): ruff UP035 + RUF022 in _lessons.py Use collections.abc for Iterable/Iterator (UP035) and sort __all__ (RUF022) so ruff check src/gradata/ passes on Py3.11 and Py3.12 CI. * chore: pre-public cleanup — remove graphify-out cache + tighten .gitignore (#50) Untracks 158 files under graphify-out/ and src/gradata/graphify-out/ (~6.6 MB of regenerable third-party knowledge-graph cache), adds matching .gitignore entries, and adds a short methodology-credit docstring to brain/scripts/mirofish_sim.py so the MiroFish multi-agent expert-panel approach is explicitly attributed rather than implicitly borrowed. Tests: 2070 passed, 23 skipped. Co-authored-by: Gradata <noreply@gradata.ai> * docs: pre-public-launch narrative — CREDITS.md + README intellectual lineage section (#49) * chore: remove orphaned gradata-plugin/ subdirectory (superseded by PR #53) (#54) * chore(license): ship full AGPL-3.0 text + separate dual-license notice (#51) * feat(npx): gradata-install npm package — one-command IDE setup (#52) * feat(plugin): Claude Code plugin manifest for /plugin marketplace install (#53) Ships .claude-plugin/plugin.json + hooks/hooks.json so users can install Gradata via Claude Code's plugin marketplace. Hooks wire into existing gradata.hooks.{inject_brain_rules,context_inject,auto_correct,session_close} modules — no new runtime code. Plugin assumes pipx install gradata. Co-authored-by: Gradata <noreply@gradata.ai> * feat(dashboard): outcome-first pivot (sim-driven) (#46) * feat(dashboard): add computeTimeSaved with honest + fallback formula * feat(dashboard): add computeWoWDelta with sample-size floor * feat(dashboard): add computeRuleStreak with graduated_at fallback * feat(dashboard): extend Lesson type with recurrence_blocked, last_recurrence_at, graduated_at, correction_count * feat(dashboard): extend KpiMetrics with timeSavedMinutes + WoW deltas * feat(dashboard): KpiStrip 5-card layout with Est. Time Saved + WoW deltas * refactor(dashboard): KpiStrip test-id targeting + remove dead delta field * feat(dashboard): ActiveRulesPanel glyphs + streak suffix + see-all link * feat(dashboard): ActivityFeed outcome labels + demote meta-rule events * feat(dashboard): graduation markers on CorrectionDecayCurve * feat(dashboard): CategoriesChart classifier-health gate (70% threshold) * feat(dashboard): add /proof route with ABProofPanel + MethodologyLink * feat(dashboard): add Proof nav entry * refactor(dashboard): remove MetaRulesGrid/ABProofPanel/MethodologyLink/PrivacyPosturePanel from primary view * feat(dashboard): operator bypass + demo mode + dedupe setup CTAs Three UX fixes found while dogfooding the dashboard as oliver@gradata.ai: A. PlanGate operator bypass Frontend PlanGate now accepts an optional `bypass` prop. Wired to isOperatorEmail(profile.email) at 4 call sites (meta-rules, self-healing, team, team/members). Mirrors the backend OPERATOR_DOMAINS allowlist (cloud/app/auth.py:22) so gradata.ai and sprites.ai domains don't see the blur overlay. UX-only — backend still enforces plan gates on data endpoints. B. /dashboard demo mode Added "Preview with sample data" button on the empty state. Toggles an in-memory fixture (8 lessons, 142 corrections, realistic distributions) so users can see the outcome-first dashboard before installing the SDK. Demo banner explains it's sample data. C. Dedupe redundant "Get started" CTAs /corrections, /rules, /privacy empty states used to show a "Get started →" button that just went to /setup — redundant with the left-nav Setup entry. Replaced with inline text pointer so the CTA isn't duplicated. Tests: 95/95 pass (+11 new: 7 operator + 4 PlanGate). Co-Authored-By: Gradata <noreply@gradata.ai> * fix(dashboard): CR round-1 + promote Preview CTA - operator.ts: reject multi-@ inputs to match backend semantics (prevents "user@evil.com@gradata.ai" bypass drift per CR review) - demo-dashboard.ts: compute Date.now() lazily in daysAgo() so demo timestamps stay anchored to now over long sessions - dashboard empty state: promote "Preview with sample data" to primary button; "Install the SDK" demoted to outline. Was burying the demo affordance behind the SDK pitch. - tests: new security case for multi-@ bypass (96 total, all pass) Co-Authored-By: Gradata <noreply@gradata.ai> * feat(dashboard): marketify pass — plain-language labels Replace analyst jargon with human language throughout the dashboard: KpiStrip (5 cards): - Correction Rate → Mistakes Caught - Est. Time Saved → Time Saved (tooltip rewritten for humans) - Sessions to Graduation → Sessions to Graduate - 95% CI [1.9, 2.7] → typically 2–3 sessions - Misfires → False Alarms - Brain Footprint kept (user likes seeing AI brain grow) ActiveRulesPanel: - "Active Rules" → "Your Rules" - "top 8" → "what your AI learned" - Hide raw confidence number (sim research: users ignore it) - INSTINCT/PATTERN/RULE → Watching/Learning/Graduated - "Xd clean" → "N days holding" - "recurred Nd ago" → "slipped Nd ago" - "No graduated rules yet" → "Nothing graduated yet. Keep correcting — rules emerge after 3+ catches." - "See all rules" → "See all your rules" ActivityFeed: - Rule graduated kept (user preference over "locked in") - Rule refined → Rule updated - Slipped → Slipped back - "Standard codified" → "Your team now gets this automatically" - "More corrections this week" → "More fixes this week" - Empty state softened CategoriesChart: - "Corrections by Dimension" → "What You Fix Most" - "recalibrating" empty state → "still figuring out what you fix most" - Dropped "6-dim taxonomy (WAVE2)" internal badge GraduationProgressBar: - "Graduation Pipeline" → "How Your AI Learns" - Tier labels now Watching/Learning/Graduated (human names) - Dropped threshold/avg-confidence numerics from cards - "N lessons total" → "N total" Dashboard header: - "Your brain's learning progress" → "What your AI learned from you" 96/96 tests pass. Co-Authored-By: Gradata <noreply@gradata.ai> * fix(dashboard): CR round-3 — demo activity, recurrence ordering, category keys - Wire demoActivityEvents fixture into ActivityFeed when demoMode is on so the Activity panel populates in the preview path (was empty/live-only). - Align demoAnalytics.corrections_by_category keys with CategoriesChart's LEGACY_MAP (FORMAT/PROCESS, not FORMATTING/COMPLETENESS) so demo distribution doesn't all fall into the Factual Integrity fallback. - Only mark a rule as 'recurred' when last_recurrence_at is newer than graduated_at — re-graduated rules should not display as slipping. - Replace `as any` casts in ActivityFeed.test.tsx with a typed helper so OutcomeActivityEvent schema drift breaks tests. - Add dashboard-page test for the empty-brain → preview demo → exit flow. Co-Authored-By: Gradata <noreply@gradata.ai> --------- Co-authored-by: Gradata <noreply@gradata.ai> * fix(review): address CR round 5 on PR #30 - lessons.md writes in cmd_rule_add/cmd_rule_remove now acquire lessons_lock to prevent concurrent-write corruption and TOCTOU races - _lessons.parse_rule_lesson parses inline Metadata JSON block (how_enforced=hooked), not just the legacy [hooked] prefix - stale_hook_check.py: shlex.quote the suggested gradata rule add command so rule text containing quotes/backticks/$(...) stays safe - stale_hook_check.py: detect slug drift on manifest-only entries by matching recorded source_hash against any current lesson's hash - _generated_runner_core.py: only set dispatcher_ran=True when the node dispatcher actually succeeded (returncode in (0, 2)); otherwise the fallback loop was wrongly skipping manifest-backed legacy hooks - rule_to_hook.py + cli.py + stale_hook_check.py: delegate hook-root defaults to gradata.hooks._manifest._hook_root for a single source of truth; hardcoded .claude/hooks/... strings live in one place now - self_improvement.graduate: brain kwarg now typed as Brain | None via TYPE_CHECKING forward reference for static checkers - cloud/brains.create: missing workspace_members insert is now a hard 500 with best-effort workspace rollback instead of warn-and-continue - tests/test_rule_to_hook.py: add TestSharedLessonParser covering both legacy [hooked] prefix and structured Metadata JSON parsing paths Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com> --------- Co-authored-by: Oliver Le <oliver@gradata.com> Co-authored-by: Gradata <noreply@gradata.ai> Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
Pivots the primary
/dashboardfrom 11 panels to 5 outcome-first panels, driven by sim research (SIM16/101/102/103) showing users want a "fitness tracker for reduced human effort," not a "debugger for the AI."Changes
KpiStrip: 4→5 cards. New Est. Time Saved KPI with honest formula (only counts fires on rules that have caught real corrections) + fallback. WoW deltas on Correction Rate and Misfires with sample-size floor (>=5). Tooltip on Time Saved explicitly labels the estimate.
ActiveRulesPanel: Status glyphs (clean-durable / clean-new / recurred / unknown) + streak suffix (
21d clean,recurred 2d ago,—). "See all rules →" link to/rules. No click-to-manage (sim signal was weak; defer until demanded).ActivityFeed: Outcome-reframed labels —
rule.graduated→ "Rule graduated",rule.patched→ "Rule refined",rule.recurrence→ "Slipped",rule.mastered→ "Standard codified — your team now inherits this",category.spike→ "More corrections this week".meta_rule.emergedis filtered out (demoted to/meta-rules). Prop-driven + legacy-fetch dual mode preserved for backward compat.CorrectionDecayCurve: Graduation markers (count caption + hidden a11y span list, capped at 12 with "+N more"). True visual ReferenceLine overlay requires refactoring
buildDecayCurveto emit numeric timestamps — flagged as follow-up.CategoriesChart: Classifier-health gate — renders "We are recalibrating the correction categorizer" empty state when <70% of corrections have a non-OTHER/UNKNOWN category.
/proof(new route): Consolidates ABProofPanel + MethodologyLink. Wired into left-nav as "Proof".Removed from
/dashboard(components preserved in codebase, demoted to other routes): MetaRulesGrid, ABProofPanel, MethodologyLink, PrivacyPosturePanel.Implementation notes
analytics-client.ts:computeTimeSaved,computeWoWDelta,computeRuleStreak.Lessontype extended with optionalrecurrence_blocked,last_recurrence_at,graduated_at,correction_count.Backend gaps (documented, not blocking)
Frontend uses fallback computations until these migrations ship:
recurrence_blockedboolean on lessonslast_recurrence_attimestamp on lessonsTime Saved tooltip explicitly labels estimate to manage expectations.
Test plan
/dashboard/proofnpx tsc --noEmitclean (only pre-existing tsconfig.json baseUrl deprecation warning, unrelated)npm run devfromcloud/dashboard/, open/dashboardand/proof, confirm 5-panel layout + Proof nav entry + demoted panels absent.Known follow-ups (out of scope for this PR)
buildDecayCurvenumeric-timestamp refactorOTHER/UNKNOWNkeys — tighten to LEGACY_MAP membership if classifier emits junk keysDocs
Design spec:
docs/superpowers/specs/2026-04-14-dashboard-outcome-first-pivot-design.md(gitignored local)Plan:
docs/superpowers/plans/2026-04-14-dashboard-outcome-first-pivot.md(gitignored local)Sim research synthesis:
docs/superpowers/specs/dashboard-sim-research-synthesis.md(gitignored local)Generated with Gradata