refactor: consolidate pipeline system with single source of truth #247
Conversation
- Create steps.ts as single source for step definitions and display names
- Remove duplicated PIPELINE_CONFIGS and STEP_DISPLAY_NAMES (3 copies → 1)
- Consolidate 4 background task wrappers into 1 generic runBackgroundTask
- Remove duplicate runPipelineInBackground from metric.ts (100+ lines)
- Delete configs.ts and steps/delete-old-data.ts (no longer needed)
- Update hooks to use shared detectPipelineType and getPipelineStepCount
- Simplify PipelineRunner to use composable operations
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Centralizes pipeline execution and progress: removes per-metric polling hooks/UI, adds a unified runBackgroundTask runner and batchProgress endpoint, introduces operation-driven pipeline steps, and updates dashboard clients/cards to consume the pipelineStatus map with team-scoped invalidation and simplified regenerate/refresh flows.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Page as DashboardPageClient
participant Client as DashboardClient
participant API as api.pipeline
participant Runner as runBackgroundTask
participant DB as Database/BackgroundState
participant Cache as QueryCache
participant Card as DashboardMetricCard
Page->>API: getDashboardCharts(teamId)
API-->>Page: dashboardCharts + refreshStatus
Page->>Client: render(dashboardCharts, isFetching)
Client->>API: getBatchProgress({ metricIds }) %% periodic batch poll %%
loop while any metric isProcessing
API-->>Client: { metricId: PipelineProgressData } %% batch progress response %%
Client->>Card: pipelineStatus[metricId], teamId
Note over Client,Card: Client detects justCompleted / justFailed via prevProcessingRef
alt completed
Client->>Cache: invalidateCacheByTags(['dashboard', teamId])
Client->>Page: show toast(success)
else failed
Client->>Page: show toast(error)
end
end
%% Triggering a regenerate/refresh from a card %%
Card->>API: mutate.regenerate({ metricId, type, teamId })
API->>Runner: runBackgroundTask(config) %% fire-and-forget %%
Runner->>DB: persist progress & logs
Runner->>Cache: invalidateCacheByTags(['dashboard', teamId]) on completion/error
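To make the diagram concrete, here is a minimal TypeScript sketch of the fire-and-forget dispatch it depicts. The `TaskConfig` shape, the stubbed `invalidateCacheByTags`, and the `regenerate` wrapper are illustrative assumptions, not the repository's actual implementations.

```typescript
// Minimal sketch of the fire-and-forget flow shown in the diagram above.
type TaskConfig = { metricId: string; teamId?: string };

// Stand-in for the repo's cache invalidation helper.
async function invalidateCacheByTags(tags: string[]): Promise<void> {
  console.log("invalidating cache tags:", tags.join(", "));
}

async function runBackgroundTask(config: TaskConfig): Promise<void> {
  try {
    // ...execute the pipeline operations here, persisting progress and logs...
    await invalidateCacheByTags(["dashboard", config.teamId ?? "org"]);
  } catch (error) {
    // Nothing awaits this promise, so record the failure instead of rethrowing.
    console.error("[Pipeline] background task failed:", error);
  }
}

// The mutation dispatches the task and returns immediately.
function regenerate(config: TaskConfig): { started: true } {
  void runBackgroundTask(config); // fire-and-forget
  return { started: true };
}

console.log(regenerate({ metricId: "metric_123", teamId: "team_1" }));
```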
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)
📜 Recent review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/server/api/routers/pipeline.ts (1)
263-272: Potential null dereference if `integrationId` is null.

Line 264 uses `metric.integrationId!`, which assumes the metric has an integration. If a metric has `templateId` but no `integrationId` (edge case), this would throw. The subsequent check for `!integration` (line 267) wouldn't help, since the query would have used `undefined` as the ID. Consider adding an explicit check:

```diff
+ if (!metric.integrationId) {
+   throw new TRPCError({
+     code: "BAD_REQUEST",
+     message: "Metric has no associated integration",
+   });
+ }
+
  const integration = await ctx.db.integration.findUnique({
-   where: { id: metric.integrationId! },
+   where: { id: metric.integrationId },
  });
```
🧹 Nitpick comments (5)
src/server/api/routers/metric.ts (1)
172-186: Fire-and-forget pipeline with proper cache invalidation.

The refactor to use `refreshMetricAndCharts` with `forceRegenerate: true` is clean and aligns with the PR's goal of consolidating to a single pipeline entry point. The cache invalidation in the `.then()` callback only fires on success, which is correct.

One consideration: if `refreshMetricAndCharts` throws before returning a result (e.g., during initial setup), the error is silently swallowed by the `void` prefix. The function internally handles failures via `runner.fail()`, but adding a `.catch()` for logging unexpected errors could help with debugging production issues.

🔎 Optional: Add catch handler for unexpected errors

```diff
  void refreshMetricAndCharts({
    metricId: dashboardChart.metricId,
    forceRegenerate: true,
  }).then(async (result) => {
    if (result.success) {
      const cacheTags = [`dashboard_org_${ctx.workspace.organizationId}`];
      if (input.teamId) {
        cacheTags.push(`dashboard_team_${input.teamId}`);
      }
      await invalidateCacheByTags(ctx.db, cacheTags);
    }
+ }).catch((error) => {
+   console.error("[Metric Create] Unexpected pipeline error:", error);
  });
```

src/hooks/use-wait-for-pipeline.ts (1)
54-59: Pipeline type detection duplicated across hooks.

This logic is nearly identical to `use-pipeline-progress.ts` (lines 59-64). While both hooks serve different purposes, the step aggregation and type detection could be extracted to a shared helper function.

🔎 Optional: Extract shared helper

```ts
// In @/lib/pipeline or a hooks utility file
export function getPipelineTypeFromProgress(
  completedSteps: Array<{ step: string }>,
  currentStep: string | null
): { pipelineType: PipelineType; totalSteps: number } {
  const allSteps = [...completedSteps.map((s) => s.step), currentStep].filter(
    Boolean
  ) as string[];
  const pipelineType = detectPipelineType(allSteps);
  return { pipelineType, totalSteps: getPipelineStepCount(pipelineType) };
}
```

src/server/api/services/transformation/data-pipeline.ts (2)
133-146: Consider documenting the timestamp offset pattern.

The snapshot mode adds `index` milliseconds to the base timestamp (line 141) to satisfy the unique constraint on `(metricId, timestamp)`. This is a valid workaround, but the intent may not be obvious to future maintainers.

Consider adding a brief inline comment explaining why timestamps are offset, or alternatively, consider whether the database schema should use a different unique constraint for snapshot metrics.
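A small sketch of the offset pattern and the kind of inline comment being requested. The `SnapshotPoint` shape and `buildSnapshotPoints` helper are hypothetical illustrations, not the actual code in `data-pipeline.ts`.

```typescript
// Hypothetical sketch: snapshot points offset the timestamp by the row index so
// that every row satisfies the unique (metricId, timestamp) constraint, even
// though all points logically belong to the same snapshot moment.
type SnapshotPoint = { metricId: string; timestamp: Date; value: number };

function buildSnapshotPoints(
  metricId: string,
  baseTimestamp: Date,
  values: number[],
): SnapshotPoint[] {
  return values.map((value, index) => ({
    metricId,
    // Offset by `index` ms purely to keep (metricId, timestamp) unique.
    timestamp: new Date(baseTimestamp.getTime() + index),
    value,
  }));
}

console.log(buildSnapshotPoints("m1", new Date("2025-01-01T00:00:00Z"), [1, 2, 3]));
```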
269-270: Fragile `isNew` detection heuristic.

The 5-second window (`Date.now() - 5000`) used to determine whether the transformer was newly created is unreliable; slow DB operations or clock drift could produce incorrect results. Since `upsert` doesn't directly indicate whether it created or updated, consider comparing `createdAt` to `updatedAt` instead:

```diff
- const isNew = transformer.createdAt.getTime() > Date.now() - 5000;
+ const isNew = transformer.createdAt.getTime() === transformer.updatedAt.getTime();
```

This is more robust, as a newly created record will have identical `createdAt` and `updatedAt` values.

src/server/api/routers/pipeline.ts (1)
90-96: Non-null assertions could cause silent runtime failures.

The non-null assertions (`!`) on config fields (lines 91-96) will throw at runtime if the calling code doesn't provide these values. Since this runs in a fire-and-forget background task, failures would only appear in logs and could leave the metric in an inconsistent state.

Consider defensive validation at the start of each case:

```ts
case "ingestion-only": {
  if (!config.templateId || !config.integrationId || !config.connectionId) {
    throw new Error("Missing required fields for ingestion-only task");
  }
  // ... rest of logic
}
```

Alternatively, use stricter TypeScript discriminated unions where each task type has its own interface with required fields.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/hooks/use-pipeline-progress.ts
src/hooks/use-wait-for-pipeline.ts
src/lib/pipeline/configs.ts
src/lib/pipeline/index.ts
src/lib/pipeline/runner.ts
src/lib/pipeline/steps.ts
src/lib/pipeline/steps/delete-old-data.ts
src/lib/pipeline/types.ts
src/server/api/routers/metric.ts
src/server/api/routers/pipeline.ts
src/server/api/services/transformation/data-pipeline.ts
src/server/api/services/transformation/index.ts
💤 Files with no reviewable changes (3)
- src/server/api/services/transformation/index.ts
- src/lib/pipeline/configs.ts
- src/lib/pipeline/steps/delete-old-data.ts
🧰 Additional context used
📓 Path-based instructions (7)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import ordering
Use TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
src/lib/pipeline/steps.ts
src/lib/pipeline/index.ts
src/server/api/routers/metric.ts
src/hooks/use-wait-for-pipeline.ts
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/hooks/use-pipeline-progress.ts
src/lib/pipeline/types.ts
src/server/api/routers/pipeline.ts
src/lib/pipeline/runner.ts
src/server/api/services/transformation/data-pipeline.ts
src/server/api/routers/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/server/api/routers/**/*.ts: Always verify resource access using authorization helpers (getMetricAndVerifyAccess, getRoleAndVerifyAccess, getTeamAndVerifyAccess) to ensure resources belong to the user's organization in tRPC procedures
Use protectedProcedure or workspaceProcedure for new tRPC procedures instead of public procedures
Invalidate cache tags after mutations using invalidateCacheByTags(ctx.db, [tagName]) to maintain consistency
src/server/api/routers/**/*.ts: In tRPC procedures, never assume authentication; always use helpers like getTeamAndVerifyAccess to verify access before operations
Be careful when modifying team save logic related to the EditSession model, which handles locking for concurrent team editing
Files:
src/server/api/routers/metric.ts
src/server/api/routers/pipeline.ts
src/app/dashboard/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/dashboard/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/**/*/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Client Components must use
import { api } from '@/trpc/react' for standard HTTP/Hooks wrapper
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Use Tailwind CSS 4 for styling with shadcn/ui and Radix UI primitive components
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
🧠 Learnings (21)
📓 Common learnings
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
src/lib/pipeline/steps.ts
src/lib/pipeline/index.ts
src/server/api/routers/metric.ts
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/lib/pipeline/types.ts
src/server/api/routers/pipeline.ts
src/server/api/services/transformation/data-pipeline.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
src/lib/pipeline/steps.ts
src/lib/pipeline/index.ts
src/server/api/routers/metric.ts
src/hooks/use-wait-for-pipeline.ts
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/hooks/use-pipeline-progress.ts
src/lib/pipeline/types.ts
src/server/api/routers/pipeline.ts
src/server/api/services/transformation/data-pipeline.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
src/lib/pipeline/steps.ts
src/server/api/routers/metric.ts
src/hooks/use-wait-for-pipeline.ts
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/hooks/use-pipeline-progress.ts
src/lib/pipeline/types.ts
src/server/api/routers/pipeline.ts
src/server/api/services/transformation/data-pipeline.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.tsx : Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Applied to files:
src/server/api/routers/metric.ts
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to prisma/schema.prisma : When modifying the Prisma schema, especially for Metric Transformers, ensure AI-generated code generation and execution logic is updated accordingly
Applied to files:
src/server/api/routers/metric.ts
src/server/api/services/transformation/data-pipeline.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/server/api/routers/**/*.ts : Invalidate cache tags after mutations using invalidateCacheByTags(ctx.db, [tagName]) to maintain consistency
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/**/*.ts : Server Components must use `import { api } from '@/trpc/server'` for direct DB calls without HTTP overhead
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to **/*.{ts,tsx} : Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/server/api/routers/**/*.ts : Always verify resource access using authorization helpers (getMetricAndVerifyAccess, getRoleAndVerifyAccess, getTeamAndVerifyAccess) to ensure resources belong to the user's organization in tRPC procedures
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Dual API pattern separates Server Components (using `@/trpc/server` with direct DB calls) from Client Components (using `@/trpc/react` with HTTP), eliminating HTTP overhead for server-side operations
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/server/api/routers/**/*.ts : Use protectedProcedure or workspaceProcedure for new tRPC procedures instead of public procedures
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*/*.tsx : Client Components must use `import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Applied to files:
src/server/api/routers/metric.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Consolidate role node duplication: extract shared RoleNodeTemplate from role-node.tsx and public-role-node.tsx with isEditable prop instead of maintaining separate components
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.{ts,tsx} : For new canvas node types: create component in teams/[teamId]/_components/, add to TeamNode union in types/canvas.ts, register in nodeTypes in team-canvas.tsx, and update serialization in canvas-serialization.ts
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Avoid modifying `enrichNodesWithRoleData` flow without understanding the complete canvas serialization logic for saving/loading React Flow nodes to the database
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : React Flow nodes must store minimal data (e.g., just `roleId`); fetch full Role data from TanStack Query cache in the Node component to keep canvas and sidebars in sync
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to **/*canvas*.{ts,tsx} : Use React Flow primitives (BaseNode, BaseHandle, ZoomSlider) from src/components/react-flow/ for consistent canvas styling
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
🧬 Code graph analysis (5)
src/server/api/routers/metric.ts (4)
src/server/api/utils/authorization.ts (1)
getIntegrationAndVerifyAccess (181-206)
src/server/api/services/transformation/data-pipeline.ts (1)
refreshMetricAndCharts (412-548)
src/server/api/services/transformation/index.ts (1)
refreshMetricAndCharts (29-29)
src/server/api/utils/cache-strategy.ts (1)
invalidateCacheByTags(165-179)
src/hooks/use-wait-for-pipeline.ts (1)
src/lib/pipeline/steps.ts (3)
detectPipelineType (158-177), getPipelineStepCount (151-153), getStepShortName (60-63)
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (3)
src/lib/pipeline/index.ts (1)
PipelineStepName (4-4)
src/lib/pipeline/steps.ts (1)
PipelineStepName (47-47)
src/lib/pipeline/types.ts (1)
PipelineStepName(5-5)
src/hooks/use-pipeline-progress.ts (2)
src/lib/pipeline/index.ts (3)
detectPipelineType (18-18), getPipelineStepCount (17-17), getStepShortName (16-16)
src/lib/pipeline/steps.ts (3)
detectPipelineType (158-177), getPipelineStepCount (151-153), getStepShortName (60-63)
src/server/api/routers/pipeline.ts (8)
src/server/api/services/transformation/data-pipeline.ts (2)
refreshMetricAndCharts (412-548), ingestMetricData (642-658)
src/server/api/services/transformation/index.ts (3)
refreshMetricAndCharts (29-29), ingestMetricData (28-28), createChartTransformer (35-35)
src/server/db.ts (1)
db (26-26)
src/server/api/services/transformation/chart-generator.ts (1)
createChartTransformer (147-275)
src/server/api/trpc.ts (1)
workspaceProcedure (151-154)
src/server/api/utils/authorization.ts (1)
getMetricAndVerifyAccess (238-258)
src/lib/pipeline/index.ts (1)
getStepDisplayName (15-15)
src/lib/pipeline/steps.ts (1)
getStepDisplayName(52-55)
🔇 Additional comments (25)
src/server/api/routers/metric.ts (1)
80-87: Clear documentation of the unified pipeline approach. The updated JSDoc accurately describes the pipeline's responsibilities. This improves maintainability by making the architectural decision explicit.
src/hooks/use-pipeline-progress.ts (2)
59-64: Clean refactor to type-driven pipeline detection. The logic correctly combines completed steps with the current step to detect the pipeline type, then derives the total step count dynamically. This eliminates the previous hard-coded step counts.
1-5: Centralized imports from @/lib/pipeline. Good consolidation of pipeline utilities into a single import source, aligning with the PR's single-source-of-truth goal.
src/lib/pipeline/index.ts (1)
1-22: Well-organized barrel file establishing a clear public API. The file cleanly separates type exports, step definitions, and runner functionality. This provides a single import point for consumers while maintaining internal modularity.
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (2)
40-46: INITIAL_STEPS now uses explicit string literals with type assertions. The explicit `as PipelineStepName` casts ensure type safety when mapping to pipeline steps. The `as const` assertion appropriately marks the object as readonly.
20-20: Import path consolidated to barrel file. Aligns with the PR's goal of establishing `@/lib/pipeline` as the single import source.
src/lib/pipeline/steps.ts (2)
88-128: Excellent use of `as const satisfies` for type-safe configuration. The `as const satisfies Record<string, readonly PipelineOperation[]>` pattern provides both:
- Literal type inference for pipeline type keys
- Structural validation that all values are operation arrays
This is a TypeScript best practice for configuration objects.
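A self-contained sketch of the `as const satisfies` pattern being praised here; the operation names and config values are simplified placeholders rather than the repo's real `PIPELINE_OPERATIONS`.

```typescript
// `as const satisfies` keeps literal inference while still validating the shape.
type PipelineOperation = "fetch-data" | "generate-chart" | "save-chart";

const PIPELINE_OPERATIONS = {
  "chart-only": ["generate-chart", "save-chart"],
  "soft-refresh": ["fetch-data", "generate-chart", "save-chart"],
} as const satisfies Record<string, readonly PipelineOperation[]>;

// Literal inference: the keys become the union "chart-only" | "soft-refresh".
type PipelineType = keyof typeof PIPELINE_OPERATIONS;

// Structural validation: a typo such as "save-chartz" in one of the arrays
// would be rejected at compile time by the `satisfies` check.
const softRefreshSteps: readonly PipelineOperation[] =
  PIPELINE_OPERATIONS["soft-refresh"];
console.log(softRefreshSteps.length); // 3
```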
135-146: Intentional many-to-one operation-to-step mapping. Lines 138-139 map both `delete-ingestion-transformer` and `delete-chart-transformer` to `"deleting-old-transformer"`. This simplifies the UI by showing one "Removing old transformer..." message regardless of which transformer is being deleted.
src/lib/pipeline/types.ts (2)
3-8: Clean re-export pattern establishing a single source of truth. Re-exporting types from `./steps` ensures `steps.ts` remains the canonical definition while allowing `types.ts` to serve as a type-focused import for consumers who only need interfaces.
23-24: `step` field typed as `string` for flexibility. Using `string` instead of `PipelineStepName` allows `StepResult` to accommodate steps that may not be predefined in `PIPELINE_STEPS`, providing forward compatibility.
src/lib/pipeline/runner.ts (3)
42-89: Clean operation-based run() API. The new `run<T>(operation: PipelineOperation, fn: () => Promise<T>)` signature is simpler than the previous approach. It:
- Looks up the step name from the operation
- Updates refresh status
- Executes the function with timing
- Logs the result
The error handling correctly re-throws after logging, allowing callers to handle failures.
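A usage sketch of the `run(operation, fn)` contract summarized above (look up the step, time the work, log, re-throw). This is a simplified stand-in, not the actual `PipelineRunner` from `runner.ts`.

```typescript
type PipelineOperation = "fetch-data" | "generate-chart";

class MiniRunner {
  // Simplified version of run<T>(operation, fn): time the work, log, re-throw.
  async run<T>(operation: PipelineOperation, fn: () => Promise<T>): Promise<T> {
    const startedAt = Date.now();
    try {
      const result = await fn();
      console.log(`${operation} completed in ${Date.now() - startedAt}ms`);
      return result;
    } catch (error) {
      console.error(`${operation} failed after ${Date.now() - startedAt}ms`, error);
      throw error; // let the caller decide how to handle the failure
    }
  }
}

// Usage: compose pipeline steps by wrapping each piece of async work.
async function demo() {
  const runner = new MiniRunner();
  const data = await runner.run("fetch-data", async () => ({ rows: 42 }));
  await runner.run("generate-chart", async () => data.rows * 2);
}

void demo();
```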
149-154: Useful debugging accessors.
`getPipelineType()` and `getOperations()` provide visibility into the runner's configuration, helpful for debugging and testing. The `readonly` return type on `getOperations()` correctly prevents mutation.
13-26: Clear usage documentation in JSDoc. The example demonstrates the new operation-based API pattern, making it easy for developers to understand the intended usage.
src/server/api/services/transformation/data-pipeline.ts (7)
1-12: LGTM! Clear and informative header documentation explaining the soft/hard refresh design and key architectural decisions (single API fetch, transaction-locked creation, batch saves).
58-63: LGTM! Proper handling of Prisma's JSON null semantics.
65-120: LGTM! Good pattern: logging errors are swallowed to prevent cascading failures, while the main fetch result is cleanly returned as a discriminated union.
340-399: LGTM! The function correctly orchestrates the fetch → create transformer → execute → save flow. The use of `metricId` as the transformer cache key is consistent with `fetchAndTransformData`.
501-535: LGTM! Good error isolation: individual chart transformer failures are caught and logged without failing the entire pipeline. The dynamic import keeps chart-generator lazily loaded.
567-636: LGTM! Clean implementation for manual metrics with appropriate fallback to create a chart transformer if one doesn't exist.
638-658: LGTM! Good backward-compatibility wrapper. The `isTimeSeries !== false` default ensures existing callers without this parameter continue to work correctly.
src/server/api/routers/pipeline.ts (5)
33-54: LGTM! Clean type definitions for the background task system. The discriminated union `BackgroundTaskType` and the `BackgroundTaskConfig` interface clearly define all supported operations and their required parameters.
195-215: LGTM! Correct implementation: verifies access with `getMetricAndVerifyAccess`, sets the initial status, and uses `void` for the fire-and-forget pattern. As per coding guidelines, `workspaceProcedure` and authorization helpers are properly used.
221-241: LGTM! Consistent pattern with the `refresh` procedure. Authorization and fire-and-forget handling are correct.
301-358: LGTM! Good validation: checks for dashboard chart existence and data points before triggering regeneration. The error message "run a data refresh first" provides clear user guidance.
441-534: LGTM! Both `getAvailableDimensions` and `getTransformerInfo` correctly implement authorization and provide useful metadata for the frontend. Good use of aggregation for data point statistics.
```ts
// Query recent pipeline step logs (last 5 minutes)
const fiveMinutesAgo = new Date(Date.now() - 5 * 60 * 1000);
const stepLogs = await ctx.db.metricApiLog.findMany({
  where: {
    metricId: input.metricId,
    endpoint: { startsWith: "pipeline-step:" },
    fetchedAt: { gte: fiveMinutesAgo },
  },
  orderBy: { fetchedAt: "asc" },
  select: { endpoint: true, fetchedAt: true },
});
```
Bug: endpoint prefix mismatch prevents progress tracking.
Line 396 filters for logs with endpoint: { startsWith: "pipeline-step:" }, but logPipelineEvent (line 175) writes logs with the prefix "transformer:":
endpoint: `transformer:${action}`,

This mismatch means completedSteps will always be empty, breaking progress tracking for the frontend.
🔎 Proposed fix
Either update the query to match the actual prefix:
const stepLogs = await ctx.db.metricApiLog.findMany({
where: {
metricId: input.metricId,
- endpoint: { startsWith: "pipeline-step:" },
+ endpoint: { startsWith: "transformer:" },
fetchedAt: { gte: fiveMinutesAgo },
},Or update logPipelineEvent to use "pipeline-step:" prefix for consistency.
🤖 Prompt for AI Agents
In src/server/api/routers/pipeline.ts around lines 391 to 401 the query filters
logs with endpoint starting "pipeline-step:" but logPipelineEvent emits
endpoints with the "transformer:" prefix so completedSteps is always empty; fix
by changing the query filter to endpoint: { startsWith: "transformer:" } (or
alternatively change logPipelineEvent to emit "pipeline-step:" consistently) so
the query matches the actual log prefix and progress tracking works.
```diff
  // Get existing transformer
  const transformer = await db.dataIngestionTransformer.findUnique({
-   where: { templateId: cacheKey },
+   where: { templateId: input.metricId },
  });

  if (!transformer) {
-   // No transformer for this metric - need to create one
-   // This handles backward compatibility for old metrics
-   console.info(
-     `[RefreshMetric] No transformer found for metric ${input.metricId}, creating new one`,
-   );
-   return ingestMetricData({
-     ...input,
-     isTimeSeries: input.isTimeSeries,
-   });
+   return { success: false, error: "No transformer found for this metric" };
  }
```
🧩 Analysis chain
Repository: drifter089/orgOS

🏁 Script executed (output length: 117):
```shell
# First, locate and examine the data-pipeline.ts file
find . -type f -name "data-pipeline.ts" | head -5
```

🏁 Script executed (output length: 652):
```shell
# Check the repository structure to understand the context
git ls-files "src/server/api/services/transformation/" | head -20
```

🏁 Script executed (output length: 119):
```shell
# Read the specific file to understand the context around lines 277-335
wc -l src/server/api/services/transformation/data-pipeline.ts
```

🏁 Script executed (output length: 2521):
```shell
# Read the specific section with context around the problematic lines
sed -n '270,340p' src/server/api/services/transformation/data-pipeline.ts | cat -n
```

🏁 Script executed (output length: 545):
```shell
# Also check the area around line 373 mentioned in the scratchpad
sed -n '365,380p' src/server/api/services/transformation/data-pipeline.ts | cat -n
```

🏁 Script executed (output length: 82):
```shell
# Check the Prisma schema to understand the dataIngestionTransformer model
find . -type f -name "schema.prisma" | head -3
```

🏁 Script executed (output length: 1041):
```shell
# Read the Prisma schema to understand dataIngestionTransformer model
cat prisma/schema.prisma | grep -A 20 "model DataIngestionTransformer"
```

🏁 Script executed (output length: 1458):
```shell
# Search for getOrCreateDataIngestionTransformer function definition
rg -n "getOrCreateDataIngestionTransformer" src/server/api/services/transformation/data-pipeline.ts -A 25 | head -50
```

🏁 Script executed (output length: 42):
```shell
# Verify if there are other metrics that use the same template to understand the impact
rg -n "fetchAndTransformData\|fetchGenerateAndSaveData" src/server/api/services/transformation/ -A 5 | grep -E "(templateId|metricId)" | head -20
```

🏁 Script executed (output length: 42):
```shell
# Check if there's any documentation about the intended design
rg -n "templateId.*metricId\|metricId.*templateId" src/server/api/services/transformation/ -C 2
```
Fix transformer lookup to use templateId instead of metricId.
Line 313 queries for the transformer using the wrong parameter. The DataIngestionTransformer model is keyed by templateId (the template type), not metricId (the metric instance). Multiple metrics using the same template should share one transformer.
Change line 313 from:

  where: { templateId: input.metricId },

to:

  where: { templateId: input.templateId },
where: { templateId: input.templateId },Also fix fetchGenerateAndSaveData at line 373, which has the same issue—it passes input.metricId to getOrCreateDataIngestionTransformer instead of input.templateId.
🤖 Prompt for AI Agents
In src/server/api/services/transformation/data-pipeline.ts around lines 311–318,
the transformer lookup uses the wrong field: replace the where clause that
currently passes input.metricId to use input.templateId so the
DataIngestionTransformer is found by its templateId key; additionally, in
fetchGenerateAndSaveData at/near line 373 update the call to
getOrCreateDataIngestionTransformer to pass input.templateId instead of
input.metricId so multiple metrics that share a template use the same
transformer.
```ts
// Fetch data and generate new transformer
await runner.setStatus("fetching-api-data");
transformResult = await runner.run(
  "generate-ingestion-transformer",
  async () => {
    return fetchGenerateAndSaveData({
      metricId: metric.id,
      templateId: metric.templateId!,
      integrationId: metric.integration!.providerId,
      connectionId: metric.integration!.connectionId,
      endpointConfig,
      isTimeSeries,
    });
  },
);
```
Status and step name mismatch may confuse progress tracking.
Line 462 sets status to "fetching-api-data" but the actual step runs as "generate-ingestion-transformer" (line 464). If the frontend uses refreshStatus to display progress, this inconsistency could show misleading step names.
Consider aligning these, or if intentional (showing a different status during a compound step), add a comment explaining the design.
🤖 Prompt for AI Agents
In src/server/api/services/transformation/data-pipeline.ts around lines 461 to
475, the status string "fetching-api-data" (line 462) does not match the step
name "generate-ingestion-transformer" used in runner.run (line 464); update the
code so the status and step name are aligned (e.g.,
setStatus("generate-ingestion-transformer")) or, if the mismatch is intentional,
add a concise comment above the setStatus call explaining why a different status
is shown during this compound operation so frontend consumers are not confused.
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/hooks/use-wait-for-pipeline.ts (1)
48-62: Consider a potential edge case with rapid metricId changes.

If `metricId` changes while callbacks are firing, there's a small window where callbacks might fire for the wrong metric, since the effect on lines 65-69 runs separately from this effect. However, given that the ref reset happens on metricId change, this is likely acceptable for typical use cases.

🔎 Optional: Combine effects for tighter synchronization

```diff
-  // Handle completion/error callbacks
-  useEffect(() => {
-    if (result.hasData) {
-      // Detect transition from processing -> done
-      if (previousIsProcessing.current === true && !result.isProcessing) {
-        if (result.error && !hasCalledError.current) {
-          hasCalledError.current = true;
-          onError?.(result.error);
-        } else if (!result.error && !hasCalledComplete.current) {
-          hasCalledComplete.current = true;
-          onComplete?.();
-        }
-      }
-      previousIsProcessing.current = result.isProcessing;
-    }
-  }, [result.isProcessing, result.error, result.hasData, onComplete, onError]);
-
-  // Reset refs when metricId changes
-  useEffect(() => {
-    hasCalledComplete.current = false;
-    hasCalledError.current = false;
-    previousIsProcessing.current = null;
-  }, [metricId]);
+  // Handle completion/error callbacks and reset on metricId change
+  useEffect(() => {
+    // Reset refs when metricId changes
+    hasCalledComplete.current = false;
+    hasCalledError.current = false;
+    previousIsProcessing.current = null;
+  }, [metricId]);
+
+  useEffect(() => {
+    if (result.hasData) {
+      if (previousIsProcessing.current === true && !result.isProcessing) {
+        if (result.error && !hasCalledError.current) {
+          hasCalledError.current = true;
+          onError?.(result.error);
+        } else if (!result.error && !hasCalledComplete.current) {
+          hasCalledComplete.current = true;
+          onComplete?.();
+        }
+      }
+      previousIsProcessing.current = result.isProcessing;
+    }
+  }, [result.isProcessing, result.error, result.hasData, onComplete, onError]);
```

src/hooks/use-pipeline-query.ts (1)
83-88: Guard against potential division by zero.

While `getPipelineStepCount` should always return a positive number based on `PIPELINE_OPERATIONS`, adding a defensive check prevents NaN if the step definitions are ever misconfigured.

🔎 Optional: Add defensive check

```diff
  // Calculate progress (cap at 95% until complete)
  const completedCount = completedSteps.length;
+ const safeTotal = totalSteps > 0 ? totalSteps : 1;
  const progressPercent = Math.min(
-   Math.round((completedCount / totalSteps) * 100),
+   Math.round((completedCount / safeTotal) * 100),
    95,
  );
```

src/lib/pipeline/runner.ts (2)
92-110: Consider logging consistency in setStatus.

The `setStatus` method logs with `status: "started"` while `logStep` uses `"completed"` or `"failed"`. This creates an inconsistent log schema where some entries have `"started"` status but no corresponding completion entry. If this is intentional for tracking step transitions, consider documenting the log status values.

🔎 Optional: Add comment clarifying log status semantics

```diff
  async setStatus(step: PipelineStepName): Promise<void> {
    await this.updateRefreshStatus(step);
-   // Log step transition for progress tracking
+   // Log step start (no completion log - used when step is handled by called functions)
    await this.ctx.db.metricApiLog
```
109-109: Silent error swallowing may hide issues.

The `.catch(() => undefined)` silently discards errors from the log write. While non-blocking logging is appropriate, consider at minimum logging to console in development for debugging.

🔎 Optional: Log errors in development

```diff
    })
-     .catch(() => undefined); // Non-blocking
+     .catch((err) => {
+       // Non-blocking, but log in development for debugging
+       if (process.env.NODE_ENV === "development") {
+         console.warn("Failed to log pipeline step status:", err);
+       }
+     });
  }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
src/hooks/use-pipeline-progress.ts
src/hooks/use-pipeline-query.ts
src/hooks/use-wait-for-pipeline.ts
src/lib/pipeline/runner.ts
src/lib/pipeline/steps.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/hooks/use-pipeline-progress.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import ordering
Use TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
src/hooks/use-pipeline-query.ts
src/hooks/use-wait-for-pipeline.ts
src/lib/pipeline/steps.ts
src/lib/pipeline/runner.ts
🧠 Learnings (3)
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
src/hooks/use-wait-for-pipeline.ts
src/lib/pipeline/steps.ts
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
src/lib/pipeline/steps.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
src/lib/pipeline/steps.ts
🧬 Code graph analysis (2)
src/hooks/use-pipeline-query.ts (2)
src/trpc/react.tsx (1)
api (28-28)
src/lib/pipeline/steps.ts (3)
detectPipelineType (164-191), getPipelineStepCount (151-153), getStepShortName (60-63)
src/hooks/use-wait-for-pipeline.ts (1)
src/hooks/use-pipeline-query.ts (1)
usePipelineQuery(44-103)
🔇 Additional comments (10)
src/hooks/use-wait-for-pipeline.ts (2)
42-45: LGTM - Clean completion detection logic. The transition detection using `previousIsProcessing.current === true && !result.isProcessing && result.hasData` correctly identifies when a pipeline finishes processing. This avoids false positives from initial load states.
72-78: Progress calculation looks correct. The ternary chain properly handles all states: processing shows live progress, errors and completed pipelines show 100%, and the idle state shows 0%. Good edge case handling.
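A sketch of the ternary chain this comment describes, assuming the hook exposes `isProcessing`, `hasData`, `error`, and a live `progressPercent`; the field names are approximations of the hook's actual result shape, not copied from the repo.

```typescript
interface PipelineResultSketch {
  isProcessing: boolean;
  hasData: boolean;
  error: string | null;
  progressPercent: number; // live value while steps are running
}

// Collapse the three states into a single displayed percentage.
function displayProgress(result: PipelineResultSketch): number {
  return result.isProcessing
    ? result.progressPercent // live progress while processing
    : result.error || result.hasData
      ? 100 // finished, whether it completed or failed
      : 0; // idle: nothing has started yet
}

console.log(
  displayProgress({ isProcessing: false, hasData: true, error: null, progressPercent: 50 }),
); // 100
```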
src/hooks/use-pipeline-query.ts (2)
1-6: LGTM - Clean imports using the inline type pattern. Good use of inline type imports per coding guidelines. The centralized pipeline utilities from `@/lib/pipeline` establish the single source of truth.
51-57: Non-null assertion is safe here. The `metricId!` on line 52 is guarded by `shouldQuery`, which requires `!!metricId`, so the assertion is safe when the query is enabled.
src/lib/pipeline/steps.ts (4)
8-45: LGTM - Well-structured step definitions. A clean single source of truth for all pipeline step metadata. The `as const` assertion ensures type safety and immutability.
135-146: Note: Both delete operations map to the same step name. `delete-ingestion-transformer` and `delete-chart-transformer` both map to `"deleting-old-transformer"`. This is intentional for progress display (a single UI message), but it means the step name alone can't distinguish which transformer was deleted. The current `detectPipelineType` logic handles this correctly by checking for other distinguishing steps.
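A minimal sketch of the many-to-one mapping described here; the constant name `OPERATION_TO_STEP` and the exact strings are illustrative guesses at what `steps.ts` defines, not the real definitions.

```typescript
type OperationSketch =
  | "delete-ingestion-transformer"
  | "delete-chart-transformer"
  | "generate-ingestion-transformer";

type StepNameSketch = "deleting-old-transformer" | "generating-transformer";

// Both delete operations surface as the same user-facing step.
const OPERATION_TO_STEP: Record<OperationSketch, StepNameSketch> = {
  "delete-ingestion-transformer": "deleting-old-transformer",
  "delete-chart-transformer": "deleting-old-transformer",
  "generate-ingestion-transformer": "generating-transformer",
};

// The UI shows "Removing old transformer..." for either delete operation.
console.log(OPERATION_TO_STEP["delete-chart-transformer"]); // "deleting-old-transformer"
```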
164-191: Previous detection bug has been fixed. The updated logic now correctly distinguishes pipeline types (a simplified sketch of this order follows the list):
- chart-only: no fetch step (lines 175-177)
- hard-refresh: has the `deleting-old-data` step, which is unique to hard-refresh (lines 180-182)
- ingestion-only: has delete-transformer + generate-ingestion, but not delete-data (lines 185-187)
- soft-refresh: default fallback

This addresses the previous review concern where both hard-refresh and ingestion-only were incorrectly matched.
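A hedged sketch of the detection order described by those bullets; the step strings are assumptions, and the real `detectPipelineType` in `steps.ts` may use different names or checks.

```typescript
type PipelineTypeSketch = "chart-only" | "hard-refresh" | "ingestion-only" | "soft-refresh";

function detectPipelineTypeSketch(steps: string[]): PipelineTypeSketch {
  const has = (step: string) => steps.includes(step);

  // No fetch step at all: only the chart was regenerated.
  if (!has("fetching-api-data")) return "chart-only";
  // Deleting old data only happens during a hard refresh.
  if (has("deleting-old-data")) return "hard-refresh";
  // Transformer rebuilt while existing data was kept: ingestion-only.
  if (has("deleting-old-transformer") && has("generate-ingestion-transformer")) {
    return "ingestion-only";
  }
  // Default fallback.
  return "soft-refresh";
}

console.log(detectPipelineTypeSketch(["fetching-api-data"])); // "soft-refresh"
```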
88-128: PIPELINE_OPERATIONS is a reference definition for progress tracking, not an execution manifest. The actual `hard-refresh` implementation does not execute operations in the order defined in PIPELINE_OPERATIONS. Chart transformers are not deleted or regenerated during hard refresh; only data points are deleted and ingestion transformers are regenerated. After new data is saved, existing chart transformers are re-executed with the fresh data via `executeChartTransformerWithData()`, not deleted and regenerated. The operations `delete-chart-transformer`, `generate-chart-transformer`, and `save-chart` in PIPELINE_OPERATIONS are defined but not called during hard refresh. PIPELINE_OPERATIONS is used only for progress step counting and for detecting which pipeline type ran based on completed steps, not as an execution loop.

Likely an incorrect or invalid review comment.
src/lib/pipeline/runner.ts (2)
39-86: LGTM - Clean operation execution with proper error handling. The `run` method correctly:
- Maps operation to step name and display name
- Tracks timing for duration metrics
- Logs both success and failure cases
- Preserves error propagation via re-throw
175-191: LGTM - Clean factory function. The `createPipelineRunner` factory properly initializes the context and returns a new runner instance. Parameter ordering is logical, with the optional `dashboardChartId` at the end.
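A usage sketch of the factory pattern being approved; the parameter list mirrors the comment (optional `dashboardChartId` last), but the context shape and class body are placeholders rather than the repo's real `createPipelineRunner`.

```typescript
interface RunnerContextSketch {
  metricId: string;
  organizationId: string;
  dashboardChartId?: string;
}

class RunnerSketch {
  constructor(private readonly ctx: RunnerContextSketch) {}
  describe(): string {
    return `runner for metric ${this.ctx.metricId} (org ${this.ctx.organizationId})`;
  }
}

// The factory wires the context once and returns a ready-to-use runner.
function createPipelineRunnerSketch(
  metricId: string,
  organizationId: string,
  dashboardChartId?: string, // optional parameter kept last
): RunnerSketch {
  return new RunnerSketch({ metricId, organizationId, dashboardChartId });
}

console.log(createPipelineRunnerSketch("metric_123", "org_456").describe());
```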
src/hooks/use-pipeline-status.ts
Outdated
```ts
// Not processing or no data yet
if (!data?.isProcessing) {
  return {
    isProcessing: false,
    currentStep: null,
    currentStepDisplayName: null,
    completedSteps: [],
    totalSteps: 0,
    progressPercent: data?.completedSteps?.length ? 100 : 0,
    error: data?.error ?? null,
    hasData: data !== undefined,
  };
}
```
🧩 Analysis chain
Repository: drifter089/orgOS

🏁 Script executed (output length: 3540):
```shell
# First, check if the file exists and read it
cat -n src/hooks/use-pipeline-query.ts | head -100
```

🏁 Script executed (output length: 7696):
```shell
# Search for completedSteps usage in the codebase
rg "completedSteps" -B 2 -A 2
```

🏁 Script executed (output length: 394):
```shell
# Look for pipeline-related server/API code
fd -e ts -e tsx | grep -i pipeline | head -20
```

🏁 Script executed (output length: 5875):
```shell
# Read the pipeline router to understand completion logic
cat -n src/server/api/routers/pipeline.ts | head -150
```

🏁 Script executed (output length: 2698):
```shell
# Look for where isProcessing is set to false in the pipeline
rg "isProcessing.*false" -B 3 -A 3
```

🏁 Script executed (output length: 5322):
```shell
# Read more of the pipeline router to see the getProgress endpoint
cat -n src/server/api/routers/pipeline.ts | sed -n '200,350p'
```

🏁 Script executed (output length: 5055):
```shell
# Check the PipelineRunner to see when completedSteps is populated
cat -n src/lib/pipeline/runner.ts | head -150
```

🏁 Script executed (output length: 1728):
```shell
# Look at the refreshMetricAndCharts function to see if it can complete without running steps
rg "refreshMetricAndCharts" -A 10 src/server/api/services/transformation
```

🏁 Script executed (output length: 2267):
```shell
# Check the complete pipeline flow to see all code paths
cat -n src/lib/pipeline/runner.ts | sed -n '115,200p'
```

🏁 Script executed (output length: 42):
```shell
# Search for where getProgress is defined in the pipeline router
rg "getProgress.*useQuery" -B 5 -A 15 src/server/api/routers/pipeline.ts
```

🏁 Script executed (output length: 5905):
```shell
# Search for getProgress definition in the pipeline router
rg "getProgress" src/server/api/routers/pipeline.ts -B 2 -A 30
```

🏁 Script executed (output length: 3671):
```shell
# Look at the full getProgress endpoint implementation
cat -n src/server/api/routers/pipeline.ts | sed -n '350,450p'
```

🏁 Script executed (output length: 3146):
```shell
# Check how use-wait-for-pipeline handles this scenario
cat -n src/hooks/use-wait-for-pipeline.ts
```

🏁 Script executed (output length: 4828):
```shell
# Look for any scenario where pipeline completes without logging steps
rg "refreshStatus.*null" -B 3 -A 3 src/server
```
Fix progress calculation for completed pipelines.
On line 67, the logic data?.completedSteps?.length ? 100 : 0 incorrectly shows 0% for completed pipelines with no logged steps. When a pipeline completes successfully, the server sets refreshStatus to null and returns an empty completedSteps array. This causes progressPercent to show 0 instead of 100. The check should be: data !== undefined ? 100 : 0 to distinguish between "no data yet" (0%) and "pipeline complete" (100%).
🤖 Prompt for AI Agents
In src/hooks/use-pipeline-query.ts around lines 59 to 71, the progressPercent
calculation inside the "not processing or no data yet" return incorrectly uses
data?.completedSteps?.length to decide 100% vs 0%, which shows 0% for completed
pipelines that have an empty completedSteps array; replace that expression with
data !== undefined ? 100 : 0 so that any received data (including an empty
completedSteps array indicating a finished pipeline) yields 100%, while
undefined (no data yet) yields 0%.
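A sketch of the corrected branch following the fix proposed above (`data !== undefined ? 100 : 0`); the surrounding return shape is abbreviated and the type names are placeholders, not copied from the hook.

```typescript
interface ProgressDataSketch {
  isProcessing: boolean;
  completedSteps: string[];
  error?: string | null;
}

// The "not processing" branch: any received payload means the pipeline is done,
// even when the server returned an empty completedSteps array.
function idleProgress(data: ProgressDataSketch | undefined) {
  return {
    isProcessing: false,
    progressPercent: data !== undefined ? 100 : 0, // only "no data yet" reads as 0%
    error: data?.error ?? null,
    hasData: data !== undefined,
  };
}

console.log(idleProgress(undefined).progressPercent); // 0
console.log(idleProgress({ isProcessing: false, completedSteps: [] }).progressPercent); // 100
```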
- Create new component with framer-motion animations
- Sequential step reveal with spring physics
- Smooth progress bar with shimmer effect
- Pulsing indicator on current step
- Animated checkmark on completed steps
- Two variants: card (220px) and drawer (400px)
- Integrate into dashboard chart and drawer
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/server/api/services/transformation/data-pipeline.ts (2)
313-337: Critical: Use `templateId` instead of `metricId` for transformer lookups.

Lines 318 and 331 incorrectly use `input.metricId` where `input.templateId` should be used. The `DataIngestionTransformer` model is keyed by `templateId` (the template type shared across metrics), not `metricId` (the individual metric instance).

Impact: Each metric creates its own transformer instead of sharing transformers across metrics that use the same template, causing:
- Wasted database storage and AI generation costs
- Potential inconsistencies if transformers diverge
- Breaking the intended design where multiple metrics share one template's transformer
🔎 Proposed fix
```diff
  if (input.generateTransformer) {
    const { transformer, isNew } = await getOrCreateDataIngestionTransformer(
-     input.metricId,
+     input.templateId,
      input.integrationId,
      template,
      fetchResult.data,
      input.endpointConfig,
    );
    if (!transformer) {
      return { success: false, error: "Failed to get or create transformer" };
    }
    transformerCode = transformer.transformerCode;
    transformerCreated = isNew;
  } else {
    const transformer = await db.dataIngestionTransformer.findUnique({
-     where: { templateId: input.metricId },
+     where: { templateId: input.templateId },
    });
    if (!transformer) {
      return { success: false, error: "No transformer found for this metric" };
    }
    transformerCode = transformer.transformerCode;
  }
```
414-426: Clarify the intentional status/step name mismatch or align them.

Line 415 sets `refreshStatus` to `"fetching-api-data"`, but line 416 runs the step as `"generate-ingestion-transformer"`. If the frontend displays `refreshStatus` to users, this mismatch could show misleading progress (e.g., "Fetching API data" while actually generating a transformer).

If intentional (showing a simplified status during a compound operation), add a comment explaining the design. Otherwise, align them for consistency.
🔎 Suggested alignment
```diff
- // Fetch data and generate new transformer
- await runner.setStatus("fetching-api-data");
- transformResult = await runner.run("generate-ingestion-transformer", () =>
+ // Fetch data and generate new transformer
+ transformResult = await runner.run("generate-ingestion-transformer", () =>
    fetchTransformAndSave({
```

Or add a clarifying comment if the mismatch is intentional:

```diff
  // Fetch data and generate new transformer
+ // Note: Status shows "fetching-api-data" for user-facing simplicity,
+ // but the actual step includes both fetching and transformer generation
  await runner.setStatus("fetching-api-data");
  transformResult = await runner.run("generate-ingestion-transformer", () =>
```
🧹 Nitpick comments (1)
src/server/api/routers/pipeline.ts (1)
40-57: Consider discriminated unions for type-safe task configuration.

The current design uses optional fields to accommodate different task types, which means TypeScript won't catch missing required fields at compile time. For example, `chart-only` tasks require `dashboardChartId`, but it's optional in the interface.

🔎 Proposed discriminated union approach

```diff
-interface BackgroundTaskConfig {
-  metricId: string;
-  type: BackgroundTaskType;
-  organizationId: string;
-  teamId?: string;
-  // Ingestion-only fields
-  templateId?: string;
-  integrationId?: string;
-  connectionId?: string;
-  endpointConfig?: Record<string, string>;
-  // Chart-only fields
-  dashboardChartId?: string;
-  metricName?: string;
-  metricDescription?: string;
-  chartType?: string;
-  cadence?: "DAILY" | "WEEKLY" | "MONTHLY";
-  selectedDimension?: string;
+type BackgroundTaskConfig =
+  | {
+      type: "soft-refresh" | "hard-refresh";
+      metricId: string;
+      organizationId: string;
+      teamId?: string;
+    }
+  | {
+      type: "ingestion-only";
+      metricId: string;
+      organizationId: string;
+      teamId?: string;
+      templateId: string;
+      integrationId: string;
+      connectionId: string;
+      endpointConfig: Record<string, string>;
+    }
+  | {
+      type: "chart-only";
+      metricId: string;
+      organizationId: string;
+      teamId?: string;
+      dashboardChartId: string;
+      metricName: string;
+      metricDescription: string;
+      chartType: string;
+      cadence: "DAILY" | "WEEKLY" | "MONTHLY";
+      selectedDimension?: string;
+    };
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/server/api/routers/metric.ts
src/server/api/routers/pipeline.ts
src/server/api/services/transformation/data-pipeline.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/server/api/routers/metric.ts
🧰 Additional context used
📓 Path-based instructions (2)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import ordering
Use TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
src/server/api/services/transformation/data-pipeline.ts
src/server/api/routers/pipeline.ts
src/server/api/routers/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/server/api/routers/**/*.ts: Always verify resource access using authorization helpers (getMetricAndVerifyAccess, getRoleAndVerifyAccess, getTeamAndVerifyAccess) to ensure resources belong to the user's organization in tRPC procedures
Use protectedProcedure or workspaceProcedure for new tRPC procedures instead of public procedures
Invalidate cache tags after mutations using invalidateCacheByTags(ctx.db, [tagName]) to maintain consistency
src/server/api/routers/**/*.ts: In tRPC procedures, never assume authentication; always use helpers like getTeamAndVerifyAccess to verify access before operations
Be careful when modifying team save logic related to the EditSession model, which handles locking for concurrent team editing
Files:
src/server/api/routers/pipeline.ts
🧠 Learnings (5)
📓 Common learnings
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
- src/server/api/services/transformation/data-pipeline.ts
- src/server/api/routers/pipeline.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
- src/server/api/services/transformation/data-pipeline.ts
- src/server/api/routers/pipeline.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
- src/server/api/services/transformation/data-pipeline.ts
- src/server/api/routers/pipeline.ts
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to prisma/schema.prisma : When modifying the Prisma schema, especially for Metric Transformers, ensure AI-generated code generation and execution logic is updated accordingly
Applied to files:
- src/server/api/services/transformation/data-pipeline.ts
- src/server/api/routers/pipeline.ts
🔇 Additional comments (10)
src/server/api/routers/pipeline.ts (4)
21-21: LGTM! The new imports support the consolidated pipeline system: `getStepDisplayName` provides unified step display names for progress tracking, and `invalidateCacheByTags` ensures cache consistency after background operations.

Also applies to: 27-27
64-155: LGTM! Well-structured background task runner with proper error handling.

The unified `runBackgroundTask` function consolidates four previously separate implementations into a single code path. Key design decisions are sound (a rough sketch of the pattern follows this list):

- Delete operations use `.catch(() => null)` for "delete if exists" semantics
- Cache invalidation occurs after successful operations with appropriate tags
- Error handling captures and records errors without rethrowing (correct for fire-and-forget)
- Status updates track progress through the `refreshStatus` field
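For orientation, here is a minimal sketch of that fire-and-forget shape. It assumes a Prisma-style `db` client with a `metric` model carrying a `refreshStatus` column; the helper names, status values, and cache tags are illustrative, not the repository's actual implementation.

```ts
type TaskType = "soft-refresh" | "hard-refresh" | "ingestion-only" | "chart-only";

interface TaskContext {
  db: {
    metric: {
      update(args: { where: { id: string }; data: { refreshStatus: string | null } }): Promise<unknown>;
    };
  };
  invalidateCacheByTags: (tags: string[]) => Promise<void>;
}

// One runner for every task type: run the work, invalidate caches, record the outcome,
// and never rethrow, because callers dispatch this with `void runBackgroundTask(...)`.
async function runBackgroundTaskSketch(
  ctx: TaskContext,
  metricId: string,
  type: TaskType,
  work: () => Promise<void>,
  teamId?: string,
): Promise<void> {
  try {
    await work(); // the switch on `type` in the real code sits behind this callback
    await ctx.invalidateCacheByTags(["dashboard", ...(teamId ? [teamId] : [])]);
    await ctx.db.metric.update({ where: { id: metricId }, data: { refreshStatus: null } });
  } catch (error) {
    // Record the failure on the metric instead of throwing into the void.
    console.error(`[pipeline] ${type} failed for metric ${metricId}`, error);
    await ctx.db.metric.update({ where: { id: metricId }, data: { refreshStatus: "failed" } });
  }
}
```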
166-316: LGTM! Mutations follow secure patterns and consistent fire-and-forget design.

All four mutations properly:

- Verify resource access using `getMetricAndVerifyAccess` (per coding guidelines)
- Set the initial `refreshStatus` before dispatching background tasks
- Use `void` for fire-and-forget execution
- Return immediately to prevent blocking the frontend

The consolidation eliminates ~100 lines of duplication while maintaining the same security and UX guarantees.

As per coding guidelines, always verify resource access using authorization helpers.
349-359: The prefix verification confirms the code is correct. PipelineRunner logs with the `pipeline-step:${step}` format in both `logStep()` and `setStatus()` methods, which matches the filter `startsWith: "pipeline-step:"` in the getProgress query. The completedSteps array will populate correctly, and frontend progress tracking will work as expected.

src/server/api/services/transformation/data-pipeline.ts (6)
35-52: LGTM! Well-defined types for the refactored pipeline. The new types (`RefreshMetricInput`, `RefreshMetricResult`) cleanly separate refresh semantics from the underlying operations, supporting the soft/hard refresh distinction.

66-120: LGTM! Proper error handling for logging operations. The function correctly swallows logging errors with `.catch(() => undefined)` (lines 99, 116), ensuring that database failures during logging don't crash the API fetch flow. This is the same pattern recommended in past reviews.

174-271: LGTM! Transformer management with proper optimistic concurrency. The function correctly:

- Uses `templateId` consistently for lookups and upserts
- Generates and tests transformer code outside transactions (preventing connection pool exhaustion)
- Handles AI generation failures with retry logic
- Uses upsert to handle race conditions (line 257)

448-488: LGTM! Chart updates with proper error isolation. The chart update logic correctly:

- Fetches fresh data points after the ingestion phase completes
- Uses dynamic imports to lazy-load chart transformer dependencies (lines 466-468)
- Isolates errors per chart (lines 472-485) so one chart failure doesn't affect others
- Logs chart-specific errors without failing the entire pipeline

This design maintains resilience when updating multiple charts.

594-611: LGTM! Backward-compatible wrapper maintains public API. The `ingestMetricData` function is now a thin wrapper around `fetchTransformAndSave`, maintaining backward compatibility for existing callers while benefiting from the centralized implementation. The function signature is preserved and behavior remains equivalent.

133-146: No issue with timestamp generation in snapshot mode. The concern about overlapping timestamps from rapid calls is invalid. In snapshot mode, `deleteMany({ where: { metricId } })` (line 137) atomically deletes all existing data for the metric before inserting the new batch in the same transaction. Each call completely replaces the prior data, eliminating any risk of timestamp overlap. The sequential timestamps (incrementing by milliseconds) are an intentional design pattern that preserves ordering within the batch while avoiding unique constraint violations.

Likely an incorrect or invalid review comment.
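A minimal sketch of the snapshot-replace transaction described in that comment, assuming a Prisma `metricDataPoint` model with `metricId`, `value`, and `timestamp` fields; the model and field names are assumptions for illustration, not the repository's exact schema.

```ts
import { PrismaClient } from "@prisma/client";

const db = new PrismaClient();

// Snapshot mode: atomically drop every existing point for the metric, then insert the
// new batch with millisecond-offset timestamps so ordering is preserved without
// colliding on any uniqueness constraint over (metricId, timestamp).
async function replaceSnapshot(metricId: string, values: number[]): Promise<void> {
  const base = Date.now();
  await db.$transaction([
    db.metricDataPoint.deleteMany({ where: { metricId } }),
    db.metricDataPoint.createMany({
      data: values.map((value, i) => ({
        metricId,
        value,
        timestamp: new Date(base + i), // +i ms keeps the batch ordered
      })),
    }),
  ]);
}
```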
feat: add animated PipelineProgressDisplay component
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/hooks/use-pipeline-status.ts (1)
114-126: Progress calculation may show 0% for completed pipelines. When a pipeline completes successfully but the server returns an empty `completedSteps` array (e.g., fast-path completions), line 123's logic `data?.completedSteps?.length ? 100 : 0` evaluates to `0` instead of `100`.

Suggested fix

```diff
- progressPercent: data?.completedSteps?.length ? 100 : 0,
+ progressPercent: data !== undefined ? 100 : 0,
```

This distinguishes "no data yet" (0%) from "pipeline complete with response" (100%).
🧹 Nitpick comments (2)
src/components/pipeline-progress-display.tsx (1)
121-153: Smooth progress bar with spring-based interpolation. Using `useSpring` with `useTransform` provides smooth value transitions rather than jarring jumps. The shimmer effect adds visual polish during loading.

Consider adding `will-change: transform` to the shimmer element if performance issues arise on lower-end devices, as infinite animations can be costly.

src/hooks/use-pipeline-status.ts (1)

89-105: Callback stability consideration. The `useEffect` dependencies include `onComplete` and `onError`. If parent components pass inline arrow functions, this effect will re-run on every render. Consider documenting that callers should memoize callbacks with `useCallback`, or wrap them internally with `useRef` to stabilize them.
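A small sketch of the `useRef` variant of that suggestion; the hook name and callback signatures are illustrative, not the actual `usePipelineStatus` API.

```tsx
import { useEffect, useRef } from "react";

// Store the latest callbacks in refs so the polling effect never needs them in its
// dependency array, even when callers pass inline arrow functions on every render.
export function useStableCallbacks(
  onComplete?: () => void,
  onError?: (message: string) => void,
) {
  const onCompleteRef = useRef(onComplete);
  const onErrorRef = useRef(onError);

  useEffect(() => {
    onCompleteRef.current = onComplete;
    onErrorRef.current = onError;
  }, [onComplete, onError]);

  // Consumers call onCompleteRef.current?.() when the pipeline transitions to done.
  return { onCompleteRef, onErrorRef };
}
```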
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
- src/app/metric/_components/base/MetricDialogBase.tsx
- src/components/pipeline-progress-display.tsx
- src/components/pipeline-progress.tsx
- src/hooks/use-pipeline-progress.ts
- src/hooks/use-pipeline-status.ts
- src/hooks/use-wait-for-pipeline.ts
💤 Files with no reviewable changes (2)
- src/hooks/use-pipeline-progress.ts
- src/hooks/use-wait-for-pipeline.ts
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import ordering
Use TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
src/components/pipeline-progress-display.tsxsrc/hooks/use-pipeline-status.tssrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/**/*/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Client Components must use
`import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/components/**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Place colocated components in
`_components/` folders next to their parent component
Files:
src/components/pipeline-progress-display.tsxsrc/components/pipeline-progress.tsx
**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Use Tailwind CSS 4 for styling with shadcn/ui and Radix UI primitive components
Files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/dashboard/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/dashboard/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/metric/_components/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
src/app/metric/_components/**/*.tsx: Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Files:
src/app/metric/_components/base/MetricDialogBase.tsx
🧠 Learnings (14)
📓 Common learnings
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to **/*canvas*.{ts,tsx} : Use React Flow primitives (BaseNode, BaseHandle, ZoomSlider) from src/components/react-flow/ for consistent canvas styling
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.tsx : Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Applied to files:
src/components/pipeline-progress-display.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*/*.tsx : Client Components must use `import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Applied to files:
src/components/pipeline-progress.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Consolidate role node duplication: extract shared RoleNodeTemplate from role-node.tsx and public-role-node.tsx with isEditable prop instead of maintaining separate components
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Avoid modifying `enrichNodesWithRoleData` flow without understanding the complete canvas serialization logic for saving/loading React Flow nodes to the database
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : React Flow nodes must store minimal data (e.g., just `roleId`); fetch full Role data from TanStack Query cache in the Node component to keep canvas and sidebars in sync
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*.tsx : Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
🧬 Code graph analysis (5)
src/components/pipeline-progress-display.tsx (2)
src/hooks/use-pipeline-status.ts (2)
CompletedStep (10-15), usePipelineStatus (59-159)

src/lib/utils.ts (1)
cn(4-6)
src/hooks/use-pipeline-status.ts (1)
src/trpc/react.tsx (1)
api(28-28)
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (1)
src/components/pipeline-progress-display.tsx (1)
PipelineProgressDisplay(161-230)
src/components/pipeline-progress.tsx (1)
src/hooks/use-pipeline-status.ts (1)
usePipelineStatus(59-159)
src/app/metric/_components/base/MetricDialogBase.tsx (1)
src/hooks/use-pipeline-status.ts (1)
usePipelineStatus(59-159)
🔇 Additional comments (14)
src/components/pipeline-progress.tsx (1)
6-6: Clean migration to the new unified hook. The import and hook usage correctly switch from `usePipelineProgress` to `usePipelineStatus`. The hook call signature with `{ metricId, enabled: isActive }` aligns with the new API, and all consumed properties (`isProcessing`, `currentStepDisplayName`, `progressPercent`, `completedSteps`, `currentStep`) are correctly typed in the returned `PipelineStatus` interface.

Also applies to: 20-23

src/app/metric/_components/base/MetricDialogBase.tsx (1)

101-115: Well-structured pipeline status integration with proper callback handling. The hook usage correctly wires `onComplete` to advance the dialog flow and `onError` to reset state with user feedback. The destructured `pipelineError` is properly consumed in the UI at line 226. The error callback appropriately clears `pipelineMetricId` to stop polling, preventing stale state.

src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx (1)

399-415: Correct delegation of progress display to the chart component. Adding `metricId` to `DashboardMetricChart` enables the chart to internally manage its progress overlay via `PipelineProgressDisplay`. This consolidation removes the need for the drawer to separately render `PipelineProgress`, aligning with the PR's goal of establishing a single source of truth for pipeline UI.

src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (2)

104-104: Well-designed prop additions with sensible defaults. The `showProgressOverlay` prop (defaulting to `true`) provides flexibility for parent components that may want to handle progress display separately. Making `metricId` required ensures metric-scoped progress tracking works correctly.

Also applies to: 120-122, 124-139

786-794: Appropriate conditional rendering for the progress overlay. The condition `showProgressOverlay && (isProcessing || loadingPhase) && !hasChartData` ensures the animated progress display only appears when there's no existing chart to show, avoiding visual interference with rendered charts. When `hasChartData` is true, the simpler overlay at lines 775-784 provides feedback without obscuring the chart.

src/components/pipeline-progress-display.tsx (3)
1-57: Clean animation setup with proper spring physics. The animation variants are well-configured with appropriate stiffness and damping values for smooth, natural motion. The staggered children animation (`staggerChildren: 0.08`) provides a polished sequential reveal effect.

59-119: Polished step indicator components. `CompletedStepItem` with the rotating checkmark animation and `CurrentStepItem` with the pulsing ring create clear visual distinction between step states. The duration timing aligns well with expected pipeline step intervals.

161-229: Well-structured main component with clear responsibility boundaries. The component correctly delegates all state management to `usePipelineStatus` and focuses purely on presentation. Early return when `!progress.isProcessing` prevents unnecessary DOM elements. The variant-based height classes provide appropriate sizing for different contexts.

src/hooks/use-pipeline-status.ts (2)

66-79: Solid ref-based transition detection and query setup. The use of refs for tracking previous processing state and preventing duplicate callback invocations is a robust pattern. The non-null assertion `metricId!` at line 74 is safe because the query is disabled when `!shouldQuery` (which requires `!!metricId`).

128-158: Clean progress computation when processing. The pipeline type detection and step count calculation provide accurate progress tracking. Capping at 95% while processing is a good UX pattern that reserves 100% for confirmed completion. The mapping of completed steps to include display names via `getStepShortName` maintains consistency with the UI.

src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (4)
18-21: Clean import updates and step name consolidation. The import path change for `PipelineStepName` aligns with the PR's goal of consolidating pipeline types. The `as const` assertion on `INITIAL_STEPS` ensures proper type narrowing for the step names.

Also applies to: 42-47

124-143: Well-integrated pipeline status hook with appropriate callbacks. The `usePipelineStatus` hook replaces the previous manual polling approach. The `onComplete` callback correctly invalidates both the general chart cache and the team-specific cache when applicable. The `onError` callback with `duration: 10000` ensures error messages remain visible long enough for users to read.

145-170: Correct state derivation from pipeline status. The effect at lines 146-150 properly clears `activeOperation` when processing completes. The `isProcessing` and `isRegeneratingPipeline` derivations correctly combine multiple signals (active operation, mutation pending states, pipeline status, and refresh status from the metric).

The `loadingPhase` derivation at lines 167-170 correctly prioritizes the live `currentStep` from `pipelineStatus` over the fallback `INITIAL_STEPS` values.

405-420: Proper metricId propagation to chart component. Adding `metricId={metric.id}` enables the chart to internally manage its pipeline progress overlay via `PipelineProgressDisplay`, completing the consolidation of progress UI rendering.
- Remove activeOperation state and complex loadingPhase fallback logic
- Pass pipelineStatus as prop to PipelineProgressDisplay (eliminates duplicate polling)
- Consolidate isProcessing/isRegeneratingPipeline into single pipelineStatus prop
- Remove unused transformerInfo query and regenerateIngestion mutation
- Reduce dashboard-metric-card from ~477 to ~350 lines
- Remove usePipelineStatus hook and pipeline-progress-display components
- Simplify MetricDialogBase to close immediately after creation
- Add centralized batch polling in DashboardClient (sketched below)
- Pass isProcessing as simple boolean to chart components
- Preserve refreshStatus in optimistic updates for loading indicator
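For context on the centralized batch polling bullet above, a minimal client-side sketch: one TanStack Query poll for all in-flight metrics, enabled only while something is processing. The `api.pipeline.getBatchProgress` call and the shape of its input are assumptions based on the review discussion below, not a verified API.

```tsx
"use client";

import { api } from "@/trpc/react";

// Poll batch progress for every metric that is currently processing (assumed endpoint).
export function useBatchPipelineProgress(processingMetricIds: string[]) {
  return api.pipeline.getBatchProgress.useQuery(
    { metricIds: processingMetricIds },
    {
      enabled: processingMetricIds.length > 0, // no polling when nothing is running
      refetchInterval: 1000, // 1s cadence, matching the setup described in the review
    },
  );
}
```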
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/server/api/routers/pipeline.ts (1)
396-406: Bug: endpoint prefix mismatch may still exist. The query filters for `endpoint: { startsWith: "pipeline-step:" }` (Line 401), but as noted in a past review, if `logPipelineEvent` writes logs with a `"transformer:"` prefix, `completedSteps` will always be empty. Verify that the logging code now uses the `"pipeline-step:"` prefix, or update this filter to match the actual prefix.

```bash
#!/bin/bash
# Verify what endpoint prefix is used when logging pipeline events
rg -n "endpoint:" --type ts -C3 | grep -E "(transformer:|pipeline-step:)"
```
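One way to make that mismatch impossible is a shared prefix constant used by both the writer and the reader; a minimal sketch, where the logging helper and the filter shape are illustrative assumptions rather than the repository's exact signatures.

```ts
// Shared prefix keeps logPipelineEvent-style writers and the progress query in sync.
export const PIPELINE_STEP_PREFIX = "pipeline-step:" as const;

// Writer side: tag every step log with the shared prefix.
export function stepEndpoint(step: string): string {
  return `${PIPELINE_STEP_PREFIX}${step}`;
}

// Reader side: the getProgress-style query filters on the same constant.
export const completedStepsFilter = {
  endpoint: { startsWith: PIPELINE_STEP_PREFIX },
};

// stepEndpoint("generate-ingestion-transformer")
// -> "pipeline-step:generate-ingestion-transformer"
```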
🧹 Nitpick comments (3)
src/server/api/routers/pipeline.ts (1)
34-57: Consider runtime validation for task-specific required fields. The `BackgroundTaskConfig` interface makes ingestion-only fields (`templateId`, `integrationId`, `connectionId`) and chart-only fields (`dashboardChartId`, `metricName`) optional, but `runBackgroundTask` uses non-null assertions (`!`) on them in Lines 100-104 and 131-136. If a caller incorrectly invokes with `type: "ingestion-only"` but omits required fields, this will throw at runtime.

🔎 Suggested approach: Add runtime guards

```diff
 case "ingestion-only": {
+  if (!config.templateId || !config.integrationId || !config.connectionId) {
+    throw new Error("ingestion-only requires templateId, integrationId, and connectionId");
+  }
   await db.dataIngestionTransformer
     .delete({ where: { templateId: metricId } })
     .catch(() => null);
```

Alternatively, consider discriminated union types for `BackgroundTaskConfig` to get compile-time safety.

src/app/dashboard/[teamId]/_components/dashboard-client.tsx (1)
65-115: Consider using useMemo for processingMetricIds stability. The `useEffect` dependency on `processingMetricIds` could cause frequent re-runs since the array reference changes whenever `dashboardCharts` changes, even if the actual IDs are the same. This may trigger unnecessary processing in the effect body.

However, since the effect early-returns when `!batchProgress` and the actual logic is guarded, this is likely acceptable in practice.

🔎 Optional: Stabilize with JSON comparison or Set

```ts
const processingMetricIdsKey = useMemo(
  () => processingMetricIds.join(','),
  [processingMetricIds],
);
// Then use processingMetricIdsKey in dependency array
```

src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx (1)
106-108: Unused destructured props detected. `_isUpdating` (Line 106) and `_onRegenerate` (Line 108) are destructured with underscore prefix indicating intentional non-use, but they're still defined as required props. Consider removing them from the interface if truly unused, or using them if intended.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/metric/_components/base/MetricDialogBase.tsx
- src/components/pipeline-progress.tsx
- src/hooks/use-metric-mutations.ts
- src/server/api/routers/pipeline.ts
💤 Files with no reviewable changes (1)
- src/components/pipeline-progress.tsx
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import orderingUse TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/server/api/routers/pipeline.tssrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
src/app/dashboard/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
src/app/dashboard/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
src/**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
src/**/*/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Client Components must use
import { api } from '@/trpc/react'for standard HTTP/Hooks wrapper
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Use Tailwind CSS 4 for styling with shadcn/ui and Radix UI primitive components
Files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
src/app/metric/_components/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
src/app/metric/_components/**/*.tsx: Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Files:
src/app/metric/_components/base/MetricDialogBase.tsx
src/server/api/routers/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/server/api/routers/**/*.ts: Always verify resource access using authorization helpers (getMetricAndVerifyAccess, getRoleAndVerifyAccess, getTeamAndVerifyAccess) to ensure resources belong to the user's organization in tRPC procedures
Use protectedProcedure or workspaceProcedure for new tRPC procedures instead of public procedures
Invalidate cache tags after mutations using invalidateCacheByTags(ctx.db, [tagName]) to maintain consistency
src/server/api/routers/**/*.ts: In tRPC procedures, never assume authentication; always use helpers likegetTeamAndVerifyAccessto verify access before operations
Be careful when modifying team save logic related toEditSessionmodel, which handles locking for concurrent team editing
Files:
src/server/api/routers/pipeline.ts
🧠 Learnings (18)
📓 Common learnings
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/server/api/routers/pipeline.tssrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.tsx : Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/server/api/routers/pipeline.tssrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/server/api/routers/pipeline.tssrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.{ts,tsx} : Use auto-save system with debounce (2s delay) that serializes nodes/edges to tRPC mutation, with beforeunload sendBeacon fallback for unsaved changes
Applied to files:
src/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Applied to files:
src/hooks/use-metric-mutations.tssrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/metric/_components/base/MetricDialogBase.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : React Flow nodes must store minimal data (e.g., just `roleId`); fetch full Role data from TanStack Query cache in the Node component to keep canvas and sidebars in sync
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Consolidate role node duplication: extract shared RoleNodeTemplate from role-node.tsx and public-role-node.tsx with isEditable prop instead of maintaining separate components
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*.tsx : Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Avoid modifying `enrichNodesWithRoleData` flow without understanding the complete canvas serialization logic for saving/loading React Flow nodes to the database
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-card.tsxsrc/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.{ts,tsx} : For new canvas node types: create component in teams/[teamId]/_components/, add to TeamNode union in types/canvas.ts, register in nodeTypes in team-canvas.tsx, and update serialization in canvas-serialization.ts
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*/*.tsx : Client Components must use `import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Dual API pattern separates Server Components (using `@/trpc/server` with direct DB calls) from Client Components (using `@/trpc/react` with HTTP), eliminating HTTP overhead for server-side operations
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to **/*.{ts,tsx} : Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to prisma/schema.prisma : When modifying the Prisma schema, especially for Metric Transformers, ensure AI-generated code generation and execution logic is updated accordingly
Applied to files:
src/server/api/routers/pipeline.ts
🧬 Code graph analysis (3)
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (1)
src/lib/utils.ts (1)
cn(4-6)
src/hooks/use-metric-mutations.ts (1)
src/trpc/react.tsx (1)
api(28-28)
src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx (1)
src/lib/utils.ts (1)
cn(4-6)
🔇 Additional comments (25)
src/app/metric/_components/base/MetricDialogBase.tsx (1)
135-154: Clean dialog structure with render prop pattern.The refactored dialog follows the centralized polling approach well. The controlled/uncontrolled open state pattern and render prop for children provide good flexibility for provider-specific dialogs. Based on learnings, this base component properly supports the
[Provider]MetricDialog.tsx+[Provider]MetricContent.tsxpattern.src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (5)
66-81: LGTM! Simplified loading state prop.The transition from
loadingPhasetoisProcessingboolean streamlines the component's public API. Default value offalseensures backward compatibility for callers that don't pass this prop.
108-113: Good performance improvement for chart key generation.Replacing JSON serialization with a simple string concatenation using
chartTypeandlengthis more efficient. This maintains chart re-render behavior while avoiding the cost of serializing potentially large data arrays.
571-574: Clean conditional styling.Using
cn()with the transition utility and conditional opacity correctly applies the pending state visual feedback.
615-624: LGTM! Clear processing indicator logic.The badge now correctly shows "Saving..." when
isPendingis true (optimistic update in progress) and "Processing..." when onlyisProcessingis true (background pipeline running).
711-728: LGTM! Consistent empty state handling.The mutually exclusive conditions for empty chart data with/without processing state are clear and correctly render appropriate UI feedback.
src/server/api/routers/pipeline.ts (2)
322-363: LGTM! Well-designed batch progress endpoint.The
getBatchProgressendpoint efficiently fetches status for multiple metrics in a single query. Returning an empty object for empty input avoids unnecessary DB calls, and verifying organization ownership ensures proper authorization.
64-155: Good consolidation of background task logic.The
runBackgroundTaskfunction successfully unifies all pipeline operations with consistent error handling, cache invalidation, and status updates. The switch-based dispatch is clear and maintainable.src/hooks/use-metric-mutations.ts (3)
89-98: LGTM! Proper handling of refreshStatus for loading indicator.Preserving
refreshStatusfrom the server response (with fallback to"fetching-api-data") ensures the loading indicator displays correctly while the background pipeline runs.
134-135: Verify cache staleness without onSettled invalidation.The
createmutation relies on centralized polling to update the cache, butcreateManual(Line 259-264) anddelete(Line 316-321) still useonSettledfor targeted invalidation. This inconsistency could cause stale data if the polling interval is too long or if centralized polling doesn't cover thecreatepath.Consider adding similar targeted invalidation for consistency:
onSettled: () => { if (teamId) { void utils.dashboard.getDashboardCharts.invalidate({ teamId }); } },
326-334: LGTM! Simplified mutation definitions.Removing
onSuccesshandlers fromrefreshandregeneratemutations aligns with the centralized polling model where the parent component (DashboardClient) handles cache invalidation upon completion detection.src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx (3)
13-27: LGTM! Clear component documentation and query setup.The docstring clearly explains the single source of truth pattern, and the dual query setup with explicit loading/error state tracking follows best practices.
36-50: LGTM! Clean loading skeleton.The skeleton grid provides good visual feedback during initial load. The hardcoded count of 4 skeletons is reasonable as an estimate.
52-64: LGTM! Proper error state handling.The error state correctly handles both query errors and missing data cases, providing actionable feedback to users.
src/app/dashboard/[teamId]/_components/dashboard-client.tsx (4)
14-19: LGTM! Well-defined pipeline progress type.The
PipelineProgressDatainterface is clean and provides all necessary status information for child components.
42-59: LGTM! Efficient batch polling setup.Deriving
processingMetricIdsfrom server data ensures the polling query only runs when needed. The 1-second refetch interval provides responsive updates without excessive server load.
100-111: LGTM! Appropriate error notification.Showing toast errors for failed pipelines with a 10-second duration gives users sufficient time to read the message. Invalidating cache on both success and failure ensures UI stays in sync.
118-140: LGTM! Clean status map construction.The
pipelineStatusMapcorrectly merges batch progress data with server-siderefreshStatusfor metrics not yet in the batch response. This handles the race condition where a metric starts processing after the last batch poll.src/app/dashboard/[teamId]/_components/dashboard-metric-drawer.tsx (2)
214-228: LGTM! Simplified disabled state logic.Using
isProcessingalone for disabled states is cleaner than the previous combined conditions. The spinner animation on the RefreshCw icon provides clear visual feedback.
339-378: LGTM! Consistent isProcessing propagation.The
isProcessingflag is correctly passed toDashboardMetricChartandGoalProgressDisplay, ensuring consistent loading states across the drawer UI.src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (5)
48-51: LGTM! Good prop additions for centralized polling. The `pipelineStatus` and `teamId` props enable the parent-driven status model. Optional typing with `?` maintains backwards compatibility for contexts that don't use centralized polling.

83-85: LGTM! Robust isProcessing derivation. Combining `metric.refreshStatus` (server source of truth) with `pipelineStatus?.isProcessing` (polled status) ensures accurate loading state regardless of data freshness.
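A tiny sketch of that combined derivation, using the prop and field names from the comment above; the exact types are assumptions for illustration.

```tsx
interface PipelineStatusLike {
  isProcessing: boolean;
}

// Combine the persisted server flag with the live polled status.
export function deriveIsProcessing(
  refreshStatus: string | null,
  pipelineStatus?: PipelineStatusLike,
): boolean {
  return Boolean(refreshStatus) || Boolean(pipelineStatus?.isProcessing);
}
```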
96-106: LGTM! Targeted cache invalidation. Using `teamId` for scoped invalidation is more efficient than global invalidation, especially in multi-team organizations.

181-190: LGTM! Simplified regeneration handler. The `handleRegenerate` function is now a clean wrapper around the regenerate mutation with appropriate error handling.

283-296: LGTM! Clean chart component integration. The `DashboardMetricChart` receives all necessary props including the new `isProcessing` flag, completing the centralized status flow from parent to visualization.
```diff
   const handleSubmit = async (data: MetricCreateInput) => {
     setError(null);

     try {
       // Create the metric with optimistic update (handled by hook)
-      const realDashboardChart = await createMutation.mutateAsync({
+      await createMutation.mutateAsync({
         ...data,
         teamId,
       });

-      // Store metric info for goal setup after pipeline completes
-      setCreatedMetricId(realDashboardChart.metric.id);
-      setCreatedMetricName(data.name);
-
-      // Start watching the pipeline progress
-      setPipelineMetricId(realDashboardChart.metric.id);
-      setStep("processing");
+      // Close dialog immediately - optimistic card is already on dashboard
+      // The optimistic update already added the card with refreshStatus
+      // Dashboard centralized polling will handle status updates
+      setOpen(false);
       onSuccess?.();
     } catch {
       // Error handling and rollback handled by hook
       toast.error("Failed to create KPI");
     }
   };
```
Error state is set but never populated.

The error state is cleared at line 100 but never set to an actual error message in the catch block. This means:

- The `error` prop passed to children (line 148) will always be `null`
- The error display at line 151 will never render

Either populate the error state or remove the unused state/prop entirely.

🔎 Option 1: Populate error state

```diff
- } catch {
+ } catch (err) {
    // Error handling and rollback handled by hook
+   const message = err instanceof Error ? err.message : "Failed to create KPI";
+   setError(message);
    toast.error("Failed to create KPI");
  }
```

🔎 Option 2: Remove unused error state if relying solely on toast

```diff
 export function MetricDialogBase({
   // ...
 }: MetricDialogBaseProps) {
   const [internalOpen, setInternalOpen] = useState(false);
-  const [error, setError] = useState<string | null>(null);
+  const error = null; // Error shown via toast only
   // ...
   const handleSubmit = async (data: MetricCreateInput) => {
-    setError(null);
-
     try {
       // ...
     } catch {
       toast.error("Failed to create KPI");
     }
   };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
const handleSubmit = async (data: MetricCreateInput) => {
  setError(null);

  try {
    // Create the metric with optimistic update (handled by hook)
    await createMutation.mutateAsync({
      ...data,
      teamId,
    });

    // Close dialog immediately - optimistic card is already on dashboard
    // The optimistic update already added the card with refreshStatus
    // Dashboard centralized polling will handle status updates
    setOpen(false);
    onSuccess?.();
  } catch (err) {
    // Error handling and rollback handled by hook
    const message = err instanceof Error ? err.message : "Failed to create KPI";
    setError(message);
    toast.error("Failed to create KPI");
  }
};
```
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/server/api/services/transformation/data-pipeline.ts (2)
313-337: Fix transformer lookup to use `templateId` instead of `metricId`.

Lines 318 and 331 both incorrectly use `input.metricId` where `input.templateId` should be used:

- Line 318: Passes `input.metricId` as the first parameter to `getOrCreateDataIngestionTransformer`, but that function expects `templateId` (see line 174). Multiple metrics using the same template should share one transformer.
- Line 331: Queries with `where: { templateId: input.metricId }`, which looks up the wrong field. Should be `where: { templateId: input.templateId }`.

🔎 Proposed fix

```diff
  if (input.generateTransformer) {
    const { transformer, isNew } = await getOrCreateDataIngestionTransformer(
-     input.metricId,
+     input.templateId,
      input.integrationId,
      template,
      fetchResult.data,
      input.endpointConfig,
    );
    if (!transformer) {
      return { success: false, error: "Failed to get or create transformer" };
    }
    transformerCode = transformer.transformerCode;
    transformerCreated = isNew;
  } else {
    const transformer = await db.dataIngestionTransformer.findUnique({
-     where: { templateId: input.metricId },
+     where: { templateId: input.templateId },
    });
    if (!transformer) {
      return { success: false, error: "No transformer found for this metric" };
    }
    transformerCode = transformer.transformerCode;
  }
```
414-426: Status and step name mismatch may confuse progress tracking.

Line 415 sets status to "fetching-api-data" but line 416 runs the step as "generate-ingestion-transformer". If the frontend displays pipeline progress based on status, users will see a misleading step name.

🔎 Recommended alignment

Either remove the redundant `setStatus` call (since `runner.run` sets its own status):

```diff
- await runner.setStatus("fetching-api-data");
  transformResult = await runner.run("generate-ingestion-transformer", () =>
```

Or align the status name with the step:

```diff
- await runner.setStatus("fetching-api-data");
+ await runner.setStatus("generate-ingestion-transformer");
  transformResult = await runner.run("generate-ingestion-transformer", () =>
```
src/server/api/routers/pipeline.ts (1)
64-155: Approve the consolidated background task runner with a note on non-null assertions.The centralized
runBackgroundTaskfunction is well-structured with proper error handling and cache invalidation. However, the non-null assertions on lines 100-104 and 131-136 could cause runtime errors if the caller omits required fields for a specific task type.Consider using discriminated unions to make the type system enforce required fields per task type:
🔎 Suggested type-safe approach
-type BackgroundTaskType = - | "soft-refresh" - | "hard-refresh" - | "ingestion-only" - | "chart-only"; - -interface BackgroundTaskConfig { - metricId: string; - type: BackgroundTaskType; - organizationId: string; - teamId?: string; - // Ingestion-only fields - templateId?: string; - integrationId?: string; - connectionId?: string; - endpointConfig?: Record<string, string>; - // Chart-only fields - dashboardChartId?: string; - metricName?: string; - metricDescription?: string; - chartType?: string; - cadence?: "DAILY" | "WEEKLY" | "MONTHLY"; - selectedDimension?: string; -} +interface BaseTaskConfig { + metricId: string; + organizationId: string; + teamId?: string; +} + +interface RefreshTaskConfig extends BaseTaskConfig { + type: "soft-refresh" | "hard-refresh"; +} + +interface IngestionOnlyTaskConfig extends BaseTaskConfig { + type: "ingestion-only"; + templateId: string; + integrationId: string; + connectionId: string; + endpointConfig?: Record<string, string>; +} + +interface ChartOnlyTaskConfig extends BaseTaskConfig { + type: "chart-only"; + dashboardChartId: string; + metricName: string; + metricDescription?: string; + chartType?: string; + cadence?: "DAILY" | "WEEKLY" | "MONTHLY"; + selectedDimension?: string; +} + +type BackgroundTaskConfig = + | RefreshTaskConfig + | IngestionOnlyTaskConfig + | ChartOnlyTaskConfig;src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (1)
117-135: Consider consolidating the duplicate setData calls.The
updateMetricMutationupdates cache data for bothundefinedand{ teamId }query keys. While functionally correct, this could be simplified if both queries always use the same shape.🔎 Optional consolidation
const updateMetricMutation = api.metric.update.useMutation({ onSuccess: (updatedMetric) => { - // Update both query variants - utils.dashboard.getDashboardCharts.setData(undefined, (old) => - old?.map((dm) => - dm.metric.id === updatedMetric.id - ? { ...dm, metric: { ...dm.metric, ...updatedMetric } } - : dm, - ), - ); - utils.dashboard.getDashboardCharts.setData({ teamId }, (old) => + const updateFn = (old: DashboardMetrics | undefined) => old?.map((dm) => dm.metric.id === updatedMetric.id ? { ...dm, metric: { ...dm.metric, ...updatedMetric } } : dm, - ), - ); + ); + utils.dashboard.getDashboardCharts.setData(undefined, updateFn); + if (teamId) { + utils.dashboard.getDashboardCharts.setData({ teamId }, updateFn); + } }, });src/server/api/services/transformation/data-pipeline.ts (1)
456-521: Consider aggregating chart transformer errors for visibility.

The current implementation logs chart transformer errors (lines 475-478, 514-517) but doesn't propagate them or include them in the result. While this prevents individual chart failures from breaking the entire pipeline, it makes it difficult for callers to know if any charts failed.

Optional enhancement

Consider collecting chart errors and including them in the result:

```ts
const chartErrors: string[] = [];

// In the error catches:
chartErrors.push(`Chart ${dc.id}: ${chartError}`);

// In the final return:
return {
  success: true,
  dataPointCount: transformResult.dataPoints?.length ?? 0,
  chartErrors: chartErrors.length > 0 ? chartErrors : undefined,
};
```

This would allow the frontend to show warnings for partially successful pipeline runs while still completing the main data refresh.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
🧰 Additional context used
📓 Path-based instructions (8)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Use inline type imports with @trivago/prettier-plugin-sort-imports for consistent import ordering
Use TypeScript 5.9 with strict type checking for all frontend and backend code
Files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/metric/_components/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
src/app/metric/_components/**/*.tsx: Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
src/**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/**/*/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Client Components must use `import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
**/*.tsx
📄 CodeRabbit inference engine (GEMINI.md)
Use Tailwind CSS 4 for styling with shadcn/ui and Radix UI primitive components
Files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/dashboard/**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Files:
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/app/dashboard/**/*.tsx
📄 CodeRabbit inference engine (CLAUDE.md)
Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Files:
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
src/server/api/routers/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/server/api/routers/**/*.ts: Always verify resource access using authorization helpers (getMetricAndVerifyAccess, getRoleAndVerifyAccess, getTeamAndVerifyAccess) to ensure resources belong to the user's organization in tRPC procedures
Use protectedProcedure or workspaceProcedure for new tRPC procedures instead of public procedures
Invalidate cache tags after mutations using invalidateCacheByTags(ctx.db, [tagName]) to maintain consistency
src/server/api/routers/**/*.ts: In tRPC procedures, never assume authentication; always use helpers like `getTeamAndVerifyAccess` to verify access before operations
Be careful when modifying team save logic related to the `EditSession` model, which handles locking for concurrent team editing
Files:
src/server/api/routers/pipeline.ts
🧠 Learnings (15)
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Use MetricDialogBase from base/ for shared metric dialog functionality to reduce duplication across provider-specific dialogs
Applied to files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Metric dialogs should follow the pattern: [Provider]MetricDialog.tsx (dialog wrapper) + [Provider]MetricContent.tsx (form content) and be registered in src/app/metric/_components/index.ts
Applied to files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/metric/_components/**/*.tsx : Consider factory pattern for metric dialogs to reduce the 5 nearly identical wrapper components (currently duplicated across providers)
Applied to files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.tsx : Consolidate dashboard metric card duplication: add readOnly mode to dashboard-metric-card.tsx instead of maintaining separate public-dashboard-metric-card.tsx component
Applied to files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/dashboard/**/*.{ts,tsx} : Three-stage metrics transformation pipeline: Stage 1 (API → DataPoints via DataIngestionTransformer), Stage 2 (DataPoints → ChartConfig via ChartTransformer), Stage 3 (ChartConfig → UI via DashboardMetricChart with Recharts)
Applied to files:
- src/app/metric/_components/manual/ManualMetricContent.tsx
- src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Metrics Pipeline processes data through 3 stages: DataIngestionTransformer (Raw API → MetricDataPoint), ChartTransformer (MetricDataPoint → Chart Config), and Visualization (Chart Config → Recharts UI)
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/lib/metrics/**/*.ts : Metric transformation logic should be organized in `src/lib/metrics/` to support the 3-stage pipeline: Ingestion → Aggregation → Visualization
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*.tsx : Prefer Server Components for initial data fetching; use Client Components ('use client') only for interactivity
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : React Flow nodes must store minimal data (e.g., just `roleId`); fetch full Role data from TanStack Query cache in the Node component to keep canvas and sidebars in sync
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Consolidate role node duplication: extract shared RoleNodeTemplate from role-node.tsx and public-role-node.tsx with isEditable prop instead of maintaining separate components
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/app/teams/[teamId]/**/*.tsx : Avoid modifying `enrichNodesWithRoleData` flow without understanding the complete canvas serialization logic for saving/loading React Flow nodes to the database
Applied to files:
- src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
- src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to src/**/*/*.tsx : Client Components must use `import { api } from '@/trpc/react'` for standard HTTP/Hooks wrapper
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: The Dual API pattern separates Server Components (using `@/trpc/server` with direct DB calls) from Client Components (using `@/trpc/react` with HTTP), eliminating HTTP overhead for server-side operations
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T16:32:46.818Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-20T16:32:46.818Z
Learning: Applies to **/*.{ts,tsx} : Use the tRPC dual API pattern: for server components, directly call tRPC server API (10x faster); for client components, use React hooks with TanStack Query
Applied to files:
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx
📚 Learning: 2025-12-20T22:12:00.566Z
Learnt from: CR
Repo: drifter089/orgOS PR: 0
File: GEMINI.md:0-0
Timestamp: 2025-12-20T22:12:00.566Z
Learning: Applies to prisma/schema.prisma : When modifying the Prisma schema, especially for Metric Transformers, ensure AI-generated code generation and execution logic is updated accordingly
Applied to files:
- src/server/api/routers/pipeline.ts
- src/server/api/services/transformation/data-pipeline.ts
🧬 Code graph analysis (6)
src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx (1)
src/components/ui/scroll-area.tsx (1)
ScrollArea(59-59)
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (1)
src/lib/utils.ts (1)
cn(4-6)
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx (3)
src/trpc/react.tsx (1)
api(28-28)
scripts/analyze-metrics-pipeline.mjs (1)
integrations(33-39)
src/components/ui/skeleton.tsx (1)
Skeleton(13-13)
src/app/dashboard/[teamId]/_components/dashboard-client.tsx (1)
src/trpc/react.tsx (2)
RouterOutputs(42-42)
api(28-28)
src/server/api/services/transformation/data-pipeline.ts (6)
src/lib/metrics/transformer-types.ts (1)
DataPoint(17-21)
src/server/db.ts (1)
db(26-26)
src/server/api/services/transformation/executor.ts (1)
executeDataIngestionTransformer(41-137)
src/lib/pipeline/index.ts (2)
PipelineType(6-6)
createPipelineRunner(22-22)
src/lib/pipeline/types.ts (1)
PipelineType(7-7)
src/server/api/services/transformation/chart-generator.ts (1)
createChartTransformer(147-275)
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (3)
src/app/dashboard/[teamId]/_components/dashboard-client.tsx (1)
PipelineProgressData(15-19)
src/lib/metrics/transformer-types.ts (1)
ChartTransformResult(56-71)
src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (1)
DashboardMetricChart(72-749)
🔇 Additional comments (20)
src/app/metric/_components/google-sheets/GoogleSheetsMetricContent.tsx (1)
362-364: LGTM! Well-structured scroll container refactoring.

The three-layer nesting pattern effectively separates scroll concerns:

- Outer `overflow-hidden` clips content to the rounded border
- `ScrollArea` manages vertical scrolling with custom scrollbar
- Inner container with `overflow-x-auto` and `min-w-0` enables horizontal scrolling for wide tables

The `min-w-0` is particularly important as it allows the container to shrink below content width in flex/grid layouts, ensuring the horizontal scroll activates properly.

src/server/api/routers/pipeline.ts (3)
34-57: Well-structured type definitions for the consolidated background task runner.

The `BackgroundTaskType` union and `BackgroundTaskConfig` interface provide a clear, typed contract for the unified pipeline operations. This is a good consolidation of the four separate wrappers mentioned in the PR objectives.
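To make the consolidation concrete, here is a minimal sketch of a generic fire-and-forget runner in the spirit of `runBackgroundTask`; the `execute` callback, `persistFailure`, and `invalidateDashboard` helpers are placeholders standing in for the actual logic in `pipeline.ts`, not the real implementation.

```ts
// Sketch: one generic background runner instead of four per-type wrappers.
type BackgroundTaskType =
  | "soft-refresh"
  | "hard-refresh"
  | "ingestion-only"
  | "chart-only";

interface BackgroundTaskConfig {
  metricId: string;
  type: BackgroundTaskType;
  organizationId: string;
  teamId?: string;
}

async function persistFailure(metricId: string, message: string): Promise<void> {
  // Placeholder: the real runner records the error on the metric's pipeline state.
  console.error(`pipeline failed for ${metricId}: ${message}`);
}

async function invalidateDashboard(teamId?: string): Promise<void> {
  // Placeholder: the real runner invalidates dashboard cache tags.
  console.log(`invalidate dashboard cache for team ${teamId ?? "all"}`);
}

function runBackgroundTask(
  config: BackgroundTaskConfig,
  execute: (config: BackgroundTaskConfig) => Promise<void>,
): void {
  // Fire-and-forget: the caller returns immediately while this promise runs.
  void (async () => {
    try {
      await execute(config);
    } catch (err) {
      const message = err instanceof Error ? err.message : "Unknown pipeline error";
      await persistFailure(config.metricId, message);
    } finally {
      // Success or failure, finish by invalidating the dashboard so the
      // centralized polling picks up the final state.
      await invalidateDashboard(config.teamId);
    }
  })();
}
```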
318-363: Good addition of batch progress endpoint for centralized polling.

The `getBatchProgress` endpoint is a well-designed addition that enables efficient batch polling for multiple metrics. The authorization check via `findMany` with an `organizationId` filter is appropriate. Returning an empty object for empty input avoids unnecessary database queries.
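For orientation, a hedged sketch of a batch progress lookup keyed by metric id follows; the `PipelineProgress` shape and the injected `loadMetrics` accessor are assumptions that stand in for the real Prisma query and tRPC plumbing.

```ts
// Sketch of a batch progress lookup. Only metrics belonging to the caller's
// organization come back from loadMetrics, so foreign ids simply drop out.
interface PipelineProgress {
  isProcessing: boolean;
  currentStep?: string;
  completedSteps: string[];
}

interface MetricRow {
  id: string;
  organizationId: string;
  refreshStatus: string | null;
}

async function getBatchProgress(
  metricIds: string[],
  organizationId: string,
  loadMetrics: (ids: string[], orgId: string) => Promise<MetricRow[]>,
): Promise<Record<string, PipelineProgress>> {
  // Empty input: skip the query entirely, mirroring the reviewed endpoint.
  if (metricIds.length === 0) return {};

  const rows = await loadMetrics(metricIds, organizationId);

  const result: Record<string, PipelineProgress> = {};
  for (const row of rows) {
    result[row.id] = {
      isProcessing: row.refreshStatus !== null,
      currentStep: row.refreshStatus ?? undefined,
      completedSteps: [],
    };
  }
  return result;
}
```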
396-406: No action needed — the code is correct.

The query filters for `endpoint: { startsWith: "pipeline-step:" }`, and logs with this prefix are already being written in `src/lib/pipeline/runner.ts` (lines 99 and 156) when `metricApiLog` entries are created. The parsing logic at line 418 in this file correctly handles the prefix, so `completedSteps` will properly populate from matching logs.

Likely an incorrect or invalid review comment.
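For reference, recovering step names from prefixed log rows amounts to a filter-and-slice; the log row shape below is assumed for illustration.

```ts
// Sketch: derive completed step names from logs whose endpoint carries the
// "pipeline-step:" prefix discussed above.
interface MetricApiLogRow {
  endpoint: string;
  createdAt: Date;
}

const STEP_PREFIX = "pipeline-step:";

function completedStepsFromLogs(logs: MetricApiLogRow[]): string[] {
  return logs
    .filter((log) => log.endpoint.startsWith(STEP_PREFIX))
    .map((log) => log.endpoint.slice(STEP_PREFIX.length));
}

// Example: endpoints ["pipeline-step:fetch-api-data", "other"] -> ["fetch-api-data"]
```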
src/app/metric/_components/manual/ManualMetricContent.tsx (1)
76-99: Clean simplification of the metric creation flow.

The removal of the multi-step goal flow in favor of immediate dialog closure with a toast directing users to the settings drawer is a good UX simplification. This aligns with the broader refactor to centralize pipeline and progress handling.
src/app/dashboard/[teamId]/_components/dashboard-page-client.tsx (2)
22-39: Well-implemented single source of truth for dashboard queries.

The conditional `refetchInterval` that polls every 3 seconds only when metrics are processing is an efficient approach. This eliminates duplicate queries while ensuring timely updates.
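A minimal sketch of conditional polling using TanStack Query's callback form of `refetchInterval` (v5 signature) is shown below; the query path and the `refreshStatus` field follow the discussion in this thread and may not match the file exactly.

```tsx
"use client";

import { api } from "@/trpc/react";

// Sketch: poll the dashboard query every 3 seconds only while at least one
// metric reports a non-null refreshStatus; otherwise stop polling entirely.
export function useDashboardChartsPolling(teamId: string) {
  return api.dashboard.getDashboardCharts.useQuery(
    { teamId },
    {
      // TanStack Query v5 passes the query object to the callback.
      refetchInterval: (query) =>
        query.state.data?.some((dm) => dm.metric.refreshStatus !== null)
          ? 3000
          : false,
    },
  );
}
```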
47-76: Good loading and error state handling.

The explicit loading UI with Skeleton placeholders and descriptive error messages provide clear user feedback. The guard for `!dashboardCharts || !integrations` after loading completes handles potential undefined states safely.

src/app/dashboard/[teamId]/_components/dashboard-client.tsx (3)
44-63: Efficient centralized batch polling implementation.

The approach of deriving `serverProcessingIds` from server data and using a single `getBatchProgress` query for all processing metrics is a significant improvement over per-card polling. The conditional `refetchInterval` avoids unnecessary network traffic when nothing is processing.
65-109: Solid completion detection logic with proper cleanup.

The effect correctly tracks transitions from processing to non-processing states using `prevProcessingRef`. Error toasts and cache invalidation on completion provide good UX. One minor consideration: the ref update at lines 102-108 correctly reflects current batchProgress state.
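A sketch of the transition-detection idea with a ref follows, assuming a `PipelineProgressData` shape like the one exported from `dashboard-client.tsx`; the toast and invalidation side effects are injected as callbacks rather than reproduced from the component.

```tsx
"use client";

import { useEffect, useRef } from "react";

interface PipelineProgressData {
  isProcessing: boolean;
  error?: string | null;
}

// Sketch: compare the latest batch progress against the set of metric ids
// that were processing on the previous poll to detect completion or failure.
export function useCompletionTransitions(
  batchProgress: Record<string, PipelineProgressData> | undefined,
  onCompleted: (metricId: string) => void,
  onFailed: (metricId: string, error: string) => void,
) {
  const prevProcessingRef = useRef<Set<string>>(new Set());

  useEffect(() => {
    if (!batchProgress) return;

    // Anything previously processing that is no longer processing just finished.
    for (const metricId of prevProcessingRef.current) {
      const current = batchProgress[metricId];
      const stillProcessing = current?.isProcessing ?? false;
      if (!stillProcessing) {
        if (current?.error) onFailed(metricId, current.error);
        else onCompleted(metricId);
      }
    }

    // Rebuild the ref from the latest snapshot for the next comparison.
    const next = new Set<string>();
    for (const [metricId, progress] of Object.entries(batchProgress)) {
      if (progress.isProcessing) next.add(metricId);
    }
    prevProcessingRef.current = next;
  }, [batchProgress, onCompleted, onFailed]);
}
```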
111-134: Good fallback handling for metrics not yet in batch response.

The `pipelineStatusMap` correctly merges batch progress data with server-derived status for metrics that started processing but haven't appeared in the batch response yet. This handles the race condition between triggering a pipeline and the first poll returning.

src/app/dashboard/[teamId]/_components/dashboard-metric-chart.tsx (3)
66-84: Clean prop additions for state-driven rendering.

The new `isProcessing` and `isFetching` optional props with sensible defaults replace the legacy `loadingPhase` machinery. This simplifies the component's API and aligns with the centralized pipeline progress flow.
111-116: Good performance improvement for chart key generation.

Replacing JSON serialization with a simple `${chartType}-${length}` key eliminates unnecessary computation on each render. This is sufficient for triggering re-renders when chart data meaningfully changes.
714-745: Clear state-driven loading overlays.

The three mutually exclusive states (processing, fetching, no data) provide clear user feedback:
- State 1: Active pipeline processing
- State 2: Pipeline done, fetching updated data
- State 3: Genuinely no data
This is more intuitive than the previous loadingPhase approach.
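A compact sketch of rendering the three mutually exclusive placeholder states is shown below; the markup, class names, and copy are illustrative only, not the component's actual JSX.

```tsx
// Sketch: one placeholder component, three exclusive states checked in order.
interface ChartPlaceholderProps {
  isProcessing: boolean;
  isFetching: boolean;
  hasData: boolean;
}

export function ChartPlaceholder({ isProcessing, isFetching, hasData }: ChartPlaceholderProps) {
  if (isProcessing) {
    return <div className="text-muted-foreground">Pipeline running…</div>; // State 1
  }
  if (isFetching) {
    return <div className="text-muted-foreground">Refreshing chart data…</div>; // State 2
  }
  if (!hasData) {
    return <div className="text-muted-foreground">No data yet</div>; // State 3
  }
  return null; // The chart itself renders elsewhere when data is present.
}
```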
src/app/dashboard/[teamId]/_components/dashboard-metric-card.tsx (3)
43-56: Well-designed props for centralized pipeline status.

The addition of `pipelineStatus`, `teamId`, and `isFetching` props enables the component to receive pipeline state from the parent rather than managing internal polling. The optional nature with sensible defaults maintains backward compatibility.
86-88: Correct dual-source processing state derivation.

The OR logic `!!metric.refreshStatus || (pipelineStatus?.isProcessing ?? false)` ensures processing is detected from either the server-provided `refreshStatus` or the parent's batch-polled `pipelineStatus`. This handles timing gaps between status sources.
99-109: Targeted cache invalidation pattern.

The conditional invalidation `if (teamId) { void utils.dashboard.getDashboardCharts.invalidate({ teamId }); }` ensures targeted cache invalidation when teamId is available. As per coding guidelines, this aligns with the pattern of invalidating cache tags after mutations.
1-56: LGTM! Clean type definitions and documentation.

The header comments clearly explain the soft vs hard refresh semantics, and the type definitions are well-structured with appropriate JSDoc comments.
58-120: LGTM! Robust error handling and logging.

The helper functions properly handle null cases and include fallback error handling to prevent logging failures from disrupting the pipeline flow.
166-271: LGTM! Solid transformer management with race condition handling.

The upsert pattern at line 257 correctly handles concurrent transformer creation, and the AI generation includes a retry mechanism with regeneration on test failure.
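As an illustration of the upsert idea only (not the actual call at line 257, and with the model's other required fields omitted), two concurrent pipeline runs for the same template converge on one transformer row instead of racing on create:

```ts
import { db } from "@/server/db";

// Sketch: templateId is the unique key, so concurrent writers either create
// the row or harmlessly overwrite it with equivalent generated code.
async function saveTransformer(templateId: string, transformerCode: string) {
  return db.dataIngestionTransformer.upsert({
    where: { templateId },
    create: { templateId, transformerCode },
    update: { transformerCode },
  });
}
```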
535-644: LGTM! Clean API surface with proper delegation.

The exported functions provide a well-structured API. The new `ingestMetricData` function (lines 627-644) is a clean wrapper that correctly delegates to `fetchTransformAndSave` with appropriate parameters.
```ts
if (!isTimeSeries) {
  // Snapshot mode: replace all data
  const baseTimestamp = dataPoints[0]?.timestamp ?? new Date();
  await db.$transaction([
    db.metricDataPoint.deleteMany({ where: { metricId } }),
    db.metricDataPoint.createMany({
      data: dataPoints.map((dp, index) => ({
        metricId,
        timestamp: new Date(baseTimestamp.getTime() + index),
        value: dp.value,
        dimensions: dimensionsToJson(dp.dimensions),
      })),
    }),
  ]);
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the full context of the saveDataPointsBatch function
cat -n src/server/api/services/transformation/data-pipeline.ts | sed -n '120,170p'

Repository: drifter089/orgOS
Length of output: 1910
🏁 Script executed:
# Let's also search for documentation or comments about snapshot vs time-series metrics
rg -i "snapshot|isTimeSeries" src/server/api/services/transformation/data-pipeline.ts -B 2 -A 2Repository: drifter089/orgOS
Length of output: 1505
🏁 Script executed:
# Search for usage of this function to understand how snapshot data is used
rg "saveDataPointsBatch" --type ts -B 2 -A 2Repository: drifter089/orgOS
Length of output: 952
🏁 Script executed:
# Look for tests that might clarify the expected behavior
fd -e spec.ts -e test.ts | xargs rg -l "saveDataPointsBatch|snapshot.*metric" 2>/dev/null | head -5

Repository: drifter089/orgOS
Length of output: 42
🏁 Script executed:
# Check if there's any type definition or interface for metricDataPoint
rg "metricDataPoint.*timestamp|interface.*MetricDataPoint" --type ts -B 2 -A 5Repository: drifter089/orgOS
Length of output: 42
Add clarifying comment for snapshot mode timestamp manipulation.
Lines 139-141 use artificial sequential timestamps (baseTimestamp + index) for snapshot data points rather than preserving original timestamps. While the code structure suggests this is intentional (time-series mode preserves timestamps at line 157), the design rationale isn't documented. Add a comment explaining why snapshot metrics use synthetic timestamps—whether because original timestamps are non-semantic, to avoid unique constraint conflicts, or for other reasons.
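One possible shape for that comment, shown on a self-contained helper rather than the real transaction; the rationale stated in the comment is the reviewer's suggested explanation, not a confirmed fact about the production code.

```ts
// Sketch of the snapshot branch's timestamp handling with the requested comment.
interface IncomingPoint {
  timestamp?: Date;
  value: number;
}

function snapshotTimestamps(dataPoints: IncomingPoint[]): Date[] {
  // Snapshot metrics replace all prior rows, and their source timestamps are
  // not semantically meaningful, so we synthesize baseTimestamp + index: this
  // keeps insertion order stable and avoids duplicate timestamps per metric.
  // (Time-series metrics keep their original timestamps instead.)
  const baseTimestamp = dataPoints[0]?.timestamp ?? new Date();
  return dataPoints.map((_, index) => new Date(baseTimestamp.getTime() + index));
}
```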
🤖 Prompt for AI Agents
In src/server/api/services/transformation/data-pipeline.ts around lines 133 to
146, the snapshot branch replaces all metric data and assigns artificial
sequential timestamps (baseTimestamp + index) but lacks an explanation; add a
concise comment above the timestamp assignment clarifying that snapshot mode
intentionally overwrites previous data and uses synthetic sequential timestamps
to preserve insertion order and avoid unique-timestamp/constraint conflicts (and
because original timestamps are not semantically meaningful for snapshot
metrics), and reference that the time-series branch preserves original
timestamps (see lines ~157) for contrast.
```ts
// Delete old ingestion transformer (use deleteMany to avoid error when none exists)
await runner.run("delete-ingestion-transformer", async () => {
  await db.dataIngestionTransformer.deleteMany({
    where: { templateId: metric.id },
  });
});
```
Fix transformer deletion to use templateId instead of metricId.
Line 410 attempts to delete the ingestion transformer using metric.id (the metric's ID) instead of metric.templateId. The DataIngestionTransformer table is keyed by templateId, so this will either fail to delete the correct transformer or delete nothing.
🔎 Proposed fix
await runner.run("delete-ingestion-transformer", async () => {
await db.dataIngestionTransformer.deleteMany({
- where: { templateId: metric.id },
+ where: { templateId: metric.templateId! },
});
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
  // Delete old ingestion transformer (use deleteMany to avoid error when none exists)
  await runner.run("delete-ingestion-transformer", async () => {
    await db.dataIngestionTransformer.deleteMany({
-     where: { templateId: metric.id },
+     where: { templateId: metric.templateId! },
    });
  });
```
🤖 Prompt for AI Agents
In src/server/api/services/transformation/data-pipeline.ts around lines 407 to
412, the deletion uses the wrong field (metric.id) when removing
DataIngestionTransformer rows; change the where clause to use templateId:
metric.templateId so the deleteMany targets the transformer keyed by templateId.
Keep the runner.run wrapper and deleteMany call but replace the predicate to {
where: { templateId: metric.templateId } } to ensure the correct records are
removed.
- Simplify use-metric-mutations to use cache invalidation instead of optimistic updates
- Fix completion detection for new metrics by initializing prevProcessingRef properly
- Hard-refresh now deletes and regenerates chart transformers (not just executes)
- Add processing step display in chart placeholder UI
- Set loading state immediately for refresh/regenerate mutations
- Await cache invalidation in dialog before closing
Summary
Refactors the metrics pipeline system to eliminate code duplication and establish a single source of truth for pipeline step definitions.
Key Changes
- `steps.ts` is now the single source for all step definitions and display names
- Removed duplicated `PIPELINE_CONFIGS` and `STEP_DISPLAY_NAMES` (was defined in 3 places)
- Consolidated the background task wrappers into a single `runBackgroundTask`
- Removed the duplicate `runPipelineInBackground` from metric.ts (~100 lines)
- Deleted `configs.ts` and `steps/delete-old-data.ts`
- Hooks now use the shared `detectPipelineType()` and `getPipelineStepCount()`
- Simplified `PipelineRunner` to use composable operations
- Net reduction: ~250 lines of code
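For readers unfamiliar with the pattern, a single-source steps module typically looks roughly like the sketch below; the step ids and helper names are illustrative and not necessarily what `steps.ts` in this PR contains.

```ts
// Sketch: one record drives step ids, display names, and step counts.
export const PIPELINE_STEPS = {
  "fetch-api-data": "Fetching API data",
  "generate-ingestion-transformer": "Generating ingestion transformer",
  "save-data-points": "Saving data points",
  "generate-chart": "Generating chart",
} as const;

export type PipelineStep = keyof typeof PIPELINE_STEPS;

export function getStepDisplayName(step: PipelineStep): string {
  return PIPELINE_STEPS[step];
}

export function getPipelineStepCount(): number {
  return Object.keys(PIPELINE_STEPS).length;
}
```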
Summary by CodeRabbit
Architecture Improvements
New Features
UI/UX Enhancements
Workflow Refinements
Refactor