feat: restructure metrics system with integration-backed templates and full column data support #49
Conversation
- Replace mock data generation with real integration-backed metrics
- Add metric templates for GitHub, Google Sheets, PostHog, and YouTube
- Implement integration-specific metric creator UIs with data preview
- Add full column data extraction for Google Sheets visualization
- Create single source of truth for API endpoints in services layer
- Move API testing to separate /api-test page
- Add dynamic parameter selection for PostHog (cascading dropdowns)
- Store complete column arrays for plotting capabilities
Walkthrough

This PR introduces integration-backed metrics by adding database relationships between metrics and integrations, establishing metric templates for GitHub, Google Sheets, PostHog, and YouTube, and implementing dedicated creation and refresh flows. New TRPC endpoints handle template management and dynamic data fetching. The metrics page refactors to a tab-based UI organized by integration, with integration-specific creator components. An API test page is added to the navigation.
Sequence Diagram

```mermaid
sequenceDiagram
actor User
participant UI as Metric Page / Creator
participant API as Metric Router
participant Template as Template Service
participant Integ as Integration Service
participant DB as Database
User->>UI: Select integration & template
UI->>API: createFromTemplate({ templateId, integrationId })
API->>Template: getMetricTemplate(templateId)
Template-->>API: Template config
API->>DB: Create Metric with integrationId, metricTemplate, endpointConfig
DB-->>API: New Metric
API-->>UI: Success
UI->>UI: Refresh metrics list
rect rgb(220, 240, 255)
note over User,UI: Later: Refresh metric value
User->>UI: Click Refresh on metric card
UI->>API: refreshMetricValue({ id })
API->>DB: Fetch Metric + Template
DB-->>API: Metric data
API->>Template: Resolve endpoint from template
Template-->>API: Endpoint path with placeholders
API->>Integ: Fetch data using Integration connection
Integ-->>API: Raw API response
API->>API: extractValueFromPath() + applyTransformation()
API->>DB: Update currentValue, lastFetchedAt
DB-->>API: Updated
API-->>UI: Success
UI->>UI: Show "Metric refreshed successfully!"
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Actionable comments posted: 7
🧹 Nitpick comments (14)
src/server/api/services/github.ts (1)
23-31: Consider reusing `githubMetricEndpoints` inside `githubEndpoints` to avoid drift

The new `githubMetricEndpoints` values mirror several entries in `githubEndpoints` (e.g., `/user`, `/user/repos`, `/user/followers`). To keep these in sync over time, consider referencing `githubMetricEndpoints` when defining `githubEndpoints` paths instead of repeating literals. Example:

```diff
 {
   label: "User Profile",
-  path: "/user",
+  path: githubMetricEndpoints.USER_PROFILE,
   method: "GET",
   description: "Get authenticated user's profile information",
 },
```

This keeps your metric templates and API test UI aligned with a single source of truth for endpoint paths.
prisma/schema.prisma (1)
97-115: Clarify `Metric` ↔ `Integration` relation semantics and deletion behavior

The new integration-backed fields look consistent, but a couple of points are worth confirming:

Deletion semantics for `Integration`

- `Metric.integration` currently has no explicit `onDelete`, so deleting an `Integration` with associated `Metric` rows will be prevented by the FK constraint.
- If the desired behavior is “metrics remain but lose their backing integration”, you likely want:

```prisma
integration Integration? @relation(
  fields: [integrationId],
  references: [connectionId],
  onDelete: SetNull
)
```

Otherwise, please confirm that blocking deletion while metrics exist is intentional. This choice directly affects how easy it is to revoke integrations without cleaning up metrics first.

Comment / config shape consistency

- `endpointConfig` is documented as e.g. `{videoId: "abc123"}`, but your endpoint placeholders elsewhere use uppercase keys (`{VIDEO_ID}`, `{PROJECT_ID}`, etc.). For future readers it may be clearer to align the example with the actual key casing used in your endpoint templates.

Overall, the added fields (`integrationId`, `metricTemplate`, `endpointConfig`, `lastFetchedAt`) and the `metrics` back-reference on `Integration` line up well with the new integration-backed metric flow.

Also applies to: 140-141
src/server/api/services/youtube.ts (1)
31-38: Metric endpoint templates look consistent with existing YouTube endpoints

The new `youtubeMetricEndpoints` definitions match the patterns used in `youtubeDataEndpoints` (same base path and `VIDEO_ID` placeholder), so they should plug cleanly into the metric template system and `fetchYouTubeData`'s placeholder replacement.

If you want to reduce duplication, you could optionally reference these constants when defining `youtubeDataEndpoints` entries that represent the same calls (e.g., statistics-only video/channel metrics), but it's not required.
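As a rough illustration of that optional deduplication (a sketch only — it assumes `youtubeDataEndpoints` entries share the `{ label, path, method, description }` shape used by the GitHub service endpoints, and `VIDEO_STATISTICS` is a hypothetical key name on `youtubeMetricEndpoints`):

```ts
// Sketch: reference the metric endpoint constant instead of repeating the path literal.
export const youtubeDataEndpoints = [
  {
    label: "Video Statistics",
    // Hypothetical key; use whichever youtubeMetricEndpoints entry matches this call.
    path: youtubeMetricEndpoints.VIDEO_STATISTICS,
    method: "GET",
    description: "Get statistics for a single video ({VIDEO_ID} placeholder)",
  },
  // ...remaining endpoints unchanged
] as const;
```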
src/server/api/services/posthog.ts (1)

16-24: Align PostHog metric endpoint paths with test endpoints (trailing slash mismatch)

`posthogMetricEndpoints` introduces:

```ts
PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events",
PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons",
PROJECTS_LIST: "/api/projects/",
```

but the corresponding entries in `posthogEndpoints` use `/events/` and `/persons/` (with a trailing slash).

This may be fine if PostHog treats both forms identically, but to avoid subtle 301/404 differences it's safer to pick one convention and use it consistently across both the metric templates and the API-test endpoints.

For example, if the trailing-slash versions are what the API expects, you could adjust the metric endpoints to:

```diff
 export const posthogMetricEndpoints = {
-  PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events",
-  PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons",
+  PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events/",
+  PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons/",
   PROJECTS_LIST: "/api/projects/",
 } as const;
```

Please double-check against the PostHog API docs you're targeting to confirm which form is correct.
src/server/api/services/google-sheets.ts (1)
16-24: Good centralization of Google Sheets metric endpoints

The new `googleSheetsMetricEndpoints` map cleanly to your existing placeholder replacement logic in `fetchGoogleSheetsData`, and wiring "Get Spreadsheet" to `SPREADSHEET_METADATA` removes a hard-coded path from the test endpoints.

If you later add metric templates for specific ranges or whole-sheet values, reusing `RANGE_VALUES` and `SHEET_VALUES` there should keep everything consistent.

No functional issues spotted here.

Also applies to: 30-38
src/app/metric/github/github-metric-creator.tsx (1)
23-27: Keep `MetricTemplateId` in sync with backend template IDs

`MetricTemplateId` is defined as a hard-coded union:

```ts
type MetricTemplateId =
  | "github-followers-count"
  | "github-repos-count"
  | "github-stars-total";
```

and used both for the Tabs and in `getMetricInfo`. This works now, but it can easily drift from the server-side `metricTemplates` / `createFromTemplate` input schema if you add or rename templates.

To strengthen type safety and reduce duplication:

- Consider deriving `MetricTemplateId` from your TRPC router input type for `metric.createFromTemplate` (via `inferRouterInputs<AppRouter>["metric"]["createFromTemplate"]["templateId"]`), or
- At least centralize the mapping in a single data structure, e.g.:

```ts
const GITHUB_METRIC_TEMPLATES = {
  "github-followers-count": {
    title: "GitHub Followers",
    description: "Track your total number of GitHub followers",
  },
  "github-repos-count": {
    title: "Public Repositories",
    description: "Track your total count of public repositories",
  },
  "github-stars-total": {
    title: "Repository Stars",
    description: "Track total stars across all your repositories",
  },
} as const;

type MetricTemplateId = keyof typeof GITHUB_METRIC_TEMPLATES;
```

Then both the tabs and `getMetricInfo` can reference `GITHUB_METRIC_TEMPLATES` directly, so adding a new template is a single change.

Please also verify that these string IDs exactly match the server-defined template IDs used by `metric.createFromTemplate`.

Also applies to: 32-37, 56-76, 89-107
src/app/metric/_components/integration-metric-card.tsx (1)
54-94: Consider extracting helpers outside the component for reuse/readability

`getMetricColor` and `formatValue` are pure helpers and don't depend on component state. Pulling them out of `IntegrationMetricCard` (or into a shared metrics util module) would avoid redefinition on each render and make them easier to reuse elsewhere.
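A minimal sketch of that extraction, assuming a new shared module (the module path is hypothetical; `getMetricColor` keeps its current body and `formatValue` would move the same way):

```tsx
// src/app/metric/_lib/metric-display.ts (hypothetical location for shared metric helpers)
export function getMetricColor(
  current: number | null,
  target: number | null,
  type: string,
): string {
  if (!current || !target) return "text-gray-500";

  const percentage = (current / target) * 100;

  if (type === "duration") {
    // For duration, lower is better
    if (percentage <= 100) return "text-green-600";
    if (percentage <= 120) return "text-yellow-600";
    return "text-red-600";
  }
  // For other types, higher is better
  if (percentage >= 100) return "text-green-600";
  if (percentage >= 80) return "text-yellow-600";
  return "text-red-600";
}

// The card then simply imports the helpers:
// import { getMetricColor, formatValue } from "../_lib/metric-display";
```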
src/app/metric/google-sheets/sheets-metric-creator.tsx (1)

123-142: Metric creation guard is fine; explicit name check is optional

`canCreate` already requires `metricName` to be truthy, so `handleCreate` will only run when a name is present. If you want an extra layer of defensive coding, you could add `|| !metricName` to the guard in `handleCreate`, but it's not strictly necessary given the current UI constraints.
src/app/metric/youtube/youtube-metric-creator.tsx (1)

29-75: YouTube metric creator wiring looks correct

The channel/video tab logic, template selection, `canCreate` guard, and `createFromTemplate` mutation payload all look consistent with the other integration creators. Only tiny optional polish would be to trim `videoId` before sending it, but functionally this is solid as-is.

Also applies to: 152-199
src/app/metric/page.tsx (1)
56-63: Align dialog closing behavior to consistently reset form state

Right now:

- Manual create dialog: form is reset on successful create and when clicking “Cancel”, but not when closed via overlay/ESC (`onOpenChange={setIsCreateManualDialogOpen}`).
- Edit dialog: `editingMetricId` and `formData` are reset only on the Cancel button, not when closing via overlay/ESC.

This can leave stale form values and `editingMetricId` around between opens. Not a bug, but for predictability you might wrap dialog closing in helpers that always reset state, and call them from both `onOpenChange` and the Cancel buttons.

Example for the create dialog:

```diff
-const [isCreateManualDialogOpen, setIsCreateManualDialogOpen] = useState(false);
+const [isCreateManualDialogOpen, setIsCreateManualDialogOpen] = useState(false);
+const closeCreateManualDialog = () => {
+  setIsCreateManualDialogOpen(false);
+  setFormData(initialFormData);
+};

 <Dialog
-  open={isCreateManualDialogOpen}
-  onOpenChange={setIsCreateManualDialogOpen}
+  open={isCreateManualDialogOpen}
+  onOpenChange={(open) => {
+    setIsCreateManualDialogOpen(open);
+    if (!open) setFormData(initialFormData);
+  }}
 >
   ...
   <Button
     variant="outline"
-    onClick={() => {
-      setIsCreateManualDialogOpen(false);
-      setFormData(initialFormData);
-    }}
+    onClick={closeCreateManualDialog}
   >
     Cancel
   </Button>
```

The same idea can be applied to the edit dialog.
Also applies to: 358-455, 459-535
src/server/api/services/metric-templates.ts (1)
32-52: Narrow `transformData` to a typed union instead of `string`

Right now `transformData` is a free-form `string`, but the router only understands a small set of transform names (`countRows`, `countEvents`, and the special-cased `extractColumn`). A typo here will silently fall back to the default numeric conversion branch.

Consider introducing a dedicated type and reusing it in the router helpers:

```diff
-export interface MetricTemplate {
+export type MetricTransform = "countRows" | "countEvents" | "extractColumn";
+
+export interface MetricTemplate {
@@
-  // Optional custom data transformation
-  transformData?: string; // Function name for custom transformation
+  // Optional custom data transformation
+  transformData?: MetricTransform; // Supported transformation identifier
```

and update `applyTransformation` (in `metric.ts`) to accept `MetricTransform` instead of `string`. This will give you compile-time safety when adding new templates or transforms.
src/server/api/routers/metric.ts (3)

164-220: Avoid hard-coding Google Sheets endpoints in router

`getSheetStructure` re-specifies the Sheets endpoint as a literal:

```ts
"/v4/spreadsheets/{SPREADSHEET_ID}"
```

even though you already have `googleSheetsMetricEndpoints.SPREADSHEET_METADATA` defined in `google-sheets.ts`, and your templates centralize endpoints there.

To keep a single source of truth (and ease future API changes), consider importing and using the constant here as well:

```diff
-      const response = await fetchGoogleSheetsData(
-        input.connectionId,
-        "/v4/spreadsheets/{SPREADSHEET_ID}",
-        { SPREADSHEET_ID: input.spreadsheetId },
-        "GET",
-      );
+      const response = await fetchGoogleSheetsData(
+        input.connectionId,
+        googleSheetsMetricEndpoints.SPREADSHEET_METADATA,
+        { SPREADSHEET_ID: input.spreadsheetId },
+        "GET",
+      );
```

The same applies to `getSheetPreview` below, which could use the existing `SHEET_VALUES` constant.
421-447: Filter `endpointConfig` to string params before using it as `params`

`params` is currently derived via a cast:

```ts
const params = (metric.endpointConfig as Record<string, string>) ?? {};
```

However, for column-based Sheets metrics you later store `columnData: number[]` and `lastDataFetch: string` into `endpointConfig`. On subsequent refreshes, `params` will therefore contain non-string values (e.g., `columnData: number[]`), which are then passed down into `fetchGoogleSheetsData` / `fetchPostHogData` / `fetchYouTubeData`.

While this works at runtime due to implicit stringification and because there are no `{columnData}` placeholders, it's not type-safe and makes `params` harder to reason about.

You can make this safer by filtering only string-valued keys into the params object:

```diff
-      let endpoint = template.endpoint;
-      const params = (metric.endpointConfig as Record<string, string>) ?? {};
+      let endpoint = template.endpoint;
+      const rawConfig =
+        (metric.endpointConfig ?? {}) as Record<string, unknown>;
+      const params: Record<string, string> = {};
+      for (const [key, value] of Object.entries(rawConfig)) {
+        if (typeof value === "string") {
+          params[key] = value;
+        }
+      }
```

This keeps `endpointConfig` free to store richer metadata while ensuring only valid path parameters are passed to the integration fetchers.
485-521: Validate `COLUMN_INDEX` before using it in the column-extraction path

In the `extractColumn` branch you compute:

```ts
const columnIndex = parseInt(params.COLUMN_INDEX ?? "0");
```

If `COLUMN_INDEX` is accidentally set to a non-numeric string, `parseInt` yields `NaN`, and your loop will silently skip all cells, resulting in `columnData = []` and `latestValue = 0` with no indication of misconfiguration.

You can fail fast with a clear error:

```diff
-      const columnIndex = parseInt(params.COLUMN_INDEX ?? "0");
+      const rawColumnIndex = params.COLUMN_INDEX ?? "0";
+      const columnIndex = Number.parseInt(rawColumnIndex, 10);
+      if (Number.isNaN(columnIndex)) {
+        throw new TRPCError({
+          code: "BAD_REQUEST",
+          message: `Invalid COLUMN_INDEX "${rawColumnIndex}" in endpointConfig`,
+        });
+      }
```

This keeps misconfigured metrics from quietly producing zeroed data.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (15)
- prisma/schema.prisma (2 hunks)
- src/app/api-test/page.tsx (1 hunks)
- src/app/metric/_components/integration-metric-card.tsx (1 hunks)
- src/app/metric/github/github-metric-creator.tsx (1 hunks)
- src/app/metric/google-sheets/sheets-metric-creator.tsx (1 hunks)
- src/app/metric/page.tsx (7 hunks)
- src/app/metric/posthog/posthog-metric-creator.tsx (1 hunks)
- src/app/metric/youtube/youtube-metric-creator.tsx (1 hunks)
- src/components/navbar/NavMenu.client.tsx (3 hunks)
- src/server/api/routers/metric.ts (4 hunks)
- src/server/api/services/github.ts (1 hunks)
- src/server/api/services/google-sheets.ts (1 hunks)
- src/server/api/services/metric-templates.ts (1 hunks)
- src/server/api/services/posthog.ts (1 hunks)
- src/server/api/services/youtube.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
- src/app/metric/github/github-metric-creator.tsx (1)
  - src/trpc/react.tsx (1): `api` (28-28)
- src/app/api-test/page.tsx (1)
  - src/app/api-test/_components/integration-tester.tsx (1): `IntegrationTester` (55-502)
- src/server/api/services/google-sheets.ts (1)
  - src/server/api/services/github.ts (1): `ServiceEndpoint` (14-21)
- src/app/metric/youtube/youtube-metric-creator.tsx (1)
  - src/trpc/react.tsx (1): `api` (28-28)
- src/app/metric/page.tsx (6)
  - src/trpc/react.tsx (1): `api` (28-28)
  - src/app/metric/_components/integration-metric-card.tsx (1): `IntegrationMetricCard` (47-190)
  - src/app/metric/github/github-metric-creator.tsx (1): `GitHubMetricCreator` (28-159)
  - src/app/metric/google-sheets/sheets-metric-creator.tsx (1): `SheetsMetricCreator` (38-356)
  - src/app/metric/posthog/posthog-metric-creator.tsx (1): `PostHogMetricCreator` (30-273)
  - src/app/metric/youtube/youtube-metric-creator.tsx (1): `YouTubeMetricCreator` (29-202)
- src/app/metric/google-sheets/sheets-metric-creator.tsx (1)
  - src/trpc/react.tsx (1): `api` (28-28)
- src/app/metric/posthog/posthog-metric-creator.tsx (1)
  - src/trpc/react.tsx (1): `api` (28-28)
- src/app/metric/_components/integration-metric-card.tsx (3)
  - src/components/ui/card.tsx (5): `Card` (88-88), `CardHeader` (89-89), `CardTitle` (91-91), `CardDescription` (93-93), `CardContent` (94-94)
  - src/components/ui/badge.tsx (1): `Badge` (51-51)
  - src/components/ui/button.tsx (1): `Button` (62-62)
- src/server/api/services/metric-templates.ts (4)
  - src/server/api/services/github.ts (1): `githubMetricEndpoints` (27-31)
  - src/server/api/services/google-sheets.ts (1): `googleSheetsMetricEndpoints` (20-24)
  - src/server/api/services/posthog.ts (1): `posthogMetricEndpoints` (20-24)
  - src/server/api/services/youtube.ts (1): `youtubeMetricEndpoints` (35-38)
- src/server/api/routers/metric.ts (7)
  - src/server/api/trpc.ts (1): `workspaceProcedure` (145-148)
  - src/server/api/services/metric-templates.ts (3): `getAllMetricTemplates` (336-338), `getTemplatesByIntegration` (343-347), `getMetricTemplate` (352-356)
  - src/server/api/utils/authorization.ts (1): `getIntegrationAndVerifyAccess` (284-309)
  - src/server/api/services/google-sheets.ts (1): `fetchGoogleSheetsData` (67-112)
  - src/server/api/services/posthog.ts (1): `fetchPostHogData` (73-118)
  - src/server/api/services/github.ts (1): `fetchGitHubData` (83-119)
  - src/server/api/services/youtube.ts (1): `fetchYouTubeData` (142-225)
🪛 Biome (2.1.2)
src/server/api/routers/metric.ts
[error] 751-751: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🔇 Additional comments (10)
src/components/navbar/NavMenu.client.tsx (2)
9-9: LGTM: Icon import is appropriate.

The `FlaskConical` icon is a fitting choice for API testing functionality and is correctly imported from lucide-react.

66-67: LGTM: Navigation integration is clean and consistent.

The API Testing menu item follows the established pattern for navigation items and is appropriately placed in the Organization dropdown. The active path detection is correctly configured.
Also applies to: 108-115
src/app/api-test/page.tsx (1)
1-19: LGTM: Clean page implementation following Next.js best practices.

The component structure is well-organized with a clear layout and semantic HTML. The server component correctly renders the client-side `IntegrationTester` component, following Next.js 15 App Router patterns.

src/app/metric/posthog/posthog-metric-creator.tsx (1)

30-106: PostHog metric creator flow looks sound and consistent with other integrations

The project/event fetching, validation via `canCreate`, and `createFromTemplate` mutation wiring all look correct and align with the patterns used in the other metric creators. Error and loading states are handled reasonably, and the default metric names are sensible. I don't see any blocking issues here.

Also applies to: 148-270
src/app/metric/page.tsx (2)
68-71: Confirm `integrationId` on metrics aligns with `connectionId` used for filtering

`getMetricsForIntegration` filters metrics by:

```ts
const getMetricsForIntegration = (connectionId: string) =>
  metrics?.filter((m) => m.integrationId === connectionId) ?? [];
```

and tabs are keyed by `integration.connectionId`.

This assumes `metric.integrationId` stores the same value as `integration.connectionId` (rather than, say, the integration type like `"github"` or a numeric PK). If that assumption is wrong, the per-integration metric counts and “Existing Metrics” lists will be empty or mis-grouped.

Please double-check the server/Prisma schema to ensure this mapping is correct; if not, you may want to filter on `m.integration?.connectionId` instead.

Also applies to: 172-175, 206-216, 275-280

177-218: Nice consolidation of per‑integration metric flows and layout

The tabbed layout, per‑integration creators (GitHub/Sheets/PostHog/YouTube), and shared `IntegrationMetricCard` usage make the metrics UX much clearer than a single generic flow. The wiring to `listWithStats`, per‑integration counts, and refresh/delete/edit actions looks cohesive and easy to extend.

Also applies to: 220-273, 275-355
src/server/api/services/metric-templates.ts (1)
158-193: Google Sheets column-data template wiring looks consistent with refresh logic

The `gsheets-column-data` template's configuration lines up with the special-case logic in `refreshMetricValue`:

- `endpoint` uses `SHEET_VALUES`.
- `dataPath: "values"` yields the full 2D values array that the `extractColumn` branch expects.
- `transformData: "extractColumn"` matches the branch that stores `columnData` and derives the latest value.
- `COLUMN_INDEX` is explicitly 0-based and numeric, which the refresh logic uses when extracting the column.

This is a good, explicit contract between the template and the refresh logic; I don't see issues here assuming the Google Sheets response shape remains `values: string[][]`.
src/server/api/routers/metric.ts (3)

98-162: Template-based metric creation flow looks solid

The `createFromTemplate` mutation:

- Validates the template ID and returns a clear `NOT_FOUND` error if missing.
- Uses `getIntegrationAndVerifyAccess` to enforce access to the provided `connectionId`.
- Ensures the integration's `integrationId` matches the template's `integrationId`, preventing mismatched wiring.
- Checks required params based on `template.requiredParams` and returns a single, helpful `BAD_REQUEST` with missing labels.
- Persists `metricTemplate` and `endpointConfig` so refresh can reconstruct the call.

This is a clean, constrained flow; I don't see correctness issues in this block.
280-371: Dynamic PostHog options implementation is clear and constrained

The `fetchDynamicOptions` handler:

- Re-validates integration access via `getIntegrationAndVerifyAccess`.
- Uses explicit `endpoint` values (`posthog-projects`, `posthog-events`) that match your template `dynamicOptionsEndpoint` values.
- Enforces `dependsOnValue` for events and returns an explicit `BAD_REQUEST` when missing.
- Maps responses into `{ label, value }` pairs with appropriate IDs/names.
- Restricts allowed endpoints via a `switch`, rejecting unknown values with `BAD_REQUEST`.

The overall shape looks good and avoids exposing arbitrary proxying through this endpoint.
373-395: `refreshMetricValue` no longer supports non–integration-backed metrics

This guard:

```ts
if (!metric.integrationId || !metric.metricTemplate) {
  throw new TRPCError({
    code: "BAD_REQUEST",
    message: "This metric is not integration-backed",
  });
}
```

means any existing manually-managed metrics can no longer be refreshed via this endpoint (they'll now get `BAD_REQUEST` instead of the previous mock behavior).

If you still have non-integration metrics in production, you may want either:

- A backwards-compatible path (e.g., no-op refresh, or leave them editable only via `update`), or
- A migration that converts/removes legacy metrics and updates any UI that calls `refreshMetricValue`.

Worth double-checking existing data and UI usage before shipping.
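If you go the backwards-compatible route, one minimal sketch (assuming the procedure can simply return the metric unchanged when there is nothing to fetch — the exact return shape the UI expects is an assumption here):

```ts
// Sketch: treat manually-managed metrics as a no-op refresh instead of an error.
if (!metric.integrationId || !metric.metricTemplate) {
  // Nothing to fetch for a manual metric; leave currentValue/lastFetchedAt untouched.
  return metric;
}
```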
```tsx
const getMetricColor = (
  current: number | null,
  target: number | null,
  type: string,
): string => {
  if (!current || !target) return "text-gray-500";

  const percentage = (current / target) * 100;

  if (type === "duration") {
    // For duration, lower is better
    if (percentage <= 100) return "text-green-600";
    if (percentage <= 120) return "text-yellow-600";
    return "text-red-600";
  } else {
    // For other types, higher is better
    if (percentage >= 100) return "text-green-600";
    if (percentage >= 80) return "text-yellow-600";
    return "text-red-600";
  }
};
```
Fix falsy checks so zero values are handled correctly in getMetricColor
getMetricColor currently does:
```ts
if (!current || !target) return "text-gray-500";
```

This treats 0 as “no value”, so metrics with currentValue or targetValue equal to 0 will never be color‑coded, and you also implicitly avoid divide‑by‑zero via the falsy check.

A safer guard is to treat only null/undefined (and an explicit target === 0) as “no comparison”, while allowing 0 as a valid current value:

```diff
-  if (!current || !target) return "text-gray-500";
+  if (current == null || target == null || target === 0) {
+    return "text-gray-500";
+  }
```

This preserves the neutral color when there is no meaningful target (including 0), but correctly colors a current of 0 against a positive target.
🤖 Prompt for AI Agents
In src/app/metric/_components/integration-metric-card.tsx around lines 54 to 74,
the guard uses falsy checks which treat 0 as missing and prevents colorization
and divides; change the guard to only treat null/undefined as missing and treat
a target of 0 as "no comparison" to avoid divide-by-zero — i.e., replace the
falsy check with a nullish check such as if (current == null || target == null
|| target === 0) return "text-gray-500"; so that current === 0 is allowed and
percentage calculation proceeds safely.
```tsx
type MetricTemplateId =
  | "github-followers-count"
  | "github-repos-count"
  | "github-stars-total";

export function GitHubMetricCreator({
  connectionId,
  onSuccess,
}: GitHubMetricCreatorProps) {
  const [templateId, setTemplateId] = useState<MetricTemplateId>(
    "github-followers-count",
  );
  const [metricName, setMetricName] = useState("");
  const [targetValue, setTargetValue] = useState("");
```
Guard against NaN when parsing targetValue before mutation
targetValue is stored as a string and converted with:
```ts
targetValue: targetValue ? parseFloat(targetValue) : undefined,
```

Because `<input type="number">` can temporarily hold values like "-" or "e" while the user is typing, parseFloat can return NaN, which would then be sent to createFromTemplate. Depending on your backend validation, that could either fail unexpectedly or store an invalid float.
A small guard keeps things predictable:
- const handleCreate = () => {
- createMutation.mutate({
- templateId,
- connectionId,
- name: metricName || undefined,
- targetValue: targetValue ? parseFloat(targetValue) : undefined,
- });
- };
+ const handleCreate = () => {
+ const trimmed = targetValue.trim();
+ const parsed =
+ trimmed === "" ? undefined : Number(trimmed);
+
+ // Optionally: prevent submit on invalid number instead of sending NaN
+ if (parsed !== undefined && Number.isNaN(parsed)) {
+ return;
+ }
+
+ createMutation.mutate({
+ templateId,
+ connectionId,
+ name: metricName || undefined,
+ targetValue: parsed,
+ });
+ };

This ensures only valid numbers (or undefined) reach the API.
Also applies to: 47-54
🤖 Prompt for AI Agents
In src/app/metric/github/github-metric-creator.tsx around lines 23 to 37 (and
similarly at 47 to 54), guard against parseFloat returning NaN when converting
the string targetValue before sending it to createFromTemplate: parse the
string, check the result with Number.isFinite (or !Number.isNaN) and only
include the numeric value in the mutation payload if it is a valid finite
number; otherwise pass undefined (or omit the field) so invalid intermediate
input like "-" or "e" does not send NaN to the API.
```tsx
const extractSpreadsheetId = (url: string) => {
  // Extract ID from various Google Sheets URL formats
  const regex = /\/spreadsheets\/d\/([a-zA-Z0-9-_]+)/;
  const match = regex.exec(url);
  return match ? match[1] : null;
};

const handleUrlChange = (url: string) => {
  setSpreadsheetUrl(url);
  const id = extractSpreadsheetId(url);
  if (id) {
    setSpreadsheetId(id);
  }
};
```
Clear stale spreadsheet state when the URL becomes invalid
handleUrlChange only updates spreadsheetId when a new ID is successfully parsed:
const id = extractSpreadsheetId(url);
if (id) {
setSpreadsheetId(id);
}

If the user edits a previously valid URL into an invalid one, the old spreadsheetId (and downstream selections) are kept, so “Fetch Sheets” and metric creation can silently operate on the wrong sheet.
You can avoid this by always resetting the ID (and dependent state) when parsing fails:
const handleUrlChange = (url: string) => {
setSpreadsheetUrl(url);
const id = extractSpreadsheetId(url);
- if (id) {
- setSpreadsheetId(id);
- }
+ setSpreadsheetId(id ?? "");
+ if (!id) {
+ // Clear dependent selections when the URL no longer resolves to a sheet
+ setSelectedSheet("");
+ setSelectedColumn(null);
+ setMetricName("");
+ }
};

This keeps the UI and the actual target spreadsheet in sync.
🤖 Prompt for AI Agents
In src/app/metric/google-sheets/sheets-metric-creator.tsx around lines 89 to
103, handleUrlChange currently only sets spreadsheetId when a valid ID parses,
leaving stale state when the URL becomes invalid; update the function to clear
spreadsheetId (set to null or empty string) and also reset any dependent state
such as selected sheet, selected range, and fetched sheets/list when parsing
fails so the UI and downstream actions (e.g., Fetch Sheets, metric creation)
cannot operate on an old spreadsheet ID.
```tsx
const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(
  null,
);
```
Refresh loading state can be inaccurate with overlapping refreshes
Because refreshingMetricId is a single string:
- Clicking “Refresh” on metric A sets it to A.
- Clicking “Refresh” on metric B before A completes overwrites it to B.
- When A completes, `onSuccess`/`onError` sets it to `null`, so B's card stops showing as refreshing even though its request may still be in flight.

If you expect users to trigger multiple refreshes in quick succession, consider either:

- Treating refresh as strictly single-flight (e.g., disable all refresh buttons while `refreshMutation.isPending`), or
- Tracking a set of refreshing IDs and clearing only the one that finished (a sketch of this follows the diff below).
For example, for single-flight semantics you could simplify to:
-const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(null);
+const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(null);
const handleRefresh = (id: string) => {
- setRefreshingMetricId(id);
- refreshMutation.mutate({ id });
+ setRefreshingMetricId(id);
+ refreshMutation.mutate({ id });
};
// In IntegrationMetricCard usage:
- isRefreshing={refreshingMetricId === metric.id}
+ isRefreshing={refreshMutation.isPending && refreshingMetricId === metric.id}

This keeps the spinner tied to the metric that initiated the current in-flight refresh and makes it clearer that only one refresh is processed at a time.
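If you instead want true per-metric tracking, a minimal sketch of the Set-based alternative (state names are reused from this page; the per-call mutation callback wiring is an assumption):

```tsx
const [refreshingIds, setRefreshingIds] = useState<Set<string>>(new Set());

const handleRefresh = (id: string) => {
  // Functional update so concurrent refreshes don't clobber each other.
  setRefreshingIds((prev) => new Set(prev).add(id));
  refreshMutation.mutate(
    { id },
    {
      onSettled: () => {
        // Clear only the metric that finished, whether it succeeded or failed.
        setRefreshingIds((prev) => {
          const next = new Set(prev);
          next.delete(id);
          return next;
        });
      },
    },
  );
};

// In IntegrationMetricCard usage:
// isRefreshing={refreshingIds.has(metric.id)}
```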
Also applies to: 107-117, 167-170, 247-254, 341-348
🤖 Prompt for AI Agents
In src/app/metric/page.tsx around lines 64-67 (and also apply changes at
107-117, 167-170, 247-254, 341-348): the current single-string
refreshingMetricId is overwritten by overlapping refreshes causing unrelated
cards to stop showing a spinner when an earlier refresh completes; replace that
single string with a collection (e.g., a Set<string> or string[] in state) and
update handlers to add the metric id when a refresh starts and remove only that
id on success/error (use functional state updates to avoid race conditions),
then change the UI check from refreshingMetricId === id to collection.has(id)
(or includes) so each metric’s spinner reflects its own in-flight request;
alternatively, if you want strict single-flight instead, disable all refresh
buttons while refreshMutation.isPending and keep a single id only while the
global mutation is pending.
```ts
getSheetPreview: workspaceProcedure
  .input(
    z.object({
      connectionId: z.string(),
      spreadsheetId: z.string(),
      sheetName: z.string(),
      maxRows: z.number().optional().default(10), // Preview first 10 rows
    }),
  )
  .query(async ({ ctx, input }) => {
    // Verify integration access
    await getIntegrationAndVerifyAccess(
      ctx.db,
      input.connectionId,
      ctx.user.id,
      ctx.workspace,
    );

    try {
      // Fetch all non-empty data from the sheet
      // Using just the sheet name returns all non-empty rows/columns
      const response = await fetchGoogleSheetsData(
        input.connectionId,
        "/v4/spreadsheets/{SPREADSHEET_ID}/values/{SHEET_NAME}",
        {
          SPREADSHEET_ID: input.spreadsheetId,
          SHEET_NAME: input.sheetName,
        },
        "GET",
      );

      const responseData = response.data as {
        range?: string;
        majorDimension?: string;
        values?: string[][];
      };

      const allRows = responseData.values ?? [];
      const headers = allRows.length > 0 ? allRows[0] : [];
      const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows

      return {
        headers: headers ?? [],
        rows: dataRows,
        totalRows: allRows.length - 1, // Exclude header
        totalColumns: (headers ?? []).length,
      };
    } catch (error) {
      throw new TRPCError({
        code: "INTERNAL_SERVER_ERROR",
        message:
          error instanceof Error
            ? error.message
            : "Failed to fetch sheet preview",
      });
    }
  }),
```
🧩 Analysis chain
Fix negative totalRows for empty sheets in getSheetPreview
If the sheet has no data (values is undefined or []), allRows.length will be 0, and:
```ts
totalRows: allRows.length - 1, // Exclude header
```

computes to -1, which is not meaningful.
You can guard this and simplify the return:
- const allRows = responseData.values ?? [];
- const headers = allRows.length > 0 ? allRows[0] : [];
- const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
-
- return {
- headers: headers ?? [],
- rows: dataRows,
- totalRows: allRows.length - 1, // Exclude header
- totalColumns: (headers ?? []).length,
- };
+ const allRows = responseData.values ?? [];
+ const headers = allRows.length > 0 ? allRows[0] : [];
+ const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
+
+ const totalRows =
+ allRows.length > 0 ? allRows.length - 1 : 0; // Exclude header, never negative
+
+ return {
+ headers,
+ rows: dataRows,
+ totalRows,
+ totalColumns: headers.length,
+ };

This avoids negative counts while preserving existing behavior for non-empty sheets.
I'll verify this review comment by examining the code and checking for potential issues.
🏁 Script executed:
#!/bin/bash
# 1. First, verify the code at the specified lines
echo "=== Checking code at lines 222-278 in metric.ts ==="
sed -n '222,278p' src/server/api/routers/metric.ts
# 2. Search for usages of getSheetPreview to understand how totalRows is used
echo -e "\n=== Searching for getSheetPreview usages ==="
rg -n "getSheetPreview" --type ts --type tsx -C 3
# 3. Search for totalRows usage to see if negative values cause issues
echo -e "\n=== Searching for totalRows references ==="
rg -n "totalRows" --type ts --type tsx -C 2Length of output: 1910
Let me refine the search with corrected syntax:
🏁 Script executed:
#!/bin/bash
# Search for getSheetPreview usages with corrected syntax
echo "=== Searching for getSheetPreview usages ==="
rg -n "getSheetPreview" -t ts -C 3
# Search for totalRows references with corrected syntax
echo -e "\n=== Searching for totalRows references ==="
rg -n "totalRows" -t ts -C 2
# Search for any validation or guards on numeric values
echo -e "\n=== Checking if there are any guards on returned data ==="
rg -n "totalRows\s*(<|>|<=|>=|===|!==)" -t ts -C 2Length of output: 2575
Fix negative totalRows for empty sheets in getSheetPreview
When a sheet has no data (values is undefined or []), totalRows computes to -1. This value displays in the UI ("Showing 0 of -1 data rows"), which is semantically incorrect.
The proposed fix is valid: guard totalRows to return 0 for empty sheets:
- const allRows = responseData.values ?? [];
- const headers = allRows.length > 0 ? allRows[0] : [];
- const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
-
- return {
- headers: headers ?? [],
- rows: dataRows,
- totalRows: allRows.length - 1, // Exclude header
- totalColumns: (headers ?? []).length,
- };
+ const allRows = responseData.values ?? [];
+ const headers = allRows.length > 0 ? allRows[0] : [];
+ const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
+
+ const totalRows =
+ allRows.length > 0 ? allRows.length - 1 : 0; // Exclude header, never negative
+
+ return {
+ headers,
+ rows: dataRows,
+ totalRows,
+ totalColumns: headers.length,
+ };

This ensures correct behavior for both empty and non-empty sheets while preventing negative counts in the UI.
🤖 Prompt for AI Agents
In src/server/api/routers/metric.ts around lines 222 to 278, totalRows can
become -1 when the sheet is empty because it uses allRows.length - 1; change the
computation to guard against negatives (e.g. set totalRows = Math.max(0,
allRows.length - 1)) and ensure headers and totalColumns are derived from the
safe headers array (empty array when no rows) so the API never returns negative
row counts.
```ts
/**
 * Extract a value from a nested object using a dot-notation path
 * @param data The object to extract from
 * @param path Dot-notation path (e.g., "items.0.statistics.viewCount")
 * @returns The extracted value or undefined
 */
function extractValueFromPath(data: unknown, path: string): unknown {
  const keys = path.split(".");
  let current: unknown = data;

  for (const key of keys) {
    if (current === null || current === undefined) {
      return undefined;
    }

    if (typeof current === "object" && !Array.isArray(current)) {
      current = (current as Record<string, unknown>)[key];
    } else if (Array.isArray(current)) {
      const index = parseInt(key, 10);
      if (!isNaN(index)) {
        current = current[index];
      } else {
        return undefined;
      }
    } else {
      return undefined;
    }
  }

  return current;
}

/**
 * Apply transformation to extracted data
 * @param value The extracted value
 * @param transformType The transformation type
 * @returns The transformed numeric value
 */
function applyTransformation(value: unknown, transformType: string): number {
  switch (transformType) {
    case "countRows":
      // Count number of rows in a Google Sheets response
      if (Array.isArray(value)) {
        return value.length;
      }
      return 0;

    case "countEvents":
      // Count number of events in PostHog response
      if (Array.isArray(value)) {
        return value.length;
      }
      return 0;

    default:
      // Default: convert to number
      const numValue = parseFloat(String(value));
      return isNaN(numValue) ? 0 : numValue;
  }
}
```
🧩 Analysis chain
Address noSwitchDeclarations lint warning in applyTransformation
The linter warning about noSwitchDeclarations is due to this declaration inside the default case:
default:
// Default: convert to number
const numValue = parseFloat(String(value));
return isNaN(numValue) ? 0 : numValue;

In JavaScript/TypeScript, declarations in switch cases share the same scope, which can lead to subtle issues; your tooling is correctly flagging this.
Wrapping the case body in a block fixes it and makes the intent explicit:
- default:
- // Default: convert to number
- const numValue = parseFloat(String(value));
- return isNaN(numValue) ? 0 : numValue;
+ default: {
+ // Default: convert to number
+ const numValue = parseFloat(String(value));
+ return Number.isNaN(numValue) ? 0 : numValue;
+ }(Using Number.isNaN is also a bit safer than the global isNaN.)
extractValueFromPath above looks good and matches the simple dot-notation paths used by your templates.
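For readers skimming the diff, a quick usage illustration of these two helpers (the data here is made up):

```ts
const data = { items: [{ statistics: { viewCount: "1234" } }] };

// Dot-notation path with an array index:
extractValueFromPath(data, "items.0.statistics.viewCount"); // -> "1234"

// Known transform names count array entries; anything else falls back to numeric conversion:
applyTransformation([["r1"], ["r2"], ["r3"]], "countRows"); // -> 3
applyTransformation("1234", "unknown"); // -> 1234 via parseFloat
```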
I'll verify the claims in this review comment by checking the linter rule documentation and inspecting the actual code.

Now let me search for information about Number.isNaN vs global isNaN:
Now let me verify the actual code and linter configuration in the repository:
🏁 Script executed:
#!/bin/bash
# Check the actual source file to verify the exact code
if [ -f "src/server/api/routers/metric.ts" ]; then
echo "=== File exists, showing lines 695-754 ==="
sed -n '695,754p' src/server/api/routers/metric.ts
else
echo "File not found at expected path"
fi

Length of output: 1668
🏁 Script executed:
#!/bin/bash
# Check for linter configuration files
echo "=== Checking for linter configurations ==="
if [ -f ".eslintrc.json" ]; then
echo "Found .eslintrc.json"
cat .eslintrc.json
fi
if [ -f ".eslintrc.js" ]; then
echo "Found .eslintrc.js"
fi
if [ -f "biome.json" ]; then
echo "Found biome.json"
cat biome.json
fi
if [ -f ".eslintrc.cjs" ]; then
echo "Found .eslintrc.cjs"
fi

Length of output: 1834
Wrap default case in braces to fix noSwitchDeclarations lint violation
The default case in applyTransformation (line 745-747) contains a const declaration without a block, which violates Biome's noSwitchDeclarations rule. Wrap the case body in braces:
default: {
// Default: convert to number
const numValue = parseFloat(String(value));
return isNaN(numValue) ? 0 : numValue;
}Note: Using Number.isNaN instead of global isNaN is a good practice (avoids type coercion), though your linter config has noGlobalIsNan disabled so it won't enforce this.
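For illustration, the coercion difference looks like this:

```ts
isNaN("foo");        // true  – the string is coerced to a number (NaN) first
Number.isNaN("foo"); // false – no coercion; only an actual NaN value returns true
Number.isNaN(NaN);   // true
```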
🧰 Tools
🪛 Biome (2.1.2)
[error] 751-751: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.
The declaration is defined in this switch clause:
Safe fix: Wrap the declaration in a block.
(lint/correctness/noSwitchDeclarations)
🤖 Prompt for AI Agents
In src/server/api/routers/metric.ts around lines 695 to 754, the default case
inside applyTransformation declares a const without a block which trips the
noSwitchDeclarations lint rule; wrap the entire default case body in braces
(i.e., default: { /* ... */ } ) so the const is scoped inside a block, and while
editing you can replace isNaN(numValue) with Number.isNaN(numValue) if you want
to avoid the global isNaN usage.
```ts
export const githubTemplates: MetricTemplate[] = [
  {
    templateId: "github-stars-total",
    label: "Repository Stars",
    description: "Total number of stars across all your repositories",
    integrationId: "github",
    metricType: "number",
    defaultUnit: "stars",
    endpoint: githubMetricEndpoints.USER_PROFILE,
    dataPath: "public_repos", // Note: This gets repo count, stars need custom aggregation
    requiredParams: [],
  },
```
🧩 Analysis chain
GitHub “Repository Stars” template currently returns repo count, not stars
This template is labeled and described as “Repository Stars”, but it hits USER_PROFILE and reads public_repos, which is the repository count (as your comment notes), not a star total. That’s likely to confuse users and produce misleading metrics.
Before shipping, either:
- Align the template with the actual value (rename to something like “Public Repositories” and adjust unit), or
- Implement the custom aggregation required to truly compute a total star count (e.g., via `/user/repos` and summing `stargazers_count`).
As-is, the template ID/label/unit don’t match the underlying data.
I'll verify the review comment by checking the GitHub API behavior and the codebase implementation.
🏁 Script executed:
#!/bin/bash
# Search for metric template processing and any aggregation logic
rg -n "public_repos|githubMetricEndpoints|MetricTemplate" --type=ts -A 3 -B 3 src/server/api/services/metric-templates.ts
# Check for any custom aggregation or data transformation logic
rg -n "stargazers|aggregate|sum.*star" --type=ts src/
# Look for endpoint handlers that might process the data
rg -n "dataPath|endpoint" --type=ts src/server/api/ | head -20Length of output: 5624
🌐 Web query:
GitHub API USER_PROFILE endpoint public_repos field what does it return
💡 Result:
The public_repos field is an integer giving the number of public repositories that the user (or organization) has—i.e., how many repositories are publicly visible on that account. [1][2]
Sources:
[1] GitHub REST API — Get a user (users endpoint) (docs).
[2] GitHub REST API user/org response examples showing the public_repos field.
---
🏁 Script executed:
```shell
#!/bin/bash
# Check if transformData is used anywhere and how templates are processed
rg -n "transformData" --type=ts src/
# Search for template processing logic that might apply transforms
rg -n "processMetric|applyTransform|dataPath.*transform" --type=ts src/server/api/
# Look for where metric values are extracted from API responses
rg -n "extract.*dataPath|dataPath.*extract" --type=ts src/server/api/
```

Length of output: 1281
---
🏁 Script executed:
```shell
#!/bin/bash
# Get the exact github-stars-total template definition
rg -n "templateId.*github-stars-total" --type=ts -A 15 src/server/api/services/metric-templates.ts
# Check extractValueFromPath function to understand data extraction
rg -n "function extractValueFromPath|const extractValueFromPath" --type=ts -A 20 src/server/api/
```

Length of output: 2005
Fix template metadata or implement star aggregation for "github-stars-total"
This template returns the public repository count (via public_repos), not a total of stars. The template ID, label ("Repository Stars"), unit ("stars"), and description all claim to report stars, but the underlying data is a repository count. Users will see misleading metrics.
Either rename the template to "Public Repositories" to match the actual data, or implement custom aggregation to sum stargazers_count from all user repositories (via /user/repos endpoint).
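A rough sketch of what that aggregation could look like, assuming the existing `fetchGitHubData(connectionId, endpoint, params, method)` helper and ignoring pagination (a real implementation would need to walk all pages of `/user/repos`):

```ts
// Sketch: sum stargazers_count across the authenticated user's repositories.
const response = await fetchGitHubData(
  connectionId,
  "/user/repos?per_page=100",
  {},
  "GET",
);
const repos = response.data as Array<{ stargazers_count?: number }>;
const totalStars = repos.reduce(
  (sum, repo) => sum + (repo.stargazers_count ?? 0),
  0,
);
```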
🤖 Prompt for AI Agents
In src/server/api/services/metric-templates.ts around lines 58-69, the
"github-stars-total" template currently reads public_repos (repo count) but is
labeled/described/typed as total stars; either 1) rename the template to reflect
repo count: update templateId to something like "github-public-repos", label to
"Public Repositories", description to "Total number of public repositories", and
defaultUnit to an appropriate unit (e.g., "repos"), or 2) implement real star
aggregation: change endpoint to the user repos endpoint (e.g., /user/repos),
remove single dataPath and instead add a small aggregation handler that sums
stargazers_count across all returned repos (handle pagination), set metricType
and defaultUnit to match "stars", and ensure requiredParams and auth scopes
include repo read access; pick one approach and update the template metadata or
implement the aggregation accordingly.