
Conversation

@drifter089 (Owner) commented Nov 17, 2025

…d full column data support

  • Replace mock data generation with real integration-backed metrics
  • Add metric templates for GitHub, Google Sheets, PostHog, and YouTube
  • Implement integration-specific metric creator UIs with data preview
  • Add full column data extraction for Google Sheets visualization
  • Create single source of truth for API endpoints in services layer
  • Move API testing to separate /api-test page
  • Add dynamic parameter selection for PostHog (cascading dropdowns)
  • Store complete column arrays for plotting capabilities

Summary by CodeRabbit

Release Notes

  • New Features

    • Added support for integration-backed metrics from GitHub, Google Sheets, PostHog, and YouTube with template-based creation.
    • Redesigned metrics page with tab-based organization by integration type.
    • Added API testing interface for endpoint validation.
    • Introduced metric refresh functionality to fetch real-time data from integrations.
  • Improvements

    • Enhanced metric management interface with streamlined creation workflow.

@vercel

vercel bot commented Nov 17, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: org-os · Deployment: Ready · Preview · Comment · Updated (UTC): Nov 17, 2025 1:12pm

@coderabbitai

coderabbitai bot commented Nov 17, 2025

Walkthrough

This PR introduces integration-backed metrics by adding database relationships between metrics and integrations, establishing metric templates for GitHub, Google Sheets, PostHog, and YouTube, and implementing dedicated creation and refresh flows. New TRPC endpoints handle template management and dynamic data fetching. The metrics page is refactored into a tab-based UI organized by integration, with integration-specific creator components. An API test page is added to the navigation.

Changes

Cohort / File(s) Change Summary
Database Schema
prisma/schema.prisma
Added integrationId, metricTemplate, endpointConfig, and lastFetchedAt fields to Metric model. Added bidirectional relationship between Metric and Integration. Added index on integrationId.
Metric Creator Components
src/app/metric/github/github-metric-creator.tsx, src/app/metric/google-sheets/sheets-metric-creator.tsx, src/app/metric/posthog/posthog-metric-creator.tsx, src/app/metric/youtube/youtube-metric-creator.tsx
New client components for template-based metric creation from GitHub, Google Sheets, PostHog, and YouTube integrations. Each component manages template selection, dynamic option fetching, configuration, and submission via createFromTemplate mutation.
Metric Display
src/app/metric/_components/integration-metric-card.tsx
New component for rendering metric cards with value formatting, integration badges, refresh action (for integration metrics), edit/delete controls, and last-fetched timestamp.
Metric Management Page
src/app/metric/page.tsx
Refactored from dialog-based to tab-based UI with All Metrics and per-integration tabs. Integrated metric creator components. Added manual metric creation dialog. Added refresh flow for integration metrics. Replaced metric grid with per-integration sections using IntegrationMetricCard.
API Testing
src/app/api-test/page.tsx
New page for API testing with IntegrationTester component.
Metric Template Definitions
src/server/api/services/metric-templates.ts
New module defining MetricTemplate and MetricTemplateParam interfaces. Provides templates for four integrations (GitHub, Google Sheets, PostHog, YouTube) with endpoint configs, data paths, and required parameters. Exports registry functions: getAllMetricTemplates, getTemplatesByIntegration, getMetricTemplate, getIntegrationIdsWithTemplates.
Integration Service Endpoints
src/server/api/services/github.ts, src/server/api/services/google-sheets.ts, src/server/api/services/posthog.ts, src/server/api/services/youtube.ts
Each service exports a typed *MetricEndpoints constant defining endpoint templates for metric-related API calls.
Metric Router
src/server/api/routers/metric.ts
Expanded with six new endpoints: getMetricTemplates, getTemplatesByIntegration, createFromTemplate, getSheetStructure, getSheetPreview, fetchDynamicOptions. Enhanced refreshMetricValue to fetch from integrations using templates instead of mock data. Added internal helpers extractValueFromPath and applyTransformation. Removed generateMockValue helper.
Navigation
src/components/navbar/NavMenu.client.tsx
Added API Testing menu item with FlaskConical icon linking to /api-test. Updated active path logic.

Sequence Diagram

sequenceDiagram
    actor User
    participant UI as Metric Page / Creator
    participant API as Metric Router
    participant Template as Template Service
    participant Integ as Integration Service
    participant DB as Database
    
    User->>UI: Select integration & template
    UI->>API: createFromTemplate({ templateId, integrationId })
    API->>Template: getMetricTemplate(templateId)
    Template-->>API: Template config
    API->>DB: Create Metric with integrationId, metricTemplate, endpointConfig
    DB-->>API: New Metric
    API-->>UI: Success
    UI->>UI: Refresh metrics list
    
    rect rgb(220, 240, 255)
    note over User,UI: Later: Refresh metric value
    User->>UI: Click Refresh on metric card
    UI->>API: refreshMetricValue({ id })
    API->>DB: Fetch Metric + Template
    DB-->>API: Metric data
    API->>Template: Resolve endpoint from template
    Template-->>API: Endpoint path with placeholders
    API->>Integ: Fetch data using Integration connection
    Integ-->>API: Raw API response
    API->>API: extractValueFromPath() + applyTransformation()
    API->>DB: Update currentValue, lastFetchedAt
    DB-->>API: Updated
    API-->>UI: Success
    UI->>UI: Show "Metric refreshed successfully!"
    end
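For readers following the refresh flow above, the "Resolve endpoint from template" step boils down to substituting {UPPER_CASE} placeholders (e.g. {SPREADSHEET_ID}, {VIDEO_ID}) into the template's endpoint path. A minimal sketch of what that substitution might look like, assuming the helper name and the URL-encoding behavior (neither is confirmed by the diff):

// Sketch only: resolves "{KEY}" placeholders in an endpoint template.
// The real logic lives inside the integration fetchers (fetchGitHubData,
// fetchGoogleSheetsData, etc.); the name resolveEndpoint is hypothetical.
function resolveEndpoint(
  template: string,
  params: Record<string, string>,
): string {
  return template.replace(/\{([A-Z_]+)\}/g, (match, key: string) => {
    const value = params[key];
    if (value === undefined) {
      throw new Error(`Missing value for placeholder ${match}`);
    }
    return encodeURIComponent(value);
  });
}

// resolveEndpoint("/v4/spreadsheets/{SPREADSHEET_ID}/values/{SHEET_NAME}",
//   { SPREADSHEET_ID: "abc123", SHEET_NAME: "Sheet1" });
// => "/v4/spreadsheets/abc123/values/Sheet1"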

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~70 minutes

  • High-density logic: The metric router expansion introduces intricate flows for template resolution, dynamic option fetching, and data transformation with multiple integration paths.
  • Multiple new components: Four new metric creator components, each with distinct logic for fetching options, validating state, and calling mutations. IntegrationMetricCard adds UI complexity with conditional rendering and formatting utilities.
  • Large refactored page: The metrics page refactor replaces significant UI structure and introduces tab-based navigation, per-integration filtering, and multiple dialog states.
  • Template system: New metric-templates service with multiple template definitions and registry logic requires careful review of data structure correctness and endpoint configurations.
  • Database migration: Schema changes establishing integrationId relationships may have migration and backfill implications not visible in the diff.

Areas requiring extra attention:

  • src/server/api/routers/metric.ts: Verify integration access checks, error handling for missing templates/integrations, and data extraction logic in extractValueFromPath and applyTransformation
  • src/app/metric/page.tsx: Confirm per-integration filtering logic and state management for refresh loading states
  • src/server/api/services/metric-templates.ts: Validate all endpoint templates match actual service APIs and parameter placeholders align with expected formats
  • Integration-specific creator components: Cross-check dynamic option fetching logic and field validation rules for each integration

Poem

🐰 Templates bloom in gardens fair,
Metrics dance from services rare,
Tabs arrange the data true,
GitHub, Sheets, and PostHog too!
Integration's rabbit hops right through! 🌟

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 50.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: The title accurately reflects the main structural change: introducing integration-backed templates and full column data support for metrics, replacing the mock generation approach.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (14)
src/server/api/services/github.ts (1)

23-31: Consider reusing githubMetricEndpoints inside githubEndpoints to avoid drift

The new githubMetricEndpoints values mirror several entries in githubEndpoints (e.g., /user, /user/repos, /user/followers). To keep these in sync over time, consider referencing githubMetricEndpoints when defining githubEndpoints paths instead of repeating literals.

Example:

  {
    label: "User Profile",
-   path: "/user",
+   path: githubMetricEndpoints.USER_PROFILE,
    method: "GET",
    description: "Get authenticated user's profile information",
  },

This keeps your metric templates and API test UI aligned with a single source of truth for endpoint paths.

prisma/schema.prisma (1)

97-115: Clarify Metric ↔ Integration relation semantics and deletion behavior

The new integration-backed fields look consistent, but a couple of points are worth confirming:

  1. Deletion semantics for Integration

    • Metric.integration currently has no explicit onDelete, so deleting an Integration with associated Metric rows will be prevented by the FK constraint.
    • If the desired behavior is “metrics remain but lose their backing integration”, you likely want:
    integration Integration? @relation(
      fields: [integrationId],
      references: [connectionId],
      onDelete: SetNull
    )

    Otherwise, please confirm that blocking deletion while metrics exist is intentional. This choice directly affects how easy it is to revoke integrations without cleaning up metrics first.

  2. Comment / config shape consistency

    • endpointConfig is documented as e.g. {videoId: "abc123"}, but your endpoint placeholders elsewhere use uppercase keys ({VIDEO_ID}, {PROJECT_ID}, etc.). For future readers it may be clearer to align the example with the actual key casing used in your endpoint templates.

Overall, the added fields (integrationId, metricTemplate, endpointConfig, lastFetchedAt) and the metrics back-reference on Integration line up well with the new integration-backed metric flow.

Also applies to: 140-141

src/server/api/services/youtube.ts (1)

31-38: Metric endpoint templates look consistent with existing YouTube endpoints

The new youtubeMetricEndpoints definitions match the patterns used in youtubeDataEndpoints (same base path and VIDEO_ID placeholder), so they should plug cleanly into the metric template system and fetchYouTubeData’s placeholder replacement.

If you want to reduce duplication, you could optionally reference these constants when defining youtubeDataEndpoints entries that represent the same calls (e.g., statistics-only video/channel metrics), but it’s not required.

src/server/api/services/posthog.ts (1)

16-24: Align PostHog metric endpoint paths with test endpoints (trailing slash mismatch)

posthogMetricEndpoints introduces:

PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events",
PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons",
PROJECTS_LIST: "/api/projects/",

but the corresponding entries in posthogEndpoints use /events/ and /persons/ (with a trailing slash).

This may be fine if PostHog treats both forms identically, but to avoid subtle 301/404 differences it’s safer to pick one convention and use it consistently across both the metric templates and the API-test endpoints.

For example, if the trailing-slash versions are what the API expects, you could adjust the metric endpoints to:

 export const posthogMetricEndpoints = {
-  PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events",
-  PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons",
+  PROJECT_EVENTS: "/api/projects/{PROJECT_ID}/events/",
+  PROJECT_PERSONS: "/api/projects/{PROJECT_ID}/persons/",
   PROJECTS_LIST: "/api/projects/",
 } as const;

Please double-check against the PostHog API docs you’re targeting to confirm which form is correct.
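If you'd rather enforce the convention in code than by agreement alone, a tiny normalizer at the constant-definition site is one option (a sketch; the helper name is made up here, and which form PostHog treats as canonical still needs confirming):

const withTrailingSlash = (path: string): string =>
  path.endsWith("/") ? path : `${path}/`;

// withTrailingSlash("/api/projects/{PROJECT_ID}/events")
// => "/api/projects/{PROJECT_ID}/events/"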

src/server/api/services/google-sheets.ts (1)

16-24: Good centralization of Google Sheets metric endpoints

The new googleSheetsMetricEndpoints map cleanly to your existing placeholder replacement logic in fetchGoogleSheetsData, and wiring "Get Spreadsheet" to SPREADSHEET_METADATA removes a hard-coded path from the test endpoints.

If you later add metric templates for specific ranges or whole-sheet values, reusing RANGE_VALUES and SHEET_VALUES there should keep everything consistent.

No functional issues spotted here.

Also applies to: 30-38

src/app/metric/github/github-metric-creator.tsx (1)

23-27: Keep MetricTemplateId in sync with backend template IDs

MetricTemplateId is defined as a hard-coded union:

type MetricTemplateId =
  | "github-followers-count"
  | "github-repos-count"
  | "github-stars-total";

and used both for the Tabs and in getMetricInfo. This works now, but it can easily drift from the server-side metricTemplates / createFromTemplate input schema if you add/rename templates.

To strengthen type safety and reduce duplication:

  • Consider deriving MetricTemplateId from your TRPC router input type for metric.createFromTemplate (via inferRouterInputs<AppRouter>["metric"]["createFromTemplate"]["templateId"]), or
  • At least centralize the mapping in a single data structure, e.g.:
const GITHUB_METRIC_TEMPLATES = {
  "github-followers-count": {
    title: "GitHub Followers",
    description: "Track your total number of GitHub followers",
  },
  "github-repos-count": {
    title: "Public Repositories",
    description: "Track your total count of public repositories",
  },
  "github-stars-total": {
    title: "Repository Stars",
    description: "Track total stars across all your repositories",
  },
} as const;

type MetricTemplateId = keyof typeof GITHUB_METRIC_TEMPLATES;

Then both the tabs and getMetricInfo can reference GITHUB_METRIC_TEMPLATES directly, so adding a new template is a single change.

Please also verify that these string IDs exactly match the server-defined template IDs used by metric.createFromTemplate.
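For the first option, the derivation could look roughly like this (a sketch; it assumes AppRouter is exported from your root router module, which is typical for tRPC setups but not visible in this diff, and the import path is an assumption):

import type { inferRouterInputs } from "@trpc/server";
import type { AppRouter } from "~/server/api/root"; // path is an assumption

type RouterInputs = inferRouterInputs<AppRouter>;
type MetricTemplateId =
  RouterInputs["metric"]["createFromTemplate"]["templateId"];

This way the union tracks the server's input schema automatically instead of being maintained by hand.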

Also applies to: 32-37, 56-76, 89-107

src/app/metric/_components/integration-metric-card.tsx (1)

54-94: Consider extracting helpers outside the component for reuse/readability

getMetricColor and formatValue are pure helpers and don’t depend on component state. Pulling them out of IntegrationMetricCard (or into a shared metrics util module) would avoid redefinition on each render and make them easier to reuse elsewhere.
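A possible shape for the extracted module (the file location is only a suggestion, and this formatValue body is illustrative rather than the component's actual implementation):

// src/app/metric/_lib/metric-format.ts (hypothetical location)
export function formatValue(value: number | null, unit?: string | null): string {
  if (value == null) return "—";
  const formatted = new Intl.NumberFormat().format(value);
  return unit ? `${formatted} ${unit}` : formatted;
}

// getMetricColor (quoted in full in the lines 54-74 comment further down)
// would move here unchanged, and IntegrationMetricCard would import both helpers.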

src/app/metric/google-sheets/sheets-metric-creator.tsx (1)

123-142: Metric creation guard is fine; explicit name check is optional

canCreate already requires metricName to be truthy, so handleCreate will only run when a name is present. If you want an extra layer of defensive coding, you could add || !metricName to the guard in handleCreate, but it’s not strictly necessary given the current UI constraints.

src/app/metric/youtube/youtube-metric-creator.tsx (1)

29-75: YouTube metric creator wiring looks correct

The channel/video tab logic, template selection, canCreate guard, and createFromTemplate mutation payload all look consistent with the other integration creators. The only optional polish would be to trim videoId before sending it, but functionally this is solid as-is.

Also applies to: 152-199

src/app/metric/page.tsx (1)

56-63: Align dialog closing behavior to consistently reset form state

Right now:

  • Manual create dialog: form is reset on successful create and when clicking “Cancel”, but not when closed via overlay/ESC (onOpenChange={setIsCreateManualDialogOpen}).
  • Edit dialog: editingMetricId and formData are reset only on the Cancel button, not when closing via overlay/ESC.

This can leave stale form values and editingMetricId around between opens. Not a bug, but for predictability you might wrap dialog closing in helpers that always reset state, and call them from both onOpenChange and the Cancel buttons.

Example for the create dialog:

-const [isCreateManualDialogOpen, setIsCreateManualDialogOpen] = useState(false);
+const [isCreateManualDialogOpen, setIsCreateManualDialogOpen] = useState(false);

+const closeCreateManualDialog = () => {
+  setIsCreateManualDialogOpen(false);
+  setFormData(initialFormData);
+};

<Dialog
-  open={isCreateManualDialogOpen}
-  onOpenChange={setIsCreateManualDialogOpen}
+  open={isCreateManualDialogOpen}
+  onOpenChange={(open) => {
+    setIsCreateManualDialogOpen(open);
+    if (!open) setFormData(initialFormData);
+  }}
>
  ...
  <Button
    variant="outline"
-   onClick={() => {
-     setIsCreateManualDialogOpen(false);
-     setFormData(initialFormData);
-   }}
+   onClick={closeCreateManualDialog}
  >
    Cancel
  </Button>

Same idea can be applied to the edit dialog.

Also applies to: 358-455, 459-535

src/server/api/services/metric-templates.ts (1)

32-52: Narrow transformData to a typed union instead of string

Right now transformData is a free-form string, but the router only understands a small set of transform names (countRows, countEvents, and the special-cased extractColumn). A typo here will silently fall back to the default numeric conversion branch.

Consider introducing a dedicated type and reusing it in the router helpers:

-export interface MetricTemplate {
+export type MetricTransform = "countRows" | "countEvents" | "extractColumn";
+
+export interface MetricTemplate {
@@
-  // Optional custom data transformation
-  transformData?: string; // Function name for custom transformation
+  // Optional custom data transformation
+  transformData?: MetricTransform; // Supported transformation identifier

and update applyTransformation (in metric.ts) to accept MetricTransform instead of string. This will give you compile-time safety when adding new templates or transforms.
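Concretely, the router helper could then consume the narrowed type along these lines (a sketch that relies on the MetricTransform type proposed above; the relative import path is assumed from the usual routers/services layout):

import type { MetricTransform } from "../services/metric-templates";

function applyTransformation(
  value: unknown,
  transform: MetricTransform | undefined,
): number {
  switch (transform) {
    case "countRows":
    case "countEvents":
      // Both transforms count array entries (sheet rows / PostHog events).
      return Array.isArray(value) ? value.length : 0;
    case "extractColumn":
      // Handled by the dedicated column-extraction branch in refreshMetricValue.
      return 0;
    default: {
      const numValue = Number.parseFloat(String(value));
      return Number.isNaN(numValue) ? 0 : numValue;
    }
  }
}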

src/server/api/routers/metric.ts (3)

164-220: Avoid hard-coding Google Sheets endpoints in router

getSheetStructure re-specifies the Sheets endpoint as a literal:

"/v4/spreadsheets/{SPREADSHEET_ID}"

even though googleSheetsMetricEndpoints.SPREADSHEET_METADATA is already defined in google-sheets.ts and your templates centralize endpoints there.

To keep a single source of truth (and ease future API changes), consider importing and using the constant here as well:

-        const response = await fetchGoogleSheetsData(
-          input.connectionId,
-          "/v4/spreadsheets/{SPREADSHEET_ID}",
-          { SPREADSHEET_ID: input.spreadsheetId },
-          "GET",
-        );
+        const response = await fetchGoogleSheetsData(
+          input.connectionId,
+          googleSheetsMetricEndpoints.SPREADSHEET_METADATA,
+          { SPREADSHEET_ID: input.spreadsheetId },
+          "GET",
+        );

Same applies to getSheetPreview below, which could use the existing SHEET_VALUES constant.


421-447: Filter endpointConfig to string params before using it as params

params is currently derived via a cast:

const params = (metric.endpointConfig as Record<string, string>) ?? {};

However, for column-based Sheets metrics you later store columnData: number[] and lastDataFetch: string into endpointConfig. On subsequent refreshes, params will therefore contain non-string values (e.g., columnData: number[]), which are then passed down into fetchGoogleSheetsData / fetchPostHogData / fetchYouTubeData.

While this works at runtime due to implicit stringification and because there are no {columnData} placeholders, it’s not type-safe and makes params harder to reason about.

You can make this safer by filtering only string-valued keys into the params object:

-      let endpoint = template.endpoint;
-      const params = (metric.endpointConfig as Record<string, string>) ?? {};
+      let endpoint = template.endpoint;
+      const rawConfig =
+        (metric.endpointConfig ?? {}) as Record<string, unknown>;
+      const params: Record<string, string> = {};
+      for (const [key, value] of Object.entries(rawConfig)) {
+        if (typeof value === "string") {
+          params[key] = value;
+        }
+      }

This keeps endpointConfig free to store richer metadata while ensuring only valid path parameters are passed to the integration fetchers.


485-521: Validate COLUMN_INDEX before using it in the column-extraction path

In the extractColumn branch you compute:

const columnIndex = parseInt(params.COLUMN_INDEX ?? "0");

If COLUMN_INDEX is accidentally set to a non-numeric string, parseInt yields NaN, and your loop will silently skip all cells, resulting in columnData = [] and latestValue = 0 with no indication of misconfiguration.

You can fail fast with a clear error:

-        const columnIndex = parseInt(params.COLUMN_INDEX ?? "0");
+        const rawColumnIndex = params.COLUMN_INDEX ?? "0";
+        const columnIndex = Number.parseInt(rawColumnIndex, 10);
+        if (Number.isNaN(columnIndex)) {
+          throw new TRPCError({
+            code: "BAD_REQUEST",
+            message: `Invalid COLUMN_INDEX "${rawColumnIndex}" in endpointConfig`,
+          });
+        }

This keeps misconfigured metrics from quietly producing zeroed data.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8129e80 and 17b8c88.

📒 Files selected for processing (15)
  • prisma/schema.prisma (2 hunks)
  • src/app/api-test/page.tsx (1 hunks)
  • src/app/metric/_components/integration-metric-card.tsx (1 hunks)
  • src/app/metric/github/github-metric-creator.tsx (1 hunks)
  • src/app/metric/google-sheets/sheets-metric-creator.tsx (1 hunks)
  • src/app/metric/page.tsx (7 hunks)
  • src/app/metric/posthog/posthog-metric-creator.tsx (1 hunks)
  • src/app/metric/youtube/youtube-metric-creator.tsx (1 hunks)
  • src/components/navbar/NavMenu.client.tsx (3 hunks)
  • src/server/api/routers/metric.ts (4 hunks)
  • src/server/api/services/github.ts (1 hunks)
  • src/server/api/services/google-sheets.ts (1 hunks)
  • src/server/api/services/metric-templates.ts (1 hunks)
  • src/server/api/services/posthog.ts (1 hunks)
  • src/server/api/services/youtube.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (10)
src/app/metric/github/github-metric-creator.tsx (1)
src/trpc/react.tsx (1)
  • api (28-28)
src/app/api-test/page.tsx (1)
src/app/api-test/_components/integration-tester.tsx (1)
  • IntegrationTester (55-502)
src/server/api/services/google-sheets.ts (1)
src/server/api/services/github.ts (1)
  • ServiceEndpoint (14-21)
src/app/metric/youtube/youtube-metric-creator.tsx (1)
src/trpc/react.tsx (1)
  • api (28-28)
src/app/metric/page.tsx (6)
src/trpc/react.tsx (1)
  • api (28-28)
src/app/metric/_components/integration-metric-card.tsx (1)
  • IntegrationMetricCard (47-190)
src/app/metric/github/github-metric-creator.tsx (1)
  • GitHubMetricCreator (28-159)
src/app/metric/google-sheets/sheets-metric-creator.tsx (1)
  • SheetsMetricCreator (38-356)
src/app/metric/posthog/posthog-metric-creator.tsx (1)
  • PostHogMetricCreator (30-273)
src/app/metric/youtube/youtube-metric-creator.tsx (1)
  • YouTubeMetricCreator (29-202)
src/app/metric/google-sheets/sheets-metric-creator.tsx (1)
src/trpc/react.tsx (1)
  • api (28-28)
src/app/metric/posthog/posthog-metric-creator.tsx (1)
src/trpc/react.tsx (1)
  • api (28-28)
src/app/metric/_components/integration-metric-card.tsx (3)
src/components/ui/card.tsx (5)
  • Card (88-88)
  • CardHeader (89-89)
  • CardTitle (91-91)
  • CardDescription (93-93)
  • CardContent (94-94)
src/components/ui/badge.tsx (1)
  • Badge (51-51)
src/components/ui/button.tsx (1)
  • Button (62-62)
src/server/api/services/metric-templates.ts (4)
src/server/api/services/github.ts (1)
  • githubMetricEndpoints (27-31)
src/server/api/services/google-sheets.ts (1)
  • googleSheetsMetricEndpoints (20-24)
src/server/api/services/posthog.ts (1)
  • posthogMetricEndpoints (20-24)
src/server/api/services/youtube.ts (1)
  • youtubeMetricEndpoints (35-38)
src/server/api/routers/metric.ts (7)
src/server/api/trpc.ts (1)
  • workspaceProcedure (145-148)
src/server/api/services/metric-templates.ts (3)
  • getAllMetricTemplates (336-338)
  • getTemplatesByIntegration (343-347)
  • getMetricTemplate (352-356)
src/server/api/utils/authorization.ts (1)
  • getIntegrationAndVerifyAccess (284-309)
src/server/api/services/google-sheets.ts (1)
  • fetchGoogleSheetsData (67-112)
src/server/api/services/posthog.ts (1)
  • fetchPostHogData (73-118)
src/server/api/services/github.ts (1)
  • fetchGitHubData (83-119)
src/server/api/services/youtube.ts (1)
  • fetchYouTubeData (142-225)
🪛 Biome (2.1.2)
src/server/api/routers/metric.ts

[error] 751-751: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🔇 Additional comments (10)
src/components/navbar/NavMenu.client.tsx (2)

9-9: LGTM: Icon import is appropriate.

The FlaskConical icon is a fitting choice for API testing functionality and is correctly imported from lucide-react.


66-67: LGTM: Navigation integration is clean and consistent.

The API Testing menu item follows the established pattern for navigation items and is appropriately placed in the Organization dropdown. The active path detection is correctly configured.

Also applies to: 108-115

src/app/api-test/page.tsx (1)

1-19: LGTM: Clean page implementation following Next.js best practices.

The component structure is well-organized with a clear layout and semantic HTML. The server component correctly renders the client-side IntegrationTester component, following Next.js 15 App Router patterns.

src/app/metric/posthog/posthog-metric-creator.tsx (1)

30-106: PostHog metric creator flow looks sound and consistent with other integrations

The project/event fetching, validation via canCreate, and createFromTemplate mutation wiring all look correct and align with the patterns used in the other metric creators. Error and loading states are handled reasonably, and the default metric names are sensible. I don’t see any blocking issues here.

Also applies to: 148-270

src/app/metric/page.tsx (2)

68-71: Confirm integrationId on metrics aligns with connectionId used for filtering

getMetricsForIntegration filters metrics by:

const getMetricsForIntegration = (connectionId: string) =>
  metrics?.filter((m) => m.integrationId === connectionId) ?? [];

and tabs are keyed by integration.connectionId.

This assumes metric.integrationId stores the same value as integration.connectionId (rather than, say, the integration type like "github" or a numeric PK). If that assumption is wrong, the per-integration metric counts and “Existing Metrics” lists will be empty or mis-grouped.

Please double-check the server/Prisma schema to ensure this mapping is correct; if not, you may want to filter on m.integration?.connectionId instead.

Also applies to: 172-175, 206-216, 275-280


177-218: Nice consolidation of per‑integration metric flows and layout

The tabbed layout, per‑integration creators (GitHub/Sheets/PostHog/YouTube), and shared IntegrationMetricCard usage make the metrics UX much clearer than a single generic flow. The wiring to listWithStats, per‑integration counts, and refresh/delete/edit actions looks cohesive and easy to extend.

Also applies to: 220-273, 275-355

src/server/api/services/metric-templates.ts (1)

158-193: Google Sheets column-data template wiring looks consistent with refresh logic

The gsheets-column-data template’s configuration lines up with the special-case logic in refreshMetricValue:

  • endpoint uses SHEET_VALUES.
  • dataPath: "values" yields the full 2D values array that the extractColumn branch expects.
  • transformData: "extractColumn" matches the branch that stores columnData and derives the latest value.
  • COLUMN_INDEX is explicitly 0-based and numeric, which the refresh logic uses when extracting the column.

This is a good, explicit contract between the template and the refresh logic; I don’t see issues here assuming the Google Sheets response shape remains values: string[][].
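For readers who want the shape of that contract in code, the column extraction described above might look something like this (a sketch; whether the real branch skips a header row or filters non-numeric cells is not visible in this comment):

// Illustrative only: pull one 0-based column out of a values: string[][] response.
function extractNumericColumn(values: string[][], columnIndex: number): number[] {
  const column: number[] = [];
  for (const row of values.slice(1)) { // assume row 0 is a header
    const parsed = Number.parseFloat(row[columnIndex] ?? "");
    if (!Number.isNaN(parsed)) column.push(parsed);
  }
  return column;
}

// latestValue would then be column[column.length - 1] ?? 0, with the full array
// stored as columnData in endpointConfig for plotting.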

src/server/api/routers/metric.ts (3)

98-162: Template-based metric creation flow looks solid

The createFromTemplate mutation:

  • Validates the template ID and returns a clear NOT_FOUND error if missing.
  • Uses getIntegrationAndVerifyAccess to enforce access to the provided connectionId.
  • Ensures the integration’s integrationId matches the template’s integrationId, preventing mismatched wiring.
  • Checks required params based on template.requiredParams and returns a single, helpful BAD_REQUEST with missing labels.
  • Persists metricTemplate and endpointConfig so refresh can reconstruct the call.

This is a clean, constrained flow; I don’t see correctness issues in this block.
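For context, the required-params check described above reduces to something like the following (a sketch with illustrative names, not the actual implementation):

import { TRPCError } from "@trpc/server";

interface TemplateParam {
  key: string;
  label: string;
}

function assertRequiredParams(
  requiredParams: TemplateParam[],
  endpointConfig: Record<string, string>,
): void {
  const missing = requiredParams.filter((param) => !endpointConfig[param.key]);
  if (missing.length > 0) {
    throw new TRPCError({
      code: "BAD_REQUEST",
      message: `Missing required parameters: ${missing
        .map((param) => param.label)
        .join(", ")}`,
    });
  }
}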


280-371: Dynamic PostHog options implementation is clear and constrained

The fetchDynamicOptions handler:

  • Re-validates integration access via getIntegrationAndVerifyAccess.
  • Uses explicit endpoint values (posthog-projects, posthog-events) that match your template dynamicOptionsEndpoint values.
  • Enforces dependsOnValue for events and returns an explicit BAD_REQUEST when missing.
  • Maps responses into { label, value } pairs with appropriate IDs/names.
  • Restricts allowed endpoints via a switch, rejecting unknown values with BAD_REQUEST.

The overall shape looks good and avoids exposing arbitrary proxying through this endpoint.
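As a rough illustration of the mapping step (field names here are assumptions about the PostHog response shape, not taken from the diff):

type DynamicOption = { label: string; value: string };

function mapPostHogProjects(
  results: Array<{ id: number; name: string }>,
): DynamicOption[] {
  return results.map((project) => ({
    label: project.name,
    value: String(project.id),
  }));
}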


373-395: refreshMetricValue no longer supports non–integration-backed metrics

This guard:

if (!metric.integrationId || !metric.metricTemplate) {
  throw new TRPCError({
    code: "BAD_REQUEST",
    message: "This metric is not integration-backed",
  });
}

means any existing manually-managed metrics can no longer be refreshed via this endpoint (they’ll now get BAD_REQUEST instead of the previous mock behavior).

If you still have non-integration metrics in production, you may want either:

  • A backwards-compatible path (e.g., no-op refresh, or leave them editable only via update), or
  • A migration that converts/removes legacy metrics and updates any UI that calls refreshMetricValue.

Worth double-checking existing data and UI usage before shipping.

Comment on lines +54 to +74
const getMetricColor = (
  current: number | null,
  target: number | null,
  type: string,
): string => {
  if (!current || !target) return "text-gray-500";

  const percentage = (current / target) * 100;

  if (type === "duration") {
    // For duration, lower is better
    if (percentage <= 100) return "text-green-600";
    if (percentage <= 120) return "text-yellow-600";
    return "text-red-600";
  } else {
    // For other types, higher is better
    if (percentage >= 100) return "text-green-600";
    if (percentage >= 80) return "text-yellow-600";
    return "text-red-600";
  }
};


⚠️ Potential issue | 🟡 Minor

Fix falsy checks so zero values are handled correctly in getMetricColor

getMetricColor currently does:

if (!current || !target) return "text-gray-500";

This treats 0 as "no value", so metrics with a currentValue or targetValue of 0 will never be color-coded; the falsy check also only implicitly guards against divide-by-zero.

A safer guard is to treat only null/undefined (and explicit target === 0) as “no comparison”, while allowing 0 as a valid current value:

-    if (!current || !target) return "text-gray-500";
+    if (current == null || target == null || target === 0) {
+      return "text-gray-500";
+    }

This preserves the neutral color when there is no meaningful target (including 0), but correctly colors a current of 0 against a positive target.

🤖 Prompt for AI Agents
In src/app/metric/_components/integration-metric-card.tsx around lines 54 to 74,
the guard uses falsy checks which treat 0 as missing and prevents colorization
and divides; change the guard to only treat null/undefined as missing and treat
a target of 0 as "no comparison" to avoid divide-by-zero — i.e., replace the
falsy check with a nullish check such as if (current == null || target == null
|| target === 0) return "text-gray-500"; so that current === 0 is allowed and
percentage calculation proceeds safely.

Comment on lines +23 to +37
type MetricTemplateId =
  | "github-followers-count"
  | "github-repos-count"
  | "github-stars-total";

export function GitHubMetricCreator({
  connectionId,
  onSuccess,
}: GitHubMetricCreatorProps) {
  const [templateId, setTemplateId] = useState<MetricTemplateId>(
    "github-followers-count",
  );
  const [metricName, setMetricName] = useState("");
  const [targetValue, setTargetValue] = useState("");



⚠️ Potential issue | 🟡 Minor

Guard against NaN when parsing targetValue before mutation

targetValue is stored as a string and converted with:

targetValue: targetValue ? parseFloat(targetValue) : undefined,

Because <input type="number"> can temporarily hold values like "-" or "e" while the user is typing, parseFloat can return NaN, which would then be sent to createFromTemplate. Depending on your backend validation, that could either fail unexpectedly or store an invalid float.

A small guard keeps things predictable:

-  const handleCreate = () => {
-    createMutation.mutate({
-      templateId,
-      connectionId,
-      name: metricName || undefined,
-      targetValue: targetValue ? parseFloat(targetValue) : undefined,
-    });
-  };
+  const handleCreate = () => {
+    const trimmed = targetValue.trim();
+    const parsed =
+      trimmed === "" ? undefined : Number(trimmed);
+
+    // Optionally: prevent submit on invalid number instead of sending NaN
+    if (parsed !== undefined && Number.isNaN(parsed)) {
+      return;
+    }
+
+    createMutation.mutate({
+      templateId,
+      connectionId,
+      name: metricName || undefined,
+      targetValue: parsed,
+    });
+  };

This ensures only valid numbers (or undefined) reach the API.

Also applies to: 47-54

🤖 Prompt for AI Agents
In src/app/metric/github/github-metric-creator.tsx around lines 23 to 37 (and
similarly at 47 to 54), guard against parseFloat returning NaN when converting
the string targetValue before sending it to createFromTemplate: parse the
string, check the result with Number.isFinite (or !Number.isNaN) and only
include the numeric value in the mutation payload if it is a valid finite
number; otherwise pass undefined (or omit the field) so invalid intermediate
input like "-" or "e" does not send NaN to the API.

Comment on lines +89 to +103
const extractSpreadsheetId = (url: string) => {
  // Extract ID from various Google Sheets URL formats
  const regex = /\/spreadsheets\/d\/([a-zA-Z0-9-_]+)/;
  const match = regex.exec(url);
  return match ? match[1] : null;
};

const handleUrlChange = (url: string) => {
  setSpreadsheetUrl(url);
  const id = extractSpreadsheetId(url);
  if (id) {
    setSpreadsheetId(id);
  }
};



⚠️ Potential issue | 🟠 Major

Clear stale spreadsheet state when the URL becomes invalid

handleUrlChange only updates spreadsheetId when a new ID is successfully parsed:

const id = extractSpreadsheetId(url);
if (id) {
  setSpreadsheetId(id);
}

If the user edits a previously valid URL into an invalid one, the old spreadsheetId (and downstream selections) are kept, so “Fetch Sheets” and metric creation can silently operate on the wrong sheet.

You can avoid this by always resetting the ID (and dependent state) when parsing fails:

const handleUrlChange = (url: string) => {
  setSpreadsheetUrl(url);
  const id = extractSpreadsheetId(url);
-  if (id) {
-    setSpreadsheetId(id);
-  }
+  setSpreadsheetId(id ?? "");
+  if (!id) {
+    // Clear dependent selections when the URL no longer resolves to a sheet
+    setSelectedSheet("");
+    setSelectedColumn(null);
+    setMetricName("");
+  }
};

This keeps the UI and the actual target spreadsheet in sync.

🤖 Prompt for AI Agents
In src/app/metric/google-sheets/sheets-metric-creator.tsx around lines 89 to
103, handleUrlChange currently only sets spreadsheetId when a valid ID parses,
leaving stale state when the URL becomes invalid; update the function to clear
spreadsheetId (set to null or empty string) and also reset any dependent state
such as selected sheet, selected range, and fetched sheets/list when parsing
fails so the UI and downstream actions (e.g., Fetch Sheets, metric creation)
cannot operate on an old spreadsheet ID.

Comment on lines +64 to 67
const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(
  null,
);



⚠️ Potential issue | 🟡 Minor

Refresh loading state can be inaccurate with overlapping refreshes

Because refreshingMetricId is a single string:

  • Clicking “Refresh” on metric A sets it to A.
  • Clicking “Refresh” on metric B before A completes overwrites it to B.
  • When A completes, onSuccess/onError sets it to null, so B’s card stops showing as refreshing even though its request may still be in flight.

If you expect users to trigger multiple refreshes in quick succession, consider either:

  • Treating refresh as strictly single-flight (e.g., disable all refresh buttons while refreshMutation.isPending), or
  • Tracking a set of refreshing IDs and clearing only the one that finished.

For example, for single-flight semantics you could simplify to:

-const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(null);
+const [refreshingMetricId, setRefreshingMetricId] = useState<string | null>(null);

const handleRefresh = (id: string) => {
-  setRefreshingMetricId(id);
-  refreshMutation.mutate({ id });
+  setRefreshingMetricId(id);
+  refreshMutation.mutate({ id });
};

// In IntegrationMetricCard usage:
- isRefreshing={refreshingMetricId === metric.id}
+ isRefreshing={refreshMutation.isPending && refreshingMetricId === metric.id}

This keeps the spinner tied to the metric that initiated the current in-flight refresh and makes it clearer that only one refresh is processed at a time.

Also applies to: 107-117, 167-170, 247-254, 341-348
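If you go with the second option instead, a small hook can own the bookkeeping (a sketch; the hook name and wiring are illustrative):

import { useCallback, useState } from "react";

// Tracks which metric IDs currently have a refresh in flight.
export function useRefreshingIds() {
  const [ids, setIds] = useState<Set<string>>(new Set());

  const start = useCallback((id: string) => {
    setIds((prev) => new Set(prev).add(id));
  }, []);

  const finish = useCallback((id: string) => {
    setIds((prev) => {
      const next = new Set(prev);
      next.delete(id);
      return next;
    });
  }, []);

  return { start, finish, isRefreshing: (id: string) => ids.has(id) };
}

// Usage: call start(id) before refreshMutation.mutate({ id }), call finish(id)
// in the mutation's onSettled, and pass isRefreshing(metric.id) to the card.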

🤖 Prompt for AI Agents
In src/app/metric/page.tsx around lines 64-67 (and also apply changes at
107-117, 167-170, 247-254, 341-348): the current single-string
refreshingMetricId is overwritten by overlapping refreshes causing unrelated
cards to stop showing a spinner when an earlier refresh completes; replace that
single string with a collection (e.g., a Set<string> or string[] in state) and
update handlers to add the metric id when a refresh starts and remove only that
id on success/error (use functional state updates to avoid race conditions),
then change the UI check from refreshingMetricId === id to collection.has(id)
(or includes) so each metric’s spinner reflects its own in-flight request;
alternatively, if you want strict single-flight instead, disable all refresh
buttons while refreshMutation.isPending and keep a single id only while the
global mutation is pending.

Comment on lines +222 to +278
getSheetPreview: workspaceProcedure
  .input(
    z.object({
      connectionId: z.string(),
      spreadsheetId: z.string(),
      sheetName: z.string(),
      maxRows: z.number().optional().default(10), // Preview first 10 rows
    }),
  )
  .query(async ({ ctx, input }) => {
    // Verify integration access
    await getIntegrationAndVerifyAccess(
      ctx.db,
      input.connectionId,
      ctx.user.id,
      ctx.workspace,
    );

    try {
      // Fetch all non-empty data from the sheet
      // Using just the sheet name returns all non-empty rows/columns
      const response = await fetchGoogleSheetsData(
        input.connectionId,
        "/v4/spreadsheets/{SPREADSHEET_ID}/values/{SHEET_NAME}",
        {
          SPREADSHEET_ID: input.spreadsheetId,
          SHEET_NAME: input.sheetName,
        },
        "GET",
      );

      const responseData = response.data as {
        range?: string;
        majorDimension?: string;
        values?: string[][];
      };

      const allRows = responseData.values ?? [];
      const headers = allRows.length > 0 ? allRows[0] : [];
      const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows

      return {
        headers: headers ?? [],
        rows: dataRows,
        totalRows: allRows.length - 1, // Exclude header
        totalColumns: (headers ?? []).length,
      };
    } catch (error) {
      throw new TRPCError({
        code: "INTERNAL_SERVER_ERROR",
        message:
          error instanceof Error
            ? error.message
            : "Failed to fetch sheet preview",
      });
    }
  }),


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Fix negative totalRows for empty sheets in getSheetPreview

If the sheet has no data (values is undefined or []), allRows.length will be 0, and:

totalRows: allRows.length - 1, // Exclude header

computes to -1, which is not meaningful.

You can guard this and simplify the return:

-        const allRows = responseData.values ?? [];
-        const headers = allRows.length > 0 ? allRows[0] : [];
-        const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
-
-        return {
-          headers: headers ?? [],
-          rows: dataRows,
-          totalRows: allRows.length - 1, // Exclude header
-          totalColumns: (headers ?? []).length,
-        };
+        const allRows = responseData.values ?? [];
+        const headers = allRows.length > 0 ? allRows[0] : [];
+        const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
+
+        const totalRows =
+          allRows.length > 0 ? allRows.length - 1 : 0; // Exclude header, never negative
+
+        return {
+          headers,
+          rows: dataRows,
+          totalRows,
+          totalColumns: headers.length,
+        };

This avoids negative counts while preserving existing behavior for non-empty sheets.


I'll verify this review comment by examining the code and checking for potential issues.


🏁 Script executed:

#!/bin/bash
# 1. First, verify the code at the specified lines
echo "=== Checking code at lines 222-278 in metric.ts ==="
sed -n '222,278p' src/server/api/routers/metric.ts

# 2. Search for usages of getSheetPreview to understand how totalRows is used
echo -e "\n=== Searching for getSheetPreview usages ==="
rg -n "getSheetPreview" --type ts --type tsx -C 3

# 3. Search for totalRows usage to see if negative values cause issues
echo -e "\n=== Searching for totalRows references ==="
rg -n "totalRows" --type ts --type tsx -C 2

Length of output: 1910


Let me refine the search with corrected syntax:


🏁 Script executed:

#!/bin/bash
# Search for getSheetPreview usages with corrected syntax
echo "=== Searching for getSheetPreview usages ==="
rg -n "getSheetPreview" -t ts -C 3

# Search for totalRows references with corrected syntax
echo -e "\n=== Searching for totalRows references ==="
rg -n "totalRows" -t ts -C 2

# Search for any validation or guards on numeric values
echo -e "\n=== Checking if there are any guards on returned data ==="
rg -n "totalRows\s*(<|>|<=|>=|===|!==)" -t ts -C 2

Length of output: 2575


Fix negative totalRows for empty sheets in getSheetPreview

When a sheet has no data (values is undefined or []), totalRows computes to -1. This value displays in the UI ("Showing 0 of -1 data rows"), which is semantically incorrect.

The proposed fix is valid: guard totalRows to return 0 for empty sheets:

-        const allRows = responseData.values ?? [];
-        const headers = allRows.length > 0 ? allRows[0] : [];
-        const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
-
-        return {
-          headers: headers ?? [],
-          rows: dataRows,
-          totalRows: allRows.length - 1, // Exclude header
-          totalColumns: (headers ?? []).length,
-        };
+        const allRows = responseData.values ?? [];
+        const headers = allRows.length > 0 ? allRows[0] : [];
+        const dataRows = allRows.slice(1, input.maxRows + 1); // Skip header, take maxRows
+
+        const totalRows =
+          allRows.length > 0 ? allRows.length - 1 : 0; // Exclude header, never negative
+
+        return {
+          headers,
+          rows: dataRows,
+          totalRows,
+          totalColumns: headers.length,
+        };

This ensures correct behavior for both empty and non-empty sheets while preventing negative counts in the UI.

🤖 Prompt for AI Agents
In src/server/api/routers/metric.ts around lines 222 to 278, totalRows can
become -1 when the sheet is empty because it uses allRows.length - 1; change the
computation to guard against negatives (e.g. set totalRows = Math.max(0,
allRows.length - 1)) and ensure headers and totalColumns are derived from the
safe headers array (empty array when no rows) so the API never returns negative
row counts.

Comment on lines +695 to +754
/**
 * Extract a value from a nested object using a dot-notation path
 * @param data The object to extract from
 * @param path Dot-notation path (e.g., "items.0.statistics.viewCount")
 * @returns The extracted value or undefined
 */
function extractValueFromPath(data: unknown, path: string): unknown {
  const keys = path.split(".");
  let current: unknown = data;

  for (const key of keys) {
    if (current === null || current === undefined) {
      return undefined;
    }

    if (typeof current === "object" && !Array.isArray(current)) {
      current = (current as Record<string, unknown>)[key];
    } else if (Array.isArray(current)) {
      const index = parseInt(key, 10);
      if (!isNaN(index)) {
        current = current[index];
      } else {
        return undefined;
      }
    } else {
      return undefined;
    }
  }

  return current;
}

/**
 * Apply transformation to extracted data
 * @param value The extracted value
 * @param transformType The transformation type
 * @returns The transformed numeric value
 */
function applyTransformation(value: unknown, transformType: string): number {
  switch (transformType) {
    case "countRows":
      // Count number of rows in a Google Sheets response
      if (Array.isArray(value)) {
        return value.length;
      }
      return 0;

    case "countEvents":
      // Count number of events in PostHog response
      if (Array.isArray(value)) {
        return value.length;
      }
      return 0;

    default:
      // Default: convert to number
      const numValue = parseFloat(String(value));
      return isNaN(numValue) ? 0 : numValue;
  }
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Address noSwitchDeclarations lint warning in applyTransformation

The linter warning about noSwitchDeclarations is due to this declaration inside the default case:

default:
  // Default: convert to number
  const numValue = parseFloat(String(value));
  return isNaN(numValue) ? 0 : numValue;

In JavaScript/TypeScript, declarations in switch cases share the same scope, which can lead to subtle issues; your tooling is correctly flagging this.

Wrapping the case body in a block fixes it and makes the intent explicit:

-    default:
-      // Default: convert to number
-      const numValue = parseFloat(String(value));
-      return isNaN(numValue) ? 0 : numValue;
+    default: {
+      // Default: convert to number
+      const numValue = parseFloat(String(value));
+      return Number.isNaN(numValue) ? 0 : numValue;
+    }

(Using Number.isNaN is also a bit safer than the global isNaN.)

extractValueFromPath above looks good and matches the simple dot-notation paths used by your templates.
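As a quick illustration of the dot-notation paths these helpers handle (the data shape mirrors the YouTube example in the docstring; the transform name in the second call is arbitrary and simply falls through to the default branch):

const sample = { items: [{ statistics: { viewCount: "12345" } }] };

extractValueFromPath(sample, "items.0.statistics.viewCount"); // "12345"
applyTransformation("12345", "toNumber"); // 12345 via the default branch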


I'll verify the claims in this review comment by checking the linter rule documentation and inspecting the actual code.

Now let me search for information about Number.isNaN vs global isNaN:

Now let me verify the actual code and linter configuration in the repository:


🏁 Script executed:

#!/bin/bash
# Check the actual source file to verify the exact code
if [ -f "src/server/api/routers/metric.ts" ]; then
  echo "=== File exists, showing lines 695-754 ==="
  sed -n '695,754p' src/server/api/routers/metric.ts
else
  echo "File not found at expected path"
fi

Length of output: 1668


🏁 Script executed:

#!/bin/bash
# Check for linter configuration files
echo "=== Checking for linter configurations ==="
if [ -f ".eslintrc.json" ]; then
  echo "Found .eslintrc.json"
  cat .eslintrc.json
fi

if [ -f ".eslintrc.js" ]; then
  echo "Found .eslintrc.js"
fi

if [ -f "biome.json" ]; then
  echo "Found biome.json"
  cat biome.json
fi

if [ -f ".eslintrc.cjs" ]; then
  echo "Found .eslintrc.cjs"
fi

Length of output: 1834


Wrap default case in braces to fix noSwitchDeclarations lint violation

The default case in applyTransformation (line 745-747) contains a const declaration without a block, which violates Biome's noSwitchDeclarations rule. Wrap the case body in braces:

    default: {
      // Default: convert to number
      const numValue = parseFloat(String(value));
      return isNaN(numValue) ? 0 : numValue;
    }

Note: Using Number.isNaN instead of global isNaN is a good practice (avoids type coercion), though your linter config has noGlobalIsNan disabled so it won't enforce this.

🧰 Tools
🪛 Biome (2.1.2)

[error] 751-751: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Safe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🤖 Prompt for AI Agents
In src/server/api/routers/metric.ts around lines 695 to 754, the default case
inside applyTransformation declares a const without a block which trips the
noSwitchDeclarations lint rule; wrap the entire default case body in braces
(i.e., default: { /* ... */ } ) so the const is scoped inside a block, and while
editing you can replace isNaN(numValue) with Number.isNaN(numValue) if you want
to avoid the global isNaN usage.

Comment on lines +58 to +69
export const githubTemplates: MetricTemplate[] = [
  {
    templateId: "github-stars-total",
    label: "Repository Stars",
    description: "Total number of stars across all your repositories",
    integrationId: "github",
    metricType: "number",
    defaultUnit: "stars",
    endpoint: githubMetricEndpoints.USER_PROFILE,
    dataPath: "public_repos", // Note: This gets repo count, stars need custom aggregation
    requiredParams: [],
  },


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

GitHub “Repository Stars” template currently returns repo count, not stars

This template is labeled and described as “Repository Stars”, but it hits USER_PROFILE and reads public_repos, which is the repository count (as your comment notes), not a star total. That’s likely to confuse users and produce misleading metrics.

Before shipping, either:

  • Align the template with the actual value (rename to something like “Public Repositories” and adjust unit), or
  • Implement the custom aggregation required to truly compute a total star count (e.g., via /user/repos and summing stargazers_count).

As-is, the template ID/label/unit don’t match the underlying data.


I'll verify the review comment by checking the GitHub API behavior and the codebase implementation.





🏁 Script executed:

#!/bin/bash
# Search for metric template processing and any aggregation logic
rg -n "public_repos|githubMetricEndpoints|MetricTemplate" --type=ts -A 3 -B 3 src/server/api/services/metric-templates.ts

# Check for any custom aggregation or data transformation logic
rg -n "stargazers|aggregate|sum.*star" --type=ts src/

# Look for endpoint handlers that might process the data
rg -n "dataPath|endpoint" --type=ts src/server/api/ | head -20

Length of output: 5624


🌐 Web query:

GitHub API USER_PROFILE endpoint public_repos field what does it return

💡 Result:

The public_repos field is an integer giving the number of public repositories that the user (or organization) has—i.e., how many repositories are publicly visible on that account. [1][2]

Sources:
[1] GitHub REST API — Get a user (users endpoint) (docs).
[2] GitHub REST API user/org response examples showing the public_repos field.



---

🏁 Script executed:

```shell
#!/bin/bash
# Check if transformData is used anywhere and how templates are processed
rg -n "transformData" --type=ts src/

# Search for template processing logic that might apply transforms
rg -n "processMetric|applyTransform|dataPath.*transform" --type=ts src/server/api/

# Look for where metric values are extracted from API responses
rg -n "extract.*dataPath|dataPath.*extract" --type=ts src/server/api/

Length of output: 1281



---

🏁 Script executed:

```shell
#!/bin/bash
# Get the exact github-stars-total template definition
rg -n "templateId.*github-stars-total" --type=ts -A 15 src/server/api/services/metric-templates.ts

# Check extractValueFromPath function to understand data extraction
rg -n "function extractValueFromPath|const extractValueFromPath" --type=ts -A 20 src/server/api/

Length of output: 2005


Fix template metadata or implement star aggregation for "github-stars-total"

This template returns the public repository count (via public_repos), not a total of stars. The template ID, label ("Repository Stars"), unit ("stars"), and description all claim to report stars, but the underlying data is a repository count. Users will see misleading metrics.

Either rename the template to "Public Repositories" to match the actual data, or implement custom aggregation to sum stargazers_count from all user repositories (via /user/repos endpoint).
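If you take the aggregation route, the shape could be roughly the following (a sketch only: it assumes fetchGitHubData takes the same (connectionId, endpoint, params, method) arguments as the other fetchers quoted in this review, accepts a query string in the endpoint, and that the import alias matches your project layout):

import { fetchGitHubData } from "~/server/api/services/github"; // path alias is an assumption

async function sumRepoStars(connectionId: string): Promise<number> {
  let page = 1;
  let total = 0;

  for (;;) {
    const response = await fetchGitHubData(
      connectionId,
      `/user/repos?per_page=100&page=${page}`,
      {},
      "GET",
    );
    // stargazers_count is the GitHub REST API field for a repository's star count.
    const repos = response.data as Array<{ stargazers_count: number }>;
    if (repos.length === 0) break;

    total += repos.reduce((sum, repo) => sum + repo.stargazers_count, 0);
    page += 1;
  }

  return total;
}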

🤖 Prompt for AI Agents
In src/server/api/services/metric-templates.ts around lines 58-69, the
"github-stars-total" template currently reads public_repos (repo count) but is
labeled/described/typed as total stars; either 1) rename the template to reflect
repo count: update templateId to something like "github-public-repos", label to
"Public Repositories", description to "Total number of public repositories", and
defaultUnit to an appropriate unit (e.g., "repos"), or 2) implement real star
aggregation: change endpoint to the user repos endpoint (e.g., /user/repos),
remove single dataPath and instead add a small aggregation handler that sums
stargazers_count across all returned repos (handle pagination), set metricType
and defaultUnit to match "stars", and ensure requiredParams and auth scopes
include repo read access; pick one approach and update the template metadata or
implement the aggregation accordingly.
