feat: marketplace, per-server auth tokens, and PostHog analytics#155
Ship `@apifold/cli` with 4 commands:

- `apifold serve <spec>` — start MCP server from OpenAPI spec
- `apifold transform <spec>` — output tool definitions as JSON
- `apifold validate <spec>` — parse-only with detailed warnings
- `apifold init [spec]` — generate apifold.config.yaml template

Lightweight Express server with SSE and Streamable HTTP transports, HTTP proxy with auth injection, Zod config validation, and env var interpolation. Server binds to 127.0.0.1 with CORS restricted to same-origin. 104 unit tests with 95%+ coverage.
Extend the credential system to support OAuth 2.0 Authorization Code and Client Credentials flows with 8 provider presets (Google, Slack, Microsoft, HubSpot, Salesforce, Notion, GitHub, Spotify).

- Migration 0003: add OAuth columns (refresh token, scopes, token endpoint, client ID, client secret, token expiry, provider)
- Extend AuthMode and CredentialAuthType with oauth2_authcode and oauth2_client_creds
- Vault encryption for refresh tokens and client secrets
- tokenEndpoint validated as HTTPS with private hostname rejection
- Provider presets resolve authorization endpoints at runtime
- 64 tests covering provider configs, schema validation, and SSRF rejection
The logs endpoint returned a raw RequestLog[] array but the frontend
useLogs hook expected { logs, cursor, hasMore }. This caused
page.logs to be undefined, crashing the log table with
"Cannot read properties of undefined (reading 'method')".
Wrap the response in the expected pagination envelope.
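A minimal TypeScript sketch of the envelope fix, assuming illustrative names (`RequestLog`, `toLogsPage`, and the cursor scheme are hypothetical, not the app's actual exports):

```typescript
// Hypothetical shape of the pagination envelope the useLogs hook expects.
interface RequestLog {
  id: string;
  method: string;
  path: string;
}

interface LogsPage {
  logs: RequestLog[];
  cursor: string | null; // opaque cursor for the next page
  hasMore: boolean;
}

// Wrap a raw array in the envelope instead of returning it directly.
// Assumes the query fetched limit+1 rows so hasMore can be derived.
function toLogsPage(rows: RequestLog[], limit: number): LogsPage {
  const page = rows.slice(0, limit);
  const hasMore = rows.length > limit;
  return {
    logs: page,
    cursor: hasMore ? page[page.length - 1].id : null,
    hasMore,
  };
}
```

Returning the envelope even when the page is empty keeps `page.logs` defined, so the table never dereferences `undefined`.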
Add redirect blocking (reject 3xx responses) and 10MB response size cap to the general-purpose pinnedTransport in ssrf-guard.ts. This closes the DNS rebinding vector via redirect and prevents memory exhaustion from large upstream responses.
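The two new guard checks can be sketched as pure predicates; names and the exact verdict shape are assumptions, not the actual ssrf-guard.ts code:

```typescript
const MAX_RESPONSE_BYTES = 10 * 1024 * 1024; // 10MB cap

type GuardVerdict = { ok: true } | { ok: false; reason: string };

function checkUpstreamResponse(
  status: number,
  contentLength: number | null,
): GuardVerdict {
  // Reject all 3xx responses: following a redirect would trigger a fresh DNS
  // resolution, which is exactly the rebinding vector being closed.
  if (status >= 300 && status < 400) {
    return { ok: false, reason: 'redirects are blocked' };
  }
  // Reject oversized bodies up front; a streaming byte counter would still be
  // needed for responses that omit Content-Length.
  if (contentLength !== null && contentLength > MAX_RESPONSE_BYTES) {
    return { ok: false, reason: 'response exceeds size cap' };
  }
  return { ok: true };
}
```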
- ARCHITECTURE.md: ADR-013 (CLI tool) and ADR-014 (OAuth foundation)
- API.md: OAuth credential fields, authMode values, logs response format
- SECURITY.md: OAuth credential encryption, CLI security model, SSRF hardening with redirect blocking and size limits
- Authorization code flow with PKCE + Redis state storage
- Client credentials flow for machine-to-machine APIs
- Token exchange with SSRF-safe endpoint validation
- Runtime auto-refresh with Redis distributed lock to prevent races
- CREDENTIAL_EXPIRED error code surfaced in MCP responses
- Credential list/create API with GET endpoint
- 51 tests (PKCE, token exchange, token refresher)
- Credentials tab on server detail page
- Add Credential modal with 4 auth types (API Key, Bearer, OAuth User, OAuth Service)
- Provider picker grid (Google, Slack, Microsoft, HubSpot, Salesforce, Notion, GitHub, Spotify, Custom)
- Credential list with auth type badges, scopes, and expiry status
- Styled auth mode selector replacing native select on config form
- OAuth mode shows contextual link to Credentials tab
- npm packaging: public @apifold/cli with bin entry and files config
- GitHub Actions: publish-cli.yml for npm, bun compile binaries, GitHub Release, Docker
- Docker: multi-stage Dockerfile with non-root user and healthcheck
- Homebrew: tap formula for macOS/Linux binary installation
Detect Swagger 2.0 specs and transparently convert to OpenAPI 3.0 via swagger2openapi. Uses structuredClone to neutralize prototype pollution vectors before passing to the converter. Integrated in web spec import API and CLI serve/transform commands. 18 tests.
Web spec import and CLI serve/transform commands now call autoConvert() before parseSpec(), transparently supporting Swagger 2.0 specs. Web API response includes converted flag and originalVersion.
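The detection step of this conversion can be sketched as follows; `isSwagger2` and `sanitizeSpec` are hypothetical helper names, and the actual `autoConvert()` additionally invokes swagger2openapi, which is omitted here:

```typescript
// A Swagger 2.0 document declares `swagger: "2.0"`; OpenAPI 3.x declares
// `openapi: "3.x.y"` instead.
function isSwagger2(spec: unknown): spec is { swagger: '2.0' } {
  return (
    typeof spec === 'object' &&
    spec !== null &&
    (spec as Record<string, unknown>).swagger === '2.0'
  );
}

// Clone through structuredClone before handing the parsed JSON to the
// converter, which (per the description above) helps neutralize prototype
// pollution vectors in attacker-supplied specs.
function sanitizeSpec<T>(spec: T): T {
  return structuredClone(spec);
}
```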
- Analytics API endpoint with frequency, percentiles, error rates
- Materialized view (hourly_tool_stats) with user_id scoping
- Monthly table partitioning for usage_events and request_logs
- Overview dashboard with recharts (stat cards, line/bar charts, latency bars)
- Per-tool sortable breakdown table
- Analytics tab in server detail navigation
- 12 analytics hook type tests
Analytics is now a first-class sidebar nav item at /dashboard/analytics with a server dropdown filter and time range selector. Includes stat cards, charts, latency percentiles, and per-tool breakdown — all scoped to the selected server.
- Gradient icon accents on stat cards with contextual colors
- Area chart with gradient fill replacing flat line chart
- Asymmetric grid layout (3/5 + 2/5) for chart prominence
- Redesigned latency bars with P50/P95/P99 color coding
- Server dropdown with Server icon and backdrop blur
- Violet-accented time range pills
- Beautiful empty state with centered guidance text
- Hover glow effects on stat cards
- Dark-theme optimized tooltip styling
- Tool breakdown table with dot indicators
Applied all 7 Impeccable design commands: colorize, arrange, distill, quieter, normalize, animate, typeset.

Removed AI slop patterns:

- No more hero metric cards — just numbers on the surface
- No more glassmorphism/backdrop-blur
- No more hardcoded violet — uses design system tokens
- Varied spatial rhythm instead of uniform spacing
- Charts sit directly on surface, no wrapping cards
- Status colors from design tokens (success/warning/error)
- Chart HSL values reference the design system palette
…ty feed

- Time-series chart: hourly/daily call volume + error overlay
- Error breakdown: by status code and error code with colored badges
- Failing tools: ranked by error rate percentage
- Usage vs plan: monthly quota bar with plan name
- Recent activity: last 15 calls with status dots and timestamps
- API returns 8 data sections (overview, timeSeries, topTools, errorBreakdown, failingTools, recentActivity, usage, percentiles)
- Remove unused imports (Clock, AlertTriangle)
- Add aria-label on server select for screen readers
- Add aria-pressed + aria-label on time range buttons
- Add motion-reduce:transition-none on all progress bars
- Add optical spacing (mt-0.5) between metric value and detail
- Save .impeccable.md design context for future skill runs
52 type/shape tests for AnalyticsResponse, all sub-interfaces, TimeRange union, and useAnalytics hook export. 20 API schema tests for time range validation, defaults, rejection of invalid values, and getStartDate date arithmetic.
Per-server analytics page had #888888, #1f2937, #f3f4f6, #8b5cf6 hardcoded in Recharts props. Replaced with HSL values matching the design system: --muted-foreground, --card, --card-foreground, --primary.
- Error explorer: filterable by status code, failed requests list, tools ranked by error rate with progress bars
- Request log viewer: reuses existing log table/filter components with pagination and detail modal
- CSV export: GET /api/servers/:id/analytics/export with proper field escaping, userId scoping, and 10k row limit
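The "proper field escaping" for the CSV export can be sketched along RFC 4180 lines; the function names are assumptions, not the actual export code:

```typescript
// Quote any field containing a comma, quote, or newline, and double any
// embedded quotes, so spreadsheet importers parse the row unambiguously.
function escapeCsvField(field: string): string {
  if (/[",\r\n]/.test(field)) {
    return `"${field.replace(/"/g, '""')}"`;
  }
  return field;
}

function toCsvRow(fields: string[]): string {
  return fields.map(escapeCsvField).join(',');
}
```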
New @apifold/registry package with static catalog of validated OpenAPI specs: Stripe, GitHub, Slack, HubSpot, Twilio, Petstore, OpenAI, Notion. Includes search(), getById(), listAll(), and getCategories() functions. CI validation workflow parses all specs through the transformer. 32 tests.
Searchable grid of API cards at /dashboard/specs/new/registry with category filters, auth type badges, operation counts, docs links, and one-click Deploy button that imports through the secure spec import API.
- Detail page at /registry/[id] with metadata, auth info, and Deploy button
- Browse page cards now navigate to detail instead of deploying directly
- CLI: apifold serve --registry stripe loads spec from registry
- Community CONTRIBUTING.md with spec submission guide
- Migration 0005: access_profiles table + credential profile_id FK
- Profile types (AccessProfile, CreateProfileInput, UpdateProfileInput)
- ProfileRepository with CRUD, findBySlug, default profile protection
- Profile CRUD API (GET/POST/PUT/DELETE /api/servers/:id/profiles)
- Auto-generate profiles: Read Only, Read/Write, Full Access on server creation
Profile-based filtering: tools/list only returns tools in the active profile's toolIds array. tools/call rejects requests for tools outside the profile with TOOL_NOT_FOUND. Agents on a Read Only profile literally cannot see or call write operations.
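The filtering rule above can be sketched with illustrative types (`Tool`, `AccessProfile`, and the function names are assumptions); the key property is that an out-of-profile tool is indistinguishable from a nonexistent one:

```typescript
interface Tool {
  id: string;
  name: string;
}

interface AccessProfile {
  slug: string;
  toolIds: string[];
}

const TOOL_NOT_FOUND = 'TOOL_NOT_FOUND';

// tools/list: only expose tools whitelisted by the active profile.
function listTools(allTools: Tool[], profile: AccessProfile): Tool[] {
  const allowed = new Set(profile.toolIds);
  return allTools.filter((t) => allowed.has(t.id));
}

// tools/call: resolve against the filtered view, so out-of-profile tools
// fail with the same error as unknown tools.
function resolveTool(allTools: Tool[], profile: AccessProfile, id: string): Tool {
  const tool = listTools(allTools, profile).find((t) => t.id === id);
  if (!tool) throw new Error(TOOL_NOT_FOUND);
  return tool;
}
```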
- Migration 0006: endpoint_id (12 hex chars, unique, NOT NULL with backfill), custom_domain, domain_verified_at, domain_verification_token
- McpServer type updated with endpointId, customDomain, domainVerifiedAt
- Server repository generates endpoint ID via randomBytes(6) on creation
- ServerRegistry: getByEndpointId() and getByDomain() lookups
- SSE + streamable-http route /mcp/:identifier with auto-detection (12 hex chars = endpoint ID, otherwise = slug fallback)
- postgres-loader selects endpoint_id and custom_domain columns
Snippet copier reads NEXT_PUBLIC_PLATFORM_DOMAIN (default: apifold.dev) and generates URLs with endpoint ID instead of slug. Custom domains shown when verified. No more "your-domain.com" placeholder.
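The identifier auto-detection and URL construction can be sketched as two small pure functions; `classifyIdentifier` and `mcpUrl` are hypothetical names, and the lowercase-hex assumption follows from `randomBytes(6).toString('hex')`:

```typescript
// A 12-character lowercase hex string is treated as an endpoint ID;
// anything else falls back to slug lookup.
const ENDPOINT_ID_RE = /^[0-9a-f]{12}$/;

function classifyIdentifier(identifier: string): 'endpointId' | 'slug' {
  return ENDPOINT_ID_RE.test(identifier) ? 'endpointId' : 'slug';
}

// Snippet URL built from the stable endpoint ID rather than the mutable slug.
function mcpUrl(platformDomain: string, endpointId: string): string {
  return `https://${platformDomain}/mcp/${endpointId}`;
}
```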
- Domain API: PUT (set domain + generate token), POST (verify via DNS
TXT lookup), DELETE (remove domain), GET (status)
- DNS verification: _apifold-verify.{domain} TXT record with random token
- Domain settings component on server detail sidebar with set/verify/remove
- Platform domains blocked to prevent hijacking
- Verified domains shown in snippet copier with custom URL
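The verification scheme in the list above can be sketched as follows; the record name follows the `_apifold-verify.{domain}` convention stated there, the helper names are assumptions, and the actual lookup would use Node's `dns.promises.resolveTxt` (which returns `string[][]`, one chunk array per TXT record):

```typescript
function verificationRecordName(domain: string): string {
  return `_apifold-verify.${domain}`;
}

// Each TXT record may arrive split into chunks; join them before comparing
// against the stored verification token.
function isDomainVerified(txtRecords: string[][], expectedToken: string): boolean {
  return txtRecords.some((chunks) => chunks.join('') === expectedToken);
}
```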
- Add .eslintrc.cjs extending @apifold/eslint-config/base
- Remove unused RegistryMeta import
- Fix import ordering in validate.ts
- Add --ext .ts flag to lint command for ESLint 8 compatibility
Marketplace:

- 4 new DB tables (listings, installs, reports, audit_log) with migrations
- Drizzle schemas, repositories, Zod validation, status machine
- Deploy service with full transaction safety (spec + server + tools + install)
- Uninstall with cascade cleanup and install_count trigger
- Public browse/search/detail/categories/featured API endpoints
- My installs endpoint, Redis caching layer
- Marketplace UI: browse page, listing detail, search, category sidebar
- Navigation updates (navbar, command palette, dashboard tab bar)
- Markdown rendering for listing descriptions and setup guides
- 10 official seed listings (Stripe, GitHub, Notion, etc.)

Per-server access tokens:

- Every MCP server gets its own af_-prefixed bearer token (256-bit entropy)
- SHA-256 hash stored in token_hash column, plaintext shown once at creation
- New server-token-auth middleware: per-server token check, global MCP_API_KEY fallback, zero-trust reject when neither is configured
- Token accepted via Authorization header or ?token= query param (for SSE)
- SSE sessions bound to authenticated state at connect time
- Token rotation endpoint (POST /api/servers/:id/rotate-token) with Redis hot-reload across runtime instances
- Domain router reordered before auth middleware to prevent bypass
- Timing-safe comparison with raw binary buffers and 32-byte validation
- tokenHash stripped from all API responses
- Random slug suffixes for marketplace deploys (not predictable)
- Security-scanned: no CRITICAL vulnerabilities found
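The token lifecycle described above (af_ prefix, 256-bit entropy, SHA-256 at rest, timing-safe compare on raw 32-byte buffers) can be sketched like this; the function names and exact token format are assumptions:

```typescript
import { createHash, randomBytes, timingSafeEqual } from 'node:crypto';

function generateServerToken(): string {
  // 32 random bytes = 256 bits of entropy, hex-encoded behind an af_ prefix.
  return `af_${randomBytes(32).toString('hex')}`;
}

// Only this hash is persisted; the plaintext is shown once at creation.
function hashToken(token: string): string {
  return createHash('sha256').update(token).digest('hex');
}

// Compare raw 32-byte digests, not hex strings, and validate lengths first
// so timingSafeEqual never throws on attacker-controlled input.
function tokenMatches(presented: string, storedHashHex: string): boolean {
  const a = createHash('sha256').update(presented).digest();
  const b = Buffer.from(storedHashHex, 'hex');
  if (a.length !== 32 || b.length !== 32) return false;
  return timingSafeEqual(a, b);
}
```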
- Cards: bordered icon containers, no colored gradients, uppercase tag pills
- Buttons: bg-foreground text-background matching navbar CTA pattern
- Marketplace home: centered hero with trust badge, 3-col featured row, sidebar browse
- Detail page: tabs (Overview / Spec / Changelog), monospace metadata values
- Spec tab: raw JSON viewer for the OpenAPI spec
- Changelog tab: placeholder for future version history
- Tags: bordered uppercase tracking-wider pills matching features grid pattern
- Deploy button: "Deploy to APIFold" with Vercel-style hover states
- Removed all colored category gradients, sparkle icons, and AI-aesthetic patterns
- Added static SVG logos for all 10 listings (Stripe, GitHub real logos; branded letter marks for others)
- Rewrote all descriptions for AI agent comprehension: structured "What your agent can do" sections with specific tool capabilities
- Added "Example prompts" to each listing showing real usage patterns
- Updated spec versions to latest (Stripe 3.1.0, GitHub 3.1.0)
- Enriched setup guides with credential format hints (sk_live_, github_pat_)
- Expanded tags for better search coverage
- Icons served from /public/marketplace/logos/ (static, no proxy needed)
- Created markdown-content.css with custom styles (no typography plugin needed)
- h2 as uppercase tracking-wider with border-bottom separator
- Bullet items with circle markers and proper indentation
- First h1 hidden since page header shows the listing name
- Version selector next to tabs with dropdown indicator
- Replaced broken prose classes with MarkdownContent component
…selector

- New marketplace_versions table (migration 0010) with listing_id, version, spec_hash, raw_spec, changelog, tool_count
- Version repository with findByListing and findByListingAndVersion
- GET /api/marketplace/:slug/versions endpoint
- Working VersionSelector dropdown component: fetches versions from API, shows version number, tool count, date, and "latest" badge
- Changelog tab: timeline UI with vertical connector, version dots, rendered changelog markdown per version
- Seeded 29 versions across 10 listings (3-4 versions each) with realistic changelogs including breaking changes
- Fixed double border between navbar and marketplace pages
- Public marketplace GET routes (browse, detail, categories, featured, versions) excluded from Clerk auth protection
- Deploy and other write endpoints remain protected
- Deploy button handles unauthenticated users gracefully: redirects to /sign-in with return URL back to the listing page
- Catches Clerk 404 rewrite (dev mode behavior) and treats it as an auth redirect
- Marketplace listings now use actual OpenAPI specs from packages/registry (Stripe, GitHub, Notion, OpenAI, HubSpot, Twilio, Slack)
- Deploy will produce real MCP tools (e.g. Stripe: listcharges, createcharge, listcustomers, createcustomer, listinvoices)
- Removed listings without registry specs (Resend, Sentry, Shopify, Linear)
- Added Slack listing with spec from registry
- Seed script reads spec.json files directly from packages/registry/specs/
Registry specs (stripe, github, notion, openai, hubspot, twilio, slack):

- Every operation now has a detailed description explaining when to use it, what it returns, and key behavioral notes
- All parameters have description fields with valid values and formats
- Request body schemas expanded with property descriptions and required arrays
- info.description added to every spec explaining the platform

Transformer:

- Changed description priority: prefer operation.description over operation.summary so rich descriptions flow through to MCP tools
- Previously used summary (short label) which produced unhelpful descriptions like "List charges" instead of explaining the tool's purpose
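The description-priority change boils down to a fallback chain; this sketch uses hypothetical names (`OperationLike`, `toolDescription`) and a generic last-resort label that is an assumption:

```typescript
interface OperationLike {
  operationId?: string;
  summary?: string;
  description?: string;
}

// Prefer the long-form description, fall back to the short summary,
// then to a generic label derived from the operationId.
function toolDescription(op: OperationLike): string {
  return (
    op.description?.trim() ||
    op.summary?.trim() ||
    `Invoke ${op.operationId ?? 'this operation'}`
  );
}
```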
PostHog client (posthog-js):

- Initialized with respect_dnt, localStorage+cookie persistence
- Disabled in development unless NEXT_PUBLIC_POSTHOG_DEV is set
- Session recording disabled by default, enabled after cookie consent
- Autocapture off — explicit event tracking only

PostHog server (posthog-node):

- Server-side event capture for deploy, uninstall, sign-up, token rotation
- Feature flag evaluation for server-rendered pages

Cookie consent banner:

- GDPR compliant with "Necessary only" and "Accept all" options
- Granular consent: analytics, heatmaps, session recording
- Consent stored in localStorage, applied on subsequent visits
- Respects user choice: opt-out disables all PostHog capture

Feature flags:

- Client-side isFeatureEnabled() and getFeatureFlag()
- Server-side async evaluation with user context
- Known flag constants for marketplace features

Event catalog (typed):

- marketplace_browse, marketplace_listing_view, marketplace_deploy
- server_created, cookie_consent
- Server: marketplace_deploy_server, marketplace_uninstall, user_signed_up

Wiring:

- PostHogProvider wraps app in providers.tsx with page view tracking
- User identification via Clerk on sign-in, reset on sign-out
- CookieConsent banner rendered at app root
- Env vars: NEXT_PUBLIC_POSTHOG_KEY, NEXT_PUBLIC_POSTHOG_HOST
Reverse proxy (/ingest):

- All PostHog requests route through /ingest/* to avoid ad blockers
- Proxies both GET and POST to the PostHog host
- No auth required (excluded from Clerk middleware)

Full client config:

- Heatmaps enabled (enable_heatmaps: true)
- Autocapture on for heatmap click tracking
- Error tracking (capture_exceptions: true)
- Session recording with masked inputs, enabled after consent
- Console log recording for debugging
- Toolbar support for visual element selection
- Request batching for performance

Group analytics:

- Users grouped by plan tier (free/starter/pro/enterprise)
- Super properties on every event: plan, is_admin, user_created_at

Experiments:

- getExperimentVariant() + trackExperimentExposure() helpers
- Known experiments: marketplace-layout, deploy-cta-copy, pricing-page-v2
- Feature flag payload support (getFeatureFlagPayload)
- Server-side getAllFlags() for SSR

Events wired into code paths:

- Deploy button: tracks success/failure with listing context
- Marketplace service: serverTrackDeploy on successful deploy
- Uninstall: serverTrackUninstall with server ID
- Token rotation: serverTrackTokenRotation

Extended event catalog:

- search, cta_click, version_selected, listing_tab_change
- spec_imported, server_deleted, client_error
- plan_upgrade, checkout_started (revenue tracking)
- api_request (server-side request metrics)
posthog-node uses node:async_hooks which webpack can't bundle for the browser. Split events.ts into events.client.ts (posthog-js only) and events.server.ts (posthog-node only). Client components import from events.client, API routes and services import from events.server.
…ration

CRITICAL fixes:

- Ingest proxy: path allowlist (only /e/, /decide/, /batch/, /engage/, /s/, /static/), 512KB body cap, path traversal normalization
- Consent: opt_out_capturing_by_default: true — zero tracking before explicit user consent (GDPR Article 7 compliance)

HIGH fixes:

- Removed PII from PostHog identify (no email, no name — opaque user ID only)
- Removed is_admin from super properties (no admin flag in analytics)
- Disabled console log recording (was exfiltrating logged data)
- Enabled mask_all_text: true for autocapture (safe default)
- Server-side events now flush immediately after capture (serverless safe)
- Search queries sanitized: truncated to 100 chars, email patterns redacted
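The search-query sanitization in the last bullet can be sketched as a small pure function; the regex, placeholder text, and function name are assumptions:

```typescript
// Match email-like substrings so addresses typed into search never reach
// the analytics pipeline.
const EMAIL_RE = /[^\s@]+@[^\s@]+\.[^\s@]+/g;

// Truncate to 100 chars, then redact any email patterns in what remains.
function sanitizeSearchQuery(raw: string): string {
  return raw.slice(0, 100).replace(EMAIL_RE, '[redacted]');
}
```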
Deploy workflow now triggers automatically after CI passes on master:

- workflow_run on CI completion (master branch only)
- Deploys web first, then runtime to Fly.io
- Only runs when CI succeeds (conclusion == 'success')
- Manual workflow_dispatch still available for staging and selective deploys
- Production environment protection rules apply (if configured in GitHub)

Flow: push to master → CI (lint, typecheck, test) → Deploy (web + runtime)
- Dockerfiles: updated build configs for web and runtime
- Sign-in/sign-up pages: Clerk auth route pages
- Brand logo component and SVG asset
- Features grid and footer marketing updates
- Root layout adjustments
- Runtime fly.toml config cleanup
📝 Walkthrough

Adds a Marketplace feature (DB schema, services, APIs, UI), PostHog analytics (client/server, events, consent, feature flags, ingest proxy), and per-server bearer-token auth (DB column, runtime auth, token rotation). Also updates Docker/CI, seeds/scripts, registry OpenAPI specs, and several UI/layout components.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Browser)
    participant Web as Next App
    participant API as Marketplace API
    participant DB as Postgres
    participant Runtime as MCP Runtime
    User->>Web: Click "Deploy" (listing slug)
    Web->>API: POST /api/marketplace/{slug}/deploy
    API->>DB: Load listing, check existing installs
    alt Existing install
        API-->>Web: 201 idempotent result (redirectUrl, no token)
    else New install
        API->>DB: create spec, create mcp_server (store token_hash), create install, increment install_count
        API-->>Web: 201 DeployResult (serverId, accessToken, redirectUrl)
    end
    User->>Runtime: Connect /sse or send request with token
    Runtime->>Runtime: extract token (header or ?token), sha256(token)
    Runtime->>DB: lookup server token_hash via registry
    Runtime->>Runtime: timingSafeEqual(storedHash, computedHash)
    alt match
        Runtime-->>User: allow (set req.serverTokenVerified = true)
    else mismatch
        Runtime-->>User: 401 unauthorized
    end
```
```mermaid
sequenceDiagram
    participant Client as Browser
    participant NextApp as Next App (client)
    participant PostHog as PostHog Backend
    Client->>NextApp: Page load (marketplace)
    NextApp->>NextApp: initPostHog() (if key present and consent)
    NextApp->>PostHog: capture $pageview / events (when consented)
    Client->>NextApp: Sign-in completes
    NextApp->>PostHog: identify(user.id) + set plan super property
    Client->>NextApp: Trigger deploy
    NextApp->>PostHog: trackMarketplaceDeploy(success/failure)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
…nalytics changes)
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/web/app/(marketing)/components/features-grid.tsx (1)
1-79: ⚠️ Potential issue | 🟡 Minor

Replace array index with stable keys in the dashboard visualization.
Line 142-147 uses array index as the React key in the chart visualization, which violates React linting rules. Use stable, unique values (e.g., combining bar height and position) instead of array index:
Suggested fix
```tsx
{[30, 50, 40, 70, 55, 80, 65, 90, 100].map((h, i) => (
  <div
    key={`bar-${h}-${i}`} // Use stable identifier instead of i
    className="w-3 rounded-t bg-foreground"
    style={{
      height: `${h}%`,
      opacity: 0.2 + (i / 8) * 0.8,
    }}
  />
))}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `apps/web/app/(marketing)/components/features-grid.tsx` around lines 1 - 79, In FeaturesGrid the bar-chart map uses the array index as the React key (violates linting); update the mapping inside the dashboard visualization (the map over the heights array in FeaturesGrid) to use a stable unique key instead of the index — e.g. build a string combining the bar height and its position like "bar-{height}-{index}" (or another stable identifier) and use that as the key, leaving the rest of the element props and styles unchanged.

apps/runtime/Dockerfile (1)
6-13: ⚠️ Potential issue | 🟠 Major

Add `packages/vault/package.json` to the `deps` stage.
`apps/runtime` declares a direct dependency on `@apifold/vault` (workspace protocol). The `deps` stage must copy `packages/vault/package.json` before running `pnpm install` so that pnpm can resolve the workspace link. Without it, the dependency cannot be properly installed, even though the package is later copied in the `runner` stage.

Proposed fix
```diff
 COPY packages/logger/package.json ./packages/logger/
+COPY packages/vault/package.json ./packages/vault/
 COPY packages/eslint-config/package.json ./packages/eslint-config/
 RUN pnpm install --frozen-lockfile --prod=false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/runtime/Dockerfile` around lines 6 - 13, The deps stage in the Dockerfile is missing packages/vault/package.json which prevents pnpm from resolving the workspace dependency `@apifold/vault` for apps/runtime; update the deps stage COPY list to include packages/vault/package.json alongside the other package.json copies before the RUN pnpm install --frozen-lockfile --prod=false so pnpm can link the workspace package properly (ensure the change is in the deps stage of the Dockerfile where package.json files are copied and pnpm install is executed).
🟠 Major comments (22)
.github/workflows/deploy.yml-34-59 (1)
34-59: ⚠️ Potential issue | 🟠 Major

Serialize production deploys.
There is no concurrency guard here, so two production deploys can run at once. When both `auto-deploy` (triggered after CI passes on master) and `manual-deploy` (triggered manually to production) run simultaneously, they both access the same Fly applications using the same API token. The slower workflow finishes last, potentially rolling back the faster deploy with an older revision.

💡 Proposed fix
```diff
 auto-deploy:
+  concurrency:
+    group: deploy-production
+    cancel-in-progress: true
   if: >
     github.event_name == 'workflow_run' &&
     github.event.workflow_run.conclusion == 'success'
@@
 manual-deploy:
+  concurrency:
+    group: deploy-${{ inputs.environment }}
+    cancel-in-progress: true
   if: github.event_name == 'workflow_dispatch'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/deploy.yml around lines 34 - 59, Add a GitHub Actions concurrency block to serialize production deploys so auto-deploy and manual-deploy cannot run at the same time: add a concurrency: group: "production-deploy" and cancel-in-progress: true (or a group using environment/inputs) to both the auto-deploy and manual-deploy jobs so the flyctl deploy steps (the run lines invoking flyctl deploy in auto-deploy and manual-deploy) are executed one at a time and any in-progress older job is cancelled.

packages/registry/specs/twilio/spec.json-152-184 (1)
152-184: ⚠️ Potential issue | 🟠 Major

Add schema constraint for the `Url`/`Twiml` requirement.

The descriptions state that either `Url` or `Twiml` is required, but the schema only enforces `To` and `From`. This allows invalid payloads that lack both `Url` and `Twiml` to pass validation. Use `anyOf` to enforce this constraint:

Suggested change
"schema": { "type": "object", "required": [ "To", "From" ], + "anyOf": [ + { "required": ["Url"] }, + { "required": ["Twiml"] } + ], "properties": {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/twilio/spec.json` around lines 152 - 184, The object schema currently requires only "To" and "From" but must enforce that either "Url" or "Twiml" is present; update the same object schema (the one with properties "To", "From", "Url", "Twiml", etc.) to include an anyOf constraint that requires either ["Url"] or ["Twiml"] in addition to the existing required fields so payloads missing both are rejected.

packages/registry/specs/twilio/spec.json-86-124 (1)
86-124: ⚠️ Potential issue | 🟠 Major

Change Twilio Messages and Calls endpoints to use form-encoded requests.

Twilio's `POST /Messages.json` and `POST /Calls.json` endpoints require `application/x-www-form-urlencoded`, not `application/json`. Using JSON will cause requests to fail with parameter errors. Generated clients/executors relying on this spec would send incorrectly formatted requests.

Suggested change
- "application/json": { + "application/x-www-form-urlencoded": {Apply to both
sendMessage(lines 86-124) andmakeCall(lines 148-187).🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/twilio/spec.json` around lines 86 - 124, The POST bodies for Twilio endpoints are currently defined as JSON but Twilio requires application/x-www-form-urlencoded; update the requestBody for both sendMessage and makeCall to use content type "application/x-www-form-urlencoded" instead of "application/json", convert the schema to an appropriate form-style schema (properties as form fields: To, From, Body, StatusCallback, and MediaUrl encoded as repeated "MediaUrl" fields or comma-separated per Twilio rules) and ensure required fields remain (To, From, Body); this change should be applied to the sendMessage and makeCall operation requestBody definitions so generated clients send form-encoded parameters.

packages/registry/specs/twilio/spec.json-1-13 (1)
1-13: ⚠️ Potential issue | 🟠 Major

Declare Twilio auth in the OpenAPI document.

Twilio REST API requests use HTTP Basic authentication, but this document doesn't define a `securitySchemes` entry or apply `security` globally/per-operation. Generated marketplace clients will otherwise treat these operations as anonymous and fail unless auth is injected out-of-band. Twilio credentials (API Key + Secret or Account SID + Auth Token) must be base64-encoded in the Authorization header.

Suggested change
{ "openapi": "3.0.3", "info": { "title": "Twilio REST API", "version": "2010-04-01", "description": "Twilio is a communications platform for sending SMS/MMS messages and making phone calls programmatically. Use this API to send text messages, retrieve message history, and initiate voice calls. Supports domestic and international messaging and calling." }, + "components": { + "securitySchemes": { + "twilioBasicAuth": { + "type": "http", + "scheme": "basic" + } + } + }, + "security": [ + { + "twilioBasicAuth": [] + } + ], "servers": [🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/twilio/spec.json` around lines 1 - 13, The OpenAPI document for Twilio is missing a security declaration so generated clients treat requests as anonymous; add a components.securitySchemes entry (e.g., name it "basicAuth" or "twilio_basic") with type: "http" and scheme: "basic", and apply a global "security" array referencing that scheme (e.g., "basicAuth": []) so all operations require HTTP Basic auth; note Twilio expects Account SID (or API Key) as username and Auth Token (or Secret) as password which clients will send base64-encoded in the Authorization header.

apps/web/components/marketplace/markdown-content.tsx-10-12 (1)
10-12: ⚠️ Potential issue | 🟠 Major

Sanitize the HTML before rendering with `dangerouslySetInnerHTML`.

The `renderMarkdown` function converts markdown to HTML using remark, but applies no sanitization. Since this HTML comes from database content (marketplace descriptions, setup guides, and changelogs) and flows directly into `dangerouslySetInnerHTML`, unsanitized content becomes an XSS sink. DOMPurify (v3.3.3) is already installed in the project — use it to sanitize the HTML after `renderMarkdown` and before passing to `MarkdownContent`, or wrap the sanitized output in a safe-HTML type to enforce the contract.
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/markdown-content.tsx` around lines 10 - 12, The rendered HTML from renderMarkdown is not sanitized before being fed into the MarkdownContent component via dangerouslySetInnerHTML, creating an XSS risk; fix it by importing and using DOMPurify (v3.3.3) to sanitize the HTML output of renderMarkdown (or sanitize inside MarkdownContent) and pass the sanitized string to dangerouslySetInnerHTML (or wrap it in a safe-HTML contract), updating references to renderMarkdown, MarkdownContent, and the dangerouslySetInnerHTML usage so the component only ever receives and renders DOMPurify.sanitize(html).

apps/web/app/api/marketplace/my-installs/route.ts-20-22 (1)
20-22: 🛠️ Refactor suggestion | 🟠 Major

Use `getReadDb()` for read-only operations.

This endpoint only reads data but uses `getDb()` instead of `getReadDb()`. Other marketplace routes (featured, versions, detail) use `getReadDb()` for read operations. Using the read replica improves consistency across the codebase and distributes load appropriately.

♻️ Proposed fix
- const db = getDb(); + const db = getReadDb();Also update the import:
-import { getDb } from '../../../../lib/db/index'; +import { getReadDb } from '../../../../lib/db/index';🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/marketplace/my-installs/route.ts` around lines 20 - 22, Replace the read connection with the read-replica by switching getDb() to getReadDb() where the MarketplaceInstallRepository is constructed for the read-only path (the lines creating db, new MarketplaceInstallRepository(db) and calling installRepo.findByUser(userId)). Also update the import to pull getReadDb instead of getDb so the endpoint uses the read-only database client consistent with other marketplace routes.

apps/web/app/api/marketplace/route.ts-14-18 (1)
14-18: ⚠️ Potential issue | 🟠 Major

Cache-hit and cache-miss return different response shapes.

At Line 17 you return `createSuccessResponse(cached)`, but on miss (Line 27 - Line 32) you return `createSuccessResponse(result.items, meta)`. Since Line 24 caches `result`, clients will get inconsistent payloads depending on cache state.

🐛 Proposed fix (cache final payload, return consistently)

```diff
   const cacheKey = marketplaceCache.buildBrowseKey(params);
   const cached = await marketplaceCache.getCached(cacheKey);
   if (cached) {
-    return NextResponse.json(createSuccessResponse(cached));
+    return NextResponse.json(cached);
   }
@@
-  await marketplaceCache.setCached(cacheKey, result, 'browse');
-
-  return NextResponse.json(
-    createSuccessResponse(result.items, {
-      total: result.total,
-      page: result.page,
-      limit: result.limit,
-      hasMore: result.page < result.totalPages,
-    }),
-  );
+  const payload = createSuccessResponse(result.items, {
+    total: result.total,
+    page: result.page,
+    limit: result.limit,
+    hasMore: result.page < result.totalPages,
+  });
+  await marketplaceCache.setCached(cacheKey, payload, 'browse');
+  return NextResponse.json(payload);
```

Also applies to: 24-33
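The inconsistency boils down to caching a different object than the one the miss path returns. A minimal, framework-free sketch of the pattern (the `Envelope` type, the in-memory `Map` cache, and the `browse` helper below are illustrative stand-ins for the route's actual helpers, not its real API):

```typescript
// Simplified stand-in for the route's success-envelope helper.
type Envelope<T> = { data: T; meta?: Record<string, unknown> };

// In-memory stand-in for marketplaceCache.
const cache = new Map<string, Envelope<string[]>>();

function createSuccessResponse<T>(
  data: T,
  meta?: Record<string, unknown>,
): Envelope<T> {
  return meta ? { data, meta } : { data };
}

function browse(key: string, items: string[], total: number): Envelope<string[]> {
  const hit = cache.get(key);
  if (hit) return hit; // a hit replays the exact envelope stored on miss
  const payload = createSuccessResponse(items, { total });
  cache.set(key, payload); // cache the final payload, not the raw result
  return payload;
}
```

The key point is that the cache stores the exact envelope the miss path returns, so hits and misses can never diverge in shape.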
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/marketplace/route.ts` around lines 14 - 18, The cached response and live response use different shapes (you cache `result` but return `createSuccessResponse(result.items, meta)`), causing inconsistent payloads on hits vs misses; update the handler so you construct a single final payload (e.g., an object containing items and meta) and use that same payload for both caching and responding: build the payload after computing `result`/`meta`, call `marketplaceCache.setCache(cacheKey, payload)` (or equivalent) and return `createSuccessResponse(payload)` on miss, and on hit return `createSuccessResponse(cached)` — locate references to marketplaceCache.buildBrowseKey, marketplaceCache.getCached, marketplaceCache.setCache (or where caching is written), createSuccessResponse, and the `result`/`result.items`/`meta` variables to implement this change.

apps/web/scripts/run-migration.ts-18-23 (1)
18-23: ⚠️ Potential issue | 🟠 Major

Avoid `process.exit(1)` inside `catch`; let `finally` complete async cleanup.

At line 21, `process.exit(1)` terminates the event loop before the `await sql.end()` call in the finally block can complete, leaving the database connection ungracefully closed. Use `process.exitCode = 1` and `return` instead to allow full cleanup.

🛠️ Proposed fix

```diff
 } catch (e: unknown) {
   const msg = e instanceof Error ? e.message : String(e);
   console.error('Migration failed:', msg);
-  process.exit(1);
+  process.exitCode = 1;
+  return;
 } finally {
   await sql.end();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/scripts/run-migration.ts` around lines 18 - 23, The catch block currently calls process.exit(1) which prevents the finally block's async cleanup (await sql.end()) from running; update the catch in run-migration.ts to set process.exitCode = 1 and then return instead of calling process.exit(1) so the function can complete and the finally block (including sql.end()) can run to gracefully close the DB connection.

apps/web/lib/analytics/posthog-client.ts-24-28 (1)
24-28: ⚠️ Potential issue | 🟠 Major

Heatmap element attributes should be masked to prevent sensitive data capture.

At lines 25-27, `mask_all_element_attributes: false` leaves DOM attributes unmasked in heatmaps despite `enable_heatmaps: true` on line 30. This contradicts the code comment's claim of "safe masking defaults" and risks leaking PII from attributes. PostHog's heatmap privacy best practice is to mask element attributes by default.

🔒 Recommended fix

```diff
   autocapture: true,
   mask_all_text: true,
-  mask_all_element_attributes: false,
+  mask_all_element_attributes: true,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/analytics/posthog-client.ts` around lines 24 - 28, The heatmap config in the PostHog client uses unsafe defaults; update the PostHog config object (the keys autocapture, mask_all_text, mask_all_element_attributes within the client initialization in posthog-client.ts) to set mask_all_element_attributes to true so element attributes are masked when enable_heatmaps is true; ensure the change leaves autocapture and mask_all_text as-is and keeps enable_heatmaps enabled to maintain the "safe masking defaults" behavior.

packages/registry/specs/hubspot/spec.json-1-169 (1)
1-169: ⚠️ Potential issue | 🟠 Major

Add explicit OpenAPI auth requirements (`securitySchemes` + global `security`).

The spec currently lacks machine-readable authentication policy (no `securitySchemes` or `security` fields), which allows generated clients and tooling to treat endpoints as unauthenticated. HubSpot APIs require Bearer token authentication using private app tokens. Add a `securitySchemes` definition and apply it globally.

🔐 Proposed fix

```diff
 {
   "openapi": "3.0.3",
   "info": {
@@
   "servers": [
@@
   ],
+  "components": {
+    "securitySchemes": {
+      "hubspotPrivateAppToken": {
+        "type": "http",
+        "scheme": "bearer",
+        "bearerFormat": "Private App Token"
+      }
+    }
+  },
+  "security": [
+    {
+      "hubspotPrivateAppToken": []
+    }
+  ],
   "paths": {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/hubspot/spec.json` around lines 1 - 169, The OpenAPI spec is missing machine-readable auth; add a components.securitySchemes entry (e.g., name it "bearerAuth" with type: "http", scheme: "bearer", bearerFormat: "JWT" or "Token") and then add a top-level "security" array that requires this scheme globally so endpoints like operationId listContacts, createContact, listDeals and listCompanies are treated as authenticated; update the root object to include the new components.securitySchemes and global security requirement.

apps/web/app/(marketing)/marketplace/[slug]/page.tsx-21-29 (1)
21-29: ⚠️ Potential issue | 🟠 Major

Don't collapse database failures into 404s.

This helper turns "listing not found" and "read path blew up" into the same `null` result, so transient DB failures render `notFound()` and bad metadata instead of a 500. Let unexpected errors bubble.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/(marketing)/marketplace/[slug]/page.tsx` around lines 21 - 29, The getListing helper currently catches all errors and returns null, collapsing DB/other failures into a 404; change getListing so it does not swallow unexpected exceptions: call repo.findPublishedBySlug(slug) and only return null when the repository explicitly indicates "not found" (e.g., a null/undefined result or a specific NotFoundError from MarketplaceListingRepository.findPublishedBySlug), but rethrow any other exceptions so they produce a 500; update the catch block (or remove it) in getListing to only handle the known not-found case and let other errors bubble.

apps/web/lib/db/migrations/0008_marketplace.sql-84-84 (1)
84-84: ⚠️ Potential issue | 🟠 Major

This uniqueness constraint blocks multiple installs of the same listing.

`(listing_id, user_id)` allows a user to deploy a listing only once, even though each install already has its own `server_id`. That prevents common dev/prod or multi-region installs of the same template.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/migrations/0008_marketplace.sql` at line 84, The UNIQUE index idx_mi_listing_user on marketplace_installs (listing_id, user_id) prevents a user from creating multiple installs of the same listing; change the constraint so uniqueness includes server_id (or remove uniqueness if intended) — e.g., replace the index creation with a UNIQUE index on (listing_id, user_id, server_id) to allow multiple installs per user for different servers while preserving per-server uniqueness.

apps/web/scripts/seed-marketplace.ts-294-350 (1)
294-350: ⚠️ Potential issue | 🟠 Major

Don't leave the seed run's top-level promise floating.

`main()` is invoked without `.catch`, and `sql.end()` only runs on the happy path. A missing spec or failed upsert can skip cleanup and turn the failure into an unhandled rejection.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/scripts/seed-marketplace.ts` around lines 294 - 350, main() is called without error handling so failures can skip cleanup; wrap the invocation in a try/catch/finally so sql.end() always runs and errors are surfaced. Modify the top-level run to call the existing async main() from a small runner that catches errors from loadSpec/INSERTs and logs them (including the thrown error), ensures sql.end() is awaited in a finally block (referencing the sql variable created near the top of main and the main() function itself), and exits non-zero on failure; this guarantees SEED_LISTINGS processing and resources are cleaned up even on errors.

apps/web/lib/db/migrations/0008_marketplace.sql-91-91 (1)
91-91: ⚠️ Potential issue | 🟠 Major

Let users report the same listing again after the first report is closed.

This global unique index means a dismissed/reviewed report permanently prevents the same reporter from filing a new one. If the goal is only to suppress duplicate open reports, make the uniqueness partial on `status = 'open'`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/migrations/0008_marketplace.sql` at line 91, Replace the global unique index idx_mrp_user_listing on marketplace_reports (reporter_id, listing_id) with a partial unique index that only enforces uniqueness for open reports; drop or alter the existing idx_mrp_user_listing and recreate it as a partial unique index on (reporter_id, listing_id) with WHERE status = 'open' so closed/dismissed reports no longer block new reports from the same reporter for the same listing.

apps/web/components/marketplace/deploy-button.tsx-34-47 (1)
34-47: ⚠️ Potential issue | 🟠 Major

Don't treat generic fetch failures as auth failures.

If `res.json()` throws or the request fails for a non-auth reason, this path redirects an already-authenticated user to `/sign-in` and hides the actual deploy problem. Only redirect when the response proves an auth failure; otherwise surface an error state and track it as a failed deploy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/deploy-button.tsx` around lines 34 - 47, The catch block currently assumes any exception means an auth redirect and sends users to sign-in; instead, only redirect when the response proves an auth failure (e.g., res.status === 401 || res.status === 403). Update the flow in the deploy handler around res, data, setError, trackMarketplaceDeploy and router.push so that: after awaiting fetch and before calling res.json() handle non-ok responses by reading data (if available) and treating auth status codes as redirect, otherwise setError and call trackMarketplaceDeploy(..., success: false, error: msg); also wrap res.json() in a try/catch so JSON parse errors or network errors fall into the generic error handling (setError + track failure) and only perform router.push('/sign-in?...') when you have a confirmed auth status from res.

apps/web/app/(marketing)/marketplace/[slug]/page.tsx-148-155 (1)
148-155: ⚠️ Potential issue | 🟠 Major

The version selector is cosmetic right now.

Selecting a version only changes the client-side label; the page still renders the slug's current `listing` data for overview/spec/changelog content. Wire the selection into routing/data loading or hide the control until it actually changes what the user sees.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/(marketing)/marketplace/[slug]/page.tsx` around lines 148 - 155, The VersionSelector is only updating client-side UI and not changing the data shown; either wire it into routing/data loading or hide it. Update the page component that renders ListingTabs and VersionSelector to read the selected version from the URL (e.g., searchParams.version or a route segment) and pass that version into your listing loader (replace calls that use listing.specVersion with the selected version when present, e.g., getListing(listing.slug, version) or fetchListingVersion(slug, version)); then change VersionSelector to navigate (Link or router.push) to the same page with the selected version in the query/route so the server/component re-loads the correct listing version (or, if you prefer a temporary fix, hide VersionSelector until routing/loading is implemented). Ensure you update references to VersionSelector, listing.specVersion, ListingTabs/activeTab and any data-fetching helpers so they consume the version param.

apps/web/scripts/seed-marketplace.ts-327-340 (1)
327-340: ⚠️ Potential issue | 🟠 Major

Upsert the rest of the seeded metadata too.

`category`, `author_type`, and `recommended_auth_mode` are inserted but never updated on conflict. Rerunning the seed can leave browse filters and deploy defaults stale after the registry data changes.

Proposed fix

```diff
 ON CONFLICT (slug) DO UPDATE SET
   name = EXCLUDED.name,
   short_description = EXCLUDED.short_description,
   long_description = EXCLUDED.long_description,
+  category = EXCLUDED.category,
   icon_url = EXCLUDED.icon_url,
+  author_type = EXCLUDED.author_type,
   tags = EXCLUDED.tags,
   raw_spec = EXCLUDED.raw_spec,
   spec_version = EXCLUDED.spec_version,
   spec_hash = EXCLUDED.spec_hash,
   setup_guide = EXCLUDED.setup_guide,
   api_docs_url = EXCLUDED.api_docs_url,
   recommended_base_url = EXCLUDED.recommended_base_url,
+  recommended_auth_mode = EXCLUDED.recommended_auth_mode,
   featured = EXCLUDED.featured,
   status = 'published'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/scripts/seed-marketplace.ts` around lines 327 - 340, The upsert ON CONFLICT clause (the "ON CONFLICT (slug) DO UPDATE SET" block) currently updates many fields but omits category, author_type, and recommended_auth_mode, so rerunning the seed won't refresh those values; update the DO UPDATE SET list to include category = EXCLUDED.category, author_type = EXCLUDED.author_type, and recommended_auth_mode = EXCLUDED.recommended_auth_mode so those seeded columns are overwritten on conflict and stay in sync with registry changes.

apps/web/lib/db/repositories/marketplace-report.repository.ts-11-17 (1)
11-17: ⚠️ Potential issue | 🟠 Major

Narrow `create()` to user-report fields only.

Accepting full `$inferInsert` lets a caller set moderation-owned columns like `status`, `reviewedBy`, or custom ids/timestamps. Force `status: 'open'` here and take an explicit DTO instead.

Proposed fix

```diff
+type CreateMarketplaceReportInput = Pick<
+  typeof marketplaceReports.$inferInsert,
+  'listingId' | 'reporterId' | 'reason' | 'details'
+>;
+
 async create(
-  input: typeof marketplaceReports.$inferInsert,
+  input: CreateMarketplaceReportInput,
 ): Promise<MarketplaceReport> {
   const rows = await this.db
     .insert(marketplaceReports)
-    .values(input)
+    .values({ ...input, status: 'open' })
     .returning();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/repositories/marketplace-report.repository.ts` around lines 11 - 17, The create method currently accepts the full marketplaceReports.$inferInsert which allows callers to set moderation-owned fields; change create(input) to accept a narrow DTO (e.g., CreateMarketplaceReportDto or similar) containing only user-supplied fields (reporterId, targetId, reason, description, etc.), explicitly set status: 'open' and omit/override fields like status, reviewedBy, id, createdAt, updatedAt before inserting via marketplaceReports; update the code paths calling create to construct the new DTO and ensure the database insert uses your DTO plus the forced status and any server-generated fields.

apps/web/lib/analytics/feature-flags.ts-64-70 (1)
64-70: ⚠️ Potential issue | 🟠 Major

Update `getAllFlagsForUser` return type to match PostHog's actual return.

The return type should be `Promise<Record<string, any>>` to match PostHog's `getAllFlags`, which can return flag values of any type (boolean, string, object, etc.). The current type `Promise<Record<string, string | boolean>>` is overly restrictive and doesn't reflect what PostHog actually returns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/analytics/feature-flags.ts` around lines 64 - 70, The return type of getAllFlagsForUser is too restrictive; change its signature from Promise<Record<string, string | boolean>> to Promise<Record<string, any>> to match PostHog's ph.getAllFlags return type; update the function declaration for getAllFlagsForUser and ensure any callers that relied on the narrower union are adjusted to handle any-typed flag values (refer to function getAllFlagsForUser and the ph.getAllFlags call to locate the code).

apps/web/lib/db/repositories/marketplace-listing.repository.ts-74-99 (1)
74-99: ⚠️ Potential issue | 🟠 Major

Bug: `total` count ignores full-text search filter.

The count query (Lines 75-78) uses only the base `whereClause`, but when `params.q` is provided, the items query (Lines 93-98) adds an additional `searchCondition`. This means:

- `total` reflects all listings matching category/author filters
- `items` only contains listings also matching the search query
- `totalPages` will be inflated, showing more pages than actually exist

The count query needs to include the same search condition.

🐛 Proposed fix

```diff
 async searchPublished(params: BrowseMarketplaceInput): Promise<BrowseResult> {
   const conditions = [eq(marketplaceListings.status, 'published')];

   if (params.category) {
     conditions.push(eq(marketplaceListings.category, params.category));
   }

   if (params.author_type) {
     conditions.push(eq(marketplaceListings.authorType, params.author_type));
   }

-  const whereClause = and(...conditions);
+  // Add full-text search condition if query provided
+  if (params.q) {
+    conditions.push(
+      sql`to_tsvector('english', ${marketplaceListings.name} || ' ' || ${marketplaceListings.shortDescription} || ' ' || array_to_string(${marketplaceListings.tags}, ' ')) @@ plainto_tsquery('english', ${params.q})`
+    );
+  }
+
+  const whereClause = and(...conditions);
   const offset = (params.page - 1) * params.limit;

   // Count total
   const [countRow] = await this.db
     .select({ count: sql<number>`count(*)::int` })
     .from(marketplaceListings)
     .where(whereClause);
   const total = countRow?.count ?? 0;

-  // Build query with search and sorting
-  let query = this.db
+  // Build query with sorting
+  const query = this.db
     .select()
     .from(marketplaceListings)
     .where(whereClause)
     .limit(params.limit)
     .offset(offset);

-  // Full-text search via plainto_tsquery
-  if (params.q) {
-    const searchCondition = sql`to_tsvector('english', ${marketplaceListings.name} || ' ' || ${marketplaceListings.shortDescription} || ' ' || array_to_string(${marketplaceListings.tags}, ' ')) @@ plainto_tsquery('english', ${params.q})`;
-    query = this.db
-      .select()
-      .from(marketplaceListings)
-      .where(and(whereClause, searchCondition))
-      .limit(params.limit)
-      .offset(offset);
-  }
-
   // Sort
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/repositories/marketplace-listing.repository.ts` around lines 74 - 99, The count currently uses only whereClause so total (countRow/total) ignores the full-text search added when params.q is present; update the count logic to build the same searchCondition (the sql using to_tsvector/plainto_tsquery) and use and(whereClause, searchCondition) when computing countRow (i.e., run the same select({count: sql`count(*)::int`}).from(marketplaceListings).where(and(whereClause, searchCondition)) when params.q exists) so total matches the items query and totalPages is correct; reference countRow, total, marketplaceListings, whereClause, params.q, searchCondition, and query when making the change.

apps/web/lib/db/repositories/marketplace-install.repository.ts-115-129 (1)
115-129: ⚠️ Potential issue | 🟠 Major

`_newHash` parameter is unused; hash comparison logic appears incomplete.

The method accepts `_newHash` and the comment on Line 125 mentions "only flag installs that have a different version hash," but the actual query only checks `isUpdateAvailable === false`. It never compares `installedVersionHash` with the new hash. This means all installs for the listing get flagged as having updates available, even if they're already on the latest version.

🐛 Proposed fix to compare hashes

```diff
 async markUpdateAvailable(
   listingId: string,
-  _newHash: string,
+  newHash: string,
 ): Promise<void> {
   await this.db
     .update(marketplaceInstalls)
     .set({ isUpdateAvailable: true })
     .where(
       and(
         eq(marketplaceInstalls.listingId, listingId),
-        // Only flag installs that have a different version hash
-        eq(marketplaceInstalls.isUpdateAvailable, false),
+        // Only flag installs on older versions
+        sql`${marketplaceInstalls.installedVersionHash} != ${newHash}`,
       ),
     );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/db/repositories/marketplace-install.repository.ts` around lines 115 - 129, The markUpdateAvailable function currently ignores the _newHash parameter and only checks isUpdateAvailable; update the DB update so it also compares installedVersionHash against the provided new hash (use the actual parameter name instead of _newHash) — e.g., add a condition like neq(marketplaceInstalls.installedVersionHash, newHash) (and handle possible null/undefined hashes if needed) to the where clause alongside eq(marketplaceInstalls.listingId, listingId) and eq(marketplaceInstalls.isUpdateAvailable, false) so only installs with a different version hash are flagged.

apps/web/app/(marketing)/marketplace/page.tsx-74-99 (1)
74-99: ⚠️ Potential issue | 🟠 Major

Search results display incorrect total count and pagination.

The count query (lines 75-78) uses only the base `whereClause` (published status plus optional category/author type filters), but the results query (lines 91-98) also applies the full-text search condition when `params.q` is provided. This causes the `total` value to reflect the count of all filtered items rather than the count of items matching the search query, resulting in inflated pagination and misleading result counts in the UI.

To fix, apply the search condition to the count query when `params.q` is provided, matching the filters applied to the results query.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/(marketing)/marketplace/page.tsx` around lines 74 - 99, The count query is using only the base whereClause (published + optional category/author filters) while the results query also applies the full-text search when params.q is present, so update the count logic to include the same search condition when params.q exists; locate the code that builds whereClause and the count call (referenced by whereClause and params.q in the page component) and merge/apply the same search filter used for fetching listings to the count query so total reflects only items matching the search and other filters.
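The fix pattern behind both count-mismatch findings can be sketched without Drizzle or a database: assemble every predicate, including the search one, into a single list, then derive both the count and the page of results from that one list. The `browse` helper and `Listing` shape below are hypothetical illustrations, not the repository's real API:

```typescript
type Listing = { name: string; category: string; published: boolean };

function browse(listings: Listing[], category?: string, q?: string) {
  // One shared predicate list, mirroring the repo's `conditions` array.
  const conditions: Array<(l: Listing) => boolean> = [(l) => l.published];
  if (category) conditions.push((l) => l.category === category);
  if (q) conditions.push((l) => l.name.toLowerCase().includes(q.toLowerCase()));

  const matches = (l: Listing) => conditions.every((c) => c(l));
  const total = listings.filter(matches).length; // the count sees every filter
  const items = listings.filter(matches).slice(0, 20); // so does the page query
  return { total, items };
}
```

Because `total` and `items` are computed from the same predicate list, they cannot disagree, which is exactly the invariant the reviewed queries currently break.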
🟡 Minor comments (10)
apps/web/components/marketplace/search-bar.tsx-28-36 (1)
28-36: ⚠️ Potential issue | 🟡 Minor

Add an explicit label for the search field.

W3C guidance expects form controls to have labels, with `aria-label` as the fallback when the visible label is intentionally hidden. This field currently relies only on placeholder text. (w3.org)

Suggested change

```diff
-      <form onSubmit={handleSearch} className="relative w-full max-w-lg">
-        <Search className="absolute left-3 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground" />
+      <form onSubmit={handleSearch} className="relative w-full max-w-lg">
+        <label htmlFor="marketplace-search" className="sr-only">
+          Search integrations
+        </label>
+        <Search
+          aria-hidden="true"
+          className="absolute left-3 top-1/2 h-4 w-4 -translate-y-1/2 text-muted-foreground"
+        />
         <input
+          id="marketplace-search"
           type="text"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/search-bar.tsx` around lines 28 - 36, The search input in the SearchBar component is missing an accessible label; update the JSX around the input used by handleSearch, query and setQuery to include an explicit label element (visually visible or visually-hidden) or add an aria-label/aria-labelledby to the input so it is properly announced by assistive tech; ensure the label text (e.g., "Search integrations") is unique and tied to the input element used in the component.

packages/registry/specs/twilio/spec.json-109-114 (1)
109-114: ⚠️ Potential issue | 🟡 Minor

Add schema constraints to match documented bounds.

`MediaUrl` is documented as "up to 10 URLs" but lacks `maxItems: 10`, and `Timeout` is documented with a "Maximum is 600" but lacks `maximum: 600`. Without these constraints in the schema, generated validators won't enforce the documented limits.

Suggested change

```diff
 "MediaUrl": {
   "type": "array",
   "description": "URLs of media files to attach as MMS (e.g. images, GIFs). Up to 10 URLs. Each URL must be publicly accessible. Only supported for MMS-capable numbers.",
+  "maxItems": 10,
   "items": { "type": "string" }
 },
@@
 "Timeout": {
   "type": "integer",
   "description": "Number of seconds to wait for the call to be answered before giving up. Defaults to 60. Maximum is 600.",
+  "maximum": 600,
   "default": 60
 }
```

Also applies to: 178-181
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/twilio/spec.json` around lines 109 - 114, The JSON schema is missing bounds: add "maxItems": 10 to the MediaUrl array schema (the MediaUrl property) so validators enforce the documented "up to 10 URLs" limit, and add "maximum": 600 to the Timeout numeric schema (the Timeout property) to enforce the documented maximum of 600; apply these updates to every occurrence of MediaUrl and Timeout in the spec (including the other instance noted around the later block) so generated validators will enforce both constraints.

apps/web/components/marketplace/search-bar.tsx-10-24 (1)
10-24: ⚠️ Potential issue | 🟡 Minor

Sync `query` state with URL parameter on navigation.

`useSearchParams()` updates when the URL changes (including back/forward navigation), but `useState` initialized once does not automatically reflect those changes. If a user navigates back to this page, the URL will show the previous search term but the input field will display the old local state, creating a mismatch.

Suggested change

```diff
-import { useCallback, useState } from 'react';
+import { useCallback, useEffect, useState } from 'react';
@@
-  const [query, setQuery] = useState(searchParams.get('q') ?? '');
+  const urlQuery = searchParams.get('q') ?? '';
+  const [query, setQuery] = useState(urlQuery);
+
+  useEffect(() => {
+    setQuery(urlQuery);
+  }, [urlQuery]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/search-bar.tsx` around lines 10 - 24, The local query state initialized with useState isn't synced when URL searchParams change (e.g., browser back/forward), so add an effect that updates query via setQuery whenever searchParams changes: use a useEffect that reads searchParams.get('q') ?? '' and calls setQuery with that value; ensure this keeps handleSearch (which reads query and uses router, searchParams) intact and include searchParams in the effect dependency array so the input reflects URL updates.

apps/web/components/marketplace/listing-tabs.tsx-24-37 (1)
24-37: ⚠️ Potential issue | 🟡 Minor

Mark the active tab with `aria-current` for accessibility.

At Lines 24-37, active state is visual-only. Add `aria-current="page"` on the active link so screen readers announce the selected tab.

♿ Proposed fix

```diff
 <Link
   key={tab.id}
   href={href}
+  aria-current={isActive ? 'page' : undefined}
   className={`relative px-4 py-3 text-sm transition-colors duration-150 ${
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/listing-tabs.tsx` around lines 24 - 37, The active tab Link currently only changes visually; update the Link element (the JSX using Link with key={tab.id}, href={href} and isActive) to include aria-current="page" when isActive is true (e.g., conditionally set aria-current={isActive ? "page" : undefined}) so screen readers can announce the selected tab.

apps/web/components/marketplace/version-selector.tsx-25-31 (1)
25-31: ⚠️ Potential issue | 🟡 Minor

Surface version-loading failures instead of swallowing them.
A failed fetch currently leaves the selector silently empty, which looks the same as “no versions available.” Preserve an error state or at least telemetry here so the UI can explain that the list failed to load.
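A minimal, framework-free sketch of the suggested error state follows; `loadVersions` and the `VersionsState` union are hypothetical stand-ins for the component's fetch-and-set-state effect, not its actual code:

```typescript
// Distinguish "no versions" from "failed to load" with an explicit error arm.
type VersionsState =
  | { status: 'loaded'; versions: string[] }
  | { status: 'error'; message: string };

async function loadVersions(
  fetchVersions: () => Promise<string[]>, // stand-in for the fetch call
): Promise<VersionsState> {
  try {
    const versions = await fetchVersions();
    return { status: 'loaded', versions }; // [] here genuinely means "no versions"
  } catch (err) {
    // Surface the failure instead of rendering an empty selector.
    const message = err instanceof Error ? err.message : String(err);
    return { status: 'error', message };
  }
}
```

The UI can then render a "couldn't load versions" message for the `error` arm while keeping the empty-list case honest.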
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/version-selector.tsx` around lines 25 - 31, The fetch in the useEffect inside the VersionSelector component swallows failures and leaves the UI indistinguishable from "no versions"; update the effect to catch and propagate errors by setting an error state (e.g., add setVersionsError / setLoadError) when the fetch or res.json() fails and/or call the existing telemetry/logging helper (or console.error) with the error and context (include slug and response status), and only clear the error when a successful data.data result is received (ensure existing setVersions(data.data) remains). Locate the useEffect that calls fetch(`/api/marketplace/${slug}/versions`) and modify its .catch and response handling to surface the failure to the component state and telemetry so the UI can display an appropriate message.

apps/web/lib/services/marketplace-service.ts-45-55 (1)
45-55: ⚠️ Potential issue | 🟡 Minor

Idempotency return uses listing `slug` instead of actual `serverSlug`.

When returning an existing install (Line 48), `serverSlug` is set to the listing's `slug`, not the actual server's slug which includes a random suffix. This could cause issues if callers expect the actual server slug.

Consider fetching the actual server slug from the install's `serverId` or storing `serverSlug` in the install record.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/services/marketplace-service.ts` around lines 45 - 55, The idempotent-return block returns a listing `slug` into the install object (serverSlug: slug) instead of the actual server slug for the existing install; update the logic in the marketplace-service idempotency path so serverSlug comes from the existing install/server data (e.g., look up the server slug by existing.serverId or read a stored serverSlug field from the install record) and return that value in place of `slug` so callers receive the real server slug; adjust any data-access call (fetchServerById/getServerSlug or ensure install.serverSlug is persisted) used by this code path (the symbols to change are the `existing` object, the `serverSlug` property in the returned object, and the `slug` variable currently used).

apps/web/lib/services/marketplace-service.ts-222-225 (1)
222-225: ⚠️ Potential issue | 🟡 Minor

`computeSpecHash` only sorts top-level keys, so nested objects may produce inconsistent hashes.

`Object.keys(spec).sort()` only sorts the immediate properties of `spec`. Nested objects retain their original key order, which can vary depending on how the JSON was parsed or constructed. This could cause different hashes for semantically identical specs.

🔧 Proposed fix for deep sorting

```diff
 export function computeSpecHash(spec: Record<string, unknown>): string {
-  const canonical = JSON.stringify(spec, Object.keys(spec).sort());
+  const canonical = JSON.stringify(sortKeysDeep(spec));
   return createHash('sha256').update(canonical).digest('hex');
 }
+
+function sortKeysDeep(obj: unknown): unknown {
+  if (Array.isArray(obj)) {
+    return obj.map(sortKeysDeep);
+  }
+  if (obj !== null && typeof obj === 'object') {
+    const sorted: Record<string, unknown> = {};
+    for (const key of Object.keys(obj).sort()) {
+      sorted[key] = sortKeysDeep((obj as Record<string, unknown>)[key]);
+    }
+    return sorted;
+  }
+  return obj;
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/services/marketplace-service.ts` around lines 222 - 225, computeSpecHash currently only sorts top-level keys, causing inconsistent hashes for semantically identical nested objects; modify computeSpecHash to produce a fully canonical JSON representation by deep-sorting object keys recursively (preserving array order and primitive values), e.g. add a helper (canonicalizeSpec or deepSortObject) that walks the spec, returns objects with sorted keys, and leaves arrays/primitives intact, then JSON.stringify that canonical structure and feed it to createHash('sha256') so nested key order no longer affects the resulting hash.

apps/web/lib/analytics/events.client.ts (1)
68-71: ⚠️ Potential issue | 🟡 Minor

Email redaction regex misses common email patterns.

The regex `[a-zA-Z0-9._+\-]{20,}@` only matches email local parts with 20+ characters. Common emails like john@example.com or alice.smith@company.io won't be redacted.

🔒 Broader email pattern

```diff
 // Truncate and sanitize query to prevent PII leakage
-const safeQuery = params.query.slice(0, 100).replace(/[a-zA-Z0-9._+\-]{20,}@/g, '[REDACTED]@');
+const safeQuery = params.query.slice(0, 100).replace(/[a-zA-Z0-9._+\-]+@[a-zA-Z0-9.\-]+/g, '[REDACTED]');
 getPostHog()?.capture('search', { ...params, query: safeQuery });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/lib/analytics/events.client.ts` around lines 68 - 71, the current email redaction in safeQuery only matches local parts of 20+ characters; update the sanitization of params.query used to build safeQuery so it uses a broader email regex (e.g., a standard pattern that matches local@domain and subdomains) and replace matched emails (either the whole address or just the local part) with a safe token (e.g., "[REDACTED]" or "[REDACTED]@domain") before truncation; change the regex/replacement logic where safeQuery is computed so getPostHog()?.capture('search', { ...params, query: safeQuery }) never receives raw email addresses.

apps/web/app/(marketing)/marketplace/page.tsx (1)
184-196: ⚠️ Potential issue | 🟡 Minor

SortDropdown has no effect: selection changes are not wired to navigation.

The `<select>` element renders sort options but has no onChange handler or surrounding `<form>` to submit the selection. In a server component, changing the dropdown value does nothing: the URL won't update and results won't re-sort.

Consider either:

- Wrapping in a `<form method="GET" action="/marketplace">` with hidden inputs for other params and using onChange to submit
- Converting to a client component that uses useRouter to navigate on change

💡 Option 1: Form-based approach

```diff
-function SortDropdown({ currentSort }: { currentSort?: string }) {
+function SortDropdown({ currentSort, currentParams }: { currentSort?: string; currentParams: Record<string, string | undefined> }) {
   return (
-    <select
-      name="sort"
-      defaultValue={currentSort ?? 'popular'}
-      className="rounded-md border border-border bg-transparent px-3 py-1.5 text-xs text-muted-foreground focus:border-foreground focus:outline-none"
-    >
-      <option value="popular">Popular</option>
-      <option value="newest">Newest</option>
-      <option value="name">A–Z</option>
-    </select>
+    <form method="GET" action="/marketplace">
+      {currentParams.q && <input type="hidden" name="q" value={currentParams.q} />}
+      {currentParams.category && <input type="hidden" name="category" value={currentParams.category} />}
+      <select
+        name="sort"
+        defaultValue={currentSort ?? 'popular'}
+        onChange={(e) => e.currentTarget.form?.submit()}
+        className="rounded-md border border-border bg-transparent px-3 py-1.5 text-xs text-muted-foreground focus:border-foreground focus:outline-none"
+      >
+        <option value="popular">Popular</option>
+        <option value="newest">Newest</option>
+        <option value="name">A–Z</option>
+      </select>
+    </form>
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/`(marketing)/marketplace/page.tsx around lines 184 - 196, SortDropdown currently renders a `<select>` but isn't wired to navigation, so changing it doesn't update the URL or re-sort results; update SortDropdown to either (A) wrap it in a GET `<form>` that includes any existing query params as hidden inputs and submits onChange (or use an onChange handler that calls form.submit()) so the sort query is applied, or (B) convert SortDropdown into a client component that uses the Next.js router (useRouter or useSearchParams) and navigates/pushes the new ?sort= value while preserving other query params on change; ensure you update the component named SortDropdown and preserve other filters when constructing the GET submission or router push.

packages/registry/specs/slack/spec.json (1)
1-12: ⚠️ Potential issue | 🟡 Minor

OpenAPI spec is missing security definitions.

The Slack API requires Bearer token authentication, but this spec has no `securityDefinitions`/`components.security` or operation-level `security` requirements defined. Consumers of this spec won't know authentication is required, and generated clients may not include auth headers.

🔒 Add security scheme

```diff
   "servers": [
     { "url": "https://slack.com/api" }
   ],
+  "components": {
+    "securitySchemes": {
+      "bearerAuth": {
+        "type": "http",
+        "scheme": "bearer",
+        "description": "Slack Bot or User OAuth token"
+      }
+    }
+  },
+  "security": [
+    { "bearerAuth": [] }
+  ],
   "paths": {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/registry/specs/slack/spec.json` around lines 1 - 12, The OpenAPI spec in packages/registry/specs/slack/spec.json is missing security definitions for Slack's Bearer token auth; add a components.securitySchemes entry (e.g., name "bearerAuth" with type: http, scheme: bearer, bearerFormat: "OAuth2/Token") and apply a global security requirement using that scheme (components.security or top-level "security") so all operations (or specific operations if needed) require the bearer token; update any operation-level docs if certain endpoints are public. Reference the spec.json top-level "components.securitySchemes" and the top-level "security" array to implement this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 022bc8e7-0131-4293-aff3-8f50aaa7663b
⛔ Files ignored due to path filters (13)
- apps/web/public/logo.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/github.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/hubspot.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/linear.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/notion.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/openai.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/resend.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/sentry.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/shopify.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/slack.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/stripe.svg is excluded by `!**/*.svg`
- apps/web/public/marketplace/logos/twilio.svg is excluded by `!**/*.svg`
- pnpm-lock.yaml is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (92)
- .env.example
- .github/workflows/deploy.yml
- apps/runtime/Dockerfile
- apps/runtime/src/mcp/session-manager.ts
- apps/runtime/src/middleware/mcp-auth.ts
- apps/runtime/src/middleware/server-token-auth.ts
- apps/runtime/src/registry/server-registry.ts
- apps/runtime/src/server.ts
- apps/runtime/src/sync/postgres-loader.ts
- apps/runtime/src/transports/sse.ts
- apps/web/Dockerfile
- apps/web/app/(marketing)/components/features-grid.tsx
- apps/web/app/(marketing)/components/footer.tsx
- apps/web/app/(marketing)/components/navbar.tsx
- apps/web/app/(marketing)/layout.tsx
- apps/web/app/(marketing)/marketplace/[slug]/page.tsx
- apps/web/app/(marketing)/marketplace/layout.tsx
- apps/web/app/(marketing)/marketplace/page.tsx
- apps/web/app/api/marketplace/[slug]/deploy/route.ts
- apps/web/app/api/marketplace/[slug]/route.ts
- apps/web/app/api/marketplace/[slug]/versions/route.ts
- apps/web/app/api/marketplace/categories/route.ts
- apps/web/app/api/marketplace/featured/route.ts
- apps/web/app/api/marketplace/installs/[installId]/route.ts
- apps/web/app/api/marketplace/my-installs/route.ts
- apps/web/app/api/marketplace/route.ts
- apps/web/app/api/servers/[id]/rotate-token/route.ts
- apps/web/app/api/servers/route.ts
- apps/web/app/ingest/[[...path]]/route.ts
- apps/web/app/layout.tsx
- apps/web/app/providers.tsx
- apps/web/app/sign-in/[[...sign-in]]/page.tsx
- apps/web/app/sign-up/[[...sign-up]]/page.tsx
- apps/web/components/analytics/cookie-consent.tsx
- apps/web/components/analytics/posthog-provider.tsx
- apps/web/components/brand/logo.tsx
- apps/web/components/layout/command-palette.tsx
- apps/web/components/marketplace/author-badge.tsx
- apps/web/components/marketplace/category-icon.tsx
- apps/web/components/marketplace/deploy-button.tsx
- apps/web/components/marketplace/filter-sidebar.tsx
- apps/web/components/marketplace/listing-card.tsx
- apps/web/components/marketplace/listing-tabs.tsx
- apps/web/components/marketplace/markdown-content.css
- apps/web/components/marketplace/markdown-content.tsx
- apps/web/components/marketplace/search-bar.tsx
- apps/web/components/marketplace/version-selector.tsx
- apps/web/lib/analytics/events.client.ts
- apps/web/lib/analytics/events.server.ts
- apps/web/lib/analytics/feature-flags.ts
- apps/web/lib/analytics/posthog-client.ts
- apps/web/lib/analytics/posthog-server.ts
- apps/web/lib/constants/navigation.ts
- apps/web/lib/db/migrations/0008_marketplace.sql
- apps/web/lib/db/migrations/0009_server_access_tokens.sql
- apps/web/lib/db/migrations/0010_marketplace_versions.sql
- apps/web/lib/db/repositories/marketplace-audit-log.repository.ts
- apps/web/lib/db/repositories/marketplace-install.repository.ts
- apps/web/lib/db/repositories/marketplace-listing.repository.ts
- apps/web/lib/db/repositories/marketplace-report.repository.ts
- apps/web/lib/db/repositories/marketplace-version.repository.ts
- apps/web/lib/db/repositories/server.repository.ts
- apps/web/lib/db/schema/index.ts
- apps/web/lib/db/schema/marketplace-audit-log.ts
- apps/web/lib/db/schema/marketplace-installs.ts
- apps/web/lib/db/schema/marketplace-listings.ts
- apps/web/lib/db/schema/marketplace-reports.ts
- apps/web/lib/db/schema/marketplace-versions.ts
- apps/web/lib/db/schema/servers.ts
- apps/web/lib/marketplace/cache.ts
- apps/web/lib/marketplace/categories.ts
- apps/web/lib/marketplace/render-markdown.ts
- apps/web/lib/marketplace/status-machine.ts
- apps/web/lib/services/marketplace-service.ts
- apps/web/lib/validation/marketplace.schema.ts
- apps/web/middleware.ts
- apps/web/package.json
- apps/web/public/.gitkeep
- apps/web/scripts/run-migration.ts
- apps/web/scripts/seed-marketplace-versions.ts
- apps/web/scripts/seed-marketplace.ts
- apps/web/scripts/test-token-auth.ts
- infra/fly/fly.runtime.toml
- packages/registry/specs/github/spec.json
- packages/registry/specs/hubspot/spec.json
- packages/registry/specs/notion/spec.json
- packages/registry/specs/openai/spec.json
- packages/registry/specs/slack/spec.json
- packages/registry/specs/stripe/spec.json
- packages/registry/specs/twilio/spec.json
- packages/transformer/src/transform.ts
- packages/types/src/server.ts
💤 Files with no reviewable changes (1)
- infra/fly/fly.runtime.toml
Deploy workflow (CodeRabbit critical):
- Only trigger on push events (not reruns or PR-triggered CI)
- Only run on first attempt (run_attempt == 1)
- Checkout exact commit SHA from the triggering workflow run instead of HEAD (prevents deploying wrong revision)

Markdown rendering (CodeRabbit critical):
- Added rehype-sanitize to markdown pipeline to prevent XSS from user-submitted listing descriptions
- Sanitizes HTML output before rendering with dangerouslySetInnerHTML
There was a problem hiding this comment.
Actionable comments posted: 4
🧹 Nitpick comments (6)
apps/web/app/(marketing)/marketplace/[slug]/page.tsx (4)
63-71: Versions fetched unconditionally regardless of active tab.

The versions query runs on every page load, but the data is only displayed when `activeTab === 'changelog'`. For the more common 'overview' tab visits, this is an unnecessary database query.

♻️ Conditionally fetch versions

```diff
-// Fetch versions for the changelog tab
-let versions: readonly { readonly id: string; readonly version: string; readonly changelog: string | null; readonly toolCount: number; readonly createdAt: Date }[] = [];
-try {
-  const db = getReadDb();
-  const versionRepo = new MarketplaceVersionRepository(db);
-  versions = await versionRepo.findByListing(listing.id);
-} catch {
-  // Versions table may not exist yet
-}
+// Fetch versions only for the changelog tab
+let versions: readonly { readonly id: string; readonly version: string; readonly changelog: string | null; readonly toolCount: number; readonly createdAt: Date }[] = [];
+if (activeTab === 'changelog') {
+  try {
+    const db = getReadDb();
+    const versionRepo = new MarketplaceVersionRepository(db);
+    versions = await versionRepo.findByListing(listing.id);
+  } catch {
+    // Versions table may not exist yet
+  }
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/`(marketing)/marketplace/[slug]/page.tsx around lines 63 - 71, The page currently fetches versions unconditionally; change it to only perform the database query when the changelog tab is active by checking the tab state (activeTab === 'changelog') before calling getReadDb(), creating MarketplaceVersionRepository, or invoking versionRepo.findByListing(listing.id); move the try/catch (and the versions variable initialization) inside that conditional so versions stays empty for other tabs and the DB query is skipped for non-changelog views.
100-105: Consider adding alt text for accessibility.

The icon image has an empty `alt=""` attribute. While decorative images can use empty alt, if the icon conveys meaning about the listing, consider adding descriptive alt text like `alt={listing.name}` for screen reader users.

```diff
 {listing.iconUrl ? (
   <img
     src={listing.iconUrl}
-    alt=""
+    alt={`${listing.name} icon`}
     className="h-8 w-8 rounded"
   />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/`(marketing)/marketplace/[slug]/page.tsx around lines 100 - 105, The img for the listing icon currently uses an empty alt attribute (listing.iconUrl rendering) which may hide meaningful information from screen readers; update the JSX where listing.iconUrl is rendered to provide a descriptive alt (e.g., use listing.name or a default like `${listing.name} icon`) so the <img> alt conveys the listing identity when available, falling back to an empty alt only if you determine the icon is purely decorative.
32-35: Duplicate database query for listing data.

`getListing(slug)` is called twice per page load: once in generateMetadata (line 34) and once in ListingDetailPage (line 52). In the Next.js App Router, these functions run sequentially for the same request, resulting in redundant database queries.

Consider using React's `cache()` to deduplicate the fetch within the same request lifecycle:

♻️ Deduplicate with React cache

```diff
+import { cache } from 'react';
 import { ChevronRight, ExternalLink } from 'lucide-react';
 // ... other imports

-async function getListing(slug: string) {
+const getListing = cache(async (slug: string) => {
   try {
     const db = getReadDb();
     const repo = new MarketplaceListingRepository(db);
     return await repo.findPublishedBySlug(slug);
   } catch {
     return null;
   }
-}
+});
```

Also applies to: 49-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/`(marketing)/marketplace/[slug]/page.tsx around lines 32 - 35, generateMetadata and ListingDetailPage both call getListing(slug) causing duplicate DB queries per request; wrap the data fetch with React's cache (e.g., create a cached wrapper around getListing or export a cached version like cachedGetListing = cache(getListing)) and use that cached function in both generateMetadata and the ListingDetailPage data load so the fetch is deduplicated within the same request lifecycle; update references to call cachedGetListing(slug) (or modify getListing to return a cached function) wherever listing is fetched.
69-71: Empty catch block masks all errors, not just a missing table.

The comment suggests this handles the case where the versions table doesn't exist yet, but the empty catch will also swallow connection failures, query errors, and other unexpected issues. Consider logging the error or being more specific about what's caught:

🛡️ Improve error handling

```diff
 } catch {
-  // Versions table may not exist yet
+  // Log but don't fail - versions are supplementary content
+  console.warn(`[marketplace] Failed to fetch versions for listing ${listing.id}`);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. The empty catch in apps/web/app/(marketing)/marketplace/[slug]/page.tsx swallows all errors when fetching versions (masking connection/query failures); change the catch to receive the error (catch (err)) and either log it (console.error or your app logger) and only suppress expected "table does not exist" errors (e.g., check err.message or err.code for the versions table/relation-not-found case), otherwise rethrow or surface the error so real failures aren't silently ignored.

apps/web/app/api/marketplace/[slug]/versions/route.ts (1)
16-25: Consider adding caching for consistency with sibling routes.

The sibling route at `apps/web/app/api/marketplace/[slug]/route.ts` (context snippet 1, lines 16-20) uses marketplaceCache with a 10-minute TTL for the findPublishedBySlug call. This route bypasses caching entirely and hits the database on every request for both the listing lookup and the versions fetch.

For a public, read-heavy endpoint like versions (consumed by VersionSelector), consider adding similar caching to reduce database load and improve response times.

♻️ Suggested caching pattern

```diff
 import { createSuccessResponse } from '@apifold/types';
 import { NextResponse, type NextRequest } from 'next/server';
 import { withErrorHandler, NotFoundError } from '../../../../../lib/api-helpers';
 import { getReadDb } from '../../../../../lib/db/index';
 import { MarketplaceListingRepository } from '../../../../../lib/db/repositories/marketplace-listing.repository';
 import { MarketplaceVersionRepository } from '../../../../../lib/db/repositories/marketplace-version.repository';
+import * as marketplaceCache from '../../../../../lib/marketplace/cache';

 export function GET(
   _request: NextRequest,
   { params }: { params: Promise<{ slug: string }> },
 ): Promise<NextResponse> {
   return withErrorHandler(async () => {
     const { slug } = await params;
+    const cacheKey = `marketplace:versions:${slug}`;
+    const cached = await marketplaceCache.getCached(cacheKey);
+    if (cached) {
+      return NextResponse.json(createSuccessResponse(cached));
+    }
+
     const db = getReadDb();
     const listingRepo = new MarketplaceListingRepository(db);
     const listing = await listingRepo.findPublishedBySlug(slug);
     if (!listing) {
       throw new NotFoundError('Listing not found');
     }
     const versionRepo = new MarketplaceVersionRepository(db);
     const versions = await versionRepo.findByListing(listing.id);
+    await marketplaceCache.setCached(cacheKey, versions, 'detail');
+
     return NextResponse.json(createSuccessResponse(versions));
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/app/api/marketplace/`[slug]/versions/route.ts around lines 16 - 25, this route currently queries the DB directly via getReadDb -> MarketplaceListingRepository.findPublishedBySlug(slug) and MarketplaceVersionRepository.findByListing(listing.id); add the same marketplaceCache usage as the sibling route by caching the listing lookup and the versions list (use stable cache keys like `listing:${slug}` and `versions:${listing.id}`) with a 10-minute TTL, i.e., call marketplaceCache.getOrSet (or equivalent) around the findPublishedBySlug and findByListing calls so you return cached results when present and only hit the DB on cache misses.

apps/web/components/marketplace/filter-sidebar.tsx (1)
27-37: Add `aria-current` on the active category link.

On line 27 and line 40, active state is only visual. Expose it semantically so screen readers announce the current filter.

♿ Suggested accessibility tweak

```diff
 <Link
   href="/marketplace"
+  aria-current={!activeCategory ? 'page' : undefined}
   className={`flex items-center justify-between rounded-md px-2.5 py-1.5 text-sm transition-colors ${
     !activeCategory ? 'text-foreground font-medium' : 'text-muted-foreground hover:text-foreground'
   }`}
 >
 ...
 <Link
   key={cat.slug}
   href={`/marketplace?category=${cat.slug}`}
+  aria-current={activeCategory === cat.slug ? 'page' : undefined}
   className={`flex items-center gap-2 rounded-md px-2.5 py-1.5 text-sm transition-colors ${
     activeCategory === cat.slug ? 'text-foreground font-medium' : 'text-muted-foreground hover:text-foreground'
   }`}
 >
```

Also applies to: 40-52
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/components/marketplace/filter-sidebar.tsx` around lines 27 - 37, The "All" Link and each category Link in filter-sidebar.tsx only indicate active state visually; update the Link components (the one rendering "All" using activeCategory and the category Links that compare category === activeCategory) to include an aria-current attribute when active (e.g., aria-current="page" when !activeCategory for the "All" link and when category === activeCategory for the category links) so assistive technologies can announce the current filter; keep the existing className logic and only add the conditional aria-current prop to the Link elements.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/web/app/`(marketing)/marketplace/page.tsx:
- Around line 116-117: SortDropdown is rendering a <select> that isn't wired to
navigation, so changing the sort does nothing; make the sort control submit a
GET so the server receives the new sort param. Update the SortDropdown component
to wrap the <select> in a <form method="get">, give the <select> a name="sort"
and its value/selected option bound to the currentSort prop, and add a visible
submit control (button/input) so users can submit without JS; ensure the page
code still reads rawParams.sort (or the request query) to render results based
on the sort parameter.
- Around line 38-63: The try/catch currently lets
browseMarketplaceSchema.parse(rawParams) throw and causes the page to fall back
to empty results; change to defensive parsing using
browseMarketplaceSchema.safeParse(rawParams) (or equivalent validation) to
detect malformed query params, log the validation error, and then set a safe
default params object (e.g. no q, no category, page '1') so
listingRepo.searchPublished(...) still runs against defaults rather than
returning an empty marketplace; keep the other DB calls (getReadDb, new
MarketplaceListingRepository, findFeatured, findCategoriesWithCounts) outside
being short-circuited by parse failures and ensure the catch only handles
unexpected DB/runtime errors.
In `@apps/web/components/analytics/cookie-consent.tsx`:
- Around line 17-24: getStoredConsent currently trusts parsed JSON as
ConsentState and may accept malformed shapes; update getStoredConsent to
validate the parsed value before returning: after JSON.parse(raw) ensure the
result is a non-null object and that expected ConsentState fields (e.g.,
analytics and any other consent flags defined on ConsentState) exist and are
booleans (or match the expected types), and return null if validation fails;
keep the try/catch but add this runtime shape/type check referencing
getStoredConsent, CONSENT_KEY and the ConsentState fields.
- Around line 39-47: The code only calls ph.startSessionRecording() when
consent.sessionRecording is true but never calls ph.stopSessionRecording() when
sessionRecording becomes false (or when analytics is disabled), so explicitly
call ph.stopSessionRecording() in the branches where recording should be halted:
after ph.opt_out_capturing() when consent.analytics is false, and in the else
branch for the consent.analytics true block when consent.sessionRecording is
false; keep existing calls to ph.opt_in_capturing(), ph.opt_out_capturing(),
ph.startSessionRecording(), and add ph.stopSessionRecording() to ensure the
rrweb recorder is stopped and snapshots are not kept in memory.
---
Nitpick comments:
In `@apps/web/app/`(marketing)/marketplace/[slug]/page.tsx:
- Around line 63-71: The page currently fetches versions unconditionally; change
it to only perform the database query when the changelog tab is active by
checking the tab state (activeTab === 'changelog') before calling getReadDb(),
creating MarketplaceVersionRepository, or invoking
versionRepo.findByListing(listing.id); move the try/catch (and the versions
variable initialization) inside that conditional so versions stays empty for
other tabs and the DB query is skipped for non-changelog views.
- Around line 100-105: The img for the listing icon currently uses an empty alt
attribute (listing.iconUrl rendering) which may hide meaningful information from
screen readers; update the JSX where listing.iconUrl is rendered to provide a
descriptive alt (e.g., use listing.name or a default like `${listing.name}
icon`) so the <img> alt conveys the listing identity when available, falling
back to an empty alt only if you determine the icon is purely decorative.
- Around line 32-35: generateMetadata and ListingDetailPage both call
getListing(slug) causing duplicate DB queries per request; wrap the data fetch
with React's cache (e.g., create a cached wrapper around getListing or export a
cached version like cachedGetListing = cache(getListing)) and use that cached
function in both generateMetadata and the ListingDetailPage data load so the
fetch is deduplicated within the same request lifecycle; update references to
call cachedGetListing(slug) (or modify getListing to return a cached function)
wherever listing is fetched.
- Around line 69-71: The empty catch in
apps/web/app/(marketing)/marketplace/[slug]/page.tsx swallows all errors when
fetching versions (masking connection/query failures); change the catch to
receive the error (catch (err)) and either log it (console.error or your app
logger) and only suppress expected "table does not exist" errors (e.g., check
err.message or err.code for the versions table/relation-not-found case)
otherwise rethrow or surface the error so real failures aren’t silently ignored.
In `@apps/web/app/api/marketplace/`[slug]/versions/route.ts:
- Around line 16-25: This route currently queries the DB directly via getReadDb
-> MarketplaceListingRepository.findPublishedBySlug(slug) and
MarketplaceVersionRepository.findByListing(listing.id); add the same
marketplaceCache usage as the sibling route by caching the listing lookup and
the versions list (use stable cache keys like `listing:${slug}` and
`versions:${listing.id}`) with a 10-minute TTL, i.e., call
marketplaceCache.getOrSet (or equivalent) around the findPublishedBySlug and
findByListing calls so you return cached results when present and only hit the
DB when the cache misses.
In `@apps/web/components/marketplace/filter-sidebar.tsx`:
- Around line 27-37: The "All" Link and each category Link in filter-sidebar.tsx
only indicate active state visually; update the Link components (the one
rendering "All" using activeCategory and the category Links that compare
category === activeCategory) to include an aria-current attribute when active
(e.g., aria-current="page" when !activeCategory for the "All" link and when
category === activeCategory for the category links) so assistive technologies
can announce the current filter; keep the existing className logic and only add
the conditional aria-current prop to the Link elements.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2897d743-7bde-4546-b178-958500592912
📒 Files selected for processing (37)
- apps/web/app/(marketing)/layout.tsx
- apps/web/app/(marketing)/marketplace/[slug]/page.tsx
- apps/web/app/(marketing)/marketplace/page.tsx
- apps/web/app/api/marketplace/[slug]/deploy/route.ts
- apps/web/app/api/marketplace/[slug]/route.ts
- apps/web/app/api/marketplace/[slug]/versions/route.ts
- apps/web/app/api/marketplace/categories/route.ts
- apps/web/app/api/marketplace/featured/route.ts
- apps/web/app/api/marketplace/installs/[installId]/route.ts
- apps/web/app/api/marketplace/my-installs/route.ts
- apps/web/app/api/marketplace/route.ts
- apps/web/app/api/servers/[id]/analytics/export/route.ts
- apps/web/app/api/servers/[id]/analytics/route.ts
- apps/web/app/api/servers/[id]/credentials/route.ts
- apps/web/app/api/servers/[id]/domain/route.ts
- apps/web/app/api/servers/[id]/logs/route.ts
- apps/web/app/api/servers/[id]/profiles/route.ts
- apps/web/app/api/servers/[id]/rotate-token/route.ts
- apps/web/app/api/servers/[id]/route.ts
- apps/web/app/api/servers/[id]/test/route.ts
- apps/web/app/api/servers/[id]/tools/[toolId]/route.ts
- apps/web/app/api/servers/[id]/tools/route.ts
- apps/web/app/ingest/[[...path]]/route.ts
- apps/web/app/providers.tsx
- apps/web/app/sign-in/[[...sign-in]]/page.tsx
- apps/web/app/sign-up/[[...sign-up]]/page.tsx
- apps/web/components/analytics/cookie-consent.tsx
- apps/web/components/analytics/posthog-provider.tsx
- apps/web/components/marketplace/category-icon.tsx
- apps/web/components/marketplace/deploy-button.tsx
- apps/web/components/marketplace/filter-sidebar.tsx
- apps/web/components/marketplace/listing-card.tsx
- apps/web/components/marketplace/search-bar.tsx
- apps/web/components/marketplace/version-selector.tsx
- apps/web/lib/analytics/events.client.ts
- apps/web/lib/marketplace/render-markdown.ts
- apps/web/lib/services/marketplace-service.ts
✅ Files skipped from review due to trivial changes (10)
- apps/web/app/api/servers/[id]/analytics/export/route.ts
- apps/web/app/api/servers/[id]/test/route.ts
- apps/web/app/api/servers/[id]/tools/[toolId]/route.ts
- apps/web/app/api/servers/[id]/logs/route.ts
- apps/web/app/api/servers/[id]/domain/route.ts
- apps/web/app/api/servers/[id]/profiles/route.ts
- apps/web/app/api/servers/[id]/analytics/route.ts
- apps/web/app/api/servers/[id]/tools/route.ts
- apps/web/app/api/servers/[id]/route.ts
- apps/web/app/api/servers/[id]/credentials/route.ts
🚧 Files skipped from review as they are similar to previous changes (22)
- apps/web/app/providers.tsx
- apps/web/app/(marketing)/layout.tsx
- apps/web/lib/marketplace/render-markdown.ts
- apps/web/app/sign-up/[[...sign-up]]/page.tsx
- apps/web/components/marketplace/search-bar.tsx
- apps/web/app/api/marketplace/[slug]/route.ts
- apps/web/app/api/marketplace/categories/route.ts
- apps/web/components/marketplace/category-icon.tsx
- apps/web/app/api/marketplace/my-installs/route.ts
- apps/web/app/api/marketplace/featured/route.ts
- apps/web/app/api/marketplace/installs/[installId]/route.ts
- apps/web/app/api/marketplace/[slug]/deploy/route.ts
- apps/web/components/analytics/posthog-provider.tsx
- apps/web/app/api/servers/[id]/rotate-token/route.ts
- apps/web/app/api/marketplace/route.ts
- apps/web/app/sign-in/[[...sign-in]]/page.tsx
- apps/web/components/marketplace/listing-card.tsx
- apps/web/components/marketplace/version-selector.tsx
- apps/web/components/marketplace/deploy-button.tsx
- apps/web/app/ingest/[[...path]]/route.ts
- apps/web/lib/services/marketplace-service.ts
- apps/web/lib/analytics/events.client.ts
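Among the files above is the `apps/web/app/ingest/[[...path]]/route.ts` route, which this PR describes as a PostHog reverse proxy with a path allowlist and a 512KB body cap. A minimal sketch of that guard logic might look like the following — the prefix list and function name are assumptions for illustration, not the PR's actual code:

```typescript
// Hypothetical sketch of the /ingest/* proxy guard: forward only allowlisted
// PostHog paths and reject oversized request bodies. ALLOWED_PREFIXES is an
// illustrative list, not the PR's actual allowlist.
const ALLOWED_PREFIXES = ["/e", "/decide", "/s", "/static"];
const MAX_BODY_BYTES = 512 * 1024; // 512KB cap from the PR description

function shouldProxy(path: string, bodyBytes: number): boolean {
  if (bodyBytes > MAX_BODY_BYTES) return false;
  // Exact match or a proper path segment under an allowed prefix,
  // so "/static-evil" does not slip through a naive startsWith check.
  return ALLOWED_PREFIXES.some((p) => path === p || path.startsWith(`${p}/`));
}
```

A real route handler would also strip hop-by-hop headers and stream the body rather than buffering it, but the allowlist-plus-size-cap check is the core of the guard.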
try {
  const db = getReadDb();
  const listingRepo = new MarketplaceListingRepository(db);

  const params = browseMarketplaceSchema.parse(rawParams);
  const result = await listingRepo.searchPublished(params);

  listings = result.items;
  total = result.total;
  totalPages = result.totalPages;

  if (!rawParams.q && !rawParams.category && (!rawParams.page || rawParams.page === '1')) {
    featured = await listingRepo.findFeatured(3);
  }

  const counts = await listingRepo.findCategoriesWithCounts();
  categories = Object.values(MARKETPLACE_CATEGORIES).map((cat) => {
    const countEntry = counts.find((c) => c.category === cat.slug);
    return { slug: cat.slug, name: cat.name, count: countEntry?.count ?? 0 };
  });
} catch (err) {
  console.error('[marketplace] Failed to load listings:', err);
}

const currentPage = parseInt(rawParams.page ?? '1', 10);
const isFiltered = !!(rawParams.q || rawParams.category);
Avoid turning malformed query params into an empty marketplace.
Line 42 can throw for bad URL params, and the catch at Line 58 then falls back to empty results (No integrations found). That’s a misleading failure mode for a public browse page.
🛠️ Suggested defensive parsing fallback
export default async function MarketplacePage({ searchParams }: PageProps) {
const rawParams = await searchParams;
+ const parsed = browseMarketplaceSchema.safeParse(rawParams);
+ const params = parsed.success ? parsed.data : browseMarketplaceSchema.parse({});
let listings: readonly Record<string, unknown>[] = [];
let featured: readonly Record<string, unknown>[] = [];
let total = 0;
let totalPages = 1;
let categories: { slug: string; name: string; count: number }[] = [];
try {
const db = getReadDb();
const listingRepo = new MarketplaceListingRepository(db);
- const params = browseMarketplaceSchema.parse(rawParams);
const result = await listingRepo.searchPublished(params);
@@
- const currentPage = parseInt(rawParams.page ?? '1', 10);
- const isFiltered = !!(rawParams.q || rawParams.category);
+ const currentPage = params.page;
+ const isFiltered = !!(params.q || params.category);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `apps/web/app/(marketing)/marketplace/page.tsx` around lines 38–63, the
try/catch currently lets browseMarketplaceSchema.parse(rawParams) throw and
causes the page to fall back to empty results; change to defensive parsing using
browseMarketplaceSchema.safeParse(rawParams) (or equivalent validation) to
detect malformed query params, log the validation error, and then set a safe
default params object (e.g. no q, no category, page '1') so
listingRepo.searchPublished(...) still runs against defaults rather than
returning an empty marketplace; keep the other DB calls (getReadDb, new
MarketplaceListingRepository, findFeatured, findCategoriesWithCounts) outside
being short-circuited by parse failures and ensure the catch only handles
unexpected DB/runtime errors.
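The fail-open-to-defaults pattern suggested above can also be sketched standalone. The hand-rolled validator below stands in for `browseMarketplaceSchema.safeParse`; the types and names are illustrative, not the PR's actual schema:

```typescript
// Hypothetical sketch: validate raw query params, log on failure, and fall
// back to safe defaults instead of throwing, so a malformed URL still renders
// the unfiltered first page rather than an empty marketplace.
interface BrowseParams {
  q?: string;
  category?: string;
  page: number;
}

function parseBrowseParams(raw: Record<string, string | undefined>): BrowseParams {
  const page = Number.parseInt(raw.page ?? "1", 10);
  if (!Number.isInteger(page) || page < 1) {
    // Malformed params: log the problem and use defaults (no q, no category, page 1).
    console.error("[marketplace] invalid query params, using defaults:", raw);
    return { page: 1 };
  }
  return { q: raw.q, category: raw.category, page };
}
```

The key property is that validation failure never propagates into the `catch` that guards DB errors — the browse query always runs with a usable params object.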
  <SortDropdown currentSort={rawParams.sort} />
</div>
Sort dropdown is currently non-functional.
At Line 187, the <select> is not tied to navigation (no form submit and no client-side route update), so changing sort does nothing.
🛠️ Suggested no-JS server-friendly fix (GET form)
- <SortDropdown currentSort={rawParams.sort} />
+ <SortDropdown
+ currentSort={rawParams.sort}
+ q={rawParams.q}
+ category={rawParams.category}
+ />
@@
-function SortDropdown({ currentSort }: { currentSort?: string }) {
+function SortDropdown({
+ currentSort,
+ q,
+ category,
+}: {
+ currentSort?: string;
+ q?: string;
+ category?: string;
+}) {
return (
- <select
- name="sort"
- defaultValue={currentSort ?? 'popular'}
- className="rounded-md border border-border bg-transparent px-3 py-1.5 text-xs text-muted-foreground focus:border-foreground focus:outline-none"
- >
- <option value="popular">Popular</option>
- <option value="newest">Newest</option>
- <option value="name">A–Z</option>
- </select>
+ <form action="/marketplace" method="get" className="flex items-center gap-2">
+ {q ? <input type="hidden" name="q" value={q} /> : null}
+ {category ? <input type="hidden" name="category" value={category} /> : null}
+ <select
+ name="sort"
+ defaultValue={currentSort ?? 'popular'}
+ className="rounded-md border border-border bg-transparent px-3 py-1.5 text-xs text-muted-foreground focus:border-foreground focus:outline-none"
+ >
+ <option value="popular">Popular</option>
+ <option value="newest">Newest</option>
+ <option value="name">A–Z</option>
+ </select>
+ <button
+ type="submit"
+ className="rounded-md border border-border px-3 py-1.5 text-xs text-muted-foreground hover:bg-muted"
+ >
+ Apply
+ </button>
+ </form>
);
}

Also applies to: 185-196
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `apps/web/app/(marketing)/marketplace/page.tsx` around lines 116–117,
SortDropdown is rendering a <select> that isn't wired to navigation, so changing
the sort does nothing; make the sort control submit a GET so the server receives
the new sort param. Update the SortDropdown component to wrap the <select> in a
<form method="get">, give the <select> a name="sort" and its value/selected
option bound to the currentSort prop, and add a visible submit control
(button/input) so users can submit without JS; ensure the page code still reads
rawParams.sort (or the request query) to render results based on the sort
parameter.
function getStoredConsent(): ConsentState | null {
  if (typeof window === 'undefined') return null;
  try {
    const raw = localStorage.getItem(CONSENT_KEY);
    return raw ? JSON.parse(raw) : null;
  } catch {
    return null;
  }
}
Validate stored consent shape before applying it.
On Line 21, parsed JSON is trusted as ConsentState without runtime checks. Malformed values (for example, "analytics": "false") can be treated as truthy and flip consent behavior.
Proposed fix
interface ConsentState {
readonly analytics: boolean;
readonly heatmaps: boolean;
readonly sessionRecording: boolean;
readonly timestamp: number;
}
+function isConsentState(value: unknown): value is ConsentState {
+ if (!value || typeof value !== 'object') return false;
+ const v = value as Record<string, unknown>;
+ return (
+ typeof v.analytics === 'boolean' &&
+ typeof v.heatmaps === 'boolean' &&
+ typeof v.sessionRecording === 'boolean' &&
+ typeof v.timestamp === 'number'
+ );
+}
+
function getStoredConsent(): ConsentState | null {
if (typeof window === 'undefined') return null;
try {
const raw = localStorage.getItem(CONSENT_KEY);
- return raw ? JSON.parse(raw) : null;
+ if (!raw) return null;
+ const parsed: unknown = JSON.parse(raw);
+ return isConsentState(parsed) ? parsed : null;
} catch {
return null;
}
}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `apps/web/components/analytics/cookie-consent.tsx` around lines 17–24,
getStoredConsent currently trusts parsed JSON as ConsentState and may accept
malformed shapes; update getStoredConsent to validate the parsed value before
returning: after JSON.parse(raw) ensure the result is a non-null object and that
expected ConsentState fields (e.g., analytics and any other consent flags
defined on ConsentState) exist and are booleans (or match the expected types),
and return null if validation fails; keep the try/catch but add this runtime
shape/type check referencing getStoredConsent, CONSENT_KEY and the ConsentState
fields.
if (consent.analytics) {
  ph.opt_in_capturing();

  if (consent.sessionRecording) {
    ph.startSessionRecording();
  }
} else {
  ph.opt_out_capturing();
}
🧩 Analysis chain
🌐 Web query:
In posthog-js, does `opt_out_capturing()` automatically stop an active session recording, or should `stopSessionRecording()` be called explicitly when consent is revoked?
💡 Result:
In posthog-js, calling opt_out_capturing prevents all data from being captured and sent to PostHog, including session replays. Official documentation states: "Opting out on a PostHog client will prevent all data from being captured and sent to PostHog. This includes any autocaptures, manual captures, and session replays." Therefore, it automatically stops an active session recording by blocking the emission and sending of $snapshot events. However, stopSessionRecording should be called explicitly if you need to immediately pause the local rrweb recorder (e.g., to stop collecting snapshots in memory), as opt_out_capturing primarily affects the capture queue and sending but may not instantly halt the underlying recording process. The docs distinguish session recording control via startSessionRecording / stopSessionRecording for manual management, separate from the global opt-out. For consent revocation, opt_out_capturing is sufficient to comply with privacy requirements by preventing data transmission, but calling stopSessionRecording ensures complete cessation of recording activity.
Citations:
- 1: https://posthog.com/docs/session-replay/how-to-control-which-sessions-you-record
- 2: https://posthog.com/docs/privacy/data-collection
- 3: $snapshot events being sent for users that opted out PostHog/posthog-js#101
- 4: Sessions do not get recorded once user opts in PostHog/posthog-js#336
🏁 Script executed:
# Check if the file exists and view the relevant lines
if [ -f "apps/web/components/analytics/cookie-consent.tsx" ]; then
echo "File found. Content around lines 39-47:"
sed -n '35,50p' "apps/web/components/analytics/cookie-consent.tsx"
else
echo "File not found at specified location"
# Search for the file
find . -name "cookie-consent.tsx" -type f 2>/dev/null
fi

Repository: Work90210/APIFold
Length of output: 411
🏁 Script executed:
# Search for other uses of startSessionRecording or stopSessionRecording in the codebase
rg "stopSessionRecording|startSessionRecording" --type typescript --type tsx

Repository: Work90210/APIFold
Length of output: 95
🏁 Script executed:
# Search for other consent handling patterns
rg "opt_in_capturing|opt_out_capturing" --type typescript --type tsx

Repository: Work90210/APIFold
Length of output: 95
🏁 Script executed:
# Search for stopSessionRecording and startSessionRecording without type filtering
rg "stopSessionRecording|startSessionRecording"

Repository: Work90210/APIFold
Length of output: 143
🏁 Script executed:
# Search for opt_in_capturing and opt_out_capturing without type filtering
rg "opt_in_capturing|opt_out_capturing"

Repository: Work90210/APIFold
Length of output: 367
🏁 Script executed:
# Check if stopSessionRecording is defined in PostHog types or if it's optional
rg "stopSessionRecording\?"

Repository: Work90210/APIFold
Length of output: 43
🏁 Script executed:
# Check posthog-client.ts to understand how getPostHog() is initialized and what methods are available
cat -n apps/web/lib/analytics/posthog-client.ts

Repository: Work90210/APIFold
Length of output: 2124
🏁 Script executed:
# Search for any PostHog type definitions or interface files
fd "posthog" --type f | grep -E "(types|interface|declaration)" | head -20

Repository: Work90210/APIFold
Length of output: 43
🏁 Script executed:
# Check package.json to see which posthog version is being used
rg "posthog-js" apps/web/package.json -A 2 -B 2

Repository: Work90210/APIFold
Length of output: 195
🏁 Script executed:
# Check PostHog version more precisely and look for its type definitions
grep -r "posthog-js" . --include="*.json" --include="*.lock"

Repository: Work90210/APIFold
Length of output: 114
🏁 Script executed:
# Let's look at the ConsentState type to understand what properties it has
rg "type ConsentState|interface ConsentState" -A 5

Repository: Work90210/APIFold
Length of output: 510
🏁 Script executed:
# Check if there are any other places where session recording is managed
rg "sessionRecording|session.recording" -B 3 -A 3

Repository: Work90210/APIFold
Length of output: 3209
Explicitly disable session recording when consent disallows it.
Lines 42-47 only start recording but never explicitly stop it when sessionRecording becomes false. Additionally, when analytics is false, opt_out_capturing() blocks transmission but doesn't explicitly halt the rrweb recorder. Per PostHog documentation, stopSessionRecording() should be called explicitly to ensure the recorder stops capturing snapshots in memory, separate from the opt-out behavior which only prevents transmission.
Proposed fix
function applyConsent(consent: ConsentState): void {
const ph = getPostHog();
if (!ph) return;
if (consent.analytics) {
ph.opt_in_capturing();
if (consent.sessionRecording) {
ph.startSessionRecording();
+ } else {
+ ph.stopSessionRecording?.();
}
} else {
+ ph.stopSessionRecording?.();
ph.opt_out_capturing();
}
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (consent.analytics) {
  ph.opt_in_capturing();
  if (consent.sessionRecording) {
    ph.startSessionRecording();
  } else {
    ph.stopSessionRecording?.();
  }
} else {
  ph.stopSessionRecording?.();
  ph.opt_out_capturing();
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `apps/web/components/analytics/cookie-consent.tsx` around lines 39–47, the
code only calls ph.startSessionRecording() when consent.sessionRecording is true
but never calls ph.stopSessionRecording() when sessionRecording becomes false
(or when analytics is disabled), so explicitly call ph.stopSessionRecording() in
the branches where recording should be halted: after ph.opt_out_capturing() when
consent.analytics is false, and in the else branch for the consent.analytics
true block when consent.sessionRecording is false; keep existing calls to
ph.opt_in_capturing(), ph.opt_out_capturing(), ph.startSessionRecording(), and
add ph.stopSessionRecording() to ensure the rrweb recorder is stopped and
snapshots are not kept in memory.
Coverage Report for packages/transformer

(per-file coverage table not reproduced)

Summary
- `af_`-prefixed bearer token — replaces reliance on global `MCP_API_KEY`

Marketplace

- New tables: `marketplace_listings`, `marketplace_installs`, `marketplace_reports`, `marketplace_audit_log` + `marketplace_versions`
- `packages/registry` (`description` over `summary`)

Per-Server Access Tokens
- `token_hash` column on `mcp_servers` (migration 0009)
- Tokens (`af_` + 256-bit entropy), SHA-256 hash stored
- `server-token-auth.ts` middleware: per-server token → global key fallback → reject
- `Authorization: Bearer` header and `?token=` query param (for SSE)
- Rotation endpoint (`POST /api/servers/:id/rotate-token`) with Redis hot-reload

PostHog Analytics
- `posthog-js` client + `posthog-node` server
- `opt_out_capturing_by_default`, cookie consent banner (marketing pages only)
- `/ingest/*` proxy with path allowlist and 512KB body cap

CI/CD
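The hashed per-server token design summarized above (an `af_`-prefixed token with 256 bits of entropy, of which only the SHA-256 hash is stored in `token_hash`) can be sketched as a minimal illustration, assuming Node's built-in crypto. The function names are hypothetical, not the PR's actual code:

```typescript
import { randomBytes, createHash, timingSafeEqual } from "node:crypto";

// Hypothetical sketch: mint a token, persist only its SHA-256 digest.
function generateServerToken(): { token: string; tokenHash: string } {
  const token = `af_${randomBytes(32).toString("hex")}`; // 32 bytes = 256 bits
  const tokenHash = createHash("sha256").update(token).digest("hex");
  return { token, tokenHash }; // show `token` to the user once; store only `tokenHash`
}

// Middleware-side check: hash the presented token and compare to the stored digest.
function verifyServerToken(presented: string, storedHash: string): boolean {
  const digest = createHash("sha256").update(presented).digest("hex");
  // Both digests are fixed-length hex strings; constant-time comparison
  // avoids leaking hash prefixes via timing.
  return timingSafeEqual(Buffer.from(digest), Buffer.from(storedHash));
}
```

Rotation then reduces to calling `generateServerToken()` again, overwriting the stored hash, and invalidating any cached copy (the Redis hot-reload mentioned above).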
Test plan
- `/marketplace`
- `/ingest` proxy rejects non-allowlisted paths

Summary by CodeRabbit
New Features
Chores