
feat: Sprints 9-13 — CLI, OAuth, Analytics, Registry, Access Control, Custom Domains#53

Merged
Work90210 merged 46 commits into master from sprint-9-10/cli-oauth-complete
Mar 23, 2026
Conversation

Work90210 (Owner) commented Mar 23, 2026

Summary

Delivers 5 full sprints (68+ story points) of features across the entire Apifold stack:

Sprint 9 — CLI Tool + OAuth Foundation

  • npx apifold serve CLI with 4 commands (serve, transform, validate, init)
  • Lightweight Express MCP server with SSE + Streamable HTTP
  • OAuth 2.0 credential foundation (8 provider presets, DB migration, vault encryption)
  • SSRF hardening (redirect blocking, response size limits on pinnedTransport)

Sprint 10 — OAuth Complete + CLI Publish

  • OAuth PKCE authorization code flow with Redis state storage
  • Client credentials flow for machine-to-machine APIs
  • Runtime auto-refresh with Redis distributed lock
  • Credential management UI (provider picker, add credential modal)
  • CLI publish pipeline (npm, Docker, Homebrew, binary builds)

Sprint 11 — Swagger 2.0 + Analytics

  • Swagger 2.0 auto-conversion via swagger2openapi with structuredClone safety
  • Analytics dashboard with time series, error breakdown, latency percentiles
  • Materialized views + monthly table partitioning
  • Impeccable design audit and polish

Sprint 12 — Analytics Complete + Spec Registry

  • Error explorer with status code filtering
  • CSV export with field escaping
  • @apifold/registry package with 8 curated API specs
  • Browse Registry UI with search, categories, and one-click deploy

Sprint 13 — Registry Deploy + Access Control

  • One-click deploy from registry with detail page
  • CLI --registry flag
  • Access profiles (Read Only / Read-Write / Full Access)
  • Runtime profile enforcement on tools/list and tools/call

Endpoint IDs + Custom Domains

  • Cryptographic 12-char hex endpoint IDs per server
  • Real URLs in snippet copier (no more "your-domain.com")
  • Custom domain verification via DNS TXT records
  • Host-header domain routing middleware
  • Vercel-style domain management UI with DNS health monitoring
  • Domains overview page in sidebar

Cross-cutting

  • 309+ tests across all packages, all passing
  • 45/45 security checks passing
  • README rewritten with CLI quick start, features, and updated architecture

Test plan

  • pnpm --filter @apifold/cli test — 104 tests
  • pnpm --filter @apifold/web test — 164 tests
  • pnpm --filter @apifold/runtime test — 23 tests
  • pnpm --filter @apifold/transformer test — 18 tests
  • pnpm --filter @apifold/registry test — 32 tests
  • All packages typecheck clean
  • CLI npx apifold serve works with real API spec
  • Analytics dashboard shows real data
  • Domain setup wizard creates DNS records
  • Registry browse and deploy flow works

Summary by CodeRabbit

  • New Features

    • CLI: installable apifold CLI with serve/transform/validate/init commands, standalone binaries, Docker image, and Homebrew formula
    • OAuth: PKCE auth‑code & client‑credentials flows, token exchange/refresh and integrated credential handling
    • API Registry: browse, import and deploy curated integrations
    • Domains & DNS: add/verify custom domains with DNS verification and dashboard controls
    • Versions & Releases: spec versioning, diffs, promote/rollback releases
    • Analytics & Export: server analytics UI and CSV export
    • Access Controls: tool-level access profiles
  • Documentation

    • Added API reference, registry contribution guide, design guide, and security docs
  • Tests

    • Extensive new unit and integration tests across CLI, web, runtime, and transformer
  • Chores

    • CI/CD workflows for CLI publishing, registry validation, DB migrations and schema updates

KyleFuehri added 30 commits March 22, 2026 19:46
Ship `@apifold/cli` with 4 commands:
- `apifold serve <spec>` — start MCP server from OpenAPI spec
- `apifold transform <spec>` — output tool definitions as JSON
- `apifold validate <spec>` — parse-only with detailed warnings
- `apifold init [spec]` — generate apifold.config.yaml template

Lightweight Express server with SSE and Streamable HTTP transports,
HTTP proxy with auth injection, Zod config validation, and env var
interpolation. Server binds to 127.0.0.1 with CORS restricted to
same-origin. 104 unit tests with 95%+ coverage.
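The env var interpolation mentioned above could look roughly like this — a hedged sketch only; the helper name, the `${VAR}` placeholder syntax, and the empty-string fallback are assumptions, not the CLI's confirmed behavior:

```typescript
// Hypothetical sketch of config env-var interpolation; the real CLI may
// use different placeholder syntax or error on missing variables.
function interpolateEnv(
  value: string,
  env: Record<string, string | undefined>,
): string {
  // Replace each ${NAME} placeholder with the matching env var,
  // substituting an empty string when the variable is unset.
  return value.replace(/\$\{(\w+)\}/g, (_match, name: string) => env[name] ?? "");
}
```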
Extend credential system to support OAuth 2.0 Authorization Code and
Client Credentials flows with 8 provider presets (Google, Slack,
Microsoft, HubSpot, Salesforce, Notion, GitHub, Spotify).

- Migration 0003: add OAuth columns (refresh token, scopes, token
  endpoint, client ID, client secret, token expiry, provider)
- Extend AuthMode and CredentialAuthType with oauth2_authcode and
  oauth2_client_creds
- Vault encryption for refresh tokens and client secrets
- tokenEndpoint validated as HTTPS with private hostname rejection
- Provider presets resolve authorization endpoints at runtime
- 64 tests covering provider configs, schema validation, and SSRF
  rejection
The logs endpoint returned a raw RequestLog[] array but the frontend
useLogs hook expected { logs, cursor, hasMore }. This caused
page.logs to be undefined, crashing the log table with
"Cannot read properties of undefined (reading 'method')".

Wrap the response in the expected pagination envelope.
Add redirect blocking (reject 3xx responses) and 10MB response size
cap to the general-purpose pinnedTransport in ssrf-guard.ts. This
closes the DNS rebinding vector via redirect and prevents memory
exhaustion from large upstream responses.
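The two hardening measures above can be sketched as follows — an illustrative approximation, not the actual ssrf-guard.ts code; the names, error types, and the streaming approach are assumptions:

```typescript
// Illustrative sketch of the 10 MB response cap and redirect blocking
// described above; the real pinnedTransport implementation may differ.
const MAX_RESPONSE_BYTES = 10 * 1024 * 1024;

async function readBodyWithCap(
  body: AsyncIterable<Uint8Array>,
  maxBytes: number = MAX_RESPONSE_BYTES,
): Promise<Uint8Array> {
  const chunks: Uint8Array[] = [];
  let total = 0;
  for await (const chunk of body) {
    total += chunk.byteLength;
    // Enforce the cap while streaming, before the whole payload is buffered
    if (total > maxBytes) {
      throw new Error(`Upstream response exceeded ${maxBytes} bytes`);
    }
    chunks.push(chunk);
  }
  const out = new Uint8Array(total);
  let offset = 0;
  for (const chunk of chunks) {
    out.set(chunk, offset);
    offset += chunk.byteLength;
  }
  return out;
}

// Redirect blocking: fetch with redirect "manual", then reject any 3xx status
function assertNotRedirect(status: number): void {
  if (status >= 300 && status < 400) {
    throw new Error(`Redirect (${status}) blocked to prevent DNS rebinding`);
  }
}
```

The key point is that the size check happens per chunk during the read, so a hostile upstream cannot force the full payload into memory before the limit fires.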
- ARCHITECTURE.md: ADR-013 (CLI tool) and ADR-014 (OAuth foundation)
- API.md: OAuth credential fields, authMode values, logs response format
- SECURITY.md: OAuth credential encryption, CLI security model, SSRF
  hardening with redirect blocking and size limits
- Authorization code flow with PKCE + Redis state storage
- Client credentials flow for machine-to-machine APIs
- Token exchange with SSRF-safe endpoint validation
- Runtime auto-refresh with Redis distributed lock to prevent races
- CREDENTIAL_EXPIRED error code surfaced in MCP responses
- Credential list/create API with GET endpoint
- 51 tests (PKCE, token exchange, token refresher)
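The PKCE half of the flow above boils down to a verifier/challenge pair per RFC 7636 (S256). A minimal sketch, assuming helper names that are illustrative rather than the actual apps/web/lib/oauth API:

```typescript
import { createHash, randomBytes } from "node:crypto";

// Minimal PKCE (S256) sketch per RFC 7636; names are illustrative.
function generateCodeVerifier(): string {
  // 32 random bytes -> 43-char base64url string, within the RFC's 43-128 range
  return randomBytes(32).toString("base64url");
}

function codeChallengeFromVerifier(verifier: string): string {
  // challenge = BASE64URL(SHA-256(verifier))
  return createHash("sha256").update(verifier).digest("base64url");
}
```

The verifier stays server-side in the Redis state entry; only the challenge travels in the authorization URL, so an intercepted code is useless without the stored verifier.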
- Credentials tab on server detail page
- Add Credential modal with 4 auth types (API Key, Bearer, OAuth User, OAuth Service)
- Provider picker grid (Google, Slack, Microsoft, HubSpot, Salesforce, Notion, GitHub, Spotify, Custom)
- Credential list with auth type badges, scopes, and expiry status
- Styled auth mode selector replacing native select on config form
- OAuth mode shows contextual link to Credentials tab
- npm packaging: public @apifold/cli with bin entry and files config
- GitHub Actions: publish-cli.yml for npm, bun compile binaries, GitHub Release, Docker
- Docker: multi-stage Dockerfile with non-root user and healthcheck
- Homebrew: tap formula for macOS/Linux binary installation
Detect Swagger 2.0 specs and transparently convert to OpenAPI 3.0
via swagger2openapi. Uses structuredClone to neutralize prototype
pollution vectors before passing to the converter. Integrated in
web spec import API and CLI serve/transform commands. 18 tests.
Web spec import and CLI serve/transform commands now call autoConvert()
before parseSpec(), transparently supporting Swagger 2.0 specs. Web
API response includes converted flag and originalVersion.
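The detect-and-clone step ahead of conversion can be sketched like this — assumed shapes only; the real autoConvert() hands the cloned document to swagger2openapi:

```typescript
// Sketch of Swagger 2.0 detection plus the clone-before-convert safety step;
// function names are illustrative, not the transformer's actual exports.
function isSwagger2(spec: unknown): boolean {
  return (
    typeof spec === "object" &&
    spec !== null &&
    (spec as Record<string, unknown>)["swagger"] === "2.0"
  );
}

function cloneForConversion<T>(spec: T): T {
  // structuredClone produces plain objects, severing any prototype-pollution
  // payload attached to the parsed input before the converter sees it
  return structuredClone(spec);
}
```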
- Analytics API endpoint with frequency, percentiles, error rates
- Materialized view (hourly_tool_stats) with user_id scoping
- Monthly table partitioning for usage_events and request_logs
- Overview dashboard with recharts (stat cards, line/bar charts, latency bars)
- Per-tool sortable breakdown table
- Analytics tab in server detail navigation
- 12 analytics hook type tests
Analytics is now a first-class sidebar nav item at /dashboard/analytics
with a server dropdown filter and time range selector. Includes stat
cards, charts, latency percentiles, and per-tool breakdown — all
scoped to the selected server.
- Gradient icon accents on stat cards with contextual colors
- Area chart with gradient fill replacing flat line chart
- Asymmetric grid layout (3/5 + 2/5) for chart prominence
- Redesigned latency bars with P50/P95/P99 color coding
- Server dropdown with Server icon and backdrop blur
- Violet-accented time range pills
- Beautiful empty state with centered guidance text
- Hover glow effects on stat cards
- Dark-theme optimized tooltip styling
- Tool breakdown table with dot indicators
Applied all 7 Impeccable design commands: colorize, arrange, distill,
quieter, normalize, animate, typeset. Removed AI slop patterns:
- No more hero metric cards — just numbers on the surface
- No more glassmorphism/backdrop-blur
- No more hardcoded violet — uses design system tokens
- Varied spatial rhythm instead of uniform spacing
- Charts sit directly on surface, no wrapping cards
- Status colors from design tokens (success/warning/error)
- Chart HSL values reference the design system palette
…ty feed

- Time-series chart: hourly/daily call volume + error overlay
- Error breakdown: by status code and error code with colored badges
- Failing tools: ranked by error rate percentage
- Usage vs plan: monthly quota bar with plan name
- Recent activity: last 15 calls with status dots and timestamps
- API returns 8 data sections (overview, timeSeries, topTools,
  errorBreakdown, failingTools, recentActivity, usage, percentiles)
- Remove unused imports (Clock, AlertTriangle)
- Add aria-label on server select for screen readers
- Add aria-pressed + aria-label on time range buttons
- Add motion-reduce:transition-none on all progress bars
- Add optical spacing (mt-0.5) between metric value and detail
- Save .impeccable.md design context for future skill runs
52 type/shape tests for AnalyticsResponse, all sub-interfaces,
TimeRange union, and useAnalytics hook export. 20 API schema tests
for time range validation, defaults, rejection of invalid values,
and getStartDate date arithmetic.
Per-server analytics page had #888888, #1f2937, #f3f4f6, #8b5cf6
hardcoded in Recharts props. Replaced with HSL values matching the
design system: --muted-foreground, --card, --card-foreground, --primary.
- Error explorer: filterable by status code, failed requests list,
  tools ranked by error rate with progress bars
- Request log viewer: reuses existing log table/filter components
  with pagination and detail modal
- CSV export: GET /api/servers/:id/analytics/export with proper
  field escaping, userId scoping, and 10k row limit
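The "proper field escaping" above follows the usual RFC 4180 rules. A hedged sketch of what that looks like (the export route's actual helper may differ):

```typescript
// Sketch of RFC 4180-style CSV field escaping, as referenced in the
// export endpoint description above.
function escapeCsvField(value: string): string {
  // Quote fields containing commas, quotes, or newlines; double embedded quotes
  if (/[",\r\n]/.test(value)) {
    return `"${value.replace(/"/g, '""')}"`;
  }
  return value;
}

function toCsvRow(fields: string[]): string {
  return fields.map(escapeCsvField).join(",");
}
```

Escaping per field (rather than per row) is what keeps user-controlled tool names or error messages from breaking column alignment in the exported file.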
New @apifold/registry package with static catalog of validated
OpenAPI specs: Stripe, GitHub, Slack, HubSpot, Twilio, Petstore,
OpenAI, Notion. Includes search(), getById(), listAll(), and
getCategories() functions. CI validation workflow parses all specs
through the transformer. 32 tests.
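The registry surface named above (search, getById, getCategories) might look roughly like this over the static catalog — entry shape and catalog contents here are illustrative, not the package's real data:

```typescript
// Assumed entry shape; the real @apifold/registry catalog has more fields.
interface RegistryEntry {
  id: string;
  name: string;
  category: string;
}

// Illustrative two-entry catalog; the real package ships eight curated specs.
const catalog: RegistryEntry[] = [
  { id: "stripe", name: "Stripe", category: "payments" },
  { id: "petstore", name: "Petstore", category: "demo" },
];

function search(entries: RegistryEntry[], query: string): RegistryEntry[] {
  const q = query.toLowerCase();
  return entries.filter(
    (e) => e.id.includes(q) || e.name.toLowerCase().includes(q),
  );
}

function getById(entries: RegistryEntry[], id: string): RegistryEntry | undefined {
  return entries.find((e) => e.id === id);
}

function getCategories(entries: RegistryEntry[]): string[] {
  return [...new Set(entries.map((e) => e.category))].sort();
}
```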
Searchable grid of API cards at /dashboard/specs/new/registry with
category filters, auth type badges, operation counts, docs links,
and one-click Deploy button that imports through the secure spec
import API.
- Detail page at /registry/[id] with metadata, auth info, and Deploy button
- Browse page cards now navigate to detail instead of deploying directly
- CLI: apifold serve --registry stripe loads spec from registry
- Community CONTRIBUTING.md with spec submission guide
- Migration 0005: access_profiles table + credential profile_id FK
- Profile types (AccessProfile, CreateProfileInput, UpdateProfileInput)
- ProfileRepository with CRUD, findBySlug, default profile protection
- Profile CRUD API (GET/POST/PUT/DELETE /api/servers/:id/profiles)
- Auto-generate profiles: Read Only, Read/Write, Full Access on server creation
Profile-based filtering: tools/list only returns tools in the active
profile's toolIds array. tools/call rejects requests for tools outside
the profile with TOOL_NOT_FOUND. Agents on a Read Only profile
literally cannot see or call write operations.
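The enforcement described above reduces to a filter on tools/list plus a membership check on tools/call. A sketch under assumed shapes (the runtime's actual types differ):

```typescript
// Assumed minimal shapes; the runtime's Tool and AccessProfile carry more fields.
interface Tool {
  name: string;
}

interface AccessProfile {
  slug: string;
  toolIds: string[];
}

// tools/list: return only tools present in the active profile
function listTools(allTools: Tool[], profile: AccessProfile): Tool[] {
  const allowed = new Set(profile.toolIds);
  return allTools.filter((t) => allowed.has(t.name));
}

// tools/call: reject tools outside the profile with the same error as an
// unknown tool, so hidden tools are not enumerable
function callTool(allTools: Tool[], profile: AccessProfile, name: string): Tool {
  const tool = listTools(allTools, profile).find((t) => t.name === name);
  if (!tool) {
    throw new Error("TOOL_NOT_FOUND");
  }
  return tool;
}
```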
- Migration 0006: endpoint_id (12 hex chars, unique, NOT NULL with backfill),
  custom_domain, domain_verified_at, domain_verification_token
- McpServer type updated with endpointId, customDomain, domainVerifiedAt
- Server repository generates endpoint ID via randomBytes(6) on creation
- ServerRegistry: getByEndpointId() and getByDomain() lookups
- SSE + streamable-http route /mcp/:identifier with auto-detection
  (12 hex chars = endpoint ID, otherwise = slug fallback)
- postgres-loader selects endpoint_id and custom_domain columns
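The ID generation and the /mcp/:identifier auto-detection above can be sketched as — helper names assumed:

```typescript
import { randomBytes } from "node:crypto";

// Sketch of endpoint-ID generation and identifier auto-detection:
// 12 lowercase hex chars -> endpoint ID, anything else -> slug fallback.
const ENDPOINT_ID_PATTERN = /^[0-9a-f]{12}$/;

function generateEndpointId(): string {
  return randomBytes(6).toString("hex"); // 6 random bytes -> 12 hex chars
}

function classifyIdentifier(identifier: string): "endpointId" | "slug" {
  return ENDPOINT_ID_PATTERN.test(identifier) ? "endpointId" : "slug";
}
```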
Snippet copier reads NEXT_PUBLIC_PLATFORM_DOMAIN (default: apifold.dev)
and generates URLs with endpoint ID instead of slug. Custom domains
shown when verified. No more "your-domain.com" placeholder.
- Domain API: PUT (set domain + generate token), POST (verify via DNS
  TXT lookup), DELETE (remove domain), GET (status)
- DNS verification: _apifold-verify.{domain} TXT record with random token
- Domain settings component on server detail sidebar with set/verify/remove
- Platform domains blocked to prevent hijacking
- Verified domains shown in snippet copier with custom URL
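The TXT-record check above can be sketched like this. Note that `dns.resolveTxt()` in Node returns `string[][]` — each record arrives as an array of chunks that must be joined before comparison; the real route's validation may differ:

```typescript
// Sketch of the DNS TXT verification check; names are illustrative.
function verificationRecordName(domain: string): string {
  return `_apifold-verify.${domain}`;
}

// txtRecords mirrors node:dns resolveTxt() output: one chunk array per record
function isDomainVerified(txtRecords: string[][], expectedToken: string): boolean {
  return txtRecords.some((chunks) => chunks.join("") === expectedToken);
}
```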
KyleFuehri added 6 commits March 23, 2026 07:27
Custom domain requests (e.g., mcp.mycompany.com/sse) are intercepted
by the domain router middleware, resolved via registry.getByDomain(),
and rewritten to /mcp/:endpointId/sse so existing transport routers
handle them normally. Platform domain and localhost pass through.
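The rewrite decision the middleware makes can be sketched as a pure function — the `lookup` parameter stands in for registry.getByDomain(), and all names here are assumptions:

```typescript
interface ServerMeta {
  endpointId: string;
}

// Sketch of the domain-router rewrite: returns the rewritten path, or
// undefined when the request should pass through untouched.
function rewriteForDomain(
  hostHeader: string,
  path: string,
  platformDomain: string,
  lookup: (domain: string) => ServerMeta | undefined,
): string | undefined {
  const hostname = hostHeader.split(":")[0].toLowerCase();
  // Platform domain and localhost pass through to the normal transport routers
  if (hostname === platformDomain || hostname === "localhost") return undefined;
  const server = lookup(hostname);
  if (!server) return undefined;
  // e.g. mcp.mycompany.com/sse -> /mcp/<endpointId>/sse
  return `/mcp/${server.endpointId}${path}`;
}
```

Returning a rewritten path rather than handling the request directly is what lets the existing /mcp/:identifier routers serve custom-domain traffic unchanged.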
1. Unverified domains no longer route traffic — postgres-loader uses
   CASE WHEN domain_verified_at IS NOT NULL to null out custom_domain
   for unverified servers. Prevents domain takeover without DNS proof.

2. autoGenerateProfiles now called on server creation — new servers
   get Read Only, Read/Write, and Full Access profiles automatically
   inside the creation transaction.

3. credential.profile_id column documented (intentional FK for future
   credential-to-profile association in runtime).
- New /dashboard/domains page with all custom domains across servers
- 3-step setup wizard: enter domain → configure DNS (CNAME + TXT) → verify
- DNS health check on GET /api/servers/:id/domain (CNAME resolution)
- CNAME setup instructions (point to platform domain for routing)
- Copy-to-clipboard on DNS record values
- Remove redundant per-server analytics pages (use /dashboard/analytics)
- Remove Analytics tab from server detail navigation
- Add Domains to sidebar navigation
…domains

Complete rewrite reflecting Sprints 9-13: CLI quick start, --registry
flag, OAuth 2.0, access profiles, analytics, custom domains, Swagger
2.0 support, unique endpoint IDs. Updated architecture diagram and
component table with CLI and Registry packages.
- New Domain tab on server detail page with full setup wizard
- DNS health check polls every 30s (CNAME resolution status)
- Green/red health indicator for verified domains
- Empty state with "Add custom domain" button and how-it-works guide
- Top-level /dashboard/domains is now a lightweight overview linking
  to each server's domain settings
- Remove DomainSettings from sidebar (full tab gives more space)
Redesigned domain tab following Vercel's patterns:
- Color-coded status bar: green (Valid), yellow (Pending), red (Invalid)
- DNS configuration table with Type/Name/Value columns and copy buttons
- Real-time status messages ("CNAME no longer pointing to platform")
- Refresh button for on-demand DNS checks
- Clean empty state with "Add Domain" button
- Domain input with inline validation feedback

coderabbitai Bot commented Mar 23, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a new CLI package and publish pipeline, a static API registry, OAuth 2.0 flows with DB support and token refresh, endpoint IDs/custom domains with host routing, spec versioning/releases and analytics, SSRF hardening, request logging, many UI changes, and extensive tests and migrations.

Changes

Cohort / File(s) Summary
CI / Packaging
/.github/workflows/publish-cli.yml, /.github/workflows/validate-registry.yml, homebrew-apifold/Formula/apifold.rb
New workflows for publishing CLI and validating registry; Homebrew formula added.
CLI package & runtime
apps/cli/..., apps/cli/Dockerfile, apps/cli/package.json, apps/cli/tsconfig.json
New ESM CLI package @apifold/cli with commands (serve, transform, validate, init), config loader/schema, lightweight Express MCP server, proxy utils, Dockerfile, and package metadata.
CLI tests
apps/cli/__tests__/*
Extensive Vitest coverage for config loading/schema validation, server transports (SSE/streamable HTTP), proxy behavior, and error cases.
API registry
packages/registry/src/*, packages/registry/specs/*, packages/registry/package.json, packages/registry/tsconfig.json, packages/registry/CONTRIBUTING.md
New static registry package @apifold/registry with catalog types, validate script, many API spec entries, tests, and CONTRIBUTING guide.
Transformer: Swagger → OpenAPI
packages/transformer/src/convert.ts, packages/transformer/__tests__/convert.test.ts, packages/transformer/package.json, packages/transformer/src/index.ts
Adds autoConvert to detect/convert Swagger 2.0 specs via swagger2openapi, with tests and exports.
OAuth (web) & token exchange
apps/web/lib/oauth/*, apps/web/app/api/oauth/*, apps/web/lib/oauth/state-store.ts, apps/web/lib/oauth/providers.ts, apps/web/lib/oauth/token-exchange.ts
PKCE/state generation, provider presets, Redis-backed single-use state store, token-exchange/client-credentials/refresh flows with endpoint validation, timeouts, and normalized responses.
DB schema / types: OAuth & credentials
apps/web/lib/db/migrations/0003_oauth_credentials.sql, apps/web/lib/db/schema/credentials.ts, packages/types/src/credential.ts, apps/web/lib/validation/credential.schema.ts
Migrations, Drizzle schema and TypeScript types extended for encrypted refresh/client secrets, scopes, token metadata, provider; validation schemas updated.
Runtime token refresh & protocol changes
apps/runtime/src/oauth/token-refresher.ts, apps/runtime/src/mcp/tool-executor.ts, apps/runtime/src/index.ts, apps/runtime/src/mcp/protocol-handler.ts
Token refresher with locking and CredentialExpiredError; tool-executor handles expired credentials; protocol handler extended for profile-based tool filtering and request logging.
Credential fetcher & loader
apps/runtime/src/sync/postgres-loader.ts, apps/runtime/src/registry/server-registry.ts, apps/runtime/src/server.ts
Credential fetching supports OAuth refresh path (injects encryptFn, optional redis); server meta includes endpointId/customDomain; domain-router middleware integrated into server routing.
Endpoint IDs & custom domains
apps/web/lib/db/migrations/0006_endpoint_ids.sql, apps/web/lib/db/schema/servers.ts, apps/web/app/api/servers/[id]/domain/*, apps/web/components/servers/*, apps/web/app/dashboard/domains/*
Adds 12-hex endpointId, custom-domain fields and verification APIs/UI, DNS verify flows, and updated snippet/dashboard components to prefer verified custom domains or endpointId-based MCP URLs.
Transports & host-based routing
apps/runtime/src/transports/domain-router.ts, apps/runtime/src/transports/sse.ts, apps/runtime/src/transports/streamable-http.ts
Domain router rewrites hostname requests to /mcp/:endpointId/...; transport routes use :identifier resolved as endpointId then slug; session ownership/validation updated.
Access profiles & repository
apps/web/lib/db/migrations/0005_access_profiles.sql, apps/web/lib/db/schema/profiles.ts, apps/web/lib/db/repositories/profile.repository.ts, apps/web/lib/profiles/auto-generate.ts, apps/web/app/api/servers/[id]/profiles/route.ts, packages/types/src/profile.ts
Adds access_profiles table, Drizzle schema, ProfileRepository, auto-generation of default profiles from tools, API routes, and types; protocol handler can enforce profile-based tool access.
Spec versions & releases
apps/web/lib/db/migrations/0007_spec_versions.sql, apps/web/lib/db/schema/spec-versions.ts, apps/web/lib/db/schema/spec-releases.ts, apps/web/lib/db/repositories/spec-version.repository.ts, apps/web/lib/db/repositories/spec-release.repository.ts, packages/types/src/spec-version.ts, packages/types/src/spec-release.ts, apps/web/app/api/specs/.../versions/*, apps/web/lib/validation/spec-version.schema.ts
Introduces spec versioning and promotion: migrations, Drizzle schemas, repositories for versions/releases, APIs for listing/creating/promoting versions, diff computation, and validation schemas/types.
Analytics, materialized views & export
apps/web/lib/db/migrations/0004_analytics_views.sql, apps/web/app/api/servers/[id]/analytics/*, apps/web/app/dashboard/analytics/*, apps/web/lib/hooks/use-analytics.ts, apps/web/app/api/servers/[id]/analytics/export/route.ts
Adds hourly materialized analytics view, server analytics API and CSV export, dashboard analytics pages and client hook useAnalytics.
Logs & request tracing
apps/web/lib/db/schema/request-logs.ts, apps/web/lib/db/repositories/log.repository.ts, packages/types/src/log.ts, apps/web/app/api/servers/[id]/test/route.ts
Extends request_logs schema and types to persist request/response bodies, headers, toolName and errorMessage; server test route logs upstream responses asynchronously.
SSRF / upstream fetch hardening
apps/web/lib/ssrf-guard.ts, apps/cli/src/server/proxy.ts
Pinned agent lookup compatibility, max response size cap, redirect rejection; CLI proxy enforces protocol allowlist, timeouts, auth injection, and SSRF checks.
Web UI: pages, components & hooks
apps/web/app/*, apps/web/components/*, apps/web/lib/hooks/*, apps/web/lib/constants/navigation.ts, apps/web/lib/stores/ui-store.ts
Large UI additions/updates: credentials modal/list/provider picker, domain settings, dashboards (analytics, domains), spec/version UIs, server pages, new hooks (useRegistry, useCredentials, useSpecVersions, useAnalytics), navigation components (TopNav, TabBar, SubTabBar, MobileMenu), and UI store simplified to mobile menu.
Design, docs & security
README.md, ./.impeccable.md, docs/*, docs/ARCHITECTURE.md, docs/SECURITY.md
Documentation updates: Quick Start, CLI docs, API reference, ADRs, design tokens, security policy (credential handling, SSRF), and contribution guidance.
Migrations snapshot & partitioning
apps/web/lib/db/migrations/*, apps/web/lib/db/migrations/meta/*
Multiple DB migrations added/updated: initial schema snapshot, OAuth columns, analytics views, partitioning of events/logs, spec versions/releases, endpoint id backfill, and access profiles.
Packages & exports
apps/web/package.json, packages/registry/package.json, packages/transformer/package.json, packages/types/*, packages/ui/*
New/updated package manifests; expanded types (profiles, spec versions/releases, OAuth types), new StatusDot UI component, and styling token updates across UI components.

Sequence Diagram(s)

sequenceDiagram
  participant Browser as Browser
  participant Web as Web App (Next)
  participant Redis as Redis
  participant DB as Postgres
  participant OAuth as OAuth Provider
  participant TokenRef as Token Refresher

  Browser->>Web: POST /api/oauth/authorize { serverId, provider, clientId, clientSecret? }
  Web->>Redis: storeOAuthState(stateKey, { userId, serverId, pkceVerifier, encClientCreds })
  Web-->>Browser: 200 { authorizationUrl, state }

  Browser->>OAuth: User consents at authorizationUrl
  OAuth-->>Browser: Redirect to /api/oauth/callback?code=&state=
  Browser->>Web: GET /api/oauth/callback?code=&state=
  Web->>Redis: retrieveOAuthState(stateKey) -> stateData (deleted on read)
  Web->>Web: decrypt client creds (if stored)
  Web->>TokenRef: exchangeAuthorizationCode(tokenEndpoint, code, codeVerifier, redirectUri, clientId, clientSecret)
  TokenRef->>OAuth: POST tokenEndpoint (grant_type=authorization_code...)
  OAuth-->>TokenRef: 200 { access_token, refresh_token?, expires_in?, scope? }
  TokenRef->>DB: encrypt & persist credential (access, encrypted refresh/client secret, token_expires_at)
  TokenRef-->>Web: TokenResponse (frozen)
  Web-->>Browser: Redirect to dashboard (oauth_success)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
apps/runtime/src/transports/streamable-http.ts (2)

92-95: ⚠️ Potential issue | 🟡 Minor

Update error message to reflect new route parameter name.

The error message references /mcp/:slug/sse but the route now uses :identifier. Update for consistency.

📝 Proposed fix
     if (server.transport !== 'streamable-http') {
-      res.status(400).json({ error: 'This server uses SSE transport. Use GET /mcp/:slug/sse instead.' });
+      res.status(400).json({ error: 'This server uses SSE transport. Use GET /mcp/:identifier/sse instead.' });
       return;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 92 - 95, The
error string sent in the transport check (inside the block that tests
server.transport !== 'streamable-http' in streamable-http handler) still
references the old route param "/mcp/:slug/sse"; update that message to the
current route param name by replacing ":slug" with ":identifier" so the response
reads something like "This server uses SSE transport. Use GET
/mcp/:identifier/sse instead." and keep the same status and JSON shape.

48-53: ⚠️ Potential issue | 🟡 Minor

Remove unused isValidSlug function and constants.

The function is defined but never called. After refactoring to endpoint-ID-first routing, the slug validation is no longer needed. Also remove SLUG_MAX_LENGTH and SLUG_PATTERN constants which are only used by this function.

Note: The same unused code exists in apps/runtime/src/transports/sse.ts and should be removed there as well.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 48 - 53, Remove
the unused slug-validation code: delete the constants SLUG_MAX_LENGTH and
SLUG_PATTERN and the function isValidSlug from
apps/runtime/src/transports/streamable-http.ts, and remove the identical
declarations from apps/runtime/src/transports/sse.ts; ensure any imports or
references that depended on these symbols are also cleaned up so there are no
lingering unused variables or linter warnings.
apps/runtime/src/transports/sse.ts (1)

43-46: ⚠️ Potential issue | 🟡 Minor

Error message references old :slug route parameter.

The error message mentions POST /mcp/:slug but the route now uses :identifier.

🔧 Proposed fix
     if (server.transport !== 'sse') {
-      res.status(400).json({ error: 'This server uses streamable-http transport. Use POST /mcp/:slug instead.' });
+      res.status(400).json({ error: 'This server uses streamable-http transport. Use POST /mcp/:identifier instead.' });
       return;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/sse.ts` around lines 43 - 46, The error message
returned in the SSE transport check incorrectly references the old route
parameter ":slug"; update the response in the server.transport !== 'sse' branch
(the res.status(400).json(...) line in apps/runtime/src/transports/sse.ts) to
reference the current route parameter ":identifier" (e.g., "Use POST
/mcp/:identifier instead.") so the message matches the current route naming.
apps/web/lib/db/repositories/credential.repository.ts (1)

167-188: ⚠️ Potential issue | 🟠 Major

Reject or clear OAuth-only fields for non-OAuth credentials.

create() and update() persist tokenEndpoint, clientId, clientSecret, refreshToken, provider, tokenExpiresAt, and scopes regardless of authType. That means an api_key/bearer credential can store secrets and OAuth metadata it should never have, and switching away from OAuth leaves stale secrets behind.

Also applies to: 200-220

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/repositories/credential.repository.ts` around lines 167 -
188, The create() and update() flows in credential.repository.ts currently
persist OAuth-specific fields unconditionally; modify both functions so that
tokenEndpoint, clientId, encryptedClientSecret (via encryptCredential),
encryptedRefreshToken (via encryptCredential), provider, tokenExpiresAt, and
scopes are only set when input.authType (or the resulting authType on update) is
'oauth' — otherwise write null for scalar fields and an empty array for scopes
to clear any stale OAuth data; ensure update() also clears these fields when
authType changes away from 'oauth'.
🟠 Major comments (32)
apps/web/lib/oauth/token-exchange.ts-110-117 (1)

110-117: ⚠️ Potential issue | 🟠 Major

The response body is still unbounded here.

response.text() and response.json() buffer the full payload before you truncate or parse it, so a malicious token endpoint can still return an arbitrarily large body and tie up memory/CPU. If this flow is supposed to honor the 10 MB cap from the SSRF hardening work, the limit needs to be enforced while reading the stream, not after.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/oauth/token-exchange.ts` around lines 110 - 117, The code
currently calls response.text() and response.json() which buffer the entire
payload before truncation; modify the token exchange flow in token-exchange.ts
(the logic around response, errorBody, and json) to read from response.body as a
streamed ReadableStream and enforce the 10 MB cap while accumulating bytes,
throwing a clear error if the limit is exceeded; use the same stream-read
utility for both the non-OK branch (to produce a truncated errorBody safely) and
the OK branch (to collect the body into a buffer/string before parsing into
json) instead of calling response.text()/response.json().
apps/web/lib/oauth/token-exchange.ts-119-125 (1)

119-125: ⚠️ Potential issue | 🟠 Major

Treat missing access_token as a failed exchange.

String(json['access_token'] ?? '') converts a malformed 200 response into accessToken: '', which lets callers continue as if the exchange succeeded and potentially persist unusable credentials. This should fail fast instead.

💡 Suggested fix
-  return Object.freeze({
-    accessToken: String(json['access_token'] ?? ''),
+  const accessToken =
+    typeof json['access_token'] === 'string' ? json['access_token'].trim() : '';
+  if (!accessToken) {
+    throw new Error('Token exchange response missing access_token');
+  }
+
+  return Object.freeze({
+    accessToken,
     refreshToken: json['refresh_token'] ? String(json['refresh_token']) : null,
     expiresIn: typeof json['expires_in'] === 'number' ? json['expires_in'] : null,
     scope: json['scope'] ? String(json['scope']) : null,
     tokenType: String(json['token_type'] ?? 'Bearer'),
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/oauth/token-exchange.ts` around lines 119 - 125, The current
freeze-return object in token-exchange.ts incorrectly allows a missing
access_token to become accessToken: '', so update the token exchange to treat a
missing/empty json['access_token'] as a failed exchange: before returning the
Object.freeze result (the block that sets
accessToken/refreshToken/expiresIn/scope/tokenType), validate that
json['access_token'] exists and is a non-empty string and if not, throw a clear
Error (or return null/error result per surrounding API) to fail fast; refer to
the returned object construction (accessToken key) to locate where to add this
check and ensure callers receive a failure rather than an empty accessToken.
packages/registry/specs/slack/spec.json-1-1 (1)

1-1: ⚠️ Potential issue | 🟠 Major

Define OpenAPI auth security scheme and apply it globally.

The Slack spec (and all the other registry specs) lacks components.securitySchemes and a global security declaration. This leaves auth requirements unspecified and triggers policy failures (CKV_OPENAPI_4, CKV_OPENAPI_5).

Suggested patch (structure)
 {
   "openapi": "3.0.3",
   "info": { "title": "Slack Web API", "version": "1.0.0" },
   "servers": [{ "url": "https://slack.com/api" }],
+  "components": {
+    "securitySchemes": {
+      "bearerAuth": {
+        "type": "http",
+        "scheme": "bearer"
+      }
+    }
+  },
+  "security": [{ "bearerAuth": [] }],
   "paths": {
     "/chat.postMessage": {
       "post": {
         "operationId": "chatPostMessage",
         "summary": "Send a message",
+        "security": [{ "bearerAuth": [] }],
         ...
       }
     },
     "/conversations.list": {
       "get": {
         "operationId": "conversationsList",
         "summary": "List channels",
+        "security": [{ "bearerAuth": [] }],
         ...
       }
     },
     "/users.list": {
       "get": {
         "operationId": "usersList",
         "summary": "List users",
+        "security": [{ "bearerAuth": [] }],
         ...
       }
     }
   }
 }

This pattern affects all 8 registry specs: github, hubspot, notion, openai, petstore, slack, stripe, and twilio.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/specs/slack/spec.json` at line 1, The OpenAPI spec lacks an
auth security scheme and global security declaration; add a
components.securitySchemes entry (e.g., define an OAuth2 or bearerAuth scheme)
and apply it globally via the top-level security array so all operations (such
as operationId chatPostMessage, conversationsList, usersList) inherit the
requirement; ensure the scheme name you choose is referenced in the top-level
"security" object and include appropriate flows/scopes or bearer format in
components.securitySchemes.
apps/cli/src/index.ts-10-21 (1)

10-21: ⚠️ Potential issue | 🟠 Major

Use .parseAsync() to properly handle async command handlers.

All four commands (serve, transform, validate, init) have async handlers, but the current synchronous .parse() call won't wait for them to complete. This causes async handler errors to become unhandled promise rejections, potentially silencing failures.

♻️ Recommended: Use parseAsync with error handling
-yargs(hideBin(process.argv))
+await yargs(hideBin(process.argv))
   .scriptName('apifold')
   .command(serveCommand)
   .command(transformCommand)
   .command(validateCommand)
   .command(initCommand)
   .demandCommand(1, 'You must provide a command. Run --help for usage.')
   .strict()
   .help()
   .alias('h', 'help')
   .version(false)
-  .parse();
+  .parseAsync()
+  .catch((err) => {
+    console.error(err.message || err);
+    process.exitCode = 1;
+  });

Since the suggested diff uses top-level await, either wrap the module in an async IIFE or configure tsconfig (module/target of ES2022 or later) so top-level await compiles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/index.ts` around lines 10 - 21, The CLI uses yargs(...).parse()
which does not await async command handlers (serveCommand, transformCommand,
validateCommand, initCommand); change to yargs(...).parseAsync() and await its
promise (either by converting the module to an async IIFE or using top-level
await) and ensure you attach a .catch to handle/log unhandled errors so async
handler rejections are propagated and logged instead of becoming unhandled
promise rejections.
homebrew-apifold/Formula/apifold.rb-10-10 (1)

10-10: ⚠️ Potential issue | 🟠 Major

Add a version statement or provide version through a mechanism that Homebrew can extract.

The formula uses #{version} in all four download URLs (lines 10, 19, 30, 39) but never defines version. This will result in broken URLs with an empty version segment (e.g., cli-v/apifold-darwin-arm64). Either add an explicit version statement, use a URL pattern that allows Homebrew to extract the version automatically, or ensure the version is provided at install time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@homebrew-apifold/Formula/apifold.rb` at line 10, The download URLs in the
apifold Formula use #{version} but no version is declared, causing broken URLs;
fix by adding an explicit version statement in the Formula (e.g., add version
"x.y.z" inside class Apifold < Formula) that matches the release tag used in the
URLs, or alternatively change the URL strings to concrete release URLs that
include the version (so Homebrew can auto-detect it). Ensure the version you add
matches all four URL occurrences so the url
"https://github.com/kylefuhri/apifold/releases/download/cli-v#{version}/apifold-darwin-arm64"
and its counterparts resolve correctly.
packages/registry/specs/hubspot/spec.json-1-1 (1)

1-1: ⚠️ Potential issue | 🟠 Major

createContact needs a requestBody schema.

The only mutating operation in this spec lacks a requestBody, preventing generated tools from sending contact properties. Add a required body schema with contact properties before shipping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/specs/hubspot/spec.json` at line 1, The OpenAPI operation
createContact (POST on /crm/v3/objects/contacts) is missing a requestBody, so
clients cannot send contact properties; add a required requestBody for content
type application/json with a schema describing the contact payload (e.g., an
object with a "properties" object containing common fields like email,
firstname, lastname, phone, company and any required fields), and reference that
schema in the operationId createContact response section so generated clients
can serialize contact data when calling createContact.
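A minimal shape for the missing body, using the common CRM fields named above (an illustrative sketch, not HubSpot's full contact schema):

```json
"requestBody": {
  "required": true,
  "content": {
    "application/json": {
      "schema": {
        "type": "object",
        "required": ["properties"],
        "properties": {
          "properties": {
            "type": "object",
            "properties": {
              "email": { "type": "string" },
              "firstname": { "type": "string" },
              "lastname": { "type": "string" }
            }
          }
        }
      }
    }
  }
}
```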
packages/registry/specs/openai/spec.json-1-1 (1)

1-1: ⚠️ Potential issue | 🟠 Major

Add requestBody to /embeddings and mark required fields in /chat/completions.

The /embeddings POST endpoint has no requestBody, resulting in a tool with no usable inputs. The /chat/completions POST endpoint's requestBody is not marked as required, and its schema lacks a required array—both the endpoint body and its fields would be optional, allowing empty payloads that fail downstream.

The test fixture at packages/transformer/__tests__/fixtures/openai.json shows the correct structure: /embeddings should require model and input fields, and /chat/completions should properly reference its schema definition with required fields.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/specs/openai/spec.json` at line 1, The OpenAPI spec is
missing a requestBody for the POST /embeddings (operationId createEmbedding) and
the POST /chat/completions (operationId createChatCompletion) body/schema fields
are not marked required; update the spec so /embeddings has a requestBody with
application/json schema requiring "model" and "input", and modify
/chat/completions to reference a proper requestBody schema that includes a
"required" array for required properties (e.g., model and messages), matching
the fixture structure in packages/transformer/__tests__/fixtures/openai.json;
ensure the requestBody objects are marked required where appropriate so
downstream tooling enforces payload presence.
apps/web/lib/oauth/state-store.ts-28-36 (1)

28-36: ⚠️ Potential issue | 🟠 Major

Race condition: get and del are not atomic.

Two concurrent requests with the same state parameter can both retrieve the value before either deletes it, allowing state replay within the 5-minute TTL window. Use the atomic getDel operation (available in ioredis 5.4.0+) to fix this:

🔒 Proposed fix using atomic getDel
 export async function retrieveOAuthState(state: string): Promise<OAuthState | null> {
   const redis = getRedis();
-  const raw = await redis.get(`${STATE_PREFIX}${state}`);
+  const raw = await redis.getDel(`${STATE_PREFIX}${state}`);
   if (!raw) {
     return null;
   }
-  // Delete after retrieval — state is single-use
-  await redis.del(`${STATE_PREFIX}${state}`);
-  return JSON.parse(raw) as OAuthState;
+  try {
+    return JSON.parse(raw) as OAuthState;
+  } catch {
+    return null;
+  }
 }

Also add error handling for JSON.parse—if the stored state is malformed, parsing fails and throws instead of gracefully returning null.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/oauth/state-store.ts` around lines 28 - 36, The
retrieveOAuthState function has a race due to separate redis.get and redis.del
calls and lacks JSON parse error handling; replace the two-step get+del with the
atomic Redis getDel call (use redis.getDel(`${STATE_PREFIX}${state}`)) to ensure
single-use semantics, then safely JSON.parse the returned raw inside a try/catch
and return null on parse errors; reference the getRedis() call, STATE_PREFIX
constant and retrieveOAuthState function when making the changes.
apps/web/__tests__/oauth/pkce.test.ts-16-20 (1)

16-20: ⚠️ Potential issue | 🟠 Major

Strengthen PKCE verifier bounds to full RFC 7636 requirements.

The current tests accept any non-empty verifier. RFC 7636 requires 43–128 characters, so insecure regressions could still pass.

💡 Proposed fix
-  it('codeVerifier is a non-empty string', () => {
+  it('codeVerifier length is RFC 7636-compliant (43-128) and charset-safe', () => {
     const { codeVerifier } = generatePkceChallenge();
-
-    expect(codeVerifier.length).toBeGreaterThan(0);
+    expect(codeVerifier.length).toBeGreaterThanOrEqual(43);
+    expect(codeVerifier.length).toBeLessThanOrEqual(128);
+    expect(codeVerifier).toMatch(/^[A-Za-z0-9\-._~]+$/);
   });
@@
-  it('codeVerifier length does not exceed 128 characters (RFC 7636 max)', () => {
+  it('codeVerifier length stays within RFC 7636 bounds', () => {
     for (let i = 0; i < 10; i++) {
       const { codeVerifier } = generatePkceChallenge();
+      expect(codeVerifier.length).toBeGreaterThanOrEqual(43);
       expect(codeVerifier.length).toBeLessThanOrEqual(128);
     }
   });

Also applies to: 59-64

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/__tests__/oauth/pkce.test.ts` around lines 16 - 20, Update the tests
in pkce.test.ts that call generatePkceChallenge() (the assertions around
codeVerifier at the spots referencing lines 16-20 and 59-64) to assert RFC 7636
bounds: verify codeVerifier.length is >= 43 and <= 128 (inclusive) instead of
just > 0; keep existing assertions that codeVerifier is a string and non-empty
and apply the same length bounds to any other verifier-related assertions in the
file that validate generated PKCE verifiers.
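For reference, a minimal generator that satisfies those bounds (a sketch, not necessarily the project's implementation): 32 random octets base64url-encode to exactly 43 characters, the RFC 7636 minimum.

```typescript
import { createHash, randomBytes } from 'node:crypto';

// Sketch: 32 random bytes -> 43-char base64url verifier (RFC 7636 minimum);
// challenge = BASE64URL(SHA-256(verifier)) per the S256 method.
function generatePkceChallenge(): { codeVerifier: string; codeChallenge: string } {
  const codeVerifier = randomBytes(32).toString('base64url');
  const codeChallenge = createHash('sha256').update(codeVerifier).digest('base64url');
  return { codeVerifier, codeChallenge };
}
```

The base64url alphabet (A–Z, a–z, 0–9, -, _) is a strict subset of the RFC's unreserved charset, so the tightened charset assertion passes as well.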
apps/cli/src/commands/init.ts-36-45 (1)

36-45: ⚠️ Potential issue | 🟠 Major

Prevent overwrite race by making file creation atomic.

Lines 37 and 44 form a check-then-write sequence: a file created concurrently between the fs.access check and the fs.writeFile call is silently overwritten.

💡 Proposed fix
-    try {
-      await fs.access(target);
-      process.stderr.write(`Config already exists: ${target}\n`);
-      process.exit(1);
-    } catch {
-      // File doesn't exist — safe to create
-    }
-
-    await fs.writeFile(target, content, 'utf-8');
-    process.stdout.write(`Created ${target}\n`);
+    try {
+      await fs.writeFile(target, content, { encoding: 'utf-8', flag: 'wx' });
+      process.stdout.write(`Created ${target}\n`);
+    } catch (error) {
+      if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
+        process.stderr.write(`Config already exists: ${target}\n`);
+        process.exit(1);
+      }
+      throw error;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/commands/init.ts` around lines 36 - 45, The current
check-then-write using fs.access followed by fs.writeFile (variables target and
content) has a race condition; replace that flow with an atomic create that
fails if the file exists (e.g., use fs.open or fs.writeFile with the
exclusive/create flag 'wx' so the write will throw if target already exists) and
handle the error to print "Config already exists" and exit; remove the separate
fs.access branch and ensure process.stderr.write/process.exit(1) and
process.stdout.write("Created ...") remain in the success/error handlers around
the atomic create operation.
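The 'wx' flag behavior can be seen in isolation (helper name is illustrative):

```typescript
import { promises as fs } from 'node:fs';

// Sketch: flag 'wx' opens for writing but fails with EEXIST if the path
// already exists, making create-if-absent a single atomic operation.
async function createOnce(target: string, content: string): Promise<boolean> {
  try {
    await fs.writeFile(target, content, { encoding: 'utf-8', flag: 'wx' });
    return true; // created
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'EEXIST') {
      return false; // already existed — caller decides how to report it
    }
    throw error;
  }
}
```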
.github/workflows/publish-cli.yml-49-51 (1)

49-51: ⚠️ Potential issue | 🟠 Major

Deprecated runner label: macos-13 is no longer available.

The static analysis tool correctly identifies that macos-13 is no longer a valid GitHub-hosted runner label. Since this matrix entry targets Intel x64 (bun-darwin-x64), use macos-15-intel; macos-14 and later default runners are arm64, which only works here if Bun cross-compiles the x64 target.

🔧 Proposed fix
-          - os: macos-13
+          - os: macos-15-intel
             target: bun-darwin-x64
             artifact: apifold-darwin-x64
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/publish-cli.yml around lines 49 - 51, Replace the
deprecated runner label "macos-13" with a supported macOS runner (e.g.,
"macos-14") in the job matrix entry that contains os: macos-13, target:
bun-darwin-x64, artifact: apifold-darwin-x64; update the os value to "macos-14"
(or "macos-15-intel" if you need the latest Intel runner) so the workflow uses a
valid GitHub-hosted runner.
packages/registry/src/validate.ts-4-4 (1)

4-4: ⚠️ Potential issue | 🟠 Major

Run registry specs through autoConvert() here too.

This validator no longer matches the CLI/web ingestion path. Any catalog spec that only becomes valid after Swagger 2.0 conversion will fail here, or you'll miss conversion warnings that production code would surface.

🔄 Proposed fix
-import { parseSpec, transformSpec } from '@apifold/transformer';
+import { autoConvert, parseSpec, transformSpec } from '@apifold/transformer';
@@
-      const parseResult = parseSpec({ spec });
-      const transformResult = transformSpec({ spec: parseResult.spec });
+      const convertResult = await autoConvert(spec);
+      const parseResult = parseSpec({ spec: convertResult.spec });
+      const transformResult = transformSpec({ spec: parseResult.spec });

Also applies to: 22-23

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/src/validate.ts` at line 4, The validator currently uses
parseSpec and transformSpec directly; update the validation flow in validate.ts
to run specs through autoConvert() before/alongside parseSpec/transformSpec so
Swagger 2.0 -> OpenAPI conversions are applied and conversion warnings/errors
are surfaced; specifically, import autoConvert from '@apifold/transformer' and
invoke autoConvert(spec) (and handle its returned convertedSpec and warnings)
prior to calling parseSpec/transformSpec (preserve existing validation logic but
feed transformedSpec into parseSpec/transformSpec and merge/return any
conversion warnings alongside validation results).
apps/cli/__tests__/server/proxy.test.ts-260-286 (1)

260-286: 🛠️ Refactor suggestion | 🟠 Major

Add regression tests for redirect blocking and the 10 MB body cap.

This suite covers protocol allowlisting, but the new SSRF hardening also depends on rejecting redirects and oversized upstream responses. Without explicit cases for those paths, the highest-risk protections can regress silently.

Also applies to: 328-375

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/__tests__/server/proxy.test.ts` around lines 260 - 286, The test
suite for proxyToolCall currently only verifies protocol allowlisting; add
regression tests to assert that proxyToolCall rejects responses that are
redirects and rejects upstream responses larger than the 10 MB cap.
Specifically, add tests alongside the existing cases (using proxyToolCall,
makeTool, makeConfig) that: 1) mockFetch to return a redirect response (status
301/302 with a Location header) and assert proxyToolCall rejects with an
SSRF/redirect error, and 2) mockFetch to return a response whose body size
exceeds 10 * 1024 * 1024 bytes (use makeJsonResponse or a custom Response with
an oversized body) and assert proxyToolCall rejects with an oversized-response
error; reuse mockFetch, makeJsonResponse, and makeConfig helpers so the tests
mirror the style of the existing protocol tests.
packages/registry/src/validate.ts-25-32 (1)

25-32: ⚠️ Potential issue | 🟠 Major

Don't count zero-tool specs as a passing validation.

A registry entry that transforms to zero tools is effectively undeployable, but this script still increments passed and exits 0. That lets broken catalog entries ship.

🚫 Proposed fix
-      if (transformResult.tools.length === 0) {
-        process.stderr.write(`  WARN: ${entry.id} — no tools generated\n`);
-      }
+      if (transformResult.tools.length === 0) {
+        throw new Error('no tools generated');
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/src/validate.ts` around lines 25 - 32, The code currently
treats entries with transformResult.tools.length === 0 as a warning but still
increments passed and prints a PASS; change the logic in the validation loop
(the block that checks transformResult.tools, writes to process.stderr/stdout,
and updates passed) so that when transformResult.tools.length === 0 you write a
failing message (e.g., `FAIL: ${entry.id} — 0 tools`) to stderr or stdout and do
not increment the passed counter; leave the existing PASS message and passed++
only for cases where tools.length > 0 and keep using parseResult.warnings in the
message for non-zero tools.
apps/runtime/src/mcp/protocol-handler.ts-154-161 (1)

154-161: ⚠️ Potential issue | 🟠 Major

Handle profile fetch failures before returning from tools/call.

If fetchProfileToolIds() throws, this method currently rejects before emitting any JSON-RPC response. A transient DB/profile-store failure will surface as a transport error instead of a controlled RPC error.

🩹 Proposed fix
-    const allowedToolIds = this.fetchProfileToolIds
-      ? await this.fetchProfileToolIds(server.id)
-      : null;
+    let allowedToolIds: readonly string[] | null = null;
+    if (this.fetchProfileToolIds) {
+      try {
+        allowedToolIds = await this.fetchProfileToolIds(server.id);
+      } catch (err) {
+        this.logger.error({ err, slug: session.slug }, 'Failed to load profile tool ids');
+        return jsonRpcError(req.id, -32603, 'Failed to validate tool access');
+      }
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 154 - 161, When
enforcing profile-based tool access in the tools/call path, wrap the call to
this.fetchProfileToolIds(server.id) so any thrown error is caught and converted
into a JSON-RPC error response instead of letting the exception bubble as a
transport failure; specifically, handle failures when computing allowedToolIds
(the call to fetchProfileToolIds) by catching the error, logging or recording
it, and returning jsonRpcError(req.id, <appropriate_code>, '<clear message>')
(e.g., internal error or profile-fetch-failure) before exiting, while preserving
the existing branch that returns Tool not found when allowedToolIds is present
but tool.id is not included.
apps/web/app/api/servers/[id]/analytics/export/route.ts-15-20 (1)

15-20: ⚠️ Potential issue | 🟠 Major

Neutralize spreadsheet formula injection in CSV fields.

tool_name and error_code can be upstream-controlled. Quoting alone does not stop Excel/Sheets from treating values beginning with =, +, -, or @ as formulas when the export is opened.

🛡️ Proposed fix
 function csvField(value: string): string {
-  if (value.includes(',') || value.includes('"') || value.includes('\n')) {
-    return `"${value.replace(/"/g, '""')}"`;
+  const normalized = /^[=+\-@]/.test(value) ? `'${value}` : value;
+  if (/[",\r\n]/.test(normalized)) {
+    return `"${normalized.replace(/"/g, '""')}"`;
   }
-  return value;
+  return normalized;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/servers/`[id]/analytics/export/route.ts around lines 15 -
20, The csvField function must neutralize spreadsheet formula injection for
upstream-controlled fields (e.g., tool_name and error_code); update csvField to
detect values that begin with '=', '+', '-', or '@' and prepend a single quote
(') before any further processing, then escape inner double-quotes as currently
done and wrap fields containing commas/quotes/newlines in double quotes; ensure
the check and prefixing happen before the quote-escaping so the leading quote is
preserved and formulas cannot be evaluated when opened in Excel/Sheets.
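Standalone, the hardened encoder from the suggested fix behaves like this (same logic, extracted for illustration):

```typescript
// Prefix a quote so spreadsheets treat leading =, +, -, @ as literal text,
// then apply standard CSV quoting for commas, quotes, and newlines.
function csvField(value: string): string {
  const normalized = /^[=+\-@]/.test(value) ? `'${value}` : value;
  if (/[",\r\n]/.test(normalized)) {
    return `"${normalized.replace(/"/g, '""')}"`;
  }
  return normalized;
}
```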
apps/runtime/src/transports/domain-router.ts-51-63 (1)

51-63: ⚠️ Potential issue | 🟠 Major

Query string is lost during URL rewrite.

The rewrite uses req.path (which excludes query strings) but assigns to req.url (which should include them). Any query parameters on the original request will be silently dropped.

🐛 Proposed fix to preserve query strings
     const originalPath = req.path;
+    const originalSearch = req.url.includes('?') ? req.url.slice(req.url.indexOf('?')) : '';

     if (originalPath === '/sse' || originalPath === '/') {
-      req.url = `/mcp/${server.endpointId}${originalPath === '/' ? '' : originalPath}`;
+      req.url = `/mcp/${server.endpointId}${originalPath === '/' ? '' : originalPath}${originalSearch}`;
     } else if (originalPath === '/message') {
-      req.url = `/mcp/${server.endpointId}/message`;
+      req.url = `/mcp/${server.endpointId}/message${originalSearch}`;
     } else if (originalPath.startsWith('/mcp/')) {
       // Already has /mcp/ prefix — pass through
       next();
       return;
     } else {
-      req.url = `/mcp/${server.endpointId}${originalPath}`;
+      req.url = `/mcp/${server.endpointId}${originalPath}${originalSearch}`;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/domain-router.ts` around lines 51 - 63, The URL
rewrite in domain-router.ts uses req.path (originalPath) which loses query
strings but assigns to req.url, dropping queries; update the rewrite logic
around originalPath/req.url (in the block that handles '/sse', '/', '/message',
and the default case) to preserve the original query string (e.g., capture
req.url or req.originalUrl including the query part or append
req._parsedUrl.search) when you build the new `/mcp/${server.endpointId}...` URL
so that query parameters remain intact for endpoints using originalPath,
req.url, and server.endpointId.
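The slice-and-append approach can be checked in isolation (function name and shape are illustrative, not the router's actual code):

```typescript
// Split req.url into pathname + search so the rewrite keeps query params.
function rewriteUrl(reqUrl: string, endpointId: string): string {
  const qIndex = reqUrl.indexOf('?');
  const search = qIndex === -1 ? '' : reqUrl.slice(qIndex);
  const pathname = qIndex === -1 ? reqUrl : reqUrl.slice(0, qIndex);
  return `/mcp/${endpointId}${pathname === '/' ? '' : pathname}${search}`;
}
```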
apps/cli/src/server/proxy.ts-4-10 (1)

4-10: ⚠️ Potential issue | 🟠 Major

SSRF: Protocol check is good, but consider private IP blocking.

The protocol allowlist prevents file://, data://, etc. However, fetch by default follows redirects, which could be exploited: an attacker-controlled URL at https://evil.com/redirect could redirect to http://169.254.169.254/ (cloud metadata). Additionally, the URL could resolve to a private IP directly.

Consider:

  1. Adding redirect: 'manual' or redirect: 'error' to the fetch options
  2. Adding DNS rebinding/private IP validation similar to credential.schema.ts
🔒 Suggested fix to prevent redirect-based SSRF
   let response: globalThis.Response;
   try {
     response = await fetch(url, {
       method: method.toUpperCase(),
       headers,
       body: hasBody ? JSON.stringify(bodyPayload) : undefined,
       signal: controller.signal,
+      redirect: 'error', // Prevent redirect-based SSRF
     });
   } finally {
     clearTimeout(timeout);
   }

For complete SSRF protection, also consider validating that the resolved IP is not in private ranges before making the request, or use a DNS-pinning library.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/server/proxy.ts` around lines 4 - 10, The protocol allowlist in
validateUpstreamUrl/ALLOWED_PROTOCOLS is fine but you must also defend against
redirect- and DNS-based SSRF: when performing the proxied request, set fetch
options to disallow automatic redirects (e.g., redirect: 'manual' or 'error')
and inspect Location headers to prevent forwarding to private addresses;
additionally, add DNS resolution and private-IP checks (reuse the
IP/private-range logic from credential.schema.ts) before issuing the request in
the proxy flow so any upstream URL that resolves to
RFC1918/loopback/link-local/metadata ranges is rejected with an error.
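A rough sketch of the private-range check (IPv4 only; a production version would also cover IPv6 and resolve DNS before checking, as the prompt notes):

```typescript
import net from 'node:net';

// Sketch: reject loopback, RFC 1918, and link-local ranges — the last one
// includes the 169.254.169.254 cloud metadata address. Not exhaustive.
function isPrivateIPv4(ip: string): boolean {
  if (!net.isIPv4(ip)) return false;
  const [a, b] = ip.split('.').map(Number);
  return (
    a === 10 ||
    a === 127 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168) ||
    (a === 169 && b === 254)
  );
}
```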
apps/web/components/credentials/add-credential-modal.tsx-111-128 (1)

111-128: ⚠️ Potential issue | 🟠 Major

Custom provider for client credentials will fail on backend.

When selectedProvider === "custom" (line 118), the frontend sends provider: "custom" to the backend route (/api/oauth/client-credentials). However, the backend (line 38) calls getProviderPreset(input.provider), which looks up the provider in PROVIDERS_BY_ID. Since "custom" is not a registered provider preset, this returns undefined, causing a 400 error: Unknown provider: custom (line 40).

This is asymmetrical with the oauth2_authcode flow, which explicitly rejects custom at line 86-87.

Fix by either:

  1. Hide the custom option from ProviderPicker when authType === "oauth2_client_creds" (consistent with authcode), or
  2. Add backend support for custom providers by requiring additional fields (e.g., tokenEndpoint, scopeSeparator) in the request payload and handling them separately.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/credentials/add-credential-modal.tsx` around lines 111 -
128, The frontend currently sends provider: "custom" for client credentials
which the backend rejects; to fix, disallow selecting the "custom" provider when
authType === "oauth2_client_creds" by filtering/hiding the custom option in the
ProviderPicker used by add-credential-modal (ensure selectedProvider cannot be
"custom" before clientCreds.mutateAsync is called), and keep the existing
validation around clientId/clientSecret and providerId (where providerId =
selectedProvider === "custom" ? "custom" : selectedProvider.id) so the form
never submits "custom" for the client-credentials flow.
apps/web/app/api/oauth/authorize/route.ts-22-25 (1)

22-25: ⚠️ Potential issue | 🟠 Major

Don’t derive the OAuth redirect URI directly from the inbound request origin.

Using request.url here makes redirect_uri depend on the current Host/proxy setup. That is brittle for providers that require an exact registered callback URL, and it becomes especially risky now that this PR adds custom-domain and host-header routing. Build this from a canonical configured app origin or a validated allowlist instead.

Also applies to: 73-76

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/oauth/authorize/route.ts` around lines 22 - 25, The
buildCallbackUrl function currently uses request.url which ties redirect_uri to
the incoming Host; change it to use a canonical configured origin (or an
allowed-origin lookup) instead: read the app canonical origin from configuration
(env var or app settings) and validate it against an allowlist if needed, then
construct the callback using that canonical origin (e.g.,
`${canonicalOrigin}/api/oauth/callback`) inside buildCallbackUrl; also update
the equivalent logic referenced around lines 73-76 to use the same
canonical-origin / allowlist approach rather than request.url.
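One way this might look, assuming a hypothetical APP_ORIGIN env var holds the canonical origin:

```typescript
// Sketch: build the redirect_uri from configuration, not the inbound Host.
// APP_ORIGIN is an assumed variable name, e.g. "https://app.example.com".
function buildCallbackUrl(): string {
  const origin = process.env.APP_ORIGIN;
  if (!origin) {
    throw new Error('APP_ORIGIN must be configured for OAuth callbacks');
  }
  return new URL('/api/oauth/callback', origin).toString();
}
```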
apps/cli/__tests__/server/lightweight.test.ts-418-427 (1)

418-427: ⚠️ Potential issue | 🟠 Major

Wait for the first SSE session to be established before asserting the 409.

The fixed 50ms sleep does not guarantee the first /sse request has registered its session yet, so the second connect can intermittently win the race and return 200. Wait for the first response or the initial endpoint event instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/__tests__/server/lightweight.test.ts` around lines 418 - 427, The
test currently sleeps 50ms after starting the first SSE request (ctrl1 /
res1Promise) which can race; instead wait for the first SSE to be established by
awaiting the first response or the initial SSE "endpoint" event before issuing
the second fetch. Concretely, use res1Promise (the fetch started with
AbortController ctrl1) and await its response (or parse its readable stream
until you see the "endpoint" event) to ensure the session is registered, then
perform the second fetch and assert status 409; keep the AbortController ctrl1
to close the first connection after the assertion.
apps/runtime/__tests__/unit/token-refresher.test.ts-95-103 (1)

95-103: ⚠️ Potential issue | 🟠 Major

Freeze time or widen this boundary case.

This assertion is only 1ms past the cutoff while futureDate() and isTokenExpired() read the clock independently. A tiny scheduling delay is enough to flip it to expired and make the test flaky in CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/__tests__/unit/token-refresher.test.ts` around lines 95 - 103,
The test is flaky because futureDate() and isTokenExpired() sample the clock
separately; freeze time or widen the margin so the 1ms boundary can't flip due
to scheduling. Update the test around the two cases to either (a) use Jest fake
timers (jest.useFakeTimers / jest.setSystemTime) so
futureDate(TOKEN_EXPIRY_BUFFER_MS + 1) and isTokenExpired() see the same time,
or (b) increase the offset from +1ms to a safer delta (e.g., +10ms) when
creating expiresAt; reference the test functions and constants futureDate,
isTokenExpired, and TOKEN_EXPIRY_BUFFER_MS to locate and modify the assertions
accordingly.
apps/cli/__tests__/server/lightweight.test.ts-63-95 (1)

63-95: ⚠️ Potential issue | 🟠 Major

This helper has a TOCTOU port race and never closes the server.

startTestServer() probes a free port with a temporary listener, closes it, sleeps 10ms, and then asks startServer() to bind the same port. Another worker can grab that port in between, and the returned stop() is a no-op, so these listeners stay alive for the rest of the worker. Please have startServer() return the actual http.Server bound on port 0, then read the assigned port and close it in afterAll.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/__tests__/server/lightweight.test.ts` around lines 63 - 95, The
startTestServer helper has a TOCTOU race and never closes servers; change the
flow so startServer returns the actual http.Server it starts (instead of only
resolving when listening) and update startTestServer to call startServer with
port: 0 (or omit port so the server binds an ephemeral port), read the assigned
port from the returned server.address().port to build baseUrl, and implement
stop to call server.close() (and register server.close() in afterAll in the test
file); update any call sites of startServer to handle the returned http.Server
and remove the temporary probe/sleep logic and the no-op stop.
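The race-free pattern is to let the OS assign the port atomically by binding port 0, then read the real port off the bound server. A minimal stand-alone sketch (not the actual CLI helper — `startServer()` there wraps an Express app):

```typescript
import http from 'node:http';
import type { AddressInfo } from 'node:net';

// Bind port 0 so the kernel picks a free port atomically — there is no
// probe/close/sleep/rebind window for another worker to steal the port.
async function startTestServer(): Promise<{ baseUrl: string; stop: () => Promise<void> }> {
  const server = http.createServer((_req, res) => res.end('ok'));
  await new Promise<void>((resolve) => server.listen(0, '127.0.0.1', resolve));
  const { port } = server.address() as AddressInfo;
  return {
    baseUrl: `http://127.0.0.1:${port}`,
    // stop() actually closes the listener instead of being a no-op.
    stop: () =>
      new Promise<void>((resolve, reject) =>
        server.close((err) => (err ? reject(err) : resolve())),
      ),
  };
}
```

In the test file, stash the returned `stop` and call it from `afterAll` so listeners don't outlive the suite.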
apps/cli/src/commands/transform.ts-39-63 (1)

39-63: ⚠️ Potential issue | 🟠 Major

Wrap the conversion/output path in the same CLI error handling.

From Line 39 onward, autoConvert(), parseSpec(), transformSpec(), and writeFile() can still throw, which will surface as an unhandled stack trace instead of the stable Error: ... output used above.

🛡️ Proposed fix
-    const converted = await autoConvert(raw);
-    if (converted.converted) {
-      process.stderr.write(`Auto-converted from Swagger ${converted.originalVersion} to OpenAPI 3.0\n`);
-    }
-
-    const parsed = parseSpec({ spec: converted.spec as Record<string, unknown> });
-    const result = transformSpec({
-      spec: parsed.spec,
-      filterTags: argv['filterTags'] as string[] | undefined,
-      filterMethods: argv['filterMethods'] as HttpMethod[] | undefined,
-      includeDeprecated: argv['includeDeprecated'] as boolean,
-    });
-
-    if (result.warnings.length > 0) {
-      for (const w of result.warnings) {
-        process.stderr.write(`warning: ${w.message}\n`);
-      }
-    }
-
-    const output = JSON.stringify(result.tools, null, 2);
-
-    if (argv['output']) {
-      await fs.writeFile(argv['output'] as string, output, 'utf-8');
-      process.stderr.write(`Wrote ${result.tools.length} tools to ${argv['output']}\n`);
-    } else {
-      process.stdout.write(output + '\n');
-    }
+    try {
+      const converted = await autoConvert(raw);
+      if (converted.converted) {
+        process.stderr.write(`Auto-converted from Swagger ${converted.originalVersion} to OpenAPI 3.0\n`);
+      }
+
+      const parsed = parseSpec({ spec: converted.spec as Record<string, unknown> });
+      const result = transformSpec({
+        spec: parsed.spec,
+        filterTags: argv['filterTags'] as string[] | undefined,
+        filterMethods: argv['filterMethods'] as HttpMethod[] | undefined,
+        includeDeprecated: argv['includeDeprecated'] as boolean,
+      });
+
+      for (const w of result.warnings) {
+        process.stderr.write(`warning: ${w.message}\n`);
+      }
+
+      const output = JSON.stringify(result.tools, null, 2);
+      if (argv['output']) {
+        await fs.writeFile(argv['output'] as string, output, 'utf-8');
+        process.stderr.write(`Wrote ${result.tools.length} tools to ${argv['output']}\n`);
+      } else {
+        process.stdout.write(output + '\n');
+      }
+    } catch (err) {
+      const msg = err instanceof Error ? err.message : String(err);
+      process.stderr.write(`Error: ${msg}\n`);
+      process.exit(1);
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/commands/transform.ts` around lines 39 - 63, Wrap the sequence
that calls autoConvert, parseSpec, transformSpec and the optional fs.writeFile
(the block using converted, parsed, result, output and argv['output']) in the
same top-level CLI try/catch used elsewhere so any thrown errors produce the
normalized "Error: ..." output instead of an unhandled stack trace;
specifically, enclose calls to autoConvert, parseSpec, transformSpec and the
awaited fs.writeFile in a try block and in the catch call the same
error-reporting/exit behavior your CLI uses (use process.stderr.write or the
existing CLI error helper and exit non-zero) so all failures are handled
consistently.
apps/cli/src/commands/serve.ts-27-39 (1)

27-39: ⚠️ Potential issue | 🟠 Major

Persist the registry fallback base URL before starting the server.

Lines 56-60 derive a fallback baseUrl, but they only log it. startServer() still receives the original config, so serve --registry ... can print "Using base URL..." and then proxy without one.

🛠️ Suggested fix
-    const config = await loadConfig({
+    let config = await loadConfig({
@@
         const baseUrl = servers?.[0]?.['url'] as string | undefined;
         if (baseUrl) {
+          config = { ...config, baseUrl };
           process.stdout.write(`Using base URL from spec: ${baseUrl}\n`);
         }

Also applies to: 55-60

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/commands/serve.ts` around lines 27 - 39, The code computes a
fallback baseUrl (derived for registry use) but only logs it instead of updating
the runtime config, so startServer(config) still gets the original config
without the fallback; update the config returned by loadConfig to persist the
fallback (e.g., set config.baseUrl = fallbackBaseUrl or create a newConfig with
baseUrl overridden) before calling startServer, referencing the loadConfig
result (config) and the startServer invocation to ensure the server uses the
registry-derived base URL.
apps/web/app/dashboard/analytics/page.tsx-40-45 (1)

40-45: ⚠️ Potential issue | 🟠 Major

Reorder the top-level state branches.

On the first render, serversLoading is true while serverId is still "", so Line 93 shows "No servers yet" instead of the skeleton. The same ordering also makes failed queries fall through to empty states, which hides outages as "No data yet."

🩹 Suggested fix
-  const { data: servers, isLoading: serversLoading } = useServers();
+  const { data: servers, isLoading: serversLoading, isError: serversError } = useServers();
@@
-  const { data, isLoading: analyticsLoading } = useAnalytics(serverId, range);
+  const { data, isLoading: analyticsLoading, isError: analyticsError } = useAnalytics(serverId, range);
@@
-        {!serverId || (!servers?.length && !serversLoading) ? (
-          <EmptyState icon={BarChart3} title="No servers yet" description="Create an MCP server to start seeing analytics." />
-        ) : isLoading ? (
+        {serversError || analyticsError ? (
+          <EmptyState
+            icon={Activity}
+            title="Couldn't load analytics"
+            description="Please try again in a moment."
+          />
+        ) : isLoading ? (
           <AnalyticsSkeleton />
+        ) : !serverId || !servers?.length ? (
+          <EmptyState icon={BarChart3} title="No servers yet" description="Create an MCP server to start seeing analytics." />
         ) : !data || data.overview.totalCalls === 0 ? (
           <EmptyAnalytics serverName={selectedServer?.name ?? "this server"} range={range} />
         ) : (

Also applies to: 93-99

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/analytics/page.tsx` around lines 40 - 45, The current
render computes serverId and selectedServer using servers before checking
serversLoading, causing loading states to be hidden; change the order so loading
wins: keep useServers() and useAnalytics(serverId, range) calls but derive
serverId only after servers have finished loading (e.g., const serverId =
serversLoading ? "" : (selectedServerId ?? servers?.[0]?.id ?? "")), compute
selectedServer from servers only when not loading, and set isLoading =
serversLoading || (selectedServer ? analyticsLoading : false) so the UI shows
the skeleton while servers load and surfaces failed queries instead of falling
through to empty-state messages; update references to useServers,
serversLoading, selectedServerId, serverId, selectedServer, useAnalytics,
analyticsLoading, and isLoading accordingly.
apps/web/lib/db/repositories/profile.repository.ts-93-108 (1)

93-108: ⚠️ Potential issue | 🟠 Major

Enforce the default-profile guard on updates too.

Line 124 already treats default profiles as protected during delete, but update() still lets callers rewrite them in place. That means the generated preset profiles can silently change semantics without ever being deleted.

🔒 Suggested fix
     const rows = await this.db
       .update(accessProfiles)
       .set(updateValues)
-      .where(and(eq(accessProfiles.id, id), eq(accessProfiles.userId, userId)))
+      .where(
+        and(
+          eq(accessProfiles.id, id),
+          eq(accessProfiles.userId, userId),
+          eq(accessProfiles.isDefault, false),
+        ),
+      )
       .returning();
 
     if (rows.length === 0) {
-      throw new Error('Profile not found or access denied');
+      throw new Error('Profile not found, access denied, or is a default profile');
     }

Also applies to: 117-130

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/repositories/profile.repository.ts` around lines 93 - 108,
Before performing the update in the update(userId: string, id: string, input:
UpdateProfileInput) method, load the existing access profile (select from
accessProfiles where id and userId match) and if its default flag is set (the
default/preset flag on accessProfiles) throw an error to prevent modification of
default profiles; only proceed to call this.db.update(accessProfiles)...set(...)
if the existing row exists and is not a default/preset profile.
apps/cli/src/server/lightweight.ts-128-143 (1)

128-143: ⚠️ Potential issue | 🟠 Major

Guard against the SSE session disappearing while the tool call is in flight.

Line 140 awaits handleJsonRpc(), but the close handler can null out session before Line 142 runs. At that point session.res.write(...) will throw and the POST fails.

🐛 Suggested fix
   app.post('/message', async (req: Request, res: Response) => {
     const callerSessionId = req.headers['x-session-id'] as string | undefined;
-    if (!session || session.id !== callerSessionId) {
+    const activeSession = session;
+    if (!activeSession || activeSession.id !== callerSessionId) {
       res.status(401).json({ error: 'Invalid or missing session' });
       return;
     }
 
     if (!isValidJsonRpc(req.body)) {
       res.status(400).json({ error: 'Invalid JSON-RPC request' });
       return;
     }
 
     const response = await handleJsonRpc(req.body, config, toolMap, logger);
-
-    session.res.write(`event: message\ndata: ${JSON.stringify(response)}\n\n`);
+    if (session?.id !== activeSession.id || activeSession.res.destroyed || activeSession.res.writableEnded) {
+      res.status(410).json({ error: 'Session closed' });
+      return;
+    }
+
+    activeSession.res.write(`event: message\ndata: ${JSON.stringify(response)}\n\n`);
     res.status(202).json({ ok: true });
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/server/lightweight.ts` around lines 128 - 143, The POST /message
handler can null out session while awaiting handleJsonRpc, causing
session.res.write to throw; fix by capturing the current session into a local
const (e.g., const currentSession = session) before calling await handleJsonRpc,
then after the await verify currentSession is still valid (not null and has a
writable res) before calling currentSession.res.write, and otherwise skip the
SSE write (optionally log or handle the dropped session) to avoid throwing;
apply this change in the app.post('/message' ...) handler around the
handleJsonRpc and session.res.write usage.
apps/cli/src/server/lightweight.ts-10-15 (1)

10-15: ⚠️ Potential issue | 🟠 Major

Accept JSON-RPC notifications (without id) during the MCP handshake.

isValidJsonRpc() requires id on all messages, but MCP notifications like notifications/initialized are sent without an id field per the JSON-RPC specification. The current validation rejects spec-compliant notifications right after initialize, breaking clients. Notifications must be accepted and handled without emitting a JSON-RPC response.

Also applies to: 35-42, 128-143, 153-175

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/cli/src/server/lightweight.ts` around lines 10 - 15, The JSON-RPC
Request interface and validation currently require an id, rejecting
spec-compliant notifications; update the JsonRpcRequest type so id is optional
(id?: string | number) and change the isValidJsonRpc(...) validation function to
accept messages that lack id as valid notifications. Also audit places that
assume id exists (handlers that call sendResponse/sendError) — e.g., the code
paths referenced by isValidJsonRpc and the MCP handshake handlers — and guard
those response-emitting calls so they only send a JSON-RPC response when
message.id is present (treat messages without id as notifications that are
handled silently).
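A validator that admits notifications can be sketched as follows (hypothetical shape — the CLI's actual `JsonRpcRequest` type may carry more fields):

```typescript
// Per JSON-RPC 2.0, a message without "id" is a notification: it must be
// accepted and handled, but no response may be emitted for it.
interface JsonRpcMessage {
  jsonrpc: '2.0';
  method: string;
  id?: string | number; // absent => notification
  params?: unknown;
}

function isValidJsonRpc(body: unknown): body is JsonRpcMessage {
  if (typeof body !== 'object' || body === null) return false;
  const msg = body as Record<string, unknown>;
  if (msg['jsonrpc'] !== '2.0' || typeof msg['method'] !== 'string') return false;
  // "id", when present, must be a string or number; when absent the
  // message is still valid — it is a notification.
  if ('id' in msg && typeof msg['id'] !== 'string' && typeof msg['id'] !== 'number') return false;
  return true;
}

function isNotification(msg: JsonRpcMessage): boolean {
  return msg.id === undefined;
}
```

Response-emitting paths then branch on `isNotification()` so `notifications/initialized` is swallowed silently instead of rejected.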
apps/web/app/api/servers/[id]/profiles/route.ts-70-85 (1)

70-85: ⚠️ Potential issue | 🟠 Major

Enforce the [id] path scope on profile mutations.

serverId is validated in both handlers but never reaches the repository call. A user who owns multiple servers can update or delete a profile from server B through /servers/server-a/profiles?... as long as they know profileId. Include serverId in the update/delete criteria.

Suggested route-side change
-    const profile = await profileRepo.update(userId, profileId, input);
+    const profile = await profileRepo.update(userId, serverId, profileId, input);
...
-    await profileRepo.delete(userId, profileId);
+    await profileRepo.delete(userId, serverId, profileId);

The repository implementation needs the matching serverId constraint in its lookup/WHERE clause as well.

Also applies to: 97-109

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/servers/`[id]/profiles/route.ts around lines 70 - 85, The
route validates serverId but doesn't pass it to the repository, allowing
cross-server profile mutations; update the handlers that call
ProfileRepository.update and the delete handler (the code that reads profileId
and calls ProfileRepository.delete/erase) to pass serverId alongside userId and
profileId, and then modify the ProfileRepository's update/delete lookup to
include server_id = serverId in the WHERE clause (ensure the repository methods
accept a serverId parameter and use it in their query criteria) so profile
operations are scoped to the [id] path; also keep existing uuidParam.parse and
schema validation intact.
apps/web/app/api/servers/[id]/domain/route.ts-99-110 (1)

99-110: ⚠️ Potential issue | 🟠 Major

Make the PUT path idempotent for an unchanged hostname.

This always rotates domainVerificationToken and clears domainVerifiedAt, even when normalizedDomain already matches server.customDomain. Re-saving settings would de-verify a working production domain. Only reset verification when the hostname actually changes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/api/servers/`[id]/domain/route.ts around lines 99 - 110, The PUT
handler currently always generates verificationToken (randomBytes(16)) and calls
db.update(mcpServers).set(...) resetting domainVerificationToken and
domainVerifiedAt; change it to first compare normalizedDomain with the existing
server.customDomain and only rotate the token / null domainVerifiedAt when the
hostname actually changed; if unchanged, avoid generating a new
verificationToken and update only other fields (or return early) so an identical
domain save does not de-verify the domain.
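The idempotence check reduces to comparing the normalized hostname against the stored one before building the update payload. A minimal sketch (hypothetical helper and field names mirroring the columns mentioned above):

```typescript
// Only rotate verification state when the hostname actually changes;
// an identical re-save keeps domainVerifiedAt intact.
function buildDomainUpdate(
  existingDomain: string | null,
  normalizedDomain: string,
  newToken: () => string,
): { customDomain: string; domainVerificationToken?: string; domainVerifiedAt?: null } {
  if (existingDomain === normalizedDomain) {
    return { customDomain: normalizedDomain }; // idempotent: no de-verification
  }
  return {
    customDomain: normalizedDomain,
    domainVerificationToken: newToken(),
    domainVerifiedAt: null, // hostname changed — verification must restart
  };
}
```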
apps/runtime/src/oauth/token-refresher.ts-87-104 (1)

87-104: ⚠️ Potential issue | 🟠 Major

Omit client_secret from the request body when the credential is public.

The encrypted_client_secret field is optional (string | null in the schema), but the current code always includes client_secret= in the request body—even when empty. OAuth 2.0 providers, especially those using PKCE for public clients, treat a blank client secret as failed authentication and reject the request. When encrypted_client_secret is null, the parameter should not be included at all.

Suggested change
-  let clientSecret: string;
+  let clientSecret: string | null;
   try {
     refreshToken = decryptFn(encryptedRefreshToken);
     clientSecret = credential.encrypted_client_secret
       ? decryptFn(credential.encrypted_client_secret)
-      : '';
+      : null;
   } catch {
     throw new CredentialExpiredError('Failed to decrypt OAuth credentials');
   }
 
   // Exchange refresh token for new access token
   const body = new URLSearchParams({
     grant_type: 'refresh_token',
     refresh_token: refreshToken,
     client_id: clientId,
-    client_secret: clientSecret,
   });
+  if (clientSecret) {
+    body.set('client_secret', clientSecret);
+  }

Note: The test at line 184–194 of the test file validates the current (problematic) behavior and will need to be updated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/oauth/token-refresher.ts` around lines 87 - 104, The request
body currently always includes client_secret (possibly empty) which breaks
public-client refreshes; modify the code around decryptFn, clientSecret, and the
URLSearchParams "body" construction so that client_secret is only added when
credential.encrypted_client_secret is non-null (i.e., when decryptFn produced a
non-empty clientSecret). Concretely, build the params (grant_type,
refresh_token, client_id) first and then conditionally append client_secret to
the URLSearchParams if clientSecret is truthy, instead of passing
client_secret='' into new URLSearchParams; also update the related test that
expects the old behavior.

Comment on lines +48 to +68
if (redis) {
const lockKey = `${REFRESH_LOCK_PREFIX}${credential.id}`;
const acquired = await redis.set(lockKey, '1', 'EX', REFRESH_LOCK_TTL_SECONDS, 'NX');
if (!acquired) {
// Another worker is refreshing — wait briefly then re-read the fresh token
logger.debug({ credentialId: credential.id }, 'Refresh lock held by another worker, waiting');
await new Promise((r) => setTimeout(r, 2000));
const { rows } = await db.query<{ readonly encrypted_key: string }>(
'SELECT encrypted_key FROM credentials WHERE id = $1',
[credential.id],
);
const fresh = rows[0];
if (!fresh) throw new CredentialExpiredError('Credential not found after lock wait');
return decryptFn(fresh.encrypted_key);
}
// Lock acquired — proceed with refresh, release in finally block
try {
return await doRefresh(db, logger, decryptFn, encryptFn, credential);
} finally {
await redis.del(lockKey).catch(() => {});
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n apps/runtime/src/oauth/token-refresher.ts

Repository: Work90210/APIFold

Length of output: 7775


The Redis refresh lock is unsafe against TTL expiry.

This lock stores a constant value '1' and unconditionally deletes the key in finally. If doRefresh takes longer than the 30-second TTL—possible given network latency, decryption, and database operations—another worker can acquire the same key. When the first worker's finally block executes, it deletes the second worker's lock, breaking the mutual exclusion.

The non-owner path (lines 54–61) has an additional flaw: it sleeps 2 seconds, then re-reads encrypted_key from the database without checking whether token_expires_at is still valid. If the freshly-refreshed token has already expired by the time the waiter reads it—or if the wait itself causes the window to close—the code returns a stale token. The caller can then immediately retry refresh, recreating the double-refresh race the lock is meant to prevent, especially with providers that rotate refresh tokens on use.

Use an owner token (e.g., a UUID in the lock value) with compare-and-delete for safe unlock. In the waiter path, query token_expires_at alongside encrypted_key and only reuse the token after confirming it is no longer expired; otherwise, re-attempt refresh.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/oauth/token-refresher.ts` around lines 48 - 68, Replace the
brittle constant-lock pattern around REFRESH_LOCK_PREFIX+credential.id with an
owner-token lock: generate a unique owner id (UUID) and set it as the value in
redis.set(..., 'NX', 'EX', REFRESH_LOCK_TTL_SECONDS); when releasing the lock in
the finally block, perform a compare-and-delete (EVAL Lua script or Redis client
atomic compare-and-del) that only deletes the key if its value equals the owner
id to avoid removing another worker's lock; also consider renewing the TTL only
if you still hold the same owner id. In the waiter path (the branch where
acquired is falsy) re-query the DB for both encrypted_key and token_expires_at,
check token_expires_at to ensure the token is still valid before returning
decryptFn(encrypted_key); if expired, retry the refresh flow (i.e., attempt to
acquire the lock and call doRefresh) instead of returning a stale token. Apply
these changes around the existing symbols: REFRESH_LOCK_PREFIX, credential.id,
redis.set, redis.del (replace with compare-and-delete), decryptFn, doRefresh and
the DB query that selects encrypted_key so it also selects token_expires_at;
keep throwing CredentialExpiredError if credential row is missing.
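The owner-token semantics can be demonstrated with an in-memory stand-in for Redis. This is only a model of the invariant — in production the compare-and-delete must be atomic on the Redis side, e.g. via `EVAL` of `if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) end return 0`:

```typescript
import { randomUUID } from 'node:crypto';

const store = new Map<string, string>(); // in-memory stand-in for Redis

// NX acquire: store a unique owner id as the lock value.
function acquire(key: string): string | null {
  if (store.has(key)) return null;
  const owner = randomUUID();
  store.set(key, owner);
  return owner;
}

// Compare-and-delete release: only remove the key if we still own it. A lock
// that expired via TTL and was re-acquired by another worker holds a
// different owner token, so a late release() leaves it untouched.
function release(key: string, owner: string): boolean {
  if (store.get(key) !== owner) return false;
  store.delete(key);
  return true;
}
```

With the constant-`'1'` value, the late `release` in the scenario below would have deleted the second worker's lock; with owner tokens it is a no-op.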

Comment on lines +58 to +65
{
id: 'github',
name: 'GitHub',
authorizationEndpoint: 'https://github.com/login/oauth/authorize',
tokenEndpoint: 'https://github.com/login/oauth/access_token',
defaultScopes: ['repo', 'read:user'],
scopeSeparator: ' ',
},

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Check if the OAuth callback/token exchange code handles GitHub's response format

# Search for token exchange implementation that uses the GitHub token endpoint
rg -n -A 10 "oauth.v2.access|oauth/access_token" --type=ts

# Search for Accept header handling in OAuth code
rg -n "Accept.*application/json" apps/web/app/api/oauth/ apps/web/lib/oauth/

Repository: Work90210/APIFold

Length of output: 3646


🏁 Script executed:

# Search for OAuth route handlers and callback implementations
fd -t f -e ts -e js "oauth" apps/web/app/api/ 2>/dev/null | head -20

Repository: Work90210/APIFold

Length of output: 43


🏁 Script executed:

# Search for token exchange or callback implementations
rg -n "callback|exchange|token" apps/web/app/api/ --type=ts -l 2>/dev/null | head -20

Repository: Work90210/APIFold

Length of output: 291


🏁 Script executed:

# Look for response parsing logic
rg -n "URLSearchParams|parse|JSON.parse" apps/web/app/api/ --type=ts -B 2 -A 5 2>/dev/null | head -50

Repository: Work90210/APIFold

Length of output: 3487


🏁 Script executed:

cat -n apps/web/app/api/oauth/callback/route.ts

Repository: Work90210/APIFold

Length of output: 5343


🏁 Script executed:

cat -n apps/web/lib/oauth/token-exchange.ts

Repository: Work90210/APIFold

Length of output: 4599


🌐 Web query:

GitHub OAuth token endpoint response format Accept header json

💡 Result:

The GitHub OAuth token endpoint is POST https://github.com/login/oauth/access_token. It returns OAuth access tokens for OAuth apps during the authorization code flow (web application flow) or device code flow.

By default (no Accept header, or Accept: text/plain), the response is form-encoded:

access_token=gho_16C7e42F292c6912E7710c838347Ae178B4a&scope=repo,gist&token_type=bearer

With Accept: application/json, the response is JSON:

{ "access_token": "gho_16C7e42F292c6912E7710c838347Ae178B4a", "scope": "repo,gist", "token_type": "bearer" }

With Accept: application/xml, the response is XML:

<token_type>bearer</token_type> <scope>repo,gist</scope> <access_token>gho_16C7e42F292c6912E7710c838347Ae178B4a</access_token>

This format is consistent across flows (authorization code and device code). The official documentation is at https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/authorizing-oauth-apps.



Add Accept: application/json header to GitHub token endpoint requests and handle both response formats.

GitHub's token endpoint defaults to returning URL-encoded responses (access_token=...&scope=...&token_type=bearer). The current executeTokenRequest function at line 100-105 in apps/web/lib/oauth/token-exchange.ts does not set the Accept: application/json header and assumes a JSON response without checking the response Content-Type. When GitHub returns its default URL-encoded format, the response.json() call at line 117 will fail with a parsing error, causing token exchange to fail.

Either set the Accept: application/json header in the fetch request to ensure JSON responses, or parse both URL-encoded and JSON response formats as fallbacks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/oauth/providers.ts` around lines 58 - 65, The GitHub provider's
token exchange can return URL-encoded or JSON responses; update the
executeTokenRequest function in apps/web/lib/oauth/token-exchange.ts to set the
request header "Accept: application/json" for the GitHub tokenEndpoint (provider
id 'github') and also robustly handle both response formats: after fetch, check
response.headers.get('content-type') and attempt response.json(), but if parsing
fails or content-type indicates URL-encoded, fall back to parsing
response.text() with URLSearchParams to extract access_token, scope, and
token_type; ensure any downstream code that reads the token (same function) uses
the parsed values consistently.
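The dual-format fallback can be sketched as a pure parser (hypothetical helper, not the actual `executeTokenRequest` internals):

```typescript
interface TokenResponse {
  access_token: string;
  token_type?: string;
  scope?: string;
}

// Parse a token response whether the provider returned JSON or GitHub's
// default form-encoded text, and reject responses missing access_token
// instead of coercing it to an empty string.
function parseTokenResponse(contentType: string | null, bodyText: string): TokenResponse {
  let parsed: Record<string, unknown>;
  if (contentType?.includes('application/json')) {
    parsed = JSON.parse(bodyText) as Record<string, unknown>;
  } else {
    parsed = Object.fromEntries(new URLSearchParams(bodyText));
  }
  const accessToken = parsed['access_token'];
  if (typeof accessToken !== 'string' || accessToken.length === 0) {
    throw new Error('Token response missing access_token');
  }
  return {
    access_token: accessToken,
    token_type: typeof parsed['token_type'] === 'string' ? parsed['token_type'] : undefined,
    scope: typeof parsed['scope'] === 'string' ? parsed['scope'] : undefined,
  };
}
```

Setting `Accept: application/json` on the request remains the primary fix; the fallback guards against providers that ignore the header.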

Comment on lines +15 to +23
function validateTokenEndpoint(endpoint: string): void {
const parsed = new URL(endpoint);
if (!ALLOWED_PROTOCOLS.has(parsed.protocol)) {
throw new Error('Token endpoint must use HTTPS');
}
if (PRIVATE_HOSTNAME_PATTERNS.some((p) => p.test(parsed.hostname))) {
throw new Error('Token endpoint must not point to a private address');
}
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n apps/web/lib/oauth/token-exchange.ts

Repository: Work90210/APIFold

Length of output: 4599


Harden the token request against DNS rebinding and redirect SSRF vulnerabilities.

The endpoint validation checks only the hostname string at validation time, not the resolved IP addresses. A DNS rebinding attack could return a public address during validation and a private RFC1918 address during the actual fetch() call. Additionally, the fetch() call follows redirects by default, allowing an attacker-controlled endpoint to redirect to a private address and bypass validation entirely, exposing client credentials in the request body.

Prevent redirect-based SSRF by rejecting redirects for this request:

Fix redirect SSRF
     response = await fetch(tokenEndpoint, {
       method: 'POST',
+      redirect: 'error',
       headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
       body: body.toString(),
       signal: controller.signal,
     });

For full SSRF mitigation, also resolve and pin the final IP address at request time, rejecting RFC1918/link-local addresses before fetch() executes.

Additionally, missing access_token fields in the response coerce to empty strings (line 120), masking malformed token responses as success. Validate required fields are present and non-empty.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/oauth/token-exchange.ts` around lines 15 - 23, The current
validateTokenEndpoint only checks the URL string and the token fetch follows
redirects and doesn't re-check resolved IPs, so fix by: 1) Update the token
exchange path that performs the POST (the fetch call) to disallow redirects (set
fetch option redirect to "error" or "manual"/"error" depending on runtime) so
redirect-based SSRF cannot bypass validation; 2) Before calling fetch, perform
DNS resolution (e.g., using dns.promises.lookup or lookupAll) for the
parsed.hostname and pin the resolved IP address to be used for the request,
immediately reject the request if any resolved IP is in RFC1918/link-local or
other PRIVATE_HOSTNAME_PATTERNS (and also check IPv6 ULA/loopback); 3) Ensure
validateTokenEndpoint remains for quick syntactic checks but add the runtime IP
check in the same token-exchange flow (reference validateTokenEndpoint and the
function that executes the fetch/post request); and 4) After receiving the token
response, validate that required fields like access_token exist and are
non-empty (do not coerce missing access_token to empty string) and return an
explicit error if fields are missing or empty.
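The private-range check applied to each resolved address can be sketched as below. This is a rough illustration, not a complete classifier — a production check should also cover CGNAT (100.64.0.0/10) and unwrap IPv4-mapped IPv6 rather than rejecting it outright:

```typescript
import { isIP } from 'node:net';

// Reject resolved addresses in private / loopback / link-local ranges before
// the pinned request fires (used together with redirect: 'error' on fetch).
function isPrivateAddress(ip: string): boolean {
  if (isIP(ip) === 4) {
    const [a, b] = ip.split('.').map(Number);
    return (
      a === 10 ||                          // 10.0.0.0/8
      (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
      (a === 192 && b === 168) ||          // 192.168.0.0/16
      a === 127 ||                         // loopback
      (a === 169 && b === 254) ||          // link-local
      a === 0
    );
  }
  const lower = ip.toLowerCase();
  return (
    lower === '::1' ||                                    // IPv6 loopback
    lower.startsWith('fc') || lower.startsWith('fd') ||   // ULA fc00::/7
    lower.startsWith('fe8') || lower.startsWith('fe9') ||
    lower.startsWith('fea') || lower.startsWith('feb') || // link-local fe80::/10
    lower.startsWith('::ffff:')                           // v4-mapped: rejected conservatively
  );
}
```

The exchange flow would `dns.promises.lookup(parsed.hostname, { all: true })`, run every result through this check, and abort before `fetch()` if any address is private.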

on_macos do
on_arm do
url "https://github.com/kylefuhri/apifold/releases/download/cli-v#{version}/apifold-darwin-arm64"
sha256 "PLACEHOLDER"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

git ls-files homebrew-apifold/Formula/apifold.rb

Repository: Work90210/APIFold

Length of output: 96


🏁 Script executed:

cat -n homebrew-apifold/Formula/apifold.rb

Repository: Work90210/APIFold

Length of output: 1780


Replace the placeholder SHA-256 digests before merging.

All four sha256 values are currently "PLACEHOLDER" (lines 11, 20, 31, 40). Homebrew requires valid SHA-256 digests for each artifact URL; the formula will fail to install on every target platform until these are replaced with actual checksums from the corresponding release artifacts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@homebrew-apifold/Formula/apifold.rb` at line 11, The formula currently has
four sha256 entries set to the string "PLACEHOLDER" which prevents installation;
replace each sha256 "PLACEHOLDER" call in apifold.rb with the real SHA-256
digest for the corresponding artifact (one digest per occurrence) obtained from
the release assets for each platform/URL referenced in the formula (ensure you
update the sha256 values for the four sha256(...) statements so they match the
actual downloaded files).
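Generating the digests is a one-liner per artifact. A sketch using a local placeholder file (in practice, run it over each downloaded release binary — apifold-darwin-arm64, apifold-darwin-x64, and so on):

```shell
# sha256sum (Linux) or shasum -a 256 (macOS) produces the digest Homebrew expects.
printf 'artifact bytes' > /tmp/apifold-darwin-arm64
if command -v sha256sum >/dev/null 2>&1; then
  digest=$(sha256sum /tmp/apifold-darwin-arm64 | awk '{print $1}')
else
  digest=$(shasum -a 256 /tmp/apifold-darwin-arm64 | awk '{print $1}')
fi
echo "sha256 \"$digest\""   # paste this line into the formula stanza
```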

@@ -0,0 +1 @@
{"openapi":"3.0.3","info":{"title":"Stripe API","version":"2023-10-16"},"servers":[{"url":"https://api.stripe.com/v1"}],"paths":{"/v1/charges":{"get":{"operationId":"listCharges","summary":"List charges","parameters":[{"name":"limit","in":"query","schema":{"type":"integer"}}],"responses":{"200":{"description":"OK"}}},"post":{"operationId":"createCharge","summary":"Create a charge","requestBody":{"content":{"application/json":{"schema":{"type":"object","properties":{"amount":{"type":"integer"},"currency":{"type":"string"}}}}}},"responses":{"200":{"description":"Created"}}}},"/v1/customers":{"get":{"operationId":"listCustomers","summary":"List customers","responses":{"200":{"description":"OK"}}},"post":{"operationId":"createCustomer","summary":"Create a customer","responses":{"200":{"description":"Created"}}}},"/v1/invoices":{"get":{"operationId":"listInvoices","summary":"List invoices","responses":{"200":{"description":"OK"}}}}}}

⚠️ Potential issue | 🔴 Critical

Critical: Duplicate /v1 in server URL and paths will produce incorrect URLs.

The server URL is https://api.stripe.com/v1 but paths also include /v1/ (e.g., /v1/charges). When combined, this produces https://api.stripe.com/v1/v1/charges, which is incorrect.

Either remove /v1 from the server URL or from all paths.

🐛 Fix: Remove /v1 from paths (recommended)
-{"openapi":"3.0.3","info":{"title":"Stripe API","version":"2023-10-16"},"servers":[{"url":"https://api.stripe.com/v1"}],"paths":{"/v1/charges":{"get":...
+{"openapi":"3.0.3","info":{"title":"Stripe API","version":"2023-10-16"},"servers":[{"url":"https://api.stripe.com/v1"}],"paths":{"/charges":{"get":...

Change all paths from /v1/charges, /v1/customers, /v1/invoices to /charges, /customers, /invoices.

Additionally, consider marking amount and currency as required in the createCharge request body schema, as these are required fields in the actual Stripe API.
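The doubled prefix is easy to demonstrate: per OpenAPI 3, a client builds the request URL by concatenating the server URL with the path template. A minimal sketch (the variable names are illustrative):

```typescript
// Per OpenAPI 3, the effective request URL is the Server Object's url
// concatenated with the path key, so a /v1 prefix in both places doubles up.
const serverUrl = "https://api.stripe.com/v1";

const brokenPath = "/v1/charges"; // as currently defined in the spec
const fixedPath = "/charges";     // recommended fix

console.log(serverUrl + brokenPath); // https://api.stripe.com/v1/v1/charges
console.log(serverUrl + fixedPath);  // https://api.stripe.com/v1/charges
```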

🧰 Tools
🪛 Checkov (3.2.508)

[high] 2: Ensure that the global security field has rules defined

(CKV_OPENAPI_4)


[high] 2: Ensure that security operations is not empty.

(CKV_OPENAPI_5)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/registry/specs/stripe/spec.json` at line 1, The OpenAPI spec has a
duplicated "/v1" because the server URL is "https://api.stripe.com/v1" while
paths are defined as "/v1/charges", "/v1/customers", "/v1/invoices"; update the
spec so paths are "/charges", "/customers", "/invoices" (or alternatively change
the server URL to "https://api.stripe.com" if you prefer keeping path prefixes),
and while editing the createCharge operation (operationId "createCharge") update
its requestBody schema to mark "amount" and "currency" as required properties.

Complete redesign of the dashboard UI to match Vercel's monochromatic,
data-dense aesthetic. Restructured information architecture with two-level
navigation (workspace → server detail sidebar). Added spec versioning with
diff engine, breaking change detection, and environment-based deployments.

UX Overhaul:
- Monochromatic design tokens (pure grays, blue accent, Geist fonts)
- Two-level navigation: dashboard home → server detail sidebar (9 items)
- All pages redesigned: compact rows, table layouts, tight spacing
- Marketing pages converted from navy/gradient to monochromatic
- Console redesigned with JSON editor, auto-templates, parameter hints
- Tools page as proper table with columns
- Inline expandable log rows with full request/response detail
- Server overview with production deployment card + deployments list

Spec Lifecycle Management:
- spec_versions and spec_releases database tables
- Diff engine comparing tools with breaking change detection
- Version history UI with expandable diff view
- Environment-based deployments (Production/Preview)
- Promote and Rollback actions
- Import flow auto-creates version entries

Infrastructure Fixes:
- Runtime tsx guard (.ts + .js), RUNTIME_PORT env, CORS config
- Request logging pipeline connected (was never wired)
- Detailed logs: request body, response body, headers, error messages
- SSRF guard Node 25 compatibility fix
- Server-level analytics page
- Server settings page with enable/disable + delete

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 10

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
apps/web/components/layout/command-palette.tsx (1)

75-100: ⚠️ Potential issue | 🟡 Minor

The palette’s accessibility semantics are inconsistent.

The search box has no explicit label, and the results are exposed as a listbox even though each option contains a nested button and there is no listbox keyboard model. Use a labelled input plus either plain list/button markup or a real combobox/listbox implementation.

♿ Suggested fix
           <input
             type="text"
+            aria-label="Search commands"
             placeholder="Type a command or search..."
             value={query}
             onChange={(e) => setQuery(e.target.value)}
             className="flex-1 bg-transparent text-sm outline-none placeholder:text-muted-foreground"
             autoFocus
           />
@@
-        <ul className="max-h-64 overflow-y-auto p-1" role="listbox">
+        <ul className="max-h-64 overflow-y-auto p-1">
@@
-            <li key={item.href} role="option" aria-selected={false}>
+            <li key={item.href}>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/layout/command-palette.tsx` around lines 75 - 100, The
component uses a labeled-as-listbox input without proper combobox behavior and
the input lacks an accessible label; update the input to include an explicit
label or aria-label (e.g., set aria-label="Command palette" on the input that
uses query and setQuery) and remove the incorrect listbox/listbox option
semantics from the results (remove role="listbox", role="option" and
aria-selected) so each result is a plain button that invokes handleSelect(href)
(from the filtered.map block); alternatively implement a full combobox pattern
if you want keyboard navigation, but the immediate fix is to add the input
label/aria-label and drop the listbox/option roles so the buttons are
accessible.
apps/web/app/dashboard/specs/new/page.tsx (2)

36-39: ⚠️ Potential issue | 🟠 Major

Reset the wizard state when this page mounts.

wizardStep is global, but parsedSpec is local. If the user leaves this page on step 1/2 and comes back, the store still points to that step while parsedSpec reinitializes to null, so none of the step panels render. Keep the step local here or call resetWizard() on mount/unmount.

Suggested fix
-import { useState } from "react";
+import { useEffect, useState } from "react";
...
   const importSpec = useImportSpec();
   const { toast } = useToast();
   const [parsedSpec, setParsedSpec] = useState<ParsedSpec | null>(null);
+
+  useEffect(() => {
+    resetWizard();
+    return () => resetWizard();
+  }, [resetWizard]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/specs/new/page.tsx` around lines 36 - 39, The global
wizardStep is out of sync with the local parsedSpec causing no panel to render;
call resetWizard() when this component mounts (and optionally on unmount) so the
global step is reset to the initial state to match parsedSpec, or alternatively
move wizardStep into local component state; locate useUIStore usage (wizardStep,
setWizardStep, resetWizard) and add a useEffect that invokes resetWizard() on
mount (and return resetWizard in the cleanup if you want to reset on unmount) to
fix the mismatch.

41-45: ⚠️ Potential issue | 🟠 Major

Move spec fetching/parsing off the client.

The URL path only works for hosts that allow browser CORS, and both paths assume JSON (res.json() / JSON.parse()), so valid YAML OpenAPI/Swagger documents fail outright. Posting the raw URL/file to a server endpoint and parsing there will make this flow work for real-world specs and keep URL validation in one place.

Also applies to: 53-56

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/specs/new/page.tsx` around lines 41 - 45, The
client-side handler handleUrlSubmit is fetching and JSON-parsing remote specs
(and the similar file handler at lines 53-56), which breaks on CORS and non-JSON
specs; instead, send the URL or uploaded file to a new server endpoint (e.g.,
POST /api/parse-spec) that fetches the resource server-side, detects/accepts
JSON or YAML, parses into a common shape (you can reuse or call extractSpecInfo
on the server), and returns the standardized info; update handleUrlSubmit to
POST { url } to that endpoint, consume the normalized response, and likewise
change the file upload handler to POST the file to the same endpoint so all
validation/parsing happens server-side.
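A minimal sketch of the suggested server-side handler. The route name `/api/parse-spec` and the `js-yaml` dependency are assumptions, not code that exists in this PR; the YAML branch is stubbed here to keep the sketch dependency-free (in the real endpoint, a single `yaml.load()` call from `js-yaml` would cover both formats, since JSON is a subset of YAML):

```typescript
// Parse a fetched spec body. JSON is tried first; the YAML branch is a
// stub — in the real endpoint, replace the throw with js-yaml's load(text).
function parseSpecText(text: string): unknown {
  try {
    return JSON.parse(text);
  } catch {
    throw new Error("Non-JSON input needs a YAML parser such as js-yaml");
  }
}

// Server-side fetch avoids the browser CORS restriction entirely and keeps
// URL validation in one place. Shape of the return value is illustrative.
async function handleParseSpec(url: string): Promise<{ spec?: unknown; error?: string }> {
  const res = await fetch(url);
  if (!res.ok) return { error: `Upstream returned ${res.status}` };
  try {
    return { spec: parseSpecText(await res.text()) };
  } catch (err) {
    return { error: err instanceof Error ? err.message : "Unparseable spec" };
  }
}
```

The client handlers then POST `{ url }` (or the uploaded file) to this endpoint and consume the normalized response, so CORS and YAML handling never reach the browser.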
apps/web/app/(marketing)/components/mobile-nav.tsx (1)

3-5: ⚠️ Potential issue | 🟡 Minor

Fix import order per ESLint rules.

The pipeline reports import order violations. Framework imports (next/link) and library imports (lucide-react) should come before react imports per your ESLint configuration.

Proposed fix
-import { useState } from "react";
 import Link from "next/link";
 import { Menu, X } from "lucide-react";
+import { useState } from "react";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/`(marketing)/components/mobile-nav.tsx around lines 3 - 5, The
import order violates ESLint rules: move framework and library imports before
React imports in mobile-nav.tsx by importing Next's Link and lucide-react icons
(symbols: Link, Menu, X) before importing useState from React; reorder the top
import statements so they appear as: next/link, lucide-react, then react to
satisfy the configured import order.
apps/web/app/(marketing)/components/hero.tsx (1)

1-5: ⚠️ Potential issue | 🟡 Minor

Fix import ordering to clear current ESLint warnings.

Lines 1–5 currently violate the configured import-order rules (including a missing blank line between groups), which is already reported in CI.

🔧 Proposed import-order fix
-import Link from "next/link";
-import { Suspense } from "react";
 import { ArrowRight } from "lucide-react";
+import Link from "next/link";
+import { Suspense } from "react";
+
 import { GithubStars } from "./github-stars";
 import { TerminalDemo } from "./terminal-demo";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/`(marketing)/components/hero.tsx around lines 1 - 5, The import
statements in hero.tsx (imports: Link, Suspense, ArrowRight, GithubStars,
TerminalDemo) violate the project's ESLint import-order rules; reorder them into
the configured groups (third-party packages first like react and next, then icon
libraries, then local components) and add the required blank line between groups
so imports follow the configured grouping and spacing (e.g., import
React/Suspense and Link first, then ArrowRight, then local ./github-stars and
./terminal-demo) to clear the CI warnings.
apps/web/app/(marketing)/components/pricing-cards.tsx (1)

1-2: ⚠️ Potential issue | 🟡 Minor

Adjust import order to satisfy ESLint config.

Lines 1–2 are in the wrong order per CI (lucide-react should come before next/link).

🔧 Proposed import-order fix
-import Link from "next/link";
 import { Check, ArrowRight } from "lucide-react";
+import Link from "next/link";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/`(marketing)/components/pricing-cards.tsx around lines 1 - 2,
The imports in pricing-cards.tsx are out of the project's ESLint import-order:
move the lucide-react import (Check, ArrowRight) above the next/link import
(Link) so the line importing from "lucide-react" appears before the line
importing from "next/link"; update the top-of-file import order accordingly to
satisfy the linter.
apps/web/app/(marketing)/components/self-host-section.tsx (1)

148-153: ⚠️ Potential issue | 🟡 Minor

Replace the placeholder Discord link before release.

Line 148 uses href="#", which is a broken destination and causes a jump-to-top interaction. Please swap it with the real invite URL (or render a disabled/non-link CTA until available).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/`(marketing)/components/self-host-section.tsx around lines 148 -
153, The "Join Discord" anchor in the SelfHostSection component currently uses a
placeholder href="#" which causes a jump-to-top; update the anchor (the element
rendering the "Join Discord" link in self-host-section.tsx) to use the real
Discord invite URL, or if the invite isn't ready, render a non-clickable CTA
(replace the anchor with a button or add aria-disabled/role and remove href) so
clicking won't navigate; ensure the element still shows the Users icon and
existing classes for visual parity.
🟡 Minor comments (12)
apps/web/lib/validation/spec-version.schema.ts-10-14 (1)

10-14: ⚠️ Potential issue | 🟡 Minor

Prefix matching still lets malformed URLs through.

/^https?:\/\// only checks the scheme prefix, so values like a bare https:// or http://exa mple.com (with an embedded space) still pass validation. Reject these with real URL parsing at the schema boundary.

Also applies to: 20-24

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/validation/spec-version.schema.ts` around lines 10 - 14, The
current regex on sourceUrl only checks the scheme prefix and lets malformed
strings pass; replace the regex-based check with real URL parsing by using a Zod
refinement that attempts new URL(value) and verifies the protocol is 'http:' or
'https:' (e.g., in the sourceUrl schema replace the .regex(...) with a
.refine(...) that returns true only when new URL(value) succeeds and
url.protocol is http/https) and preserve the .max(2000). Apply the same change
to the other URL field(s) in this schema so all URL validations use actual
parsing and protocol checks.
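A minimal sketch of the parsing-based check; the helper name `isHttpUrl` is illustrative, and the predicate can be dropped into the schema's `.refine(...)` in place of the regex:

```typescript
// Validate with the WHATWG URL parser instead of matching only the scheme
// prefix. new URL() throws on malformed input such as a bare "https://" or
// a host containing a space, both of which the prefix regex accepts.
function isHttpUrl(value: string): boolean {
  try {
    const url = new URL(value);
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false;
  }
}

console.log(/^https?:\/\//.test("https://")); // true  — the regex accepts it
console.log(isHttpUrl("https://"));           // false — URL parsing rejects it
console.log(isHttpUrl("https://example.com/spec.yaml")); // true
```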
apps/web/components/layout/top-nav.tsx-14-16 (1)

14-16: ⚠️ Potential issue | 🟡 Minor

Close the mobile menu after navigation.

mobileMenuOpen is shared state and only gets cleared from mobile-menu link clicks. Navigating via the brand link, settings link, command palette, or browser history can leave the old panel open on the next page. Reset it in an effect keyed by pathname.

Suggested fix
+import { useEffect } from "react";
 import Link from "next/link";
+import { usePathname } from "next/navigation";
 ...
 export function TopNav() {
+  const pathname = usePathname();
   const mobileMenuOpen = useUIStore((s) => s.mobileMenuOpen);
   const setMobileMenuOpen = useUIStore((s) => s.setMobileMenuOpen);
   const { open: openCommandPalette } = useCommandPalette();
+
+  useEffect(() => {
+    setMobileMenuOpen(false);
+  }, [pathname, setMobileMenuOpen]);

Also applies to: 63-63

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/layout/top-nav.tsx` around lines 14 - 16, Add an effect
in top-nav.tsx that closes the shared mobile menu whenever navigation or the
command palette opens: watch the current pathname (from your router hook) and
the command palette state (openCommandPalette from useCommandPalette) and call
setMobileMenuOpen(false) whenever pathname changes or openCommandPalette becomes
true; this ensures mobileMenuOpen is reset after navigation via
brand/settings/browser history or when the command palette is used.
apps/web/app/dashboard/servers/[id]/tools/page.tsx-89-97 (1)

89-97: ⚠️ Potential issue | 🟡 Minor

Make each toggle button self-describing.

In this flat row layout, assistive tech users will hear several identical "Enable" / "Disable" buttons with little context. Add the tool name to the accessible label; aria-pressed also makes the current state explicit.

Suggested fix
               <Button
                 variant="ghost"
                 size="sm"
                 className="w-20 shrink-0 text-xs"
                 onClick={() => handleToggle(tool.id, tool.isActive)}
                 disabled={updateTool.isPending}
+                aria-label={`${tool.isActive ? "Disable" : "Enable"} ${tool.name}`}
+                aria-pressed={tool.isActive}
               >
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/`[id]/tools/page.tsx around lines 89 - 97, The
toggle Button for each tool is not self-describing for assistive tech; update
the Button (where Button is rendered with onClick={() => handleToggle(tool.id,
tool.isActive)} and disabled={updateTool.isPending}) to include an aria-label
that contains the tool name (e.g., "Enable {tool.name}" / "Disable {tool.name}")
and add aria-pressed={tool.isActive} so screen readers get both action and
current state while keeping the visible text unchanged.
apps/web/app/dashboard/specs/[id]/page.tsx-198-198 (1)

198-198: ⚠️ Potential issue | 🟡 Minor

Unsafe type assertion on diffSummary.

Casting version.diffSummary as SpecDiff | null assumes the JSONB matches SpecDiff exactly. If the stored shape differs (e.g., from an older version or migration), this could cause runtime errors when accessing diff.tools.

Consider adding runtime validation or a type guard.

🛡️ Suggested type guard
function isSpecDiff(value: unknown): value is SpecDiff {
  return (
    value !== null &&
    typeof value === "object" &&
    "tools" in value &&
    Array.isArray((value as SpecDiff).tools)
  );
}

// Usage:
const diff = isSpecDiff(version.diffSummary) ? version.diffSummary : null;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/specs/`[id]/page.tsx at line 198, Replace the unsafe
assertion "version.diffSummary as SpecDiff | null" with a runtime type guard
that verifies the shape before casting: implement an "isSpecDiff(value:
unknown): value is SpecDiff" function that checks value is non-null, an object,
has a "tools" property and that "tools" is an array, then set "const diff =
isSpecDiff(version.diffSummary) ? version.diffSummary : null"; use this guard
wherever code reads "diff.tools" to avoid runtime errors from malformed JSONB.
apps/web/lib/db/migrations/0007_spec_versions.sql-30-30 (1)

30-30: ⚠️ Potential issue | 🟡 Minor

FK version_id lacks ON DELETE clause — same concern as Drizzle schema.

If a spec_versions row is deleted while referenced by spec_releases, this will fail. Add explicit behavior (ON DELETE RESTRICT or ON DELETE CASCADE) for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0007_spec_versions.sql` at line 30, The foreign
key definition for version_id in spec_releases lacks an explicit ON DELETE
clause which can cause failed deletes of spec_versions; update the FK on
version_id (referencing spec_versions(id)) in
apps/web/lib/db/migrations/0007_spec_versions.sql to include an explicit ON
DELETE behavior (e.g., ON DELETE RESTRICT or ON DELETE CASCADE) consistent with
the Drizzle schema — choose and apply the appropriate action to the version_id
constraint so deletes behave deterministically.
apps/web/lib/db/migrations/0007_spec_versions.sql-40-41 (1)

40-41: ⚠️ Potential issue | 🟡 Minor

current_version_id FK may orphan specs rows.

If a referenced spec_versions row is deleted, specs.current_version_id becomes a dangling reference. Consider:

  • ON DELETE SET NULL if the column should become null when the version is removed
  • ON DELETE RESTRICT to prevent version deletion while it's the current version
🛡️ Suggested fix
-ALTER TABLE specs ADD COLUMN current_version_id UUID REFERENCES spec_versions(id);
+ALTER TABLE specs ADD COLUMN current_version_id UUID REFERENCES spec_versions(id) ON DELETE SET NULL;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0007_spec_versions.sql` around lines 40 - 41, The
new foreign key on specs.current_version_id currently allows dangling references
if a spec_versions row is deleted; update the migration to declare the FK with
an explicit ON DELETE policy (either ON DELETE SET NULL if you want
specs.current_version_id to become NULL when its version is removed, or ON
DELETE RESTRICT if you want to block deletion of a spec_versions row when it is
referenced as the current version) by modifying the ALTER TABLE statement that
adds current_version_id referencing spec_versions(id).
apps/web/lib/db/schema/spec-releases.ts-13-15 (1)

13-15: ⚠️ Potential issue | 🟡 Minor

Missing onDelete behavior on versionId foreign key.

Unlike serverId (which has onDelete: 'cascade'), versionId has no delete behavior specified. This means deleting a spec_version will fail with a FK constraint violation if any release references it.

If this is intentional (to prevent version deletion while promoted), consider adding onDelete: 'restrict' explicitly for clarity. If releases should be cleaned up, use onDelete: 'cascade' or onDelete: 'set null' (with nullable column).

🛡️ Suggested fix (if restrict is intended)
     versionId: uuid('version_id')
-      .references(() => specVersions.id)
+      .references(() => specVersions.id, { onDelete: 'restrict' })
       .notNull(),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/spec-releases.ts` around lines 13 - 15, The foreign
key column versionId in the spec-releases table (defined in spec-releases.ts and
referencing specVersions.id) lacks an explicit onDelete behavior; update the
column definition for versionId to include the desired onDelete option (e.g.,
onDelete: 'restrict' to prevent deletion, onDelete: 'cascade' to remove releases
when a spec version is deleted, or onDelete: 'set null' and make versionId
nullable) so the DB enforces the intended delete semantics consistently with
serverId which uses onDelete: 'cascade'.
apps/web/app/dashboard/servers/[id]/settings/page.tsx-39-42 (1)

39-42: ⚠️ Potential issue | 🟡 Minor

Missing error handling in delete operation.

If deleteServer.mutateAsync throws, the error is unhandled and the user may see an uncaught promise rejection. The navigation to /dashboard should only occur on success.

Proposed fix
 const handleDelete = async () => {
-  await deleteServer.mutateAsync(server.id);
-  router.push("/dashboard");
+  try {
+    await deleteServer.mutateAsync(server.id);
+    router.push("/dashboard");
+  } catch {
+    // Error is handled by React Query; optionally show a toast
+  }
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/`[id]/settings/page.tsx around lines 39 - 42,
The delete flow in handleDelete currently calls
deleteServer.mutateAsync(server.id) without handling failures and always calls
router.push("/dashboard"); wrap the mutateAsync call in a try/catch so
navigation only happens on success: call await
deleteServer.mutateAsync(server.id) inside try, then router.push("/dashboard")
on success, and handle errors in catch (e.g., surface a toast/error state or log
via processLogger) to avoid unhandled promise rejections; keep the function
async and ensure any UI loading state is set/reset around the operation.
apps/web/app/dashboard/specs/new/registry/[id]/page.tsx-57-59 (1)

57-59: ⚠️ Potential issue | 🟡 Minor

Silent failure provides no user feedback.

When the deploy fails, deploying is reset but the user receives no indication of what went wrong. Consider showing an error toast or message.

Proposed improvement
+import { toast } from "@apifold/ui"; // or your toast utility

 const handleDeploy = async () => {
   setDeploying(true);
   try {
     const specModule = await import(`@apifold/registry/specs/${entry.id}/spec.json`);
     const rawSpec = specModule.default ?? specModule;

     await importSpec.mutateAsync({
       name: entry.name,
       version: '1.0.0',
       rawSpec,
     });

     router.push('/dashboard/servers');
-  } catch {
+  } catch (err) {
     setDeploying(false);
+    toast.error("Failed to deploy. Please try again.");
   }
 };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/specs/new/registry/`[id]/page.tsx around lines 57 -
59, The catch block currently swallows errors and only calls
setDeploying(false); update the catch to accept the error (e.g., catch (err))
and surface feedback to the user: keep setDeploying(false) but also call the
app's toast/error UI (or set an error state) with the error message (for example
via a toast.error or setError(message)) so failures show a readable message;
reference the catch block around setDeploying and the deploying/setDeploying
state handlers in page.tsx to implement this.
apps/web/components/logs/log-table.tsx-56-56 (1)

56-56: ⚠️ Potential issue | 🟡 Minor

Type assertion assumes reqBody is always an object.

The cast reqBody as Record<string, unknown> will fail the Object.keys() check if reqBody is an array (which is valid JSON). Consider a safer check.

Proposed fix
-{reqBody && Object.keys(reqBody as Record<string, unknown>).length > 0 && (
+{reqBody && typeof reqBody === 'object' && reqBody !== null && !Array.isArray(reqBody) && Object.keys(reqBody).length > 0 && (

Or simply display the body if it's truthy:

-{reqBody && Object.keys(reqBody as Record<string, unknown>).length > 0 && (
+{reqBody && (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/logs/log-table.tsx` at line 56, The code uses a type
assertion (reqBody as Record<string, unknown>) before calling Object.keys, which
breaks if reqBody is an array or null; update the conditional in the LogTable
render so it first checks reqBody is non-null and is a plain object (e.g.,
typeof reqBody === "object" && !Array.isArray(reqBody) && reqBody !== null &&
Object.keys(reqBody).length > 0) or simply guard with Boolean(reqBody) if you
only need to show any truthy body; adjust the expression around reqBody where
it's referenced in the component (the JSX conditional using reqBody and
Object.keys) to use this safer check.
apps/web/components/logs/log-detail-modal.tsx-88-88 (1)

88-88: ⚠️ Potential issue | 🟡 Minor

Type assertion assumes reqBody is always an object.

Same issue as in log-table.tsx: the cast reqBody as Record<string, unknown> assumes it's an object. Arrays are valid JSON request bodies and would fail this check incorrectly.

Proposed fix
-{reqBody && Object.keys(reqBody as Record<string, unknown>).length > 0 && (
+{reqBody && (
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/logs/log-detail-modal.tsx` at line 88, The conditional
treats reqBody as always an object; update the check in log-detail-modal.tsx
(the JSX conditional that currently uses Object.keys(reqBody as Record<string,
unknown>).length) to handle arrays and objects safely: ensure reqBody is
non-null/undefined, then if Array.isArray(reqBody) check reqBody.length > 0,
otherwise if typeof reqBody === 'object' use Object.keys(reqBody).length > 0;
reference the reqBody variable and replace the existing cast-based check with
this combined array/object-aware condition (same fix as applied in
log-table.tsx).
apps/web/app/dashboard/servers/[id]/console/page.tsx-86-121 (1)

86-121: ⚠️ Potential issue | 🟡 Minor

Count failed executions in history too.

The header dots and the "{history.length} calls" label are sourced from history, but onError never appends an entry. A transport/runtime failure will therefore render an error response while the session summary still shows the previous state.

🩹 Suggested fix
   const handleExecute = useCallback(() => {
     if (!selectedToolName) return;
+    const toolName = selectedToolName;
     let parsed: Record<string, unknown>;
     try {
       const val: unknown = JSON.parse(body);
@@
     testTool.mutate(
-      { serverId: id, toolName: selectedToolName, input: parsed },
+      { serverId: id, toolName, input: parsed },
       {
         onSuccess: (result) => {
           setResponse(result);
           setHistory((prev) => [
-            { toolName: selectedToolName, isError: result.isError, durationMs: result.durationMs, timestamp: new Date() },
+            { toolName, isError: result.isError, durationMs: result.durationMs, timestamp: new Date() },
             ...prev.slice(0, 19),
           ]);
         },
         onError: (err) => {
+          setHistory((prev) => [
+            { toolName, isError: true, durationMs: 0, timestamp: new Date() },
+            ...prev.slice(0, 19),
+          ]);
           setResponse({
             content: { error: err instanceof Error ? err.message : "Request failed. Is the MCP runtime online?" },
             isError: true,
             durationMs: 0,

Also applies to: 127-136

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/`[id]/console/page.tsx around lines 86 - 121,
handleExecute currently updates history only on success, so transport/runtime
failures never increment history; modify the onError handler in handleExecute to
call setHistory the same way onSuccess does (prepend an entry) using
selectedToolName, isError: true, durationMs: 0 (or err.durationMs if available),
and timestamp: new Date(); update the onError block that sets setResponse to
also call setHistory((prev) => [{ toolName: selectedToolName, isError: true,
durationMs: 0, timestamp: new Date() }, ...prev.slice(0,19)]), ensuring you
reference the existing handleExecute, onError, setHistory and setResponse
symbols.
🧹 Nitpick comments (16)
apps/web/lib/constants/navigation.ts (1)

7-9: Redundant type annotation conflicts with as const.

The explicit type readonly NavItem[] combined with as const is redundant. The as const assertion already makes the array readonly with literal types, but the explicit type annotation widens the literals back to string and boolean. This may cause type inference issues.

Choose one approach:

♻️ Option 1: Keep explicit type, remove `as const`
-export const NAV_ITEMS: readonly NavItem[] = [
-  { title: "Overview", href: "/dashboard", exact: true },
-] as const;
+export const NAV_ITEMS: readonly NavItem[] = [
+  { title: "Overview", href: "/dashboard", exact: true },
+];
♻️ Option 2: Use `as const satisfies` for type checking with literal preservation
-export const NAV_ITEMS: readonly NavItem[] = [
-  { title: "Overview", href: "/dashboard", exact: true },
-] as const;
+export const NAV_ITEMS = [
+  { title: "Overview", href: "/dashboard", exact: true },
+] as const satisfies readonly NavItem[];
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/constants/navigation.ts` around lines 7 - 9, NAV_ITEMS currently
uses both an explicit type annotation and as const which conflict; either remove
the as const and keep the explicit readonly NavItem[] type, or preserve literal
types by declaring NAV_ITEMS without the explicit type and use "as const
satisfies readonly NavItem[]" (e.g., keep the identifier NAV_ITEMS and the
NavItem shape) so the array stays literal/readonly while still being
type-checked against NavItem; update the declaration for NAV_ITEMS accordingly.
apps/web/components/layout/sub-tab-bar.tsx (1)

29-43: Consider adding aria-current="page" for active links.

For improved accessibility, the active link should indicate its current state to assistive technologies.

♿ Proposed accessibility enhancement
             <Link
               key={item.href}
               href={item.href}
+              aria-current={isActive ? "page" : undefined}
               className={cn(
                 "relative px-3 py-2.5 text-[13px] transition-colors duration-150",
                 isActive
                   ? "font-medium text-foreground"
                   : "text-muted-foreground hover:text-foreground",
               )}
             >
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/layout/sub-tab-bar.tsx` around lines 29 - 43, the active
link rendered by the Link component should include an accessibility indicator:
when isActive is true, add aria-current="page" to the Link element (the one using
key={item.href}, href={item.href}, className={cn(...)} and rendering {item.title})
so assistive technologies can recognize the current page. Include aria-current
only on the active branch to avoid false positives.
apps/web/lib/db/migrations/0000_loving_sasquatch.sql (2)

39-50: Consider a composite index for common analytics queries on request_logs.

The table has separate indexes on server_id, timestamp, and status_code, plus a composite (server_id, timestamp). For analytics queries that filter by server, time range, and aggregate by status, a composite index including status_code could improve performance.

This is a suggestion for future optimization if analytics queries become slow—monitor query patterns first before adding.

Also applies to: 90-95

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0000_loving_sasquatch.sql` around lines 39 - 50,
add a composite index on request_logs covering (server_id, timestamp, status_code)
to speed up analytics queries that filter by server and time range and aggregate
by status. Name it something like idx_request_logs_server_timestamp_status so it
is discoverable, and document it next to the existing single-column and
(server_id, timestamp) indexes so it can be removed or adjusted later if
monitoring shows it isn't needed.

52-62: Consider adding a unique constraint on (user_id, name) for specs.

The mcp_servers table has a unique constraint on (user_id, slug) (Line 87), but the specs table only has a non-unique index on (user_id, name). This allows a user to create multiple specs with the same name, which could lead to confusion in the UI and API.

🗃️ Proposed unique constraint
-CREATE INDEX "idx_specs_user_name" ON "specs" USING btree ("user_id","name");
+CREATE UNIQUE INDEX "idx_specs_user_name" ON "specs" USING btree ("user_id","name");

If duplicate spec names are intentional (e.g., for versioning), please disregard this suggestion.

Also applies to: 96-97

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/0000_loving_sasquatch.sql` around lines 52 - 62,
the specs table currently allows duplicate names per user. Add a unique constraint
on (user_id, name), either by altering the CREATE TABLE for "specs" or by adding a
UNIQUE INDEX, so the combination of "user_id" and "name" is unique. If duplicates
already exist in the database, deduplicate them or add a pre-migration cleanup
step before applying the constraint.
packages/ui/src/components/status-dot.tsx (1)

17-24: Consider whether aria-hidden="true" is more appropriate for decorative usage.

If StatusDot is typically used alongside a visible text label (e.g., "Online" text next to the dot), using aria-label={variant} creates redundant announcements. Consider either:

  1. Using aria-hidden="true" when decorative (common case)
  2. Adding a decorative?: boolean prop to toggle between the two behaviors
♿ Proposed enhancement for decorative usage
 interface StatusDotProps {
   readonly variant: StatusDotVariant;
   readonly className?: string;
+  readonly decorative?: boolean;
 }

 function StatusDot({ variant, className, decorative = false }: StatusDotProps) {
   return (
     <span
       className={cn("inline-block h-2 w-2 shrink-0 rounded-full", VARIANT_CLASSES[variant], className)}
-      aria-label={variant}
+      aria-label={decorative ? undefined : variant}
+      aria-hidden={decorative ? true : undefined}
     />
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/ui/src/components/status-dot.tsx` around lines 17 - 24, the
StatusDot component currently always sets aria-label, which causes redundant
screen-reader announcements. Update StatusDotProps to include an optional
decorative?: boolean prop (default false, matching the diff above), set
aria-hidden={decorative ? true : undefined}, and include aria-label={variant}
only when decorative is false, so decorative dots stay hidden from assistive
technology while non-decorative ones expose the variant. Locate the component
via the StatusDot function and the VARIANT_CLASSES constant.
apps/web/app/dashboard/servers/page.tsx (1)

10-17: Share the auth-mode formatter.

This mapping now exists here and on the server detail page. Pulling it into a shared helper keeps the list/detail labels from drifting when another auth mode is added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/page.tsx` around lines 10 - 17, extract the
duplicate auth-mode mapping into a single shared helper: move the function
formatAuth(mode: McpServer["authMode"]): string into a new or existing shared
utility module (e.g., export it from a common helpers file), then replace the
local definitions in both the servers list and server detail components by
importing that exported formatAuth. Keep the same mapping branches ("none" ->
"No auth", "api_key" -> "API Key", "bearer" -> "Bearer", "oauth2_authcode" ->
"OAuth2", "oauth2_client_creds" -> "OAuth2 CC") and preserve the signature and
return type so callers don't need changes.
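A sketch of what the shared helper could look like, using the mapping branches listed above; the inlined AuthMode union here is a stand-in for the app's McpServer["authMode"] type:

```typescript
// Hypothetical shared helper (e.g. in a common helpers module).
// In the real code the parameter would be typed as McpServer["authMode"].
type AuthMode =
  | "none"
  | "api_key"
  | "bearer"
  | "oauth2_authcode"
  | "oauth2_client_creds";

// Lookup table keeps the list and detail labels from drifting apart.
const AUTH_LABELS: Record<AuthMode, string> = {
  none: "No auth",
  api_key: "API Key",
  bearer: "Bearer",
  oauth2_authcode: "OAuth2",
  oauth2_client_creds: "OAuth2 CC",
};

export function formatAuth(mode: AuthMode): string {
  return AUTH_LABELS[mode];
}
```

Adding a new auth mode then forces a compile error in one place (the Record is exhaustive over AuthMode) instead of silently diverging between pages.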
apps/web/components/specs/diff-badge.tsx (1)

27-29: Consider accessibility of the BREAKING badge text size.

The text-[10px] class produces very small text (10px). WCAG recommends a minimum of 12px for legibility. Since this badge conveys important information, consider using text-xs (12px) instead.

Suggested change
-        <Badge variant="destructive" className="text-[10px] px-1.5 py-0">
+        <Badge variant="destructive" className="text-xs px-1.5 py-0">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/specs/diff-badge.tsx` around lines 27 - 29, The BREAKING
Badge uses a hardcoded small font class "text-[10px]" which is below WCAG
recommendations; update the Badge component instance (the JSX element rendering
BREAKING in diff-badge.tsx) to use a larger accessible font class (replace
"text-[10px]" with "text-xs") and keep the existing padding classes ("px-1.5
py-0") so spacing remains the same; ensure the change is made on the Badge
element that currently has variant="destructive".
apps/web/components/dashboard/spec-card.tsx (2)

7-10: Unused index prop in interface.

The index prop is declared as optional but is never used in the component. Per the AI summary, the index prop was intentionally removed from usage but the interface still declares it.

♻️ Suggested fix
 interface SpecCardProps {
   readonly spec: Spec;
-  readonly index?: number;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/dashboard/spec-card.tsx` around lines 7 - 10, remove the
unused optional index property from the SpecCardProps interface: locate
SpecCardProps and delete the "index?: number" declaration so the prop type only
includes the used "spec: Spec" field, keeping the props passed to SpecCard in
line with actual usage.

12-22: Duplicate timeAgo helper — extract to shared utility.

Same function exists in apps/web/app/dashboard/page.tsx. Extract to a shared location to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/dashboard/spec-card.tsx` around lines 12 - 22, the
timeAgo function is duplicated. Extract it into a single shared utility: create
an exported timeAgo helper in a common utils module, move the implementation from
spec-card.tsx there, update spec-card.tsx and the dashboard page to import the
shared timeAgo, and delete the duplicate definitions so both components use the
single exported function.
apps/web/app/dashboard/page.tsx (2)

12-22: Duplicate timeAgo helper — consider extracting to a shared utility.

This function is duplicated in apps/web/components/dashboard/spec-card.tsx. Extract to a shared utility (e.g., @/lib/utils/date.ts) to avoid maintenance drift.

Also, new Date(date) on line 13 is redundant if date is already a Date. If callers may pass strings, the signature should be date: Date | string.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/page.tsx` around lines 12 - 22, the timeAgo helper is
duplicated. Extract the page-local timeAgo into a shared utility (e.g., exported
from "@/lib/utils/date.ts") and replace both uses
(apps/web/app/dashboard/page.tsx and
apps/web/components/dashboard/spec-card.tsx) with imports from that utility.
Also widen the signature to accept Date | string and normalize the input once
inside the utility, avoiding a redundant new Date call when a Date is already
passed; callers keep passing their original values unchanged.
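A possible shape for the shared utility; the exact thresholds and labels of the real implementation are not shown in this diff, so the ones below are illustrative:

```typescript
// Hypothetical shared helper (e.g. exported from "@/lib/utils/date.ts").
// Accepts Date | string and normalizes once, so callers never need `new Date(...)`.
export function timeAgo(date: Date | string): string {
  const then = date instanceof Date ? date : new Date(date);
  const seconds = Math.floor((Date.now() - then.getTime()) / 1000);
  if (seconds < 60) return "just now";
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes}m ago`;
  const hours = Math.floor(minutes / 60);
  if (hours < 24) return `${hours}h ago`;
  return `${Math.floor(hours / 24)}d ago`;
}
```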

54-57: Inefficient object/array spreading in reduce loop.

Each iteration creates a new accumulator object and a new array, resulting in O(n²) allocations. For typical dashboard sizes this is negligible, but a mutable accumulator is cleaner.

♻️ Suggested fix
-  const serversBySpec = (servers ?? []).reduce<Record<string, McpServer[]>>((acc, server) => {
-    const specId = server.specId;
-    return { ...acc, [specId]: [...(acc[specId] ?? []), server] };
-  }, {});
+  const serversBySpec = (servers ?? []).reduce<Record<string, McpServer[]>>((acc, server) => {
+    (acc[server.specId] ??= []).push(server);
+    return acc;
+  }, {});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/page.tsx` around lines 54 - 57, the reduce building
serversBySpec creates new objects and arrays on every iteration. Switch to a
mutable accumulator: initialize acc as an empty Record<string, McpServer[]>, and
for each server compute specId, set acc[specId] to an empty array if undefined,
push the server into it, and return acc. Keep the Record<string, McpServer[]>
type and the existing serversBySpec, servers, specId, and McpServer symbols so
the change only replaces the spread-with-new-object logic with in-place
accumulation.
apps/web/components/logs/log-table.tsx (1)

10-13: Unused onSelectLog prop.

The onSelectLog prop is declared in LogTableProps but is never used in the component. If this callback is no longer needed after the refactor to inline expansion, consider removing it to avoid confusion.

Proposed fix
 interface LogTableProps {
   readonly logs: readonly RequestLog[];
-  readonly onSelectLog?: (log: RequestLog) => void;
 }
 ...
-export function LogTable({ logs }: LogTableProps) {
+export function LogTable({ logs }: LogTableProps) {

Also applies to: 77-77

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/logs/log-table.tsx` around lines 10 - 13, the
LogTableProps interface declares an unused onSelectLog callback (onSelectLog?:
(log: RequestLog) => void) that is never referenced in the LogTable component.
Either remove the prop from the interface and from any prop destructuring or
type references in LogTable to avoid confusion, or, if selection behavior is
still desired, wire up onSelectLog inside LogTable (e.g., call onSelectLog(log)
from the row click/expand handler). Update the component signature and any
callers to match.
apps/web/components/logs/log-detail-modal.tsx (1)

102-102: Duplicated JSON formatting logic.

The JSON parse/stringify pattern for pretty-printing with fallback appears in both log-table.tsx and here. Consider extracting a shared utility.

Example utility
// lib/utils/format-json.ts
export function formatJsonSafe(value: string): string {
  try {
    return JSON.stringify(JSON.parse(value), null, 2);
  } catch {
    return value;
  }
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/logs/log-detail-modal.tsx` at line 102, extract the
duplicated JSON pretty-print/fallback logic into a shared utility (e.g.,
formatJsonSafe) and replace the inline IIFE in LogDetailModal's content prop and
the similar code in LogTable with calls to that utility. Locate the expression in
log-detail-modal.tsx (the content={(() => { try { return
JSON.stringify(JSON.parse(resBody), null, 2); } catch { return resBody; } })()})
and the matching block in log-table.tsx, implement export function
formatJsonSafe(value: string): string in a common lib (or utils) file, and
import and use formatJsonSafe(resBody) in both components.
apps/web/lib/diff/spec-diff.ts (1)

201-204: Consider single-pass counting.

The three filter calls iterate the array separately. For large tool lists, a single reduce would be more efficient. This is a minor optimization and acceptable as-is for typical spec sizes.

Optional optimization
-  const addedCount = toolChanges.filter((c) => c.type === 'added').length;
-  const removedCount = toolChanges.filter((c) => c.type === 'removed').length;
-  const modifiedCount = toolChanges.filter((c) => c.type === 'modified').length;
+  const { addedCount, removedCount, modifiedCount } = toolChanges.reduce(
+    (acc, c) => {
+      if (c.type === 'added') acc.addedCount++;
+      else if (c.type === 'removed') acc.removedCount++;
+      else if (c.type === 'modified') acc.modifiedCount++;
+      return acc;
+    },
+    { addedCount: 0, removedCount: 0, modifiedCount: 0 },
+  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/diff/spec-diff.ts` around lines 201 - 204, replace the three
separate filter calls that compute addedCount, removedCount, and modifiedCount
(and the some() for isBreaking) with a single pass over toolChanges using
Array.prototype.reduce: iterate once, accumulating added, removed, and modified
counters plus a breaking boolean, then set
addedCount/removedCount/modifiedCount/isBreaking from that accumulator and
update the code that references those symbols accordingly.
apps/web/app/dashboard/servers/[id]/settings/page.tsx (1)

30-30: Silent null return when server data is missing.

Returning null after loading completes but no server exists leaves the user with a blank page. Consider redirecting to /dashboard or showing a "Server not found" message.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/[id]/settings/page.tsx` at line 30, the
component currently returns null when server is missing (the `if (!server)
return null;` check), leaving a blank page. Instead, either redirect the user to
the dashboard (e.g., via Next.js navigation helpers like redirect('/dashboard')
or notFound()) or render a visible fallback (e.g., a "Server not found" banner
with a link back to `/dashboard`) so users aren't left with an empty page.
apps/web/app/dashboard/analytics/page.tsx (1)

107-361: Extract the duplicated analytics dashboard before these pages drift further.

Dashboard, Metric, LatencyRow, EmptyAnalytics, and AnalyticsSkeleton are nearly copy-pasted from app/dashboard/servers/[id]/analytics/page.tsx, and the two copies already differ in colors/borders/token usage. Pull the shared rendering into a single component and keep only the page-specific header/server-selector logic here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/analytics/page.tsx` around lines 107 - 361, the
Dashboard and its helper components (Dashboard, Metric, LatencyRow,
EmptyAnalytics, AnalyticsSkeleton) are duplicated across pages. Extract them into
a single shared presentational component (e.g., AnalyticsDashboard), replace the
duplicate definitions here with an import plus a small page-specific wrapper that
supplies page-only props (server selector/header and range handling), and pass
data, usage, topTools, errorBreakdown, failingTools, recentActivity, overview,
and gradientId/range in as props. Unify styling/token usage in the shared
component so both pages import and render the same implementation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 0f5a3512-e3c8-452a-9756-8cf475ca6e2e

📥 Commits

Reviewing files that changed from the base of the PR and between 6c22f4a and 9dfd2df.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (139)
  • .impeccable.md
  • apps/runtime/src/index.ts
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/web/app/(marketing)/components/cta-banner.tsx
  • apps/web/app/(marketing)/components/faq.tsx
  • apps/web/app/(marketing)/components/features-grid.tsx
  • apps/web/app/(marketing)/components/footer.tsx
  • apps/web/app/(marketing)/components/github-stars.tsx
  • apps/web/app/(marketing)/components/hero.tsx
  • apps/web/app/(marketing)/components/how-it-works.tsx
  • apps/web/app/(marketing)/components/mobile-nav.tsx
  • apps/web/app/(marketing)/components/navbar.tsx
  • apps/web/app/(marketing)/components/pricing-cards.tsx
  • apps/web/app/(marketing)/components/self-host-section.tsx
  • apps/web/app/(marketing)/components/terminal-demo.tsx
  • apps/web/app/(marketing)/components/works-with-bar.tsx
  • apps/web/app/(marketing)/layout.tsx
  • apps/web/app/(marketing)/pricing/page.tsx
  • apps/web/app/api/servers/[id]/test/route.ts
  • apps/web/app/api/specs/[id]/versions/[vid]/promote/route.ts
  • apps/web/app/api/specs/[id]/versions/[vid]/route.ts
  • apps/web/app/api/specs/[id]/versions/route.ts
  • apps/web/app/api/specs/route.ts
  • apps/web/app/changelog/page.tsx
  • apps/web/app/dashboard/analytics/page.tsx
  • apps/web/app/dashboard/domains/page.tsx
  • apps/web/app/dashboard/error.tsx
  • apps/web/app/dashboard/layout.tsx
  • apps/web/app/dashboard/not-found.tsx
  • apps/web/app/dashboard/page.tsx
  • apps/web/app/dashboard/servers/[id]/analytics/page.tsx
  • apps/web/app/dashboard/servers/[id]/console/page.tsx
  • apps/web/app/dashboard/servers/[id]/credentials/page.tsx
  • apps/web/app/dashboard/servers/[id]/export/page.tsx
  • apps/web/app/dashboard/servers/[id]/layout.tsx
  • apps/web/app/dashboard/servers/[id]/logs/page.tsx
  • apps/web/app/dashboard/servers/[id]/page.tsx
  • apps/web/app/dashboard/servers/[id]/settings/page.tsx
  • apps/web/app/dashboard/servers/[id]/tools/page.tsx
  • apps/web/app/dashboard/servers/page.tsx
  • apps/web/app/dashboard/settings/_components/account-section.tsx
  • apps/web/app/dashboard/settings/_components/billing-section.tsx
  • apps/web/app/dashboard/settings/_components/budget-limit.tsx
  • apps/web/app/dashboard/settings/_components/danger-zone.tsx
  • apps/web/app/dashboard/settings/_components/plan-banner.tsx
  • apps/web/app/dashboard/settings/_components/plan-comparison.tsx
  • apps/web/app/dashboard/settings/_components/settings-header.tsx
  • apps/web/app/dashboard/settings/_components/usage-dashboard.tsx
  • apps/web/app/dashboard/settings/_components/usage-meter.tsx
  • apps/web/app/dashboard/specs/[id]/layout.tsx
  • apps/web/app/dashboard/specs/[id]/page.tsx
  • apps/web/app/dashboard/specs/new/page.tsx
  • apps/web/app/dashboard/specs/new/registry/[id]/page.tsx
  • apps/web/app/dashboard/specs/new/registry/page.tsx
  • apps/web/app/dashboard/specs/page.tsx
  • apps/web/app/globals.css
  • apps/web/app/layout.tsx
  • apps/web/app/providers.tsx
  • apps/web/components/console/response-viewer.tsx
  • apps/web/components/console/schema-form.tsx
  • apps/web/components/console/session-history.tsx
  • apps/web/components/credentials/add-credential-modal.tsx
  • apps/web/components/credentials/credential-list-item.tsx
  • apps/web/components/credentials/provider-picker.tsx
  • apps/web/components/dashboard/spec-card-skeleton.tsx
  • apps/web/components/dashboard/spec-card.tsx
  • apps/web/components/dashboard/stats-row.tsx
  • apps/web/components/dashboard/usage-warning.tsx
  • apps/web/components/export/format-selector.tsx
  • apps/web/components/layout/breadcrumbs.tsx
  • apps/web/components/layout/command-palette.tsx
  • apps/web/components/layout/header.tsx
  • apps/web/components/layout/mobile-menu.tsx
  • apps/web/components/layout/sidebar.tsx
  • apps/web/components/layout/sub-tab-bar.tsx
  • apps/web/components/layout/tab-bar.tsx
  • apps/web/components/layout/theme-toggle.tsx
  • apps/web/components/layout/top-nav.tsx
  • apps/web/components/logs/filter-bar.tsx
  • apps/web/components/logs/log-detail-modal.tsx
  • apps/web/components/logs/log-retention-notice.tsx
  • apps/web/components/logs/log-table.tsx
  • apps/web/components/marketing/landing-footer.tsx
  • apps/web/components/marketing/landing-nav.tsx
  • apps/web/components/servers/config-form.tsx
  • apps/web/components/servers/domain-settings.tsx
  • apps/web/components/servers/server-card-skeleton.tsx
  • apps/web/components/servers/snippet-copier.tsx
  • apps/web/components/servers/tool-detail.tsx
  • apps/web/components/shared/page-header.tsx
  • apps/web/components/specs/diff-badge.tsx
  • apps/web/components/specs/file-dropzone.tsx
  • apps/web/components/specs/operation-preview.tsx
  • apps/web/components/specs/spec-header.tsx
  • apps/web/components/specs/step-indicator.tsx
  • apps/web/components/specs/version-diff.tsx
  • apps/web/lib/constants/navigation.ts
  • apps/web/lib/db/migrations/0000_loving_sasquatch.sql
  • apps/web/lib/db/migrations/0007_spec_versions.sql
  • apps/web/lib/db/migrations/meta/0000_snapshot.json
  • apps/web/lib/db/migrations/meta/_journal.json
  • apps/web/lib/db/repositories/log.repository.ts
  • apps/web/lib/db/repositories/spec-release.repository.ts
  • apps/web/lib/db/repositories/spec-version.repository.ts
  • apps/web/lib/db/schema/index.ts
  • apps/web/lib/db/schema/request-logs.ts
  • apps/web/lib/db/schema/spec-releases.ts
  • apps/web/lib/db/schema/spec-versions.ts
  • apps/web/lib/db/schema/specs.ts
  • apps/web/lib/diff/spec-diff.ts
  • apps/web/lib/hooks/index.ts
  • apps/web/lib/hooks/use-spec-versions.ts
  • apps/web/lib/ssrf-guard.ts
  • apps/web/lib/stores/ui-store.ts
  • apps/web/lib/validation/spec-version.schema.ts
  • apps/web/package.json
  • packages/types/src/index.ts
  • packages/types/src/log.ts
  • packages/types/src/spec-release.ts
  • packages/types/src/spec-version.ts
  • packages/types/src/spec.ts
  • packages/ui/src/components/badge.tsx
  • packages/ui/src/components/button.tsx
  • packages/ui/src/components/card.tsx
  • packages/ui/src/components/code-block.tsx
  • packages/ui/src/components/dialog.tsx
  • packages/ui/src/components/dropdown-menu.tsx
  • packages/ui/src/components/empty-state.tsx
  • packages/ui/src/components/index.ts
  • packages/ui/src/components/input.tsx
  • packages/ui/src/components/sheet.tsx
  • packages/ui/src/components/skeleton.tsx
  • packages/ui/src/components/status-dot.tsx
  • packages/ui/src/components/table.tsx
  • packages/ui/src/components/toast.tsx
  • packages/ui/src/tokens/colors.css
  • packages/ui/src/tokens/spacing.css
  • packages/ui/src/tokens/typography.css
  • packages/ui/tailwind.config.ts
💤 Files with no reviewable changes (3)
  • apps/web/components/layout/breadcrumbs.tsx
  • apps/web/components/layout/header.tsx
  • apps/web/components/layout/sidebar.tsx
✅ Files skipped from review due to trivial changes (48)
  • apps/web/app/dashboard/settings/_components/billing-section.tsx
  • apps/web/components/servers/server-card-skeleton.tsx
  • apps/web/app/dashboard/not-found.tsx
  • apps/web/app/(marketing)/components/cta-banner.tsx
  • apps/web/app/(marketing)/components/github-stars.tsx
  • apps/web/app/(marketing)/components/faq.tsx
  • apps/web/app/(marketing)/components/works-with-bar.tsx
  • apps/web/app/(marketing)/pricing/page.tsx
  • apps/web/components/dashboard/usage-warning.tsx
  • apps/web/app/dashboard/settings/_components/settings-header.tsx
  • apps/web/app/(marketing)/layout.tsx
  • apps/web/app/dashboard/error.tsx
  • apps/web/components/servers/tool-detail.tsx
  • apps/web/app/dashboard/settings/_components/danger-zone.tsx
  • apps/web/app/dashboard/settings/_components/plan-comparison.tsx
  • apps/web/app/dashboard/settings/_components/usage-dashboard.tsx
  • apps/web/components/export/format-selector.tsx
  • apps/web/components/logs/filter-bar.tsx
  • apps/web/components/marketing/landing-footer.tsx
  • apps/web/components/specs/operation-preview.tsx
  • apps/web/components/specs/step-indicator.tsx
  • apps/web/components/shared/page-header.tsx
  • apps/web/lib/db/schema/index.ts
  • apps/web/lib/db/schema/specs.ts
  • apps/web/lib/db/migrations/meta/_journal.json
  • packages/types/src/spec.ts
  • packages/ui/src/components/index.ts
  • apps/web/components/specs/spec-header.tsx
  • .impeccable.md
  • packages/ui/src/components/skeleton.tsx
  • packages/ui/src/components/input.tsx
  • apps/web/app/(marketing)/components/navbar.tsx
  • packages/ui/src/components/code-block.tsx
  • apps/web/app/dashboard/settings/_components/account-section.tsx
  • apps/web/app/dashboard/settings/_components/plan-banner.tsx
  • packages/ui/src/components/empty-state.tsx
  • apps/web/package.json
  • apps/web/app/dashboard/servers/[id]/export/page.tsx
  • apps/web/components/console/session-history.tsx
  • apps/web/app/dashboard/settings/_components/usage-meter.tsx
  • apps/web/components/specs/file-dropzone.tsx
  • packages/types/src/spec-release.ts
  • apps/web/lib/hooks/index.ts
  • packages/types/src/spec-version.ts
  • apps/web/components/console/response-viewer.tsx
  • apps/web/app/dashboard/settings/_components/budget-limit.tsx
  • apps/web/components/credentials/provider-picker.tsx
  • apps/web/components/console/schema-form.tsx
🚧 Files skipped from review as they are similar to previous changes (10)
  • apps/web/components/servers/config-form.tsx
  • apps/web/app/api/specs/route.ts
  • apps/web/components/servers/snippet-copier.tsx
  • apps/web/app/dashboard/domains/page.tsx
  • apps/web/app/dashboard/servers/[id]/credentials/page.tsx
  • apps/web/lib/ssrf-guard.ts
  • apps/web/components/credentials/credential-list-item.tsx
  • apps/web/components/credentials/add-credential-modal.tsx
  • apps/web/app/dashboard/specs/new/registry/page.tsx
  • apps/web/components/servers/domain-settings.tsx

Comment thread apps/runtime/src/index.ts Outdated
Comment thread apps/runtime/src/mcp/protocol-handler.ts
Comment on lines +54 to 56
const latestVersion = versions?.[0];
const productionUrl = `https://${server.slug}.${PLATFORM_DOMAIN}`;

⚠️ Potential issue | 🟠 Major

Don’t infer production from versions[0].

useSpecVersions() gives you versions ordered by versionNumber, not by deployment environment. After a new version is created but not promoted, versions[0] is the newest preview, yet this page treats it as production (latestVersion, isProduction = i === 0, rollback target = versions[1]). That mislabels the deployment list and can roll production back to the wrong release. Drive these states from the persisted production release instead of array order.

Also applies to: 70-91, 210-257

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/[id]/page.tsx` around lines 54 - 56, the code
currently treats versions[0] as production (the latestVersion and isProduction
logic), which is wrong. Instead, fetch the persisted production release id (e.g.,
server.productionVersionId or whichever field stores the production version) and
derive production state by matching versions.find(v => v.id ===
productionVersionId). Replace the uses of latestVersion = versions?.[0],
isProduction = i === 0, and the versions[1] rollback target with logic that
locates the production version from the persisted id (use versions.find and
compute indexes/targets from that match), and update the related areas
(useSpecVersions usage and the blocks around the noted ranges: 54-56, 70-91,
210-257) so production labelling, domain/productionUrl decisions, and rollback
targets come from the persisted production release rather than array order.
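The suggested derivation can be sketched as follows; the productionVersionId field and the SpecVersion shape are assumptions about the app's types:

```typescript
// Sketch: derive deployment state from the persisted production release id
// instead of array position. versions[0] may be an unpromoted preview.
interface SpecVersion {
  readonly id: string;
  readonly versionNumber: number;
}

export function resolveProduction(
  versions: readonly SpecVersion[], // ordered by versionNumber, newest first
  productionVersionId: string | null, // persisted on the server record (assumed field)
): { production?: SpecVersion; rollbackTarget?: SpecVersion } {
  const idx = versions.findIndex((v) => v.id === productionVersionId);
  if (idx === -1) return {}; // nothing promoted yet: no production, no rollback
  // The rollback target is the next-older version relative to *production*,
  // not versions[1].
  return { production: versions[idx], rollbackTarget: versions[idx + 1] };
}
```

With this shape, a newly created but unpromoted version at versions[0] is never mislabelled as production, and rollback always targets the release below the promoted one.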

Comment on lines +47 to +48
const specModule = await import(`@apifold/registry/specs/${entry.id}/spec.json`);
const rawSpec = specModule.default ?? specModule;
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if `@apifold/registry` exports the specs in a way that supports dynamic imports
fd -e json 'spec.json' --exec dirname {} \; | head -20

# Check for any export configuration in the registry package
cat packages/registry/package.json 2>/dev/null | jq '.exports // .main'

Repository: Work90210/APIFold

Length of output: 451


🏁 Script executed:

# Check the file content to see the exact import and context
head -60 apps/web/app/dashboard/specs/new/registry/[id]/page.tsx | tail -20

Repository: Work90210/APIFold

Length of output: 581


🏁 Script executed:

# Check tsconfig to understand how `@apifold/registry` is aliased
cat tsconfig.json | grep -A 5 -B 5 "@apifold" || cat apps/web/tsconfig.json 2>/dev/null | grep -A 5 -B 5 "@apifold"

Repository: Work90210/APIFold

Length of output: 246


🏁 Script executed:

# Check next.config.js for webpack or import handling
cat next.config.js 2>/dev/null | head -50

Repository: Work90210/APIFold

Length of output: 43


🏁 Script executed:

# Check if spec files are directly accessible as JSON
ls -la packages/registry/specs/slack/ 2>/dev/null | head -20

Repository: Work90210/APIFold

Length of output: 289


🏁 Script executed:

# Check the registry package configuration
cat packages/registry/package.json

Repository: Work90210/APIFold

Length of output: 853


🏁 Script executed:

# Check if there's a root tsconfig mapping `@apifold/registry`
find . -name "tsconfig.json" -type f | xargs grep -l "@apifold/registry" 2>/dev/null

Repository: Work90210/APIFold

Length of output: 45


🏁 Script executed:

# Look for any build warnings, errors, or issues related to this import
rg "spec.json|dynamic.*import" apps/web/ -t tsx -t ts -B 2 -A 2

Repository: Work90210/APIFold

Length of output: 88


🏁 Script executed:

# Check if specs are included in any build output or if there's special handling
find . -name ".next" -o -name "dist" | head -5

Repository: Work90210/APIFold

Length of output: 43


🏁 Script executed:

# Search for the import pattern in the codebase using -t ts instead
rg "spec\.json" apps/web/ -A 2 -B 2

Repository: Work90210/APIFold

Length of output: 4320


🏁 Script executed:

```shell
# Check if specs directory is included in the build or distributed
ls -la packages/registry/specs/ | head -20
```

Length of output: 621

🏁 Script executed:

```shell
# Check the registry dist output to see what's actually built
ls -la packages/registry/dist/ 2>/dev/null | head -20
```

Length of output: 43

🏁 Script executed:

```shell
# Look for any build configuration or vite/webpack config
find packages/registry -name "*.config.*" -o -name "tsconfig.json"
```

Length of output: 92

🏁 Script executed:

```shell
# Check if there are any test files or documentation about using specs
rg "registry.*spec" packages/registry/ --type json -B 2 -A 2
```

Length of output: 43

🏁 Script executed:

```shell
# Check registry's tsconfig to understand the build
cat packages/registry/tsconfig.json
```

Length of output: 231

🏁 Script executed:

```shell
# Check if specs are copied or included in the build process
cat packages/registry/package.json | grep -A 10 "scripts"
```

Length of output: 338

🏁 Script executed:

```shell
# Check if specs are actually in the dist output or if there's a build issue
find packages/registry/dist -type f 2>/dev/null | head -20
```

Length of output: 43

🏁 Script executed:

```shell
# Look for any build scripts or configuration that might handle spec files
find . -name "package.json" -exec grep -l "registry.*spec" {} \; 2>/dev/null
```

Length of output: 43

🏁 Script executed:

```shell
# Check the actual build directory structure to see if specs are included
tree -L 3 packages/registry/ 2>/dev/null || find packages/registry -type d | head -20
```

Length of output: 745


🏁 Script executed:

```shell
# Check if specs are included in package distribution by looking at .npmignore or package.json files field
cat packages/registry/.npmignore 2>/dev/null || echo "No .npmignore found"
```

Length of output: 80

🏁 Script executed:

```shell
# Check package.json for a "files" field that might include specs
cat packages/registry/package.json | jq '.files'
```

Length of output: 65

🏁 Script executed:

```shell
# Verify the dist directory is truly empty
ls -la packages/registry/dist/ 2>/dev/null || echo "dist directory is empty or missing"
```

Length of output: 95

🏁 Script executed:

```shell
# Check what's actually in node_modules for `@apifold/registry` (if the workspace is installed)
find node_modules/@apifold/registry -type d -name specs 2>/dev/null || echo "Not found in node_modules"
```

Length of output: 86

🏁 Script executed:

```shell
# Look at the registry package to see if specs should be accessible as part of the package
cat packages/registry/package.json | grep -E '"main"|"exports"|"files"' -A 5
```

Length of output: 228

🏁 Script executed:

```shell
# Check if the registry package is configured to include specs in the published package
# Look at what would be included when the package is published
cat packages/registry/package.json | grep -E '"files"|"exports"|"main"' -B 2 -A 3
```

Length of output: 271


🏁 Script executed:

```shell
# Check if the app's build config has any special handling for `@apifold/registry`
find apps/web -name "next.config.*" -o -name "webpack.config.*" -o -name "*.config.js" | xargs cat 2>/dev/null
```

Length of output: 2351

🏁 Script executed:

```shell
# Check the actual structure - does the import path reference need specs to be in dist or source?
# Look for any build-time logic that might copy specs
grep -r "specs" packages/registry/ --include="*.json" --include="*.ts" --include="*.js" 2>/dev/null | head -20
```

Length of output: 476

🏁 Script executed:

```shell
# Verify the import would fail by checking if `@apifold/registry` actually exposes specs
# In a monorepo context, does pnpm or yarn link the specs?
cat pnpm-workspace.yaml 2>/dev/null || cat lerna.json 2>/dev/null || cat package.json | grep -E '"workspaces"|"pnpm"' -A 5
```

Length of output: 100

🏁 Script executed:

```shell
# Check the registry's index.ts to see if there's a getSpec function or similar
cat packages/registry/src/index.ts
```

Length of output: 4073

🏁 Script executed:

```shell
# Verify if specs are actually accessible via the exported API
grep -A 20 "export" packages/registry/src/index.ts
```

Length of output: 1768

🏁 Script executed:

```shell
# Check how the dynamic import is supposed to work - are specs supposed to be in dist?
ls -la packages/registry/dist/ 2>/dev/null && find packages/registry/dist -type f 2>/dev/null || echo "dist is empty"
```

Length of output: 74

🏁 Script executed:

```shell
# Critical check: Is there any post-build script that copies specs to dist?
cat packages/registry/package.json | jq '.scripts'
```

Length of output: 222

🏁 Script executed:

```shell
# Check if specs are referenced anywhere else in the codebase or if there's a different pattern being used
rg "getSpec\|specs.*import" packages/ apps/ --type ts --type json -B 2 -A 2
```

Length of output: 43

🏁 Script executed:

```shell
# Final check: verify that specs won't be available in production by checking how the package would be distributed
# Look at what files would actually be included in a package
cat packages/registry/package.json | jq '{files, main, exports}'
```

Length of output: 216


The dynamic import will fail in production — specs are not being bundled with the package.

The template literal with entry.id prevents webpack from statically analyzing the import, but the more critical issue is that the specs/ directory is not being distributed with the @apifold/registry package. Since package.json has no files field and the TypeScript build only outputs dist/, the specs won't be available at runtime in production. This works in development (monorepo workspace) but will fail when the package is published or deployed.

Consider:

  • Adding specs to the package distribution (e.g., "files": ["dist", "specs"] and copying specs to dist during build)
  • Or importing spec metadata/catalog from the compiled package instead of dynamic imports
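The second option — importing from a compiled catalog instead of templated dynamic imports — could be sketched as a small codegen step that runs during the package build. This is only a sketch; the function name `buildCatalogSource` and the generated `specCatalog` module shape are assumptions, not code from this PR:

```typescript
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

// Hypothetical build step: inline every specs/<id>/spec.json into a single
// generated TypeScript module, so consumers import the catalog from dist/
// rather than dynamic-importing loose JSON files at runtime.
export function buildCatalogSource(specsDir: string): string {
  const entries = readdirSync(specsDir, { withFileTypes: true })
    .filter((d) => d.isDirectory())
    .map((d) => {
      const raw = readFileSync(join(specsDir, d.name, "spec.json"), "utf8");
      // Re-serialize so the generated module is compact and valid JS
      // regardless of how the source JSON was formatted.
      return `  ${JSON.stringify(d.name)}: ${JSON.stringify(JSON.parse(raw))}`;
    });
  return [
    "// Generated file - do not edit. Built from specs/*/spec.json.",
    "export const specCatalog: Record<string, unknown> = {",
    entries.join(",\n"),
    "};",
    "",
  ].join("\n");
}
```

Because the catalog is an ordinary static export, bundlers can analyze it and nothing outside dist/ needs to exist at runtime.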
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/app/dashboard/specs/new/registry/[id]/page.tsx` around lines 47-48,
the dynamic import using
`import(\`@apifold/registry/specs/${entry.id}/spec.json\`)` (see `specModule`,
`rawSpec`, and `entry.id` in page.tsx) will fail in production because the
`specs/` folder isn’t included in the published package; fix by either adding
the specs to the package distribution (update the package.json `files` to
include "specs" and/or ensure your build copies `specs/` into `dist/` so the
runtime path `@apifold/registry/specs/...` exists) or eliminate the dynamic
runtime import by compiling a spec catalog into the package (export a metadata
map from the compiled `dist` and import that map instead of using
`entry.id`-templated dynamic imports).

Comment thread apps/web/components/layout/theme-toggle.tsx Outdated
Comment on lines +139 to +141
"policies": {},
"checkConstraints": {},
"isRLSEnabled": false

⚠️ Potential issue | 🟠 Major

These tenant tables still have no database-side isolation.

All of the user-scoped tables here are shipping with isRLSEnabled: false. With OAuth credentials and request/response data now living in Postgres, a single missed user_id predicate in app code becomes a cross-tenant read; I’d add RLS policies in a migration rather than relying only on repository filters.

Also applies to: 296-298, 412-414, 611-613, 719-721, 891-893

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/migrations/meta/0000_snapshot.json` around lines 139 - 141,
The tenant/user-scoped tables in the snapshot have isRLSEnabled: false and an
empty policies object, so add a DB migration that enables RLS for each
user-scoped table and creates appropriate row-level policies that enforce
user_id scoping (e.g., USING and WITH CHECK predicates that compare the row's
user_id to the current authenticated user id), then update the snapshot JSON
entries (set isRLSEnabled: true and populate the policies object) so the schema
and runtime migrations remain in sync; look for the keys isRLSEnabled, policies,
and the user_id columns in the affected table definitions to locate where to
change.
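An RLS migration of the kind suggested above might look roughly like the following. This is a hedged sketch, not the repo's actual schema: the table name, the `user_id` column, and the `app.current_user_id` session setting are illustrative assumptions.

```sql
-- Sketch: enable row-level security on a user-scoped table and restrict
-- every read and write to rows owned by the current authenticated user.
-- The application would SET app.current_user_id per connection/transaction.
ALTER TABLE request_logs ENABLE ROW LEVEL SECURITY;

CREATE POLICY request_logs_tenant_isolation ON request_logs
  USING (user_id = current_setting('app.current_user_id')::uuid)
  WITH CHECK (user_id = current_setting('app.current_user_id')::uuid);
```

With a policy like this in place, a missed `user_id` predicate in application code degrades to an empty result set instead of a cross-tenant read.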

Comment on lines +93 to +97
requestBody: input.requestBody ?? null,
responseBody: input.responseBody ?? null,
requestHeaders: input.requestHeaders ?? null,
errorMessage: input.errorMessage ?? null,
toolName: input.toolName ?? null,

⚠️ Potential issue | 🟠 Major

Redact and truncate these fields before persisting them.

This shared insert path now stores request/response payloads and headers exactly as supplied. If any caller passes unsanitized data, Authorization/Cookie headers, OAuth tokens, API keys, PII, and oversized payloads end up in request_logs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/repositories/log.repository.ts` around lines 93 - 97, The
insert path is persisting requestBody, responseBody, requestHeaders,
errorMessage, and toolName verbatim; before writing these fields in the
repository (the shared insert function in log.repository.ts), sanitize and
truncate them: redact sensitive headers (at minimum "authorization" and "cookie"
keys, and any header names matching /token|api[-_]?key/i) from requestHeaders,
remove or mask PII from bodies, and truncate
requestBody/responseBody/errorMessage/toolName to a safe max length (e.g., 2–4
KB) or use a helper like sanitizeAndTruncate(value, maxLen) to perform
masking/truncation; replace the direct assignments of
input.requestBody/responseBody/requestHeaders/errorMessage/toolName with the
sanitized results so nothing sensitive or oversized is persisted.
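One possible shape for the helpers described above, assuming the header-masking and `sanitizeAndTruncate(value, maxLen)` approach from the prompt (names and limits are illustrative, not the repository's actual code):

```typescript
// Illustrative log-sanitization helpers. The header pattern and 4 KB cap
// are assumptions; tune both to the project's actual policy.
const SENSITIVE_HEADER = /(^authorization$|^cookie$|token|api[-_]?key)/i;
const MAX_LEN = 4096;

export function sanitizeHeaders(
  headers: Record<string, string> | null,
): Record<string, string> | null {
  if (!headers) return null;
  return Object.fromEntries(
    Object.entries(headers).map(([k, v]) =>
      SENSITIVE_HEADER.test(k) ? [k, "***"] : [k, v],
    ),
  );
}

export function sanitizeAndTruncate(
  value: string | null,
  maxLen: number = MAX_LEN,
): string | null {
  if (value == null) return null;
  return value.length > maxLen ? `${value.slice(0, maxLen)}…[truncated]` : value;
}
```

Calling these in the shared insert path keeps every write route covered, rather than relying on each caller to pre-sanitize.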

Comment on lines +19 to +23
requestBody: jsonb('request_body'),
responseBody: text('response_body'),
requestHeaders: jsonb('request_headers'),
errorMessage: text('error_message'),
toolName: text('tool_name'),

⚠️ Potential issue | 🟠 Major

Do not persist raw secrets in request_logs.

With the new runtime inserts behind these columns, this table can now store Authorization/Cookie headers, OAuth tokens, and request/response payloads. That turns log views and backups into a long-lived secrets store. Please whitelist/redact sensitive headers and cap or encrypt bodies before writing them here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/schema/request-logs.ts` around lines 19 - 23, The
request_logs schema currently persists raw request_body, response_body and
request_headers (columns requestBody, responseBody, requestHeaders,
errorMessage, toolName); update the code that writes to this table to (1)
whitelist-only headers into requestHeaders and redact/remove sensitive headers
like Authorization, Cookie, Set-Cookie, and any OAuth tokens before insert, (2)
truncate or cap requestBody/responseBody to a safe maximum size or store a
hash/pointer and encrypt large bodies before storing, and (3) redact secrets
found in errorMessage/toolName fields; implement these transformations in the
centralized insert/update path that populates these columns so all runtime
inserts are filtered.

Comment thread apps/web/lib/validation/spec-version.schema.ts
Comment thread packages/types/src/log.ts
Comment on lines +9 to +13
readonly requestBody?: Record<string, unknown> | null;
readonly responseBody?: string | null;
readonly requestHeaders?: Record<string, string> | null;
readonly errorMessage?: string | null;
readonly toolName?: string | null;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== Potential assignments to new log fields =="
rg -n -C3 --type=ts --type=tsx '\b(requestHeaders|requestBody|responseBody|errorMessage)\s*:'

echo
echo "== Redaction/sanitization helpers in repo =="
rg -n -C3 --type=ts --type=tsx '\b(redact|sanitize|mask)(\w+)?\b'

echo
echo "== Direct sensitive header references near logging paths =="
rg -n -C3 --type=ts --type=tsx '\b(authorization|cookie|set-cookie|x-api-key)\b'
```

Length of output: 151

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== Potential assignments to new log fields =="
rg -n -C3 --type=ts '\b(requestHeaders|requestBody|responseBody|errorMessage)\s*:'

echo
echo "== Redaction/sanitization helpers in repo =="
rg -n -C3 --type=ts '\b(redact|sanitize|mask)(\w+)?\b'

echo
echo "== Direct sensitive header references near logging paths =="
rg -n -C3 --type=ts '\b(authorization|cookie|set-cookie|x-api-key)\b'
```

Length of output: 36214

🏁 Script executed:

```shell
# First, let's read the target file to understand the full contract
cat -n packages/types/src/log.ts
```

Length of output: 1268

🏁 Script executed:

```shell
# Check if log.ts is the only file or if there are related files
fd -e ts -e js | grep -i log | head -20
```

Length of output: 363

🏁 Script executed:

```shell
# Find all places that create request logs or use CreateRequestLogInput
rg -n 'CreateRequestLogInput' --type=ts

echo "---"

# Find where logs are actually written/persisted
rg -n 'createLog|insertLog|insert.*log' --type=ts -i

echo "---"

# Check the request logger middleware
cat -n apps/runtime/src/middleware/request-logger.ts | head -100
```

Length of output: 3533


🏁 Script executed:

```shell
# Search for where requestBody is being assigned in logging contexts
rg -B5 -A5 'requestBody.*=' --type=ts apps/web/app/api/servers/[id]/test/route.ts
```

Length of output: 43

🏁 Script executed:

```shell
# Check if there are other endpoints/routes that create logs
fd -e ts | xargs grep -l 'createLog\|requestLog\|CreateRequestLogInput' | head -20
```

Length of output: 412

🏁 Script executed:

```shell
# Check the protocol-handler.ts INSERT statement - what fields are being populated?
sed -n '225,250p' apps/runtime/src/mcp/protocol-handler.ts
```

Length of output: 895

🏁 Script executed:

```shell
# See if there are other places where request logs are created
rg -n 'logRepository\.create|requestLogs\.create' --type=ts -B5 -A10
```

Length of output: 43

🏁 Script executed:

```shell
# Check what input.arguments contains - understand if it's sensitive
rg -n 'input\.arguments' --type=ts apps/web/app/api/servers/[id]/test/route.ts -B10 -A2
```

Length of output: 1190


Fix incomplete sanitization of request body in logging.

While requestHeaders are already being sanitized (masking Authorization and X-API-Key with ***), requestBody is currently logged directly without redaction. Line 100 in apps/web/app/api/servers/[id]/test/route.ts passes input.arguments through to the log contract as-is, which can expose API credentials or PII embedded in user-provided parameters. Redact or truncate sensitive fields from request bodies before logging (similar to the existing 50K truncation for responseBody).
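Redacting a request body usually needs a recursive walk, since credentials can sit at any depth of user-provided arguments. A minimal sketch, assuming the sensitive-key list below (it is an illustration, not the repo's actual policy):

```typescript
// Illustrative deep redaction for logged request bodies: any key matching
// the pattern is masked, everything else is passed through unchanged.
const SENSITIVE_KEY = /(password|secret|token|api[-_]?key|authorization)/i;

export function redactBody(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactBody);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEY.test(k) ? [k, "***"] : [k, redactBody(v)],
      ),
    );
  }
  return value;
}
```

Applying something like `redactBody(input.arguments)` before building the log record would close the gap noted above, and it composes naturally with a size cap like the existing 50K responseBody truncation.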

- Update error messages from :slug to :identifier in transport handlers
- Remove unused isValidSlug function and constants from both SSE and
  streamable-http transports
@coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
apps/runtime/src/transports/streamable-http.ts (1)

75-78: Implement the advertised slug fallback.

This branch is currently endpoint-id-or-slug, not endpoint-id-then-slug: once identifier matches the 12-hex pattern, getBySlug() is never attempted. If you want true fallback behavior, do the endpoint-id lookup first and only fall back on miss. The same branch exists in apps/runtime/src/transports/sse.ts (Lines 26-29).

♻️ Proposed fix
```diff
-    const server = /^[a-f0-9]{12}$/.test(identifier)
-      ? registry.getByEndpointId(identifier)
-      : registry.getBySlug(identifier);
+    const server =
+      (/^[a-f0-9]{12}$/.test(identifier) ? registry.getByEndpointId(identifier) : undefined) ??
+      registry.getBySlug(identifier);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/streamable-http.ts` around lines 75 - 78, The
current logic chooses getByEndpointId or getBySlug based solely on the regex
match, preventing a true fallback; change it to try endpoint-id lookup first and
only call getBySlug if that returns a falsy value. Concretely, for identifier:
if /^[a-f0-9]{12}$/.test(identifier) then call
registry.getByEndpointId(identifier) into a temp (e.g., server) and if that is
falsy call registry.getBySlug(identifier); otherwise call
registry.getBySlug(identifier) directly; apply the same fix to the analogous
branch using registry.getByEndpointId/getBySlug in the SSE transport.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/runtime/src/transports/sse.ts`:
- Around line 22-23: The SSE and message routes mounted as
router.get('/mcp/:identifier/sse', ...) (and the corresponding POST / streamable
HTTP handler) are reachable by custom-domain rewrites before createMcpAuth() and
the per-server rate limiter are applied, so requests bypass auth and throttling;
fix by ensuring the MCP auth and limiter run before domain-router rewrites or by
attaching createMcpAuth() and the per-server limiter directly to these route
handlers (the router.get('/mcp/:identifier/sse') and the POST stream handler) so
that auth+rate-limit middleware execute for requests rewritten from custom
domains; adjust mounting order in server.ts or add the middleware to the routes
in apps/runtime/src/transports/sse.ts (and the message/POST handlers) to
guarantee protection.

---

Nitpick comments:
In `@apps/runtime/src/transports/streamable-http.ts`:
- Around line 75-78: The current logic chooses getByEndpointId or getBySlug
based solely on the regex match, preventing a true fallback; change it to try
endpoint-id lookup first and only call getBySlug if that returns a falsy value.
Concretely, for identifier: if /^[a-f0-9]{12}$/.test(identifier) then call
registry.getByEndpointId(identifier) into a temp (e.g., server) and if that is
falsy call registry.getBySlug(identifier); otherwise call
registry.getBySlug(identifier) directly; apply the same fix to the analogous
branch using registry.getByEndpointId/getBySlug in the SSE transport.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2a939d53-2094-45d8-be56-12828d3d46d1

📥 Commits

Reviewing files that changed from the base of the PR and between 9dfd2df and a0d00ad.

📒 Files selected for processing (2)
  • apps/runtime/src/transports/sse.ts
  • apps/runtime/src/transports/streamable-http.ts

Comment on lines +22 to +23
// SSE endpoint — routes by endpoint ID (12 hex chars) with slug fallback
router.get('/mcp/:identifier/sse', (req: Request, res: Response) => {

⚠️ Potential issue | 🔴 Critical

Custom-domain /sse and /message bypass auth and rate limiting.

apps/runtime/src/transports/domain-router.ts (Lines 46-70) rewrites custom-domain /sse and /message requests into these routes, but apps/runtime/src/server.ts (Lines 56-79) mounts createMcpAuth() and the per-server limiter on /mcp/:slug before that rewrite happens. Those requests therefore skip both middlewares and land here unauthenticated and unthrottled. The same ordering also leaves custom-domain POST / exposed for the streamable HTTP transport.

Also applies to: 76-77

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/transports/sse.ts` around lines 22 - 23, The SSE and message
routes mounted as router.get('/mcp/:identifier/sse', ...) (and the corresponding
POST / streamable HTTP handler) are reachable by custom-domain rewrites before
createMcpAuth() and the per-server rate limiter are applied, so requests bypass
auth and throttling; fix by ensuring the MCP auth and limiter run before
domain-router rewrites or by attaching createMcpAuth() and the per-server
limiter directly to these route handlers (the router.get('/mcp/:identifier/sse')
and the POST stream handler) so that auth+rate-limit middleware execute for
requests rewritten from custom domains; adjust mounting order in server.ts or
add the middleware to the routes in apps/runtime/src/transports/sse.ts (and the
message/POST handlers) to guarantee protection.
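The ordering bug described above can be modeled without Express: middleware mounted by path prefix never fires for a request whose path is rewritten into that prefix only after the mount check ran. A minimal sketch (all names here are illustrative, not the runtime's actual code):

```typescript
// Minimal model of prefix-mounted middleware vs. a later path rewrite.
type Req = { path: string; authed: boolean };
type Middleware = (req: Req) => void;

// Run `mw` only when the request path starts with `prefix`,
// mimicking app.use('/mcp', mw) mounting semantics.
const mount = (prefix: string, mw: Middleware): Middleware => (req) => {
  if (req.path.startsWith(prefix)) mw(req);
};

const auth: Middleware = (req) => {
  req.authed = true;
};

// Custom-domain rewrite: "/sse" becomes an internal MCP route.
const domainRewrite: Middleware = (req) => {
  if (req.path === "/sse") req.path = "/mcp/abc123def456/sse";
};

export function run(chain: Middleware[], path: string): Req {
  const req: Req = { path, authed: false };
  for (const mw of chain) mw(req);
  return req;
}
```

With the buggy order (`[mount("/mcp", auth), domainRewrite]`) a custom-domain `/sse` request never passes through auth; running the rewrite first (or attaching auth directly to the SSE/message handlers) closes the hole.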

- Fix DbClient type mismatch between runtime and protocol handler
- Wrap fetchProfileToolIds in try/catch to prevent unhandled exceptions
- Fix theme toggle flash by initializing state from localStorage
- Add header sanitization in LogRepository (redact auth/cookie/api-key)
- Use byte-accurate size check for rawSpec validation (TextEncoder)
- Add TODO for production version source (should use spec_releases)
@coderabbitai Bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (3)
apps/web/lib/db/repositories/log.repository.ts (1)

17-33: ⚠️ Potential issue | 🟠 Major

Harden log sanitization scope and size limits before persistence.

Line 17–33 only redacts a narrow header subset, and Line 106–110 still stores requestBody / responseBody / errorMessage / toolName without masking or byte limits. This can still persist secrets/PII and oversized payloads.

💡 Proposed patch
```diff
-const REDACTED_HEADERS = new Set(['authorization', 'cookie', 'x-api-key']);
+const REDACTED_HEADER_PATTERN = /^(authorization|cookie|.*token.*|.*api[-_]?key.*)$/i;
 const REDACTED_VALUE = '***';
+const MAX_LOG_TEXT_BYTES = 4096;
+const textEncoder = new TextEncoder();

 export class LogRepository extends BaseRepository<
@@
   private sanitizeHeaders(
     headers: Record<string, string> | null | undefined,
   ): Record<string, string> | null {
     if (!headers) return null;
     return Object.fromEntries(
       Object.entries(headers).map(([key, value]) =>
-        REDACTED_HEADERS.has(key.toLowerCase()) ? [key, REDACTED_VALUE] : [key, value],
+        REDACTED_HEADER_PATTERN.test(key) ? [key, REDACTED_VALUE] : [key, value],
       ),
     );
   }
+
+  private truncateText(value: string | null | undefined): string | null {
+    if (value == null) return null;
+    const bytes = textEncoder.encode(value);
+    if (bytes.byteLength <= MAX_LOG_TEXT_BYTES) return value;
+    return new TextDecoder().decode(bytes.slice(0, MAX_LOG_TEXT_BYTES));
+  }
+
+  private boundBody(
+    body: Record<string, unknown> | null | undefined,
+  ): Record<string, unknown> | null {
+    if (!body) return null;
+    const serialized = JSON.stringify(body);
+    if (textEncoder.encode(serialized).byteLength <= MAX_LOG_TEXT_BYTES) return body;
+    return { truncated: true };
+  }
@@
-          requestBody: input.requestBody ?? null,
-          responseBody: input.responseBody ?? null,
+          requestBody: this.boundBody(input.requestBody),
+          responseBody: this.truncateText(input.responseBody),
           requestHeaders: this.sanitizeHeaders(input.requestHeaders as Record<string, string> | null | undefined),
-          errorMessage: input.errorMessage ?? null,
-          toolName: input.toolName ?? null,
+          errorMessage: this.truncateText(input.errorMessage),
+          toolName: this.truncateText(input.toolName),
```
Also applies to: 106-110

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/lib/db/repositories/log.repository.ts` around lines 17 - 33, The
current LogRepository.sanitizeHeaders uses a small REDACTED_HEADERS set (and
REDACTED_VALUE) and does not protect
requestBody/responseBody/errorMessage/toolName; update sanitizeHeaders to
include a broader list of sensitive header keys (e.g., any header names
containing "auth", "token", "cookie", "credential", "secret", "api", "key",
"password") and normalize keys before checking REDACTED_HEADERS, and keep using
REDACTED_VALUE for masked values; additionally, before persisting fields
requestBody, responseBody, errorMessage, and toolName (where they are assigned
in LogRepository), apply masking/truncation: detect binary vs text, remove or
replace known secret patterns (tokens, emails, credit-card-like numbers) and
enforce a configurable byte limit (e.g., truncate to MAX_LOG_BYTES and append a
truncation marker) and replace overly large or redacted content with a standard
placeholder, ensuring all changes reference REDACTED_HEADERS/REDACTED_VALUE,
sanitizeHeaders, and the requestBody/responseBody/errorMessage/toolName
assignments so secrets and oversized payloads are never stored raw.
apps/web/components/layout/theme-toggle.tsx (1)

36-44: ⚠️ Potential issue | 🟡 Minor

Toggle still reads from DOM instead of React state.

The toggle function derives next from isDarkMode() which reads the DOM directly. This can cause state/DOM desync during rapid clicks or if React hasn't re-rendered yet. Derive next from React state using the functional updater pattern.

🔧 Suggested fix
```diff
 const toggle = useCallback(() => {
-  const next = !isDarkMode();
-  setDark(next);
-  try {
-    localStorage.setItem("theme", next ? "dark" : "light");
-  } catch {
-    // localStorage may be unavailable
-  }
+  setDark((prev) => {
+    const next = !prev;
+    try {
+      localStorage.setItem("theme", next ? "dark" : "light");
+    } catch {
+      // localStorage may be unavailable
+    }
+    return next;
+  });
 }, []);
```

With this change, the isDarkMode() helper (lines 7-9) becomes unused and can be removed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/components/layout/theme-toggle.tsx` around lines 36 - 44, The toggle
function currently computes next via isDarkMode() (which reads the DOM) causing
potential state/DOM desync; change toggle to use the React state updater (e.g.,
setDark(prev => { const next = !prev; try { localStorage.setItem("theme", next ?
"dark" : "light"); } catch {} return next; })) so next is derived from prev
state rather than DOM, and then remove the now-unused isDarkMode() helper;
ensure references to setDark and localStorage.setItem remain intact.
apps/web/app/dashboard/servers/[id]/page.tsx (1)

54-58: ⚠️ Potential issue | 🟠 Major

Stop deriving production from array order.

versions[0], versions[1], and i === 0 still drive the production card, rollback target, and deployment badges. As soon as a newer preview exists, this page will label the wrong release as production and can roll back to the wrong version.

Also applies to: 79-90, 171-175, 213-257

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `apps/web/app/dashboard/servers/[id]/page.tsx` around lines 54-58, the code
incorrectly treats versions[0] as the production release (variables and UI logic
using latestVersion, versions[0], versions[1], and i === 0), causing preview
releases to be mislabelled; replace these array-order checks with an explicit
production selection: locate where latestVersion = versions?.[0] and all places
that rely on i === 0 or versions[0]/versions[1], and change them to select the
production release by searching the versions array for an explicit flag/property
(e.g., v.promoted === true or v.release_type === 'production') or use the real
promoted/spec_release value once available (fallback to the previous behavior
only if no explicit production marker exists); update productionUrl construction
to use that selected production release's slug/host if needed and ensure
rollback/badge/production-card logic references this explicit productionRelease
variable instead of array index positions.
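The explicit-selection idea in the prompt above can be sketched in a few lines. This assumes a `promoted` flag on each version; the real field name depends on the actual schema (the review suggests `promoted` or `release_type === 'production'`):

```typescript
// Sketch: pick the production release by an explicit marker rather than
// by array position. `promoted` is an assumed field name.
interface Version {
  id: string;
  promoted?: boolean;
  createdAt: string;
}

export function selectProduction(versions: Version[]): Version | undefined {
  // Prefer the explicitly promoted release; fall back to the newest entry
  // (the previous behavior) only when no production marker exists at all.
  return versions.find((v) => v.promoted) ?? versions[0];
}
```

The production card, rollback target, and deployment badges would then all reference the result of `selectProduction(versions)` instead of `versions[0]` or `i === 0`.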
🧹 Nitpick comments (2)
apps/runtime/src/mcp/protocol-handler.ts (2)

161-174: Past review concern addressed; consider Set for lookup consistency.

The try/catch around fetchProfileToolIds correctly addresses the previous review finding. However, line 172 uses includes() (O(n)) while handleToolsList uses a Set (O(1)). For consistency and slight efficiency gain with larger profiles:

♻️ Optional: Use Set for O(1) lookup
```diff
     try {
       allowedToolIds = this.fetchProfileToolIds
         ? await this.fetchProfileToolIds(server.id)
         : null;
     } catch (err) {
       this.logger.error({ err, slug: session.slug }, 'Failed to fetch profile tool IDs');
       return jsonRpcError(req.id, -32603, 'Failed to load profile');
     }

-    if (allowedToolIds && !allowedToolIds.includes(tool.id)) {
+    if (allowedToolIds && !new Set(allowedToolIds).has(tool.id)) {
       return jsonRpcError(req.id, -32002, 'Tool not found');
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 161 - 174, Convert the
readonly string[] lookup to a Set for O(1) membership checks: after calling
fetchProfileToolIds (symbol: fetchProfileToolIds) create a Set<string> (e.g.,
allowedToolIdsSet) from allowedToolIds when it is non-null and then replace the
includes() check on tool.id with allowedToolIdsSet.has(tool.id); update the
local variable usage around allowedToolIds and the membership test (currently
using includes) so the code mirrors handleToolsList's Set-based lookup.

117-124: Consider explicit try/catch for profile fetch consistency.

Unlike handleToolsCall (lines 163-170), fetchProfileToolIds here relies on the outer catch block. If it throws, the error message will be "Failed to load tools" rather than a more specific profile-related message. This inconsistency could complicate debugging.

♻️ Optional: Add explicit handling for profile fetch failure
```diff
     let tools = [...toolMap.values()];

     // Profile-based filtering: only show tools allowed by the active profile
-    const allowedToolIds = this.fetchProfileToolIds
-      ? await this.fetchProfileToolIds(server.id)
-      : null;
+    let allowedToolIds: readonly string[] | null = null;
+    if (this.fetchProfileToolIds) {
+      try {
+        allowedToolIds = await this.fetchProfileToolIds(server.id);
+      } catch (err) {
+        this.logger.error({ err, slug: session.slug }, 'Failed to fetch profile tool IDs');
+        return jsonRpcError(req.id, -32603, 'Failed to load profile');
+      }
+    }

     if (allowedToolIds) {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/runtime/src/mcp/protocol-handler.ts` around lines 117 - 124, Wrap the
call to this.fetchProfileToolIds(server.id) in its own try/catch so
profile-fetch failures produce a clear, specific error instead of being subsumed
by the outer "Failed to load tools" catch; specifically, around the
fetchProfileToolIds invocation capture any thrown error, log or rethrow a
descriptive profile-related message (e.g., "Failed to load profile tool ids")
and only set allowedToolIds/allowedSet and apply tools = tools.filter((t) =>
allowedSet.has(t.id)) when the fetch succeeds.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@apps/runtime/src/mcp/protocol-handler.ts`:
- Around line 224-244: writeRequestLog is inserting the wrong values into
columns: the SQL currently writes tool name into the method column and leaves
tool_name NULL; fix by updating writeRequestLog to accept an optional
errorMessage?: string, change the INSERT to explicitly include tool_name and
error_message in the column list, and reorder the VALUES array so method
receives the HTTP method and tool_name receives the tool name (e.g., VALUES
($1,..., $6 method, $7 tool_name, $8 status_code, $9 duration_ms, $10
error_message)), then update all call sites (the places calling writeRequestLog
/ the call around line 218) to pass the correct method and toolName arguments
and supply null or the actual error string for errorMessage when applicable so
error_message is populated on failures.
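The column/value alignment described above can be sketched as a small helper that builds the parameter array in the same order as the column list, so `method` and `tool_name` cannot be swapped silently. This is a hypothetical sketch: the row shape, table name, and column set are assumptions based on the review comment, not the actual `writeRequestLog` implementation.

```typescript
// Hypothetical row shape for a request-log entry (names assumed).
interface RequestLogRow {
  serverId: string;
  method: string;           // the HTTP method, e.g. "POST"
  toolName: string | null;  // the MCP tool name, if any
  statusCode: number;
  durationMs: number;
  errorMessage?: string;    // populated on failures, null otherwise
}

// Column list and VALUES array are built side by side, in the same order,
// so a positional mix-up like "tool name in the method column" is visible
// at a glance.
const REQUEST_LOG_INSERT = `
  INSERT INTO request_logs
    (server_id, method, tool_name, status_code, duration_ms, error_message)
  VALUES ($1, $2, $3, $4, $5, $6)`;

function buildRequestLogParams(row: RequestLogRow): unknown[] {
  return [
    row.serverId,             // $1 -> server_id
    row.method,               // $2 -> method
    row.toolName,             // $3 -> tool_name
    row.statusCode,           // $4 -> status_code
    row.durationMs,           // $5 -> duration_ms
    row.errorMessage ?? null, // $6 -> error_message
  ];
}
```

Call sites would then pass `method: "POST"` and the tool name separately, with `errorMessage` supplied only on failure.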

In `@apps/web/app/dashboard/servers/[id]/page.tsx`:
- Around line 77-78: promoteVersion.isPending is used globally which causes
unrelated buttons to show the wrong pending state; instead introduce local UI
state (e.g., activeAction: 'promote' | 'rollback' | 'rowAction' | null and
activeVersionId: string | null) and set these in each onClick (for
promoteVersion.mutate, rollbackVersion.mutate and each row action) before
calling the mutation, then compute button disabled/text by comparing the local
activeAction/activeVersionId to the current button's action and versionId (e.g.,
disabled={activeAction === 'promote' && activeVersionId === version.id} and
label={activeAction === 'promote' && activeVersionId === version.id ?
'Promoting...' : 'Promote'}), and clear the local state in the mutation's
onSettled/onSuccess/onError handlers so only the clicked action shows pending
feedback; update the handlers referenced (promoteVersion, rollbackVersion and
the row action click handlers) accordingly.
- Line 5: The import list on the top of the file includes an unused symbol
Rocket which triggers ESLint; remove Rocket from the named imports in the import
statement (the one importing Radio, Shield, ShieldOff, Gauge, Globe, Wrench,
Terminal, BarChart3, ChevronRight, ExternalLink, GitBranch, Rocket) so the
import becomes only the used icons. Ensure no other references to Rocket remain
in this module.
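The per-action pending state described above can be sketched as a pair of pure helpers that a component would drive via local state. This is a sketch, not the page's actual code: `activeAction`/`activeVersionId` come from the review comment, everything else is assumed.

```typescript
// Hypothetical pending-state shape (names from the review comment).
type VersionAction = "promote" | "rollback" | "rowAction";

interface PendingState {
  activeAction: VersionAction | null;
  activeVersionId: string | null;
}

// True only for the exact button whose mutation is in flight, instead of
// the shared promoteVersion.isPending flag that disables every button.
function isButtonPending(
  state: PendingState,
  action: VersionAction,
  versionId: string,
): boolean {
  return state.activeAction === action && state.activeVersionId === versionId;
}

function buttonLabel(
  state: PendingState,
  action: "promote" | "rollback",
  versionId: string,
): string {
  if (isButtonPending(state, action, versionId)) {
    return action === "promote" ? "Promoting..." : "Rolling back...";
  }
  return action === "promote" ? "Promote" : "Roll back";
}
```

In the component, each `onClick` would set `{ activeAction, activeVersionId }` before calling `mutate`, and the mutation's `onSettled` handler would reset both to `null`.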

---

Duplicate comments:
In `@apps/web/app/dashboard/servers/[id]/page.tsx`:
- Around line 54-58: The code incorrectly treats versions[0] as the production
release (variables and UI logic using latestVersion, versions[0], versions[1],
and i === 0), causing preview releases to be mislabelled; replace these
array-order checks with an explicit production selection: locate where
latestVersion = versions?.[0] and all places that rely on i === 0 or
versions[0]/versions[1], and change them to select the production release by
searching the versions array for an explicit flag/property (e.g., v.promoted ===
true or v.release_type === 'production') or use the real promoted/spec_release
value once available (fallback to the previous behavior only if no explicit
production marker exists); update productionUrl construction to use that
selected production release's slug/host if needed and ensure
rollback/badge/production-card logic references this explicit productionRelease
variable instead of array index positions.

In `@apps/web/components/layout/theme-toggle.tsx`:
- Around line 36-44: The toggle function currently computes next via
isDarkMode() (which reads the DOM) causing potential state/DOM desync; change
toggle to use the React state updater (e.g., setDark(prev => { const next =
!prev; try { localStorage.setItem("theme", next ? "dark" : "light"); } catch {}
return next; })) so next is derived from prev state rather than DOM, and then
remove the now-unused isDarkMode() helper; ensure references to setDark and
localStorage.setItem remain intact.

In `@apps/web/lib/db/repositories/log.repository.ts`:
- Around line 17-33: The current LogRepository.sanitizeHeaders uses a small
REDACTED_HEADERS set (and REDACTED_VALUE) and does not protect
requestBody/responseBody/errorMessage/toolName; update sanitizeHeaders to
include a broader list of sensitive header keys (e.g., any header names
containing "auth", "token", "cookie", "credential", "secret", "api", "key",
"password") and normalize keys before checking REDACTED_HEADERS, and keep using
REDACTED_VALUE for masked values; additionally, before persisting fields
requestBody, responseBody, errorMessage, and toolName (where they are assigned
in LogRepository), apply masking/truncation: detect binary vs text, remove or
replace known secret patterns (tokens, emails, credit-card-like numbers) and
enforce a configurable byte limit (e.g., truncate to MAX_LOG_BYTES and append a
truncation marker) and replace overly large or redacted content with a standard
placeholder, ensuring all changes reference REDACTED_HEADERS/REDACTED_VALUE,
sanitizeHeaders, and the requestBody/responseBody/errorMessage/toolName
assignments so secrets and oversized payloads are never stored raw.
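The broader redaction and size-limiting described above can be sketched as two pure helpers. This is an illustrative sketch only: the substring list and byte limit are assumptions from the review comment, and `REDACTED_VALUE` mirrors the repository's existing constant rather than quoting it.

```typescript
const REDACTED_VALUE = "[REDACTED]";
// Any header whose normalized name contains one of these is masked.
const SENSITIVE_SUBSTRINGS = [
  "auth", "token", "cookie", "credential", "secret", "api", "key", "password",
];
const MAX_LOG_BYTES = 4096; // assumed default; should be configurable

function sanitizeHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    const normalized = name.toLowerCase(); // normalize before matching
    const sensitive = SENSITIVE_SUBSTRINGS.some((s) => normalized.includes(s));
    out[name] = sensitive ? REDACTED_VALUE : value;
  }
  return out;
}

// Enforce a size limit on bodies before they are persisted. Slices by
// characters as an approximation of the byte limit, which is adequate
// for ASCII-dominated request/response logs.
function truncateForLog(body: string, maxBytes: number = MAX_LOG_BYTES): string {
  if (Buffer.byteLength(body, "utf8") <= maxBytes) return body;
  return body.slice(0, maxBytes) + "...[truncated]";
}
```

`requestBody`, `responseBody`, `errorMessage`, and `toolName` would each pass through `truncateForLog` (plus any secret-pattern masking) before assignment.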

---

Nitpick comments:
In `@apps/runtime/src/mcp/protocol-handler.ts`:
- Around line 161-174: Convert the readonly string[] lookup to a Set for O(1)
membership checks: after calling fetchProfileToolIds (symbol:
fetchProfileToolIds) create a Set<string> (e.g., allowedToolIdsSet) from
allowedToolIds when it is non-null and then replace the includes() check on
tool.id with allowedToolIdsSet.has(tool.id); update the local variable usage
around allowedToolIds and the membership test (currently using includes) so the
code mirrors handleToolsList's Set-based lookup.
- Around line 117-124: Wrap the call to this.fetchProfileToolIds(server.id) in
its own try/catch so profile-fetch failures produce a clear, specific error
instead of being subsumed by the outer "Failed to load tools" catch;
specifically, around the fetchProfileToolIds invocation capture any thrown
error, log or rethrow a descriptive profile-related message (e.g., "Failed to
load profile tool ids") and only set allowedToolIds/allowedSet and apply tools =
tools.filter((t) => allowedSet.has(t.id)) when the fetch succeeds.
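The Set-based lookup the nitpick describes can be sketched as a small filter function. The tool shape and function name here are assumptions; only the `includes()`-to-`Set` conversion itself comes from the review comment.

```typescript
// Minimal stand-in for an MCP tool record (shape assumed).
interface McpToolLike {
  id: string;
}

function filterAllowedTools(
  tools: McpToolLike[],
  allowedToolIds: readonly string[] | null,
): McpToolLike[] {
  if (!allowedToolIds) return tools; // no profile restriction active
  // O(1) membership checks instead of O(n) includes() per tool.
  const allowedToolIdsSet = new Set(allowedToolIds);
  return tools.filter((tool) => allowedToolIdsSet.has(tool.id));
}
```

For small tool lists the difference is negligible, but it keeps `handleToolsCall` consistent with `handleToolsList`'s existing Set-based lookup.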

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 6c769e4d-f0c8-4beb-adff-283616a4caa6

📥 Commits

Reviewing files that changed from the base of the PR and between a0d00ad and d2b0118.

📒 Files selected for processing (6)
  • apps/runtime/src/index.ts
  • apps/runtime/src/mcp/protocol-handler.ts
  • apps/web/app/dashboard/servers/[id]/page.tsx
  • apps/web/components/layout/theme-toggle.tsx
  • apps/web/lib/db/repositories/log.repository.ts
  • apps/web/lib/validation/spec-version.schema.ts
✅ Files skipped from review due to trivial changes (1)
  • apps/web/lib/validation/spec-version.schema.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/runtime/src/index.ts

Comment thread apps/runtime/src/mcp/protocol-handler.ts
Comment thread apps/web/app/dashboard/servers/[id]/page.tsx Outdated
Comment on lines +77 to +78
disabled={promoteVersion.isPending}
onClick={() => {
⚠️ Potential issue | 🟡 Minor

Scope the pending label to the action that was clicked.

promoteVersion.isPending is shared by rollback and every row action, so one mutation makes unrelated buttons flip copy (Rolling back... / Promoting...) and disable together. Track the active {action, versionId} locally so the feedback stays accurate.

Also applies to: 93-94, 250-260

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@apps/web/app/dashboard/servers/[id]/page.tsx` around lines 77 - 78,
promoteVersion.isPending is used globally which causes unrelated buttons to show
the wrong pending state; instead introduce local UI state (e.g., activeAction:
'promote' | 'rollback' | 'rowAction' | null and activeVersionId: string | null)
and set these in each onClick (for promoteVersion.mutate, rollbackVersion.mutate
and each row action) before calling the mutation, then compute button
disabled/text by comparing the local activeAction/activeVersionId to the current
button's action and versionId (e.g., disabled={activeAction === 'promote' &&
activeVersionId === version.id} and label={activeAction === 'promote' &&
activeVersionId === version.id ? 'Promoting...' : 'Promote'}), and clear the
local state in the mutation's onSettled/onSuccess/onError handlers so only the
clicked action shows pending feedback; update the handlers referenced
(promoteVersion, rollbackVersion and the row action click handlers) accordingly.

KyleFuehri added 6 commits March 23, 2026 20:23
- Remove unused Rocket import from server overview page
- Store 'POST' in method column and tool name in tool_name column
  (previously tool name was incorrectly stored in method)
- Remove unused imports: uuidParam, requestLogs, BarChart, Bar, Badge,
  RegistryEntry, Copy, cn, McpTool
- Remove unused variables: handleDeploy, isDeploying, deploying state
- Prefix unused params with underscore: _index
- All packages lint and build cleanly
- Add .eslintrc.cjs extending @apifold/eslint-config/base
- Remove unused RegistryMeta import
- Fix import ordering in validate.ts
- Add --ext .ts flag to lint command for ESLint 8 compatibility
@github-actions
Contributor

Coverage Report for packages/transformer

| Status | Category | Percentage | Covered / Total |
| --- | --- | --- | --- |
| 🔵 | Lines | 96.34% (🎯 80%) | 607 / 630 |
| 🔵 | Statements | 96.34% (🎯 80%) | 607 / 630 |
| 🔵 | Functions | 97.36% (🎯 80%) | 37 / 38 |
| 🔵 | Branches | 94.84% (🎯 80%) | 239 / 252 |

File Coverage (changed files)

| File | Stmts | Branches | Functions | Lines | Uncovered Lines |
| --- | --- | --- | --- | --- | --- |
| packages/transformer/src/convert.ts | 92% | 100% | 66.66% | 92% | 39-40 |
| packages/transformer/src/index.ts | 100% | 100% | 100% | 100% | |
Generated in workflow #162 for commit 7747a0c by the Vitest Coverage Report Action

@Work90210 Work90210 merged commit 1976c8b into master Mar 23, 2026
9 of 11 checks passed
@Work90210 Work90210 deleted the sprint-9-10/cli-oauth-complete branch March 28, 2026 15:12