feat: extract Relay, Connections, and Backfill services from Console monolith #440
jeevanpillay merged 441 commits into main from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough
This PR introduces a multi-service webhook and connection architecture, consolidating gateway operations into standalone Hono services. It establishes three new apps (gateway, connections, backfill) for webhook routing, OAuth management, and event-driven backfill; consolidates database schema to PlanetScale with new gateway tables; implements provider-specific webhook handlers; and removes legacy OAuth and webhook utilities from console packages. Major refactoring shifts auth flows to use a connections service, replaces monolithic backfill with event-driven workers, and updates console to consume gateway-managed installations directly.
Changes
Estimated Code Review Effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 19
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/gateway/.env.example`:
- Line 15: Update the ENCRYPTION_KEY entry in .env.example to require a 256-bit
AES key encoded as exactly 64 hexadecimal characters (32 bytes) instead of “32+
hex chars”; change the comment for ENCRYPTION_KEY to state “AES-256-GCM key —
exactly 64 hex chars (32 bytes)” and optionally add a short hint on how to
generate one (e.g., openssl rand -hex 32) so callers know to produce the correct
length.
In `@apps/gateway/src/app.ts`:
- Around line 9-11: The tenantMiddleware exported from middleware/tenant.ts is
never applied to the mounted route groups (connections and admin), so either
import and apply tenantMiddleware when mounting those routers (e.g., ensure
apps/gateway/src/app.ts calls app.route("/connections", tenantMiddleware,
connections) and app.route("/admin", tenantMiddleware, admin)) or, if tenant
isolation is meant to be enforced inside the routers, add tenantMiddleware to
the specific route handlers in routes/connections.ts and routes/admin.ts (while
preserving intentional unauthenticated endpoints like /admin/health,
/admin/delivery-status, and OAuth callbacks under /connections). If the
middleware is intentionally unused, add a short comment in
apps/gateway/src/app.ts explaining why.
In `@apps/gateway/src/env.ts`:
- Around line 1-2: Replace the Next.js-specific import and usage: swap imports
from "@t3-oss/env-nextjs" and "@t3-oss/env-nextjs/presets-zod" to
"@t3-oss/env-core" (and its presets if needed), update any createEnv call to use
the env-core API (e.g., createEnv({... , runtimeEnv: process.env }) instead of
experimental__runtimeEnv), and remove Next.js-only presets/enforcements (like
NEXT_PUBLIC_ prefix enforcement from the vercel preset) so createEnv and any
preset usage (reference createEnv and vercel in the diff) read values from
process.env appropriate for a Hono service.
In `@apps/gateway/src/lib/crypto.ts`:
- Around line 14-20: The deriveKey flow should reject invalid or too-short
secrets instead of padding/truncating: in deriveKey remove padEnd/slice and
validate hexSecret length and characters (expect 64 hex chars for a 32-byte key)
and throw a clear error if not; also harden hexToBytes (or the code that does
parseInt on hex pairs) to check each hex pair matches /^[0-9a-fA-F]{2}$/ and
throw if parseInt yields NaN so no invalid pair becomes 0 silently. Update
deriveKey to call this validator before importing the key (reference functions
deriveKey and hexToBytes/where parseInt is used). Ensure errors include which
env variable/secret is invalid for easier debugging.
- Around line 119-129: The timingSafeEqual function currently leaks timing info
via the early length check and the final string equality check; remove the early
return (if (a.length !== b.length) ...) and the final a === b comparison, always
normalize both inputs to the same fixed-length hex representation (as you
already do with padEnd(...).slice(...)), convert to bytes via hexToBytes for
both a and b, iterate over the full byte arrays (use a fixed 64-byte length) and
accumulate differences into a single result variable using bitwise OR of XORs,
then return result === 0; ensure you never short-circuit based on string length
or content and only rely on the accumulated byte-level result in
timingSafeEqual.
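One way to realize this without a length-based early return is to fold the length difference into the accumulator; a sketch under the assumption that compared values are 64-char hex digests (names are illustrative):

```typescript
// Constant-time comparison over a fixed 32-byte width; no short-circuits.
function constantTimeEqualHex(a: string, b: string): boolean {
  const byteAt = (s: string, i: number): number => {
    const v = Number.parseInt(s.slice(i * 2, i * 2 + 2).padEnd(2, "0"), 16);
    return Number.isNaN(v) ? 0 : v;
  };
  // Fold the length check into the accumulator instead of returning early.
  let result = a.length ^ b.length;
  for (let i = 0; i < 32; i++) {
    result |= byteAt(a, i) ^ byteAt(b, i);
  }
  return result === 0;
}
```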
In `@apps/gateway/src/lib/keys.ts`:
- Around line 8-31: Dynamic Redis key segments are not escaped which allows
collisions (e.g., connectionKey("abc:resources") ==
connectionResourcesKey("abc")); update all key factories (connectionKey,
orgConnectionsKey, providerAccountKey, resourceKey, connectionResourcesKey,
oauthStateKey, webhookSeenKey) to sanitize/escape their dynamic inputs before
interpolation (for example encodeURIComponent or base64-encode each dynamic
segment, or prefix each segment with its length) so colons in input cannot merge
segments; apply the same escaping strategy consistently across all functions and
decode where needed.
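A sketch of the escaping approach using `encodeURIComponent`, which percent-encodes both `:` and `%`; the two key factories shown are simplified stand-ins for the ones in `keys.ts`:

```typescript
// Escape each dynamic segment so a colon in the input cannot merge segments.
function escapeKeySegment(segment: string): string {
  // encodeURIComponent turns ":" into "%3A" and "%" into "%25", so the
  // escaped form can never contain a raw colon.
  return encodeURIComponent(segment);
}

const connectionKey = (id: string) =>
  `gw:connection:${escapeKeySegment(id)}`;
const connectionResourcesKey = (id: string) =>
  `gw:connection:${escapeKeySegment(id)}:resources`;
```

With this in place, `connectionKey("abc:resources")` yields `gw:connection:abc%3Aresources`, which no longer collides with `connectionResourcesKey("abc")`.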
In `@apps/gateway/src/lib/qstash.ts`:
- Around line 1-5: Move the import of getQStashClient so it appears before any
re-exports and eliminate the redundant duplicate specifier: import
getQStashClient from "@vendor/qstash" (or import { getQStashClient } ...) first,
then either re-export the types/symbols (QStashClient, Receiver) after that or
export them as types (export type { QStashClient, Receiver }) to avoid repeating
the module source; ensure qstash is created using the imported getQStashClient()
and that the exported symbols (QStashClient, Receiver) still reference the same
module.
In `@apps/gateway/src/middleware/auth.ts`:
- Line 12: Replace the direct inequality check "apiKey !== env.GATEWAY_API_KEY"
with a constant-time comparison using Node's crypto.timingSafeEqual: keep the
existing early-return for !apiKey, then ensure both values are the same length
(or convert to Buffers and compare lengths) and call timingSafeEqual on the
Buffers of apiKey and env.GATEWAY_API_KEY to decide authorization; reference the
variables apiKey and env.GATEWAY_API_KEY and add a require/import of crypto if
not present so the middleware in auth.ts uses crypto.timingSafeEqual safely.
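A minimal sketch of that check, extracted as a pure helper (names are illustrative; the real middleware would call this with `c.req.header(...)` and `env.GATEWAY_API_KEY`):

```typescript
import { timingSafeEqual } from "node:crypto";

// Constant-time API-key check; returns false rather than throwing.
function isAuthorized(apiKey: string | undefined, expected: string): boolean {
  if (!apiKey) return false;
  const a = Buffer.from(apiKey);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on unequal lengths, so guard first; this leaks
  // only the key length, not its content.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```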
In `@apps/gateway/src/middleware/tenant.ts`:
- Around line 4-6: TenantVariables currently requires orgId but the middleware
only sets it conditionally; change the interface so orgId is optional (orgId?:
string) to match runtime behavior and then audit usages of TenantVariables or
any functions that consume it (e.g., middleware handlers or downstream handlers)
to handle the possibly undefined orgId safely.
- Around line 18-22: The middleware currently allows extracting org_id from
query params globally; update tenantMiddleware so orgId is taken from
c.req.query("org_id") only when the request path matches the OAuth callback
routes (e.g., check c.req.path or c.req.url for the known callback pattern) and
otherwise ignore query org_id, preserving header extraction via
c.req.header("X-Org-Id"); after validating the path, set the tenant context with
c.set("orgId", orgId) only when the callback-path check passes to prevent tenant
spoofing.
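The intended extraction order can be sketched as a pure function; the callback-path regex is an assumption about the gateway's route layout, not taken from the diff:

```typescript
// Header wins; query-string org_id is honored only on OAuth callback paths.
function resolveOrgId(
  path: string,
  headerOrgId: string | undefined,
  queryOrgId: string | undefined,
): string | undefined {
  if (headerOrgId) return headerOrgId;
  // Assumed callback shape: /connections/:provider/callback
  const isOAuthCallback = /^\/connections\/[^/]+\/callback$/.test(path);
  return isOAuthCallback ? queryOrgId : undefined;
}
```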
In `@apps/gateway/src/routes/admin.ts`:
- Line 7: The module-level const startTime = Date.now() reflects the current
serverless instance cold-start, not global service uptime—rename the variable
(for example to instanceStartTime or instance_start_time) and update the health
endpoint to return instance_uptime_ms (computed from Date.now() -
instanceStartTime) instead of uptime_ms; also add a brief comment above the
variable explaining it is per-instance cold-start time so consumers understand
the semantics and update any other references to startTime in the admin route
handler accordingly.
- Around line 95-103: Replace the placeholder API-key style protection for the
admin.post("/delivery-status") handler with QStash HMAC verification: import the
exported Receiver from apps/gateway/src/lib/qstash.ts, instantiate it (e.g., new
Receiver()), call receiver.verify({ signature: c.req.header("upstash-signature")
?? "", body: await c.req.text() }) at the start of the handler, and if it
returns false respond with c.json({ error: "unauthorized" }, 401); do not rely
on apiKeyAuth for this endpoint and ensure the verification happens before any
processing.
In `@apps/gateway/src/routes/connections.ts`:
- Around line 14-25: The GET route defined as
connections.get("/:provider/authorize", (c) => { ... }) is currently public;
wrap or apply the existing apiKeyAuth middleware to this route so requests must
present a valid API key before reaching the handler (e.g., add apiKeyAuth as the
route middleware for the "/:provider/authorize" endpoint), keeping the existing
response body and status intact; ensure you reference the same route signature
and middleware symbol apiKeyAuth when making the change.
In `@apps/gateway/src/routes/webhooks.ts`:
- Line 1: The code imports Hono directly from "hono" (e.g., the Hono symbol in
webhooks.ts), which violates the vendor-abstraction rule; create a vendor
re-export module that imports and re-exports Hono (e.g., vendor/hono exporting
Hono) and update all gateway imports to use that alias (e.g., import { Hono }
from "@vendor/hono") across the affected files (replace direct "hono" imports in
places referencing the Hono symbol such as webhooks.ts, connections.ts,
admin.ts, app.ts, middleware/auth.ts, middleware/tenant.ts).
- Around line 14-24: The route handler webhooks.post("/:provider") currently
returns c.json({...}, 501) which signals "Not Implemented" and causes provider
retries; change the response status to 202 Accepted instead by updating the
c.json call in the webhooks.post handler (where provider is read via
c.req.param("provider")) so the body remains the same but the HTTP status code
passed to c.json is 202 to indicate asynchronous acceptance rather than a
permanent not-implemented error.
In `@package.json`:
- Line 62: Update the workspace catalog and package dependency for vercel: add
an entry vercel: ^50.23.2 to the catalog section in pnpm-workspace.yaml, then
change the dependency in package.json from "vercel": "^50.23.2" to "vercel":
"catalog:" so it follows the workspace catalog convention used for other dev
tools (e.g., typescript, eslint, prettier, turbo).
In `@vendor/qstash/env.ts`:
- Around line 1-2: Replace the Next.js-specific import with the
framework-agnostic env core: change the import of createEnv from
"@t3-oss/env-nextjs" to "@t3-oss/env-core" in vendor/qstash/env.ts and keep the
existing zod schema and createEnv usage (the exported setup using createEnv and
any variables defined there such as the same schema names) so the package
remains server-only and framework-agnostic.
In `@vendor/qstash/src/client.ts`:
- Around line 77-93: The publishToTopic function wraps a single
PublishToApiResponse into an artificial one-element array and returns
QStashPublishResponse[], which contradicts the comment and vendor-wrapper
guideline; change publishToTopic to return the single response shape directly
(e.g., QStashPublishResponse or PublishToApiResponse) instead of an array,
update the function signature and its caller expectations, and keep the call to
this.client.publishJSON({ topic, body, headers, ...(retries !== undefined && {
retries }), ...(delay !== undefined && { delay }), ...(deduplicationId && {
deduplicationId }) }) intact; adjust any types (PublishToTopicOptions,
QStashPublishResponse) so the method returns the single response type and remove
the artificial array wrapping.
- Around line 101-103: getQStashClient currently uses a process-wide singleton
(clientInstance) so the first token passed is locked in and subsequent calls
with different tokens are ignored; change the cache to be token-aware by
replacing clientInstance with a Map keyed by the token (e.g., tokenKey = token
?? 'default') and update getQStashClient to look up the Map and create a new
QStashClient(token) only when that key is absent (or alternatively throw if a
different token is requested); reference getQStashClient, clientInstance and
QStashClient when making this change so callers receive the correct client per
token.
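A token-aware cache along the suggested lines might look like this; the `QStashClient` stub stands in for the vendor class, and only the caching shape is the point:

```typescript
// Stub standing in for the real vendor client.
class QStashClient {
  constructor(readonly token?: string) {}
}

// One client per token instead of a process-wide singleton.
const clients = new Map<string, QStashClient>();

function getQStashClient(token?: string): QStashClient {
  const key = token ?? "default";
  let client = clients.get(key);
  if (!client) {
    client = new QStashClient(token);
    clients.set(key, client);
  }
  return client;
}
```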
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (27)
apps/gateway/.env.example
apps/gateway/api/index.ts
apps/gateway/package.json
apps/gateway/src/app.ts
apps/gateway/src/env.ts
apps/gateway/src/index.ts
apps/gateway/src/lib/crypto.ts
apps/gateway/src/lib/keys.ts
apps/gateway/src/lib/qstash.ts
apps/gateway/src/lib/redis.ts
apps/gateway/src/middleware/auth.ts
apps/gateway/src/middleware/tenant.ts
apps/gateway/src/providers/types.ts
apps/gateway/src/routes/admin.ts
apps/gateway/src/routes/connections.ts
apps/gateway/src/routes/webhooks.ts
apps/gateway/tsconfig.json
apps/gateway/turbo.json
apps/gateway/vercel.json
package.json
vendor/qstash/env.ts
vendor/qstash/eslint.config.js
vendor/qstash/package.json
vendor/qstash/src/client.ts
vendor/qstash/src/index.ts
vendor/qstash/src/receiver.ts
vendor/qstash/tsconfig.json
Actionable comments posted: 44
♻️ Duplicate comments (8)
apps/gateway/src/lib/keys.ts (1)
9-32: Redis key segment escaping issue remains unaddressed from the previous review. Dynamic string inputs like `id`, `orgId`, `connId`, `accountId`, `resourceId`, `deliveryId`, and `token` are still interpolated unescaped, allowing colon-containing values to produce key collisions (e.g., `connectionKey("abc:resources")` == `connectionResourcesKey("abc")`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/lib/keys.ts` around lines 9 - 32: The dynamic Redis key segments can contain colons and cause collisions; update the key helpers (connectionKey, orgConnectionsKey, providerAccountKey, resourceKey, connectionResourcesKey, oauthStateKey, webhookSeenKey) to escape each interpolated segment by calling a shared helper (e.g., escapeKeySegment) that safely encodes input (URL-safe base64 or encodeURIComponent-style escaping) before constructing the `gw:...` string so no raw colons or unsafe characters are placed into the final key.
apps/gateway/.env.example (1)
14-14: ⚠️ Potential issue | 🟠 Major
Require exact AES-256 key length in the env template.
Line 14 says “32+ hex chars”, but AES-256-GCM requires exactly 32 bytes (64 hex chars). This should be explicit to prevent invalid key material.
🔧 Proposed fix
-ENCRYPTION_KEY= # AES-256-GCM for token encryption (32+ hex chars)
+ENCRYPTION_KEY= # AES-256-GCM key — exactly 64 hex chars (32 bytes), e.g. `openssl rand -hex 32`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/.env.example` at line 14: Update the ENCRYPTION_KEY description in the .env template to require an exact 32-byte AES-256 key (64 hex characters) instead of "32+ hex chars"; specifically change the ENCRYPTION_KEY comment so it clearly states "exactly 64 hex chars (32 bytes) for AES-256-GCM" to prevent invalid key lengths and make key requirements unambiguous for consumers of the ENCRYPTION_KEY variable.
apps/gateway/src/env.ts (1)
1-2: ⚠️ Potential issue | 🟠 Major
Use env-core for this non-Next runtime and replace Next-specific runtime config.
This gateway service is Hono-based; `@t3-oss/env-nextjs` and `experimental__runtimeEnv` are Next-oriented. Use `@t3-oss/env-core` with `runtimeEnv: process.env`. As per coding guidelines: `apps/**` requires no direct third-party imports and use of project abstractions.
🔧 Proposed fix
-import { createEnv } from "@t3-oss/env-nextjs";
-import { vercel } from "@t3-oss/env-nextjs/presets-zod";
+import { createEnv } from "@t3-oss/env-core";
+import { vercel } from "@t3-oss/env-core/presets-zod";
@@
-  experimental__runtimeEnv: {
-    NODE_ENV: process.env.NODE_ENV,
-  },
+  runtimeEnv: process.env,
Also applies to: 49-51
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/env.ts` around lines 1 - 2: Replace the Next.js-specific env setup: stop importing createEnv and vercel from "@t3-oss/env-nextjs"; instead use the core API (from "@t3-oss/env-core") and initialize createEnv with runtimeEnv: process.env for this non-Next Hono gateway. Update the module that currently exports the env (the file using createEnv and vercel) to call createEnv with runtimeEnv: process.env and drop the vercel preset, and ensure you use the project's environment abstraction (apps/** convention) rather than adding a direct third-party import in this apps/gateway module.
apps/gateway/src/app.ts (1)
1-20: App structure is clean and well-organized.
Route mounting follows clear path conventions. Health check at `/` is appropriate for Phase 1. The tenant middleware concern (defined but unused) was already flagged in a prior review.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/app.ts` around lines 1 - 20: The prior review noted a defined but unused tenant middleware; update this file to either remove the unused tenant middleware definition/import or apply it to the app routes so it is used: locate the tenant middleware symbol (e.g., tenant or tenantMiddleware) and either delete its import/definition or register it with Hono (for example by using app.use(tenant) or attaching it to the specific route groups like app.route("/workflows", tenant, workflows)) so there are no unused middleware symbols remaining.
apps/gateway/src/routes/workflows.ts (1)
1-1: Direct `hono` import — use vendor abstraction. Same violation as flagged on the sibling route files. `import { Hono } from "hono"` bypasses the `@vendor/*` abstraction layer required for all `apps/**` TypeScript files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/routes/workflows.ts` at line 1: The file imports Hono directly which violates the vendor abstraction policy; replace the direct import "import { Hono } from 'hono'" with the vendor-wrapped import (e.g., import { Hono } from '@vendor/hono') and update any other direct references to the "hono" package in this module (ensure uses of Hono still match the type/API) so the file consumes the package through the `@vendor` abstraction instead of importing from "hono" directly.
apps/gateway/src/routes/webhooks.ts (1)
1-1: Direct `hono` import — same violation as flagged on sibling files.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/routes/webhooks.ts` at line 1: The file directly imports Hono from "hono" which duplicates the same violation in sibling files; replace the direct import of the Hono symbol with the project's shared/central Hono wrapper used elsewhere (i.e., import the Hono app or factory from the repo's common wrapper module the other routes use) so the route uses the centralized export instead of importing "hono" directly; update any usages of Hono in this file (e.g., the Hono constructor/instance) to use that shared wrapper API.
apps/gateway/src/routes/admin.ts (1)
133-145: `delivery-status` endpoint has no authentication and logs raw payloads.
This has already been flagged in a prior review regarding QStash signature verification. Additionally, `console.log("[delivery-status]", body)` will log webhook delivery payloads which may contain sensitive data. Use structured logging with redacted fields or remove the raw body dump before production.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/routes/admin.ts` around lines 133 - 145: The delivery-status route handler (admin.post("/delivery-status")) currently accepts and logs the raw webhook payload from c.req.json and has no authentication or QStash signature verification; update this handler to validate the QStash signature or other webhook auth before processing (e.g., verify headers and signature using your QStash verification utility or middleware) and remove the console.log of the full body; replace it with structured logging via your logger (processLogger or similar) that only records non-sensitive metadata (messageId, deliveryId, status) or a redacted version of the payload, and ensure any sensitive fields are stripped or masked before logging and that unauthenticated requests are rejected with a 401/403.
apps/gateway/src/routes/connections.ts (1)
25-55: `GET /:provider/authorize` is missing `apiKeyAuth` middleware.
This route is public despite initiating the OAuth flow and writing to Redis.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/routes/connections.ts` around lines 25 - 55: The route handler connections.get("/:provider/authorize", tenantMiddleware, async (c) => { ... }) is missing the apiKeyAuth middleware so it is currently public while it writes OAuth state to Redis; add apiKeyAuth to the middleware chain (e.g., connections.get("/:provider/authorize", apiKeyAuth, tenantMiddleware, async (c) => ...)), ensure apiKeyAuth is imported where routes are defined, and keep the rest of the logic (provider lookup via getProvider, orgId check, redis.hset using oauthStateKey, redis.expire, and provider.getAuthorizationUrl) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/gateway/.env.example`:
- Around line 16-21: The .env.example is missing required environment keys
referenced by the runtime env schema in env.ts; add empty placeholders for
GITHUB_PRIVATE_KEY and TURSO_DATABASE_URL (same style as the existing GITHUB_*
entries) so validation/local setup won't fail, and also add the other missing
keys noted around lines 33-35 to keep the template in sync with the schema.
In `@apps/gateway/package.json`:
- Line 16: Replace the direct version pin for the dependency "@hono/node-server"
in apps/gateway/package.json with the catalog alias form by changing the
dependency value to "catalog:`@hono/node-server`"; update the dependencies block
entry for "@hono/node-server" accordingly and run install to verify the package
manager resolves the catalog alias.
- Line 25: Replace the hard-pinned dependency string for "hono" in package.json
(currently "^4.12.2") with the catalog reference used elsewhere (e.g.,
"catalog:hono") so this package consumes the shared catalog entry (matching
vendor/upstash-workflow). Update the "hono" entry to "catalog:hono" and then run
your package manager install/lockfile update so the lockfile reflects the
cataloged dependency.
In `@apps/gateway/src/env.ts`:
- Line 19: Replace the loose ENCRYPTION_KEY schema (currently
z.string().min(32)) with a strict validator that requires exactly 64 hexadecimal
characters to represent a 32-byte AES-256 key (e.g., a regex for
/^[A-Fa-f0-9]{64}$/); update the ENCRYPTION_KEY schema so only valid hex key
material is accepted and ensure this aligns with the validation used in
api/console/src/env.ts and apps/console/src/env.ts; this prevents deriveKey in
apps/gateway/src/lib/crypto.ts from receiving malformed input that it would
otherwise pad silently.
In `@apps/gateway/src/lib/github-jwt.ts`:
- Around line 98-102: The current base64UrlEncodeBytes function uses
String.fromCharCode(...bytes) which will break for large inputs due to the V8
argument length limit; change base64UrlEncodeBytes to build the binary string
incrementally (e.g., iterate bytes in a loop or append in chunks) instead of
spreading into fromCharCode, then call btoa on that constructed string and keep
the existing replace calls for URL-safe base64; update references to
base64UrlEncodeBytes only (no other API changes).
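A chunked version of that encoder might look like this (a sketch; `0x8000` is a conservative chunk size well under V8's argument-count limit):

```typescript
// Build the binary string in chunks to avoid exceeding V8's argument limit
// when spreading a large byte array into String.fromCharCode.
function base64UrlEncodeBytes(bytes: Uint8Array): string {
  const CHUNK = 0x8000;
  let binary = "";
  for (let i = 0; i < bytes.length; i += CHUNK) {
    binary += String.fromCharCode(...bytes.subarray(i, i + CHUNK));
  }
  return btoa(binary)
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}
```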
- Around line 69-92: importPrivateKey assumes the input is base64-encoded PEM
(double-base64 decoding) which breaks if env.GITHUB_PRIVATE_KEY is a raw PEM;
add a short JSDoc on importPrivateKey (or createGitHubAppJWT) describing the
expected encoding and implement a safe guard: detect if base64Pem already looks
like PEM (startsWith "-----BEGIN") and use it directly, otherwise atob-decode
it, then proceed with stripping headers and inner decoding; also validate and
throw a clear error if the resulting PEM content is malformed before calling
crypto.subtle.importKey.
- Around line 56-63: The response JSON is unsafely cast to { token: string } so
data.token may be undefined; update the code that reads the GitHub installation
token (the variable data and the return of the token) to validate the response
shape after await response.json(): ensure data is an object with a non-empty
string token (e.g., typeof data.token === 'string' && data.token.length > 0) and
if validation fails throw a descriptive Error including the raw response body or
relevant fields so the function's Promise<string> contract is honored.
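The validation could be a small guard like this (illustrative helper name and error text, not the actual gateway code):

```typescript
// Narrow an unknown JSON body to { token: string } or fail with context.
function extractInstallationToken(data: unknown): string {
  const token = (data as { token?: unknown } | null)?.token;
  if (typeof token !== "string" || token.length === 0) {
    throw new Error(
      `Unexpected GitHub installation-token response: ${JSON.stringify(data)}`,
    );
  }
  return token;
}
```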
In `@apps/gateway/src/lib/related-projects.ts`:
- Line 1: Replace the direct third-party import of withRelatedProject from
"@vercel/related-projects" with a vendor wrapper: create a module named
"@vendor/related-projects" that re-exports withRelatedProject (export {
withRelatedProject } from "@vercel/related-projects"), then update the import in
this file to import { withRelatedProject } from "@vendor/related-projects" so
the app code only references the vendor abstraction rather than the upstream
package.
In `@apps/gateway/src/lib/secrets.ts`:
- Line 1: Update the import of the SQL helper to use the vendor abstraction:
replace the current import of the symbol eq from "drizzle-orm" with an import
from "@vendor/db" (i.e., import { eq } from "@vendor/db";) so app-layer code
uses the vendor-reexported drizzle utilities; locate the import statement that
currently imports eq and change its module specifier to "@vendor/db".
In `@apps/gateway/src/providers/github.ts`:
- Around line 25-34: In getAuthorizationUrl, the options.redirectPath is only
checked for truthiness but its value is ignored; update the logic to use the
provided path when present by constructing redirectUri as
`${gatewayBaseUrl}${options.redirectPath}` (instead of the hardcoded
"/connections/github/callback"), or alternatively rename the option to
includeRedirectUri and keep the existing hardcoded callback—adjust the method
signature and callers accordingly to reflect the chosen approach (function:
getAuthorizationUrl, symbol: options.redirectPath / includeRedirectUri).
- Around line 90-111: Replace the Node-only Buffer.from usage in revokeToken
with a Web Crypto/edge-friendly base64 encoder (use btoa on the
`${env.GITHUB_CLIENT_ID}:${env.GITHUB_CLIENT_SECRET}` string) so the credential
creation is compatible with the crypto.ts goal, and simplify the error check by
removing the redundant status !== 204 arm — just throw when response.ok is false
in revokeToken to correctly handle non-2xx responses.
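The edge-friendly credential encoding is a one-liner with `btoa` (a sketch; `basicAuthHeader` is an illustrative helper name):

```typescript
// Web-standard base64 for HTTP Basic credentials; works in edge runtimes
// where Node's Buffer is unavailable.
function basicAuthHeader(clientId: string, clientSecret: string): string {
  return `Basic ${btoa(`${clientId}:${clientSecret}`)}`;
}
```

The simplified error check then reduces to `if (!response.ok) throw ...`, since `response.ok` already covers 204.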
In `@apps/gateway/src/providers/linear.ts`:
- Around line 66-78: The revokeToken function contains a redundant check
("response.status !== 204") because response.ok is true for 204; remove the dead
check and simply throw an error when !response.ok. Update the async
revokeToken(accessToken: string) implementation to rely on response.ok alone (in
the block that currently references response.ok && response.status) so the error
path throws using the response.status (or response.statusText) when response.ok
is false; the target symbol to change is revokeToken in the Linear provider.
- Around line 127-149: The registerWebhook and deregisterWebhook functions are
incorrectly using env.LINEAR_CLIENT_SECRET as the Bearer token when calling
Linear's GraphQL API; update both to fetch the connection by connectionId and
use that connection's stored accessToken (the OAuth access token obtained via
exchangeCode) for the Authorization header (mirroring how revokeToken uses the
access token), replacing any usage of env.LINEAR_CLIENT_SECRET; ensure the token
retrieval happens before the fetch and throw a clear error if the connection or
accessToken is missing.
In `@apps/gateway/src/providers/schemas.ts`:
- Around line 102-105: The decodeSentryToken function currently returns an empty
installationId when the input lacks a ":" which leads callers like exchangeCode
to build invalid Sentry URLs; change decodeSentryToken to validate raw and on
malformed input either throw a descriptive Error (e.g., "Invalid Sentry token:
missing installationId") or return a distinct sentinel (e.g., installationId:
null) and update callers (notably exchangeCode) to handle that case instead of
proceeding — ensure the new behavior is documented in the function comment and
adjust any consumer logic to abort and surface an error rather than constructing
URLs with an empty installationId.
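The throwing variant of the decoder could look like this (a sketch; the error wording is illustrative):

```typescript
// Composite Sentry token format: "installationId:token".
function decodeSentryToken(raw: string): { installationId: string; token: string } {
  const idx = raw.indexOf(":");
  // Reject missing separator, empty installationId, and empty token.
  if (idx <= 0 || idx === raw.length - 1) {
    throw new Error('Invalid Sentry token: expected "installationId:token"');
  }
  return { installationId: raw.slice(0, idx), token: raw.slice(idx + 1) };
}
```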
- Around line 64-70: The githubOAuthResponseSchema can yield a generic Zod parse
error when GitHub returns an error body; update the schema or validation flow so
provider errors surface clearly: either convert githubOAuthResponseSchema into a
discriminated union (success shape with access_token vs error shape with error
and error_description) or add a .refine() that rejects when error is present and
returns a clear message, and/or have the GitHub provider's exchangeCode inspect
rawData for an error field before calling .parse() so you throw a meaningful
provider error instead of letting Zod fail on access_token; locate
githubOAuthResponseSchema and the exchangeCode/rawData handling to implement
this change.
In `@apps/gateway/src/providers/sentry.ts`:
- Around line 28-65: The exchangeCode implementation in Sentry provider expects
a composite token (decoded by decodeSentryToken) but connections.ts passes only
the raw code, so installationId is empty; update the Sentry-specific callback in
connections.ts so that when provider.name === "sentry" you read
c.req.query("installationId"), validate it, and wrap the code with
encodeSentryToken({ installationId, token: code }) before calling
provider.exchangeCode; if installationId is missing return a 400 error (e.g.,
missing_params) to avoid calling Sentry with an empty installationId.
- Around line 109-126: revokeToken is a no-op because decodeSentryToken returns
empty installationId for plain tokens; fix by making the token stored at
exchange time include the installationId so revokeToken can extract it: update
the code path that persists Sentry tokens (the function handling exchangeCode)
to save a composite string "installationId:accessToken" (or otherwise persist
installationId alongside the token), ensure connections retrieval still returns
that value, and keep revokeToken and decodeSentryToken as-is so revokeToken can
decode the composite and perform the DELETE; update any callers that expect the
previous plain token shape as needed.
In `@apps/gateway/src/providers/vercel.ts`:
- Around line 70-72: The condition checking both !response.ok and
response.status !== 204 is dead because 204 is a successful status (response.ok
is true), so simplify the guard in the token-revocation flow: in the code that
examines the fetch Response (the variable response used where the current
condition and throw are), replace the compound condition with a single check
that throws when response.ok is false (i.e., on non-success responses) so the
throw new Error(`Vercel token revocation failed: ${response.status}`) executes
only for actual errors.
In `@apps/gateway/src/routes/admin.ts`:
- Around line 50-53: The JSDoc above the POST /admin/cache/rebuild route is
stale (mentions PlanetScale/Console API); update that comment to correctly state
that the cache is rebuilt from the Turso/Drizzle SQLite data source (or the
specific local DB provider used) and reflect any relevant details (e.g.,
"Rebuild Redis cache from Turso/Drizzle SQLite source of truth") in the JSDoc
block for the POST /admin/cache/rebuild handler.
- Around line 89-102: The handler admin.get("/dlq") currently parses
limit/offset with parseInt which can produce NaN and be passed into
db.select().limit()/offset(); validate and sanitize these values: after parsing
use Number.isFinite or isNaN to detect invalid values and fall back to safe
defaults (e.g., limit = 50, offset = 0), coerce to integers, enforce
non-negative offsets and clamp limit to a reasonable max (e.g., 1000) before
calling .limit(limit).offset(offset) on the webhookDeliveries query so Drizzle
always receives valid numeric arguments.
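A sanitizer along the lines the prompt describes keeps Drizzle from ever seeing NaN. The helper name and option defaults here are illustrative, not the actual gateway code:

```typescript
// Illustrative helper: clamp untrusted pagination params to safe integers
// before passing them to .limit()/.offset().
export function sanitizePagination(
  rawLimit: string | undefined,
  rawOffset: string | undefined,
  { defaultLimit = 50, maxLimit = 1000 } = {},
): { limit: number; offset: number } {
  const parsedLimit = Number.parseInt(rawLimit ?? "", 10);
  const parsedOffset = Number.parseInt(rawOffset ?? "", 10);
  // Number.isFinite(NaN) is false, so malformed input falls back to defaults.
  const limit = Number.isFinite(parsedLimit)
    ? Math.min(Math.max(parsedLimit, 1), maxLimit)
    : defaultLimit;
  const offset = Number.isFinite(parsedOffset) ? Math.max(parsedOffset, 0) : 0;
  return { limit, offset };
}
```

The route handler would then call `.limit(limit).offset(offset)` with guaranteed-valid integers regardless of what the query string contained.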
- Around line 70-77: The loop over activeResources is making sequential
redis.hset calls causing many round trips; refactor to use a Redis pipeline:
create a pipeline via redis.pipeline(), for each resource call
pipeline.hset(resourceKey(r.provider, r.providerResourceId), { connectionId:
r.installationId, orgId: r.orgId }), then await pipeline.exec() once and update
rebuilt accordingly (e.g., set rebuilt = activeResources.length or count
successful responses). This keeps resourceKey, activeResources, and the hset
payloads intact while batching writes for performance.
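The batching shape described above can be sketched against a minimal pipeline interface. The interface below is an assumption for illustration (the real Upstash Redis client exposes a `pipeline()` with compatible `hset`/`exec` calls), and `resourceKey` is a stand-in for the helper in lib/keys.ts:

```typescript
// Minimal pipeline surface matching the hset/exec calls the prompt names.
interface HsetPipeline {
  hset(key: string, value: Record<string, string>): void;
  exec(): Promise<unknown[]>;
}

interface ActiveResource {
  provider: string;
  providerResourceId: string;
  installationId: string;
  orgId: string;
}

// Assumed key shape; the real resourceKey helper lives in lib/keys.ts.
const resourceKey = (provider: string, id: string) => `gw:resource:${provider}:${id}`;

// Queue one hset per resource, then flush everything in a single round trip.
export async function rebuildResourceCache(
  pipeline: HsetPipeline,
  activeResources: ActiveResource[],
): Promise<number> {
  for (const r of activeResources) {
    pipeline.hset(resourceKey(r.provider, r.providerResourceId), {
      connectionId: r.installationId,
      orgId: r.orgId,
    });
  }
  await pipeline.exec();
  return activeResources.length;
}
```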
In `@apps/gateway/src/routes/connections.ts`:
- Around line 172-188: The code inserting into installations uses a random
nanoid() fallback for externalId which breaks deduplication; update the
externalId computation (used in the insert into installations and returned via
installationRows/installation) to use a deterministic fallback instead of
nanoid(), for example compute a stable hash (e.g., SHA256) of a stable token
field such as oauthTokens.access_token or oauthTokens.raw.id and use that value,
or else abort and return an error when no stable identifier
(oauthTokens.raw.team_id, oauthTokens.raw.organization_id,
oauthTokens.raw.installation) is present; modify the expression that builds
externalId (currently using nanoid()) to use the deterministic hash or to
throw/return a 400/500 error so the unique index on (provider.name, externalId)
remains effective.
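The deterministic fallback the prompt asks for could hash a stable token field. This is a sketch; the `raw` field names mirror the prompt's `oauthTokens.raw` shape and are assumptions, not the confirmed provider response schema:

```typescript
import { createHash } from "node:crypto";

// Prefer the provider's stable identifiers; otherwise derive a stable
// fallback by hashing a field that does not change between OAuth grants,
// and fail loudly when nothing stable is available.
export function deriveExternalId(raw: {
  team_id?: string;
  organization_id?: string;
  installation?: { id?: string | number };
  access_token?: string;
}): string {
  const stable =
    raw.team_id ??
    raw.organization_id ??
    (raw.installation?.id !== undefined ? String(raw.installation.id) : undefined);
  if (stable) return stable;
  if (!raw.access_token) {
    throw new Error("no stable identifier available for externalId");
  }
  return createHash("sha256").update(raw.access_token).digest("hex");
}
```

Because the hash is deterministic, re-running the OAuth flow for the same account produces the same externalId, so the unique index on (provider, externalId) keeps deduplicating correctly.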
- Around line 1-15: The file imports third-party symbols directly (drizzle-orm's
and, eq and Hono) which violates the apps/** vendor-abstraction rule; replace
those direct imports with the corresponding re-exports from your `@vendor`
packages (e.g., import and, eq from "@vendor/db" or the project’s DB vendor
wrapper, and import Hono from "@vendor/hono" or its wrapper), update the import
statements referencing and, eq and Hono to use those `@vendor/`* modules, and
ensure the vendor wrappers re-export the original symbols so existing references
to and, eq and the Hono constructor continue to work.
- Around line 527-534: The handler currently calls await c.req.json(...)
directly which will throw on malformed/non-JSON bodies; wrap that call in a
try/catch around the c.req.json(...) invocation (the parsing block in
connections.ts) and on catch return a 400 JSON response (e.g., c.json({ error:
"invalid_json_body" }, 400)); keep the subsequent providerResourceId existence
check as-is so valid-but-missing fields still return the existing
missing_provider_resource_id response.
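The guarded parse can be factored into a tiny helper (an illustrative sketch; the Hono handler would return `c.json({ error: "invalid_json_body" }, 400)` when it yields null, then run the existing providerResourceId check):

```typescript
// Parse a request body, returning null instead of throwing on malformed JSON.
export async function safeJsonBody<T = unknown>(
  req: { json(): Promise<T> },
): Promise<T | null> {
  try {
    return await req.json();
  } catch {
    return null;
  }
}
```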
- Around line 142-170: The Sentry OAuth flow must supply a composite
"installationId:code" to SentryProvider.exchangeCode but the current generic
branch only reads "code"; update the handler so that before calling
provider.exchangeCode(code, redirectUri) you detect provider.name === "sentry",
read installationId from c.req.query("installationId"), validate it exists,
construct a combined token string `${installationId}:${code}` (or the equivalent
expected by SentryProvider.exchangeCode) and pass that composite string instead
of plain code; keep the existing error handling (return 400 if installationId
missing) and leave the rest of the flow unchanged.
In `@apps/gateway/src/routes/webhooks.ts`:
- Around line 35-67: Wrap the call to getWebhookSecret(name) in a try/catch and,
if it throws, log the error (including provider.name and the thrown error) and
return a non-retriable acknowledgement to the caller (HTTP 200 ACK or a 2xx JSON
response) so providers don't repeatedly retry during secrets-manager outages;
ensure subsequent uses of secret only proceed when the call succeeded (refer to
getWebhookSecret and provider.verifyWebhook). Also wrap
workflowClient.trigger(...) in a try/catch and, if it fails to enqueue the
workflow, log the error and reply with a 503 Service Unavailable plus a
Retry-After header so the provider gets a clear retry signal instead of an
opaque 500 (refer to workflowClient.trigger and the webhook-receipt workflow
URL). Ensure both catches avoid throwing so the HTTP response code and headers
are explicit.
In `@apps/gateway/src/workflows/types.ts`:
- Around line 3-6: The JSDoc for the payload in types.ts is incorrect about when
it is populated; update the comment to state that the payload is created after
successful signature verification and JSON parsing (i.e., after
provider.verifyWebhook and JSON.parse in webhooks.ts) so callers know these
invariants hold before the payload is passed from the thin webhook route to the
durable receipt workflow; keep the same high-level description but replace
"extracted before signature verification and JSON parsing" with wording that it
is built/validated post-verification and parsing.
In `@apps/gateway/src/workflows/webhook-receipt.ts`:
- Around line 62-77: The DB fallback query that selects from resources joined
with installations must also filter by provider to avoid cross-provider
collisions; update the where clause used by the query that builds row (the
select on resources with innerJoin(installations, ...).where(...)) to include
eq(installations.provider, data.provider) alongside the existing
eq(resources.providerResourceId, data.resourceId) and eq(resources.status,
"active") so the returned installation is scoped to the same provider as the
Redis key (resourceKey).
In `@apps/gateway/turbo.json`:
- Around line 7-23: The turbo.json build env list is missing runtime-critical
variables so Turbo won't invalidate cache when those credentials rotate; update
the "env" array in apps/gateway/turbo.json (the env list shown in the diff) to
include the Upstash/Redis vars (KV_REST_API_URL and KV_REST_API_TOKEN or
alternately UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN), the QStash
vars (QSTASH_TOKEN, QSTASH_CURRENT_SIGNING_KEY, QSTASH_NEXT_SIGNING_KEY), and
the Turso vars (TURSO_DATABASE_URL, TURSO_AUTH_TOKEN) so cache invalidation
triggers when they change.
In `@db/gateway/package.json`:
- Around line 46-47: Replace the direct version pins for the external tooling
entries with catalog references: update the dependency entries for "dotenv-cli"
and "drizzle-kit" in db/gateway/package.json to use the "catalog:" specifier
(e.g., "dotenv-cli": "catalog:dotenv-cli", "drizzle-kit": "catalog:drizzle-kit")
and make the same change for any occurrences in devDependencies so these shared
tools are managed centrally via the catalog.
- Line 35: Replace the direct version pin for the dependency "@libsql/client" in
package.json with a catalog reference to ensure monorepo-wide version
management; locate the "@libsql/client" entry in db/gateway/package.json and
change its value from "^0.14.0" to the catalog reference (use the project's
standard "catalog:" form) so the dependency reads as a catalog-managed
dependency.
- Line 32: The "with-env" npm script currently hardcodes a cross-package Vercel
env path ("with-env": "dotenv -e
../../apps/gateway/.vercel/.env.development.local --") which will be missing on
fresh checkouts/CI; update it to load a local env file in db/gateway (e.g., .env
or .env.local) or implement a fallback lookup so the script first reads ./
.env(.local) and only falls back to
../../apps/gateway/.vercel/.env.development.local if present; modify the
"with-env" entry in package.json accordingly and add a sample .env.example in
db/gateway to match other `@db/`* packages so contributors can create the local
file.
In `@db/gateway/src/migrations/0000_quiet_gorgon.sql`:
- Around line 32-33: The migration defines access_token and refresh_token
columns that will store sensitive OAuth tokens in plaintext; update application
write paths to encrypt these values with the existing AES-256-GCM utilities
before persisting (and decrypt on reads) and ensure the webhook_secret column in
gw_installations is treated the same; locate and update the functions that
insert/update installation records (references: access_token, refresh_token,
gw_installations, webhook_secret) to call the encryption helper and add tests to
validate no plaintext token is written to the DB and that decryption returns the
original values.
- Around line 41-51: The current non-unique index gw_wd_provider_delivery_idx on
table gw_webhook_deliveries (columns provider, delivery_id) only speeds reads
and does not prevent duplicate rows; change it to a UNIQUE constraint by
replacing the index with a UNIQUE INDEX (or add a UNIQUE constraint in the
Drizzle schema for gw_webhook_deliveries on (provider, delivery_id) and
regenerate migrations) so the DB enforces deduplication even if the Redis layer
is bypassed.
In `@db/gateway/src/migrations/meta/0000_snapshot.json`:
- Around line 252-253: The gw_tokens table in the migration snapshot lacks an
index on installation_id, causing full table scans; add an index entry for
gw_tokens on installation_id (matching the naming pattern used by
gw_res_installation_id_idx, e.g., gw_tokens_installation_id_idx) in the
migrations snapshot (db/gateway/src/migrations/meta/0000_snapshot.json) and add
the corresponding DDL/migration and schema change in
db/gateway/src/schema/tables/tokens.ts so that gw_tokens gains a proper
installation_id index and lookups use the index.
In `@db/gateway/src/schema/relations.ts`:
- Around line 1-21: Import webhookDeliveries from "./tables" and add a new
relation export named webhookDeliveriesRelations using
relations(webhookDeliveries, ({ one }) => ({ installation: one(installations, {
fields: [webhookDeliveries.installationId], references: [installations.id], }),
})); this mirrors tokensRelations/resourcesRelations and ties
webhookDeliveries.installationId to installations.id so Drizzle's db.query API
sees the relationship.
In `@db/gateway/src/schema/tables/resources.ts`:
- Around line 27-30: Change the index definition for providerResourceIdx to a
unique index so the (installationId, providerResourceId) pair is enforced as
unique at the DB level; locate providerResourceIdx in the resources table
definition and replace
index("gw_res_provider_resource_idx").on(table.installationId,
table.providerResourceId) with the uniqueIndex variant (i.e.,
uniqueIndex("gw_res_provider_resource_idx").on(...)) so the constraint applies
to table.installationId and table.providerResourceId and prevents duplicate rows
that could break the webhook-receipt .get() lookup.
In `@db/gateway/src/schema/tables/tokens.ts`:
- Around line 12-13: Access and refresh tokens are currently stored plaintext in
the accessToken and refreshToken columns; encrypt them before persisting and
decrypt after reading by using the existing AES-256-GCM helpers and secrets
vault. Add small helpers (e.g., encryptToken and decryptToken) that wrap
apps/gateway/src/lib/crypto.ts and apps/gateway/src/lib/secrets.ts, call
encryptToken on accessToken and refreshToken before any DB insert/update that
writes those columns, and call decryptToken after any select that reads those
columns (the DB column types remain text and store ciphertext). Ensure error
handling surfaces decryption failures and do not change the schema.
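The encrypt/decrypt wrappers could look like the following self-contained sketch built on Node's AES-256-GCM. The iv||tag||ciphertext hex layout is an assumption for illustration; the real utilities in apps/gateway/src/lib/crypto.ts may use a different layout:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Assumed ciphertext layout: iv (12 bytes) || auth tag (16 bytes) || data, hex-encoded.
export function encryptToken(plaintext: string, keyHex: string): string {
  const key = Buffer.from(keyHex, "hex"); // expects a 32-byte (64 hex char) key
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), data]).toString("hex");
}

export function decryptToken(ciphertextHex: string, keyHex: string): string {
  const key = Buffer.from(keyHex, "hex");
  const buf = Buffer.from(ciphertextHex, "hex");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const data = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampering or a wrong key surfaces as a thrown error
  return Buffer.concat([decipher.update(data), decipher.final()]).toString("utf8");
}
```

The DB columns stay `text` and only ever hold the hex ciphertext; the GCM auth tag makes decryption failures (wrong key, tampered row) throw rather than silently return garbage.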
- Around line 5-20: Add a dedicated index on the tokens table for installationId
to avoid full table scans when querying by installation; update the tokens table
definition (export const tokens) to include an index named e.g.
gw_tokens_installation_id_idx that targets the installationId column
(tokens.installationId) so lookups like "fetch token for installation X" use the
index. Ensure the index is created in the table/schema definition using your
ORM's index API (matching how gw_resources created gw_res_installation_id_idx)
and keep the index name consistent with existing naming conventions.
In `@db/gateway/src/schema/tables/webhook-deliveries.ts`:
- Around line 16-18: Update the status enum on the webhook-deliveries table to
include in-flight and errored values by adding "pending" and "failed" to the
existing enum declaration for the status column (the text("status", { enum:
[...] }).notNull() in webhook-deliveries.ts); ensure any code that writes or
queries this column (e.g., handlers that insert deliveries or update delivery
status) is prepared to set/read "pending" for launched-but-unresolved deliveries
and "failed" for attempts that errored before terminal state.
- Around line 21-26: The composite index providerDeliveryIdx (named
"gw_wd_provider_delivery_idx") on table.provider and table.deliveryId is
currently non-unique; change it to a unique database index (use uniqueIndex
instead of index for providerDeliveryIdx) so the DB enforces deduplication at
the schema level (consistent with gw_inst_provider_external_idx in
gw_installations) to prevent duplicate webhook delivery rows if Redis SET NX
fails or expires.
In `@thoughts/shared/plans/2026-02-25-connection-manager-implementation.md`:
- Around line 43-72: The document contains many concatenation typos where
"Gateway" is joined to the following word (e.g., "Gatewayvia", "Gatewaywrites",
"Gatewaysends", "Gatewayroutes"); run a targeted search/replace for the pattern
Gateway([a-zA-Z]) and insert a space after "Gateway" so occurrences become
"Gateway via", "Gateway writes", "Gateway sends", "Gateway routes", etc.; update
each occurrence mentioned (lines referencing "Gatewayvia", "Gatewaywrites",
"Gatewaysends", "Gatewayroutes" and other similar hits) and re-scan the file for
any remaining Gateway+[word] instances to ensure all are fixed.
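The targeted search/replace above can be done in one regex pass (a sketch using the prompt's `Gateway([a-zA-Z])` pattern; note it would also split a legitimate word like "Gateways", so hits still deserve a quick review):

```typescript
// Insert the missing space wherever "Gateway" is fused to the following word.
export function fixGatewayTypos(text: string): string {
  return text.replace(/Gateway([a-zA-Z])/g, "Gateway $1");
}
```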
In `@vendor/upstash-workflow/package.json`:
- Line 29: Replace the direct semver pin for the dependency "hono" in this
package.json with the catalog reference format used by the repo (i.e., change
the version string from "^4.12.2" to "catalog:hono" or the project's catalog
key), and make the corresponding change in the apps/gateway package.json so both
packages reference the same catalog entry for "hono"; update the "hono" entry in
vendor/upstash-workflow's package.json (the dependency key "hono") to use the
catalog: form and ensure consistency with the catalog key used elsewhere.
---
Duplicate comments:
In `@apps/gateway/.env.example`:
- Line 14: Update the ENCRYPTION_KEY description in the .env template to require
an exact 32-byte AES-256 key (64 hex characters) instead of "32+ hex chars";
specifically change the ENCRYPTION_KEY comment so it clearly states "exactly 64
hex chars (32 bytes) for AES-256-GCM" to prevent invalid key lengths and make
key requirements unambiguous for consumers of the ENCRYPTION_KEY variable.
In `@apps/gateway/src/app.ts`:
- Around line 1-20: The prior review noted a defined but unused tenant
middleware; update this file to either remove the unused tenant middleware
definition/import or apply it to the app routes so it is used: locate the tenant
middleware symbol (e.g., tenant or tenantMiddleware) and either delete its
import/definition or register it with Hono (for example by using app.use(tenant)
or attaching it to the specific route groups like app.route("/workflows",
tenant, workflows)) so there are no unused middleware symbols remaining.
In `@apps/gateway/src/env.ts`:
- Around line 1-2: Replace the Next.js-specific env setup: stop importing
createEnv and vercel from "@t3-oss/env-nextjs"; instead use the core API (from
"@t3-oss/env-core") and initialize createEnv with runtimeEnv: process.env for
this non-Next Hono gateway. Update the module that currently exports the env
(the file using createEnv and vercel) to call createEnv with runtimeEnv:
process.env and drop the vercel preset, and ensure you use the project's
environment abstraction (apps/** convention) rather than adding a direct
third-party import in this apps/gateway module.
In `@apps/gateway/src/lib/keys.ts`:
- Around line 9-32: The dynamic Redis key segments can contain colons and cause
collisions; update the key helpers (connectionKey, orgConnectionsKey,
providerAccountKey, resourceKey, connectionResourcesKey, oauthStateKey,
webhookSeenKey) to escape each interpolated segment by calling a shared helper
(e.g., escapeKeySegment) that safely encodes input (URL-safe base64 or
encodeURIComponent-style escaping) before constructing the `gw:...` string so no
raw colons or unsafe characters are placed into the final key.
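One possible shape for that shared helper, using percent-encoding (which does escape `:` and `%`). The `escapeKeySegment` name comes from the prompt; `resourceKey` below is one of the key builders it would wrap:

```typescript
// Escape a dynamic segment so it cannot inject ":" into the key namespace.
// encodeURIComponent turns ":" into "%3A" and "%" into "%25", so distinct
// raw inputs can never collide after escaping.
export function escapeKeySegment(segment: string): string {
  return encodeURIComponent(segment);
}

export const resourceKey = (provider: string, providerResourceId: string) =>
  `gw:resource:${escapeKeySegment(provider)}:${escapeKeySegment(providerResourceId)}`;
```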
In `@apps/gateway/src/routes/admin.ts`:
- Around line 133-145: The delivery-status route handler
(admin.post("/delivery-status")) currently accepts and logs the raw webhook
payload from c.req.json and has no authentication or QStash signature
verification; update this handler to validate the QStash signature or other
webhook auth before processing (e.g., verify headers and signature using your
QStash verification utility or middleware) and remove the console.log of the
full body; replace it with structured logging via your logger (processLogger or
similar) that only records non-sensitive metadata (messageId, deliveryId,
status) or a redacted version of the payload, and ensure any sensitive fields
are stripped or masked before logging and that unauthenticated requests are
rejected with a 401/403.
In `@apps/gateway/src/routes/connections.ts`:
- Around line 25-55: The route handler connections.get("/:provider/authorize",
tenantMiddleware, async (c) => { ... }) is missing the apiKeyAuth middleware so
it is currently public while it writes OAuth state to Redis; add apiKeyAuth to
the middleware chain (e.g., connections.get("/:provider/authorize", apiKeyAuth,
tenantMiddleware, async (c) => ...)), ensure apiKeyAuth is imported where routes
are defined, and keep the rest of the logic (provider lookup via getProvider,
orgId check, redis.hset using oauthStateKey, redis.expire, and
provider.getAuthorizationUrl) unchanged.
In `@apps/gateway/src/routes/webhooks.ts`:
- Line 1: The file directly imports Hono from "hono" which duplicates the same
violation in sibling files; replace the direct import of the Hono symbol with
the project's shared/central Hono wrapper used elsewhere (i.e., import the Hono
app or factory from the repo's common wrapper module the other routes use) so
the route uses the centralized export instead of importing "hono" directly;
update any usages of Hono in this file (e.g., the Hono constructor/instance) to
use that shared wrapper API.
In `@apps/gateway/src/routes/workflows.ts`:
- Line 1: The file imports Hono directly which violates the vendor abstraction
policy; replace the direct import "import { Hono } from 'hono'" with the
vendor-wrapped import (e.g., import { Hono } from '@vendor/hono') and update any
other direct references to the "hono" package in this module (ensure uses of
Hono still match the type/API) so the file consumes the package through the
`@vendor` abstraction instead of importing from "hono" directly.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (52)
.claude/commands/manage_connector.md
.claude/commands/research_codebase.md
.claude/commands/research_codebase_external.md
.claude/commands/research_team.md
.claude/commands/research_web.md
apps/gateway/.env.example
apps/gateway/package.json
apps/gateway/src/app.ts
apps/gateway/src/env.ts
apps/gateway/src/lib/base-url.ts
apps/gateway/src/lib/db.ts
apps/gateway/src/lib/github-jwt.ts
apps/gateway/src/lib/keys.ts
apps/gateway/src/lib/related-projects.ts
apps/gateway/src/lib/secrets.ts
apps/gateway/src/lib/workflow-client.ts
apps/gateway/src/providers/github.ts
apps/gateway/src/providers/index.ts
apps/gateway/src/providers/linear.ts
apps/gateway/src/providers/schemas.ts
apps/gateway/src/providers/sentry.ts
apps/gateway/src/providers/types.ts
apps/gateway/src/providers/vercel.ts
apps/gateway/src/routes/admin.ts
apps/gateway/src/routes/connections.ts
apps/gateway/src/routes/webhooks.ts
apps/gateway/src/routes/workflows.ts
apps/gateway/src/workflows/types.ts
apps/gateway/src/workflows/webhook-receipt.ts
apps/gateway/turbo.json
db/gateway/env.ts
db/gateway/eslint.config.js
db/gateway/package.json
db/gateway/src/client.ts
db/gateway/src/drizzle.config.ts
db/gateway/src/index.ts
db/gateway/src/migrations/0000_quiet_gorgon.sql
db/gateway/src/migrations/meta/0000_snapshot.json
db/gateway/src/migrations/meta/_journal.json
db/gateway/src/schema/index.ts
db/gateway/src/schema/relations.ts
db/gateway/src/schema/tables/index.ts
db/gateway/src/schema/tables/installations.ts
db/gateway/src/schema/tables/resources.ts
db/gateway/src/schema/tables/tokens.ts
db/gateway/src/schema/tables/webhook-deliveries.ts
db/gateway/tsconfig.json
thoughts/shared/plans/2026-02-25-connection-manager-implementation.md
vendor/upstash-workflow/package.json
vendor/upstash-workflow/src/hono.ts
vendor/upstash-workflow/src/nextjs.ts
vendor/upstash-workflow/src/types.ts
Actionable comments posted: 10
♻️ Duplicate comments (11)
package.json (1)
63-63: 🛠️ Refactor suggestion | 🟠 Major
Use workspace catalog for `vercel` dependency versioning. Line 63 should reference `catalog:` instead of a direct semver range to match repo dependency policy.
As per coding guidelines: "Use 'workspace:*' for internal dependencies and 'catalog:' for shared external dependencies in package.json configurations".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 63, Replace the direct semver value for the "vercel" dependency in package.json with the catalog reference: locate the "vercel" entry (currently "vercel": "^50.23.2") and change its version string to use the catalog prefix (e.g., "vercel": "catalog:vercel") so the repo follows the shared-external dependency policy.
apps/gateway/src/env.ts (2)
20-20: ⚠️ Potential issue | 🟠 Major
Strengthen `ENCRYPTION_KEY` validation. Line 20 accepts arbitrary 32+ character strings; enforce strict key encoding (e.g., exact 64-hex or strict base64) before crypto usage.
🔐 Suggested fix
- ENCRYPTION_KEY: z.string().min(32),
+ ENCRYPTION_KEY: z
+   .string()
+   .regex(/^[A-Fa-f0-9]{64}$/, "Must be exactly 64 hex chars (32 bytes)"),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/env.ts` at line 20, The ENCRYPTION_KEY zod schema currently allows any string >=32 chars; tighten validation in the environment schema by replacing ENCRYPTION_KEY: z.string().min(32) with a strict encoded-key check (e.g., use a regex or zod refinements) to require either exactly 64 hex characters or a valid base64 string of the expected byte length; update the ENCRYPTION_KEY schema to something like z.string().regex(/^[0-9a-fA-F]{64}$/) for a 32-byte hex key or use z.string().refine(val => isValidBase64OfLength(val, expectedBytes), { message: 'invalid base64 key' }) and add a clear error message so crypto code consuming ENCRYPTION_KEY gets only properly encoded keys (modify the ENCRYPTION_KEY symbol in the env schema).
1-2: ⚠️ Potential issue | 🟠 Major
Use env-core for the gateway service instead of the Next.js env package. Lines 1-2 and 46-48 couple this Hono service to Next.js-specific env semantics. Use `@t3-oss/env-core` with `runtimeEnv: process.env`.
🔁 Suggested migration
-import { createEnv } from "@t3-oss/env-nextjs";
-import { vercel } from "@t3-oss/env-nextjs/presets-zod";
+import { createEnv } from "@t3-oss/env-core";
+import { vercel } from "@t3-oss/env-core/presets-zod";
 ...
-  experimental__runtimeEnv: {
-    NODE_ENV: process.env.NODE_ENV,
-  },
+  runtimeEnv: process.env,
Also applies to: 46-48
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/env.ts` around lines 1 - 2, Replace the Next.js-specific env package by importing createEnv from "@t3-oss/env-core" (remove "@t3-oss/env-nextjs" and the vercel preset import) and update the createEnv call to pass runtimeEnv: process.env so the Hono gateway uses env-core semantics; locate the existing createEnv(...) invocation and the vercel preset usage and swap them to use `@t3-oss/env-core` with runtimeEnv: process.env instead of the vercel preset.
db/gateway/src/schema/tables/webhook-deliveries.ts (2)
17-19: Status enum is missing "pending" and "failed" states.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/gateway/src/schema/tables/webhook-deliveries.ts` around lines 17 - 19, The status column schema uses enum: [...DELIVERY_STATUSES] but DELIVERY_STATUSES is missing the "pending" and "failed" values; update the DELIVERY_STATUSES constant (or the enum array referenced by status in webhook-deliveries.ts) to include "pending" and "failed" so the text("status", { enum: [...] }).notNull() covers all delivery states.
22-27: Composite index on (provider, deliveryId) should be uniqueIndex to enforce DB-level dedup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/gateway/src/schema/tables/webhook-deliveries.ts` around lines 22 - 27, Change the non-unique composite index providerDeliveryIdx (index("gw_wd_provider_delivery_idx").on(table.provider, table.deliveryId)) to a unique index so the DB enforces no duplicate (provider, deliveryId) pairs; replace the call to index(...) with uniqueIndex("gw_wd_provider_delivery_idx").on(...) (or the project's unique index helper) in the providerDeliveryIdx definition to enforce uniqueness at the database level.
db/gateway/src/schema/tables/installations.ts (1)
30-30: webhookSecret stored as plain text — encrypt at rest.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/gateway/src/schema/tables/installations.ts` at line 30, The webhookSecret column is currently defined as text ("webhook_secret") and must be stored encrypted at rest; change the schema definition for webhookSecret in the installations table to use a binary/bytea column (e.g., binary("webhook_secret") or the db lib's equivalent) or add a column-level encryption transformer, add a migration to convert existing plaintext values to encrypted form, and update all code paths that read/write webhookSecret (create/update/get functions for installations) to perform encryption before persisting and decryption after retrieval, referencing the webhookSecret symbol in installations.ts and the installation data access functions when implementing the changes.
apps/gateway/src/providers/github.ts (1)
29-31: ⚠️ Potential issue | 🟡 Minor
redirectPath is checked but its value is ignored. This is misleading for callers expecting path customization.
🔧 Suggested fix
-  if (options?.redirectPath) {
-    const redirectUri = `${gatewayBaseUrl}/connections/github/callback`;
+  if (options?.redirectPath) {
+    const redirectUri = `${gatewayBaseUrl}${options.redirectPath}`;
     url.searchParams.set("redirect_uri", redirectUri);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/src/providers/github.ts` around lines 29 - 31, The code checks options?.redirectPath but ignores its value when building redirectUri; update the logic in the provider where redirectUri is created (the block using gatewayBaseUrl, options?.redirectPath, and url.searchParams.set("redirect_uri", ...)) so that if options.redirectPath is present you append that path to gatewayBaseUrl (normalizing slashes) instead of the hard-coded "/connections/github/callback", otherwise fall back to the existing default; ensure redirectUri variable is computed once and then used in url.searchParams.set.
db/gateway/package.json (2)
35-35: ⚠️ Potential issue | 🟡 Minor
Use catalog-managed versions for shared external dependencies. These entries are pinned directly instead of being centrally managed.
♻️ Suggested fix
- "@libsql/client": "^0.14.0",
+ "@libsql/client": "catalog:",
  ...
- "dotenv-cli": "^8.0.0",
- "drizzle-kit": "^0.31.4",
+ "dotenv-cli": "catalog:",
+ "drizzle-kit": "catalog:",
As per coding guidelines, "Use 'workspace:*' for internal dependencies and 'catalog:' for shared external dependencies in package.json configurations".
Also applies to: 47-48
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/gateway/package.json` at line 35, Replace the hard-pinned version for the shared external dependency "@libsql/client" in package.json with the catalog-managed specifier (prefix with "catalog:") so the dependency is resolved from the central catalog instead of using "^0.14.0"; also update the other shared external entries referenced in the comment (lines with similar pinned versions) to use "catalog:" as well to keep all shared external deps catalog-managed.
32-32: ⚠️ Potential issue | 🟠 Major
Harden with-env to avoid hard failure on fresh setups/CI. The script assumes a cross-package Vercel env file exists. That path is typically absent on fresh checkouts and CI, which breaks all db:* scripts.
🔧 Suggested fix (prefer local env with fallback)
- "with-env": "dotenv -e ../../apps/gateway/.vercel/.env.development.local --"
+ "with-env": "sh -c 'if [ -f ./.env.local ]; then dotenv -e ./.env.local -- \"$@\"; else dotenv -e ../../apps/gateway/.vercel/.env.development.local -- \"$@\"; fi' --"
Verify each finding against the current code and only fix it if needed. In `@db/gateway/package.json` at line 32, The "with-env" npm script currently hard-fails if ../../apps/gateway/.vercel/.env.development.local is missing; change the "with-env" script in package.json so it first checks for that file and falls back to running dotenv without -e (or without any file) when absent (for example, use a shell conditional or "dotenv -e <path> -- || dotenv --"); update the script named "with-env" so db:* scripts use the fallback instead of failing on fresh checkouts/CI.
db/gateway/src/schema/tables/resources.ts (1)
1-1: ⚠️ Potential issue | 🟠 Major
Enforce uniqueness for (installationId, providerResourceId). A plain index allows duplicates, which can make lookup results non-deterministic.
🛡️ Suggested fix
-import { index, integer, sqliteTable, text } from "drizzle-orm/sqlite-core";
+import { index, integer, sqliteTable, text, uniqueIndex } from "drizzle-orm/sqlite-core";
 ...
-  providerResourceIdx: index("gw_res_provider_resource_idx").on(
+  providerResourceIdx: uniqueIndex("gw_res_provider_resource_idx").on(
     table.installationId,
     table.providerResourceId,
   ),
Also applies to: 28-31
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/gateway/src/schema/tables/resources.ts` at line 1, The (installationId, providerResourceId) composite currently uses a plain index which allows duplicates; update the resources table definition (the sqliteTable call that defines the "resources" table and its index declaration using index(...)) to enforce uniqueness by creating a unique composite index for installationId and providerResourceId (e.g. replace the plain index(...) on installationId and providerResourceId with a unique index via index(...).unique() or the equivalent unique composite constraint supported by drizzle), ensuring lookups on installationId + providerResourceId are deterministic.
apps/gateway/package.json (1)
25-25: ⚠️ Potential issue | 🟡 Minor — Use catalog specifier for `hono` (Line 25).

`"hono": "^4.12.2"` bypasses shared catalog versioning and can drift from repo-wide dependency management.

Proposed fix
- "hono": "^4.12.2", + "hono": "catalog:",#!/bin/bash python - <<'PY' import json from pathlib import Path pkg = Path("apps/gateway/package.json") data = json.loads(pkg.read_text()) hono = data.get("dependencies", {}).get("hono") print("dependencies.hono =", hono) if isinstance(hono, str) and hono.startswith("catalog:"): print("PASS: hono uses catalog specifier") else: raise SystemExit("FAIL: hono is not using a catalog: specifier") PYAs per coding guidelines,
apps/**must useworkspace:*for internal dependencies andcatalog:for shared external dependencies in package.json.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/gateway/package.json` at line 25, Replace the plain version string for the dependency "hono" in the package.json dependencies block with a catalog specifier so it uses the repo-wide catalog versioning (e.g., change "hono": "^4.12.2" to "hono": "catalog:hono@^4.12.2"); update the dependencies entry for the symbol "hono" to start with "catalog:" and keep the same semver range after the @.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/console/src/app/api/webhooks/ingress/resolve-workspace.ts`:
- Around line 11-17: The JSDoc incorrectly claims the function returns the
"first active workspace" even though no status filtering exists; update the
comment in resolve-workspace.ts (for the exported function
resolveWorkspaceFromClerkOrgId / resolveWorkspace) to remove the word "active"
and instead state it returns the first workspace found for the org (or null if
none), and note that orgWorkspaces has no status column so no active-state
filtering is performed.
In `@apps/console/src/env.ts`:
- Around line 42-44: The schema defines QSTASH_TOKEN,
QSTASH_CURRENT_SIGNING_KEY, and QSTASH_NEXT_SIGNING_KEY with
z.string().min(1).optional(), which allows missing signing secrets at runtime;
change these three validators to required non-empty strings (remove .optional()
so they are z.string().min(1)) and update any code that reads these env vars to
assume presence (or add explicit runtime checks that throw with a clear message
if missing) so webhook verification keys cannot be absent.
In `@apps/gateway/src/workflows/types.ts`:
- Line 1: The exported type WebhookReceiptPayload lacks JSDoc; open the type
definition in packages/gateway-types/src/webhooks.ts and add a clear JSDoc block
immediately above the WebhookReceiptPayload declaration describing each field,
their types, required/optional status, and any invariants (e.g., which fields
are present only after verification or parsing, normalization expectations, ID
formats, timestamps, etc.); ensure the docstring uses standard JSDoc tags
(`@typedef` or /** */ with `@property`) so the re-export in
apps/gateway/src/workflows/types.ts surfaces proper documentation to consumers.
In `@db/gateway/src/client.ts`:
- Around line 1-2: The imports in db/gateway (createClient and drizzle) bypass
the vendor abstraction; either add a new vendor wrapper package (e.g.,
`@vendor/libsql` or extend `@vendor/db`) that re-exports createClient from
"@libsql/client/web" and drizzle from "drizzle-orm/libsql", and update
db/gateway to import those symbols from the vendor package, or refactor
db/gateway to consume an existing `@vendor/`* entry that already exposes
createClient and drizzle; ensure exported symbol names match the current usage
(createClient, drizzle) so call sites need minimal changes.
In `@db/gateway/src/schema/tables/installations.ts`:
- Line 31: The metadata column defined as text("metadata", { mode: "json" })
lacks a TypeScript type annotation, causing it to be inferred as unknown; define
a proper interface/type for the metadata shape (e.g., InstallationMetadata) and
annotate the column by chaining .$type<InstallationMetadata>() on the metadata
column definition in the installations table (update the import or add the type
near the table schema as needed) so downstream selects get the correct typed
shape instead of unknown.
In `@db/gateway/src/schema/tables/webhook-deliveries.ts`:
- Around line 5-28: The webhookDeliveries table schema (webhookDeliveries) is
missing a nullable payload column needed for DLQ replay; add a nullable payload
column (e.g., text("payload").$nullable() or text("payload").nullable()) to the
sqliteTable definition so raw webhook bodies can be stored (populate only for
rows with status === "dlq"), leaving all existing columns and the
providerDeliveryIdx index unchanged.
In `@packages/gateway-types/package.json`:
- Line 20: The "typecheck" npm script contains a redundant flag: remove the
"--emitDeclarationOnly false" option from the "typecheck" script so it relies on
"--noEmit" only (update the "typecheck" script definition that currently reads
"tsc --noEmit --emitDeclarationOnly false" to "tsc --noEmit"); this eliminates
the unnecessary flag while preserving the intended behavior.
In `@packages/gateway-types/src/interfaces.ts`:
- Line 44: The interface method refreshToken(refreshToken: string):
Promise<OAuthTokens> uses a parameter name that shadows the method name; rename
the parameter (e.g., to token, refreshTokenValue, or refreshTokenStr) in the
refreshToken method signature to improve clarity and update any implementing
classes or usages (methods named refreshToken in implementations) to use the new
parameter name.
- Around line 5-17: ProviderOptions is a non-discriminated union
(GitHubAuthOptions | LinearAuthOptions) so callers like getAuthorizationUrl
cannot safely narrow the type; add a required discriminant field to each
specific option type (e.g., add provider: "github" to GitHubAuthOptions and
provider: "linear" to LinearAuthOptions) and update the ProviderOptions union
accordingly so callers can switch on options.provider, or alternatively remove
the union and let each provider's getAuthorizationUrl signature accept its
concrete option type (GitHubAuthOptions or LinearAuthOptions) to preserve type
safety.
In `@thoughts/shared/plans/2026-02-25-connection-manager-implementation.md`:
- Line 31: Several fenced code blocks in the markdown (notably the data-flow
diagram block and multiple example/command blocks related to the connection
manager sections) are missing language specifiers and some are not separated by
blank lines; update each fenced block by adding an appropriate language tag
(e.g., ```text, ```bash, ```typescript) and ensure there is a blank line both
before and after each ``` fence so they satisfy MD040/MD031 rules. Search for
the "data-flow diagram" fenced block and the example/command/code blocks within
the connection manager document and apply these fixes consistently across all
occurrences.
---
Duplicate comments:
In `@apps/gateway/package.json`:
- Line 25: Replace the plain version string for the dependency "hono" in the
package.json dependencies block with a catalog specifier so it uses the
repo-wide catalog versioning (e.g., change "hono": "^4.12.2" to "hono":
"catalog:hono@^4.12.2"); update the dependencies entry for the symbol "hono" to
start with "catalog:" and keep the same semver range after the @.
In `@apps/gateway/src/env.ts`:
- Line 20: The ENCRYPTION_KEY zod schema currently allows any string >=32 chars;
tighten validation in the environment schema by replacing ENCRYPTION_KEY:
z.string().min(32) with a strict encoded-key check (e.g., use a regex or zod
refinements) to require either exactly 64 hex characters or a valid base64
string of the expected byte length; update the ENCRYPTION_KEY schema to
something like z.string().regex(/^[0-9a-fA-F]{64}$/) for a 32-byte hex key or
use z.string().refine(val => isValidBase64OfLength(val, expectedBytes), {
message: 'invalid base64 key' }) and add a clear error message so crypto code
consuming ENCRYPTION_KEY gets only properly encoded keys (modify the
ENCRYPTION_KEY symbol in the env schema).
- Around line 1-2: Replace the Next.js-specific env package by importing
createEnv from "@t3-oss/env-core" (remove "@t3-oss/env-nextjs" and the vercel
preset import) and update the createEnv call to pass runtimeEnv: process.env so
the Hono gateway uses env-core semantics; locate the existing createEnv(...)
invocation and the vercel preset usage and swap them to use `@t3-oss/env-core`
with runtimeEnv: process.env instead of the vercel preset.
In `@apps/gateway/src/providers/github.ts`:
- Around line 29-31: The code checks options?.redirectPath but ignores its value
when building redirectUri; update the logic in the provider where redirectUri is
created (the block using gatewayBaseUrl, options?.redirectPath, and
url.searchParams.set("redirect_uri", ...)) so that if options.redirectPath is
present you append that path to gatewayBaseUrl (normalizing slashes) instead of
the hard-coded "/connections/github/callback", otherwise fall back to the
existing default; ensure redirectUri variable is computed once and then used in
url.searchParams.set.
In `@db/gateway/package.json`:
- Line 35: Replace the hard-pinned version for the shared external dependency
"@libsql/client" in package.json with the catalog-managed specifier (prefix with
"catalog:") so the dependency is resolved from the central catalog instead of
using "^0.14.0"; also update the other shared external entries referenced in the
comment (lines with similar pinned versions) to use "catalog:" as well to keep
all shared external deps catalog-managed.
- Line 32: The "with-env" npm script currently hard-fails if
../../apps/gateway/.vercel/.env.development.local is missing; change the
"with-env" script in package.json so it first checks for that file and falls
back to running dotenv without -e (or without any file) when absent (for
example, use a shell conditional or "dotenv -e <path> -- || dotenv --"); update
the script named "with-env" so db:* scripts use the fallback instead of failing
on fresh checkouts/CI.
In `@db/gateway/src/schema/tables/installations.ts`:
- Line 30: The webhookSecret column is currently defined as text
("webhook_secret") and must be stored encrypted at rest; change the schema
definition for webhookSecret in the installations table to use a binary/bytea
column (e.g., binary("webhook_secret") or the db lib's equivalent) or add a
column-level encryption transformer, add a migration to convert existing
plaintext values to encrypted form, and update all code paths that read/write
webhookSecret (create/update/get functions for installations) to perform
encryption before persisting and decryption after retrieval, referencing the
webhookSecret symbol in installations.ts and the installation data access
functions when implementing the changes.
In `@db/gateway/src/schema/tables/resources.ts`:
- Line 1: The (installationId, providerResourceId) composite currently uses a
plain index which allows duplicates; update the resources table definition (the
sqliteTable call that defines the "resources" table and its index declaration
using index(...)) to enforce uniqueness by creating a unique composite index for
installationId and providerResourceId (e.g. replace the plain index(...) on
installationId and providerResourceId with a unique index via
index(...).unique() or the equivalent unique composite constraint supported by
drizzle), ensuring lookups on installationId + providerResourceId are
deterministic.
In `@db/gateway/src/schema/tables/webhook-deliveries.ts`:
- Around line 17-19: The status column schema uses enum: [...DELIVERY_STATUSES]
but DELIVERY_STATUSES is missing the "pending" and "failed" values; update the
DELIVERY_STATUSES constant (or the enum array referenced by status in
webhook-deliveries.ts) to include "pending" and "failed" so the text("status", {
enum: [...] }).notNull() covers all delivery states.
- Around line 22-27: Change the non-unique composite index providerDeliveryIdx
(index("gw_wd_provider_delivery_idx").on(table.provider, table.deliveryId)) to a
unique index so the DB enforces no duplicate (provider, deliveryId) pairs;
replace the call to index(...) with
uniqueIndex("gw_wd_provider_delivery_idx").on(...) (or the project's unique
index helper) in the providerDeliveryIdx definition to enforce uniqueness at the
database level.
In `@package.json`:
- Line 63: Replace the direct semver value for the "vercel" dependency in
package.json with the catalog reference: locate the "vercel" entry (currently
"vercel": "^50.23.2") and change its version string to use the catalog prefix
(e.g., "vercel": "catalog:vercel") so the repo follows the shared-external
dependency policy.
ℹ️ Review info
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (26)
- apps/console/package.json
- apps/console/src/app/api/webhooks/ingress/dispatch.ts
- apps/console/src/app/api/webhooks/ingress/resolve-workspace.ts
- apps/console/src/app/api/webhooks/ingress/route.ts
- apps/console/src/env.ts
- apps/gateway/api/index.ts
- apps/gateway/package.json
- apps/gateway/src/env.ts
- apps/gateway/src/providers/github.ts
- apps/gateway/src/providers/types.ts
- apps/gateway/src/workflows/types.ts
- db/gateway/env.ts
- db/gateway/package.json
- db/gateway/src/client.ts
- db/gateway/src/schema/tables/installations.ts
- db/gateway/src/schema/tables/resources.ts
- db/gateway/src/schema/tables/webhook-deliveries.ts
- package.json
- packages/gateway-types/eslint.config.js
- packages/gateway-types/package.json
- packages/gateway-types/src/index.ts
- packages/gateway-types/src/interfaces.ts
- packages/gateway-types/src/providers.ts
- packages/gateway-types/src/webhooks.ts
- packages/gateway-types/tsconfig.json
- thoughts/shared/plans/2026-02-25-connection-manager-implementation.md
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
These icons always appear alongside visible text labels, so marking them as decorative prevents screen readers from announcing SVG path data. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…teway, and console
The fetch call to Linear's GraphQL endpoint was sending the raw access token without the required "Bearer " prefix, causing 401 auth failures.
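A header-builder sketch makes the fix concrete (the helper name is illustrative, not the actual code; the point is that Linear's GraphQL endpoint expects OAuth access tokens with the `Bearer` scheme):

```typescript
// Hypothetical helper: build headers for a Linear GraphQL request.
// Sending the raw token without "Bearer " is the 401 bug described above.
function linearGraphqlHeaders(accessToken: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    Authorization: `Bearer ${accessToken}`,
  };
}
```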
…ry deps

- Update backfill/gateway/connections test assertions to match source changes (new headers, correlationId, db.batch migration)
- Fix optional chaining bug in agent.ts where null/undefined messages bypassed NoMessagesError guard
- Migrate Promise.all to db.batch in sources.ts m2m router
- Consolidate @sentry/core to catalog: protocol across all packages
The X-API-Key check returned "service", identical to the fallback, making it unreachable dead code. Simplified to a single nullish coalescing expression with no behavioral change.
… key The QStash deduplicationId only included provider, installationId, and orgId, causing different parameterized backfills (e.g. depth=7 vs depth=90) to be silently suppressed. Default depth to 30 at the schema level and append depth and sorted entityTypes to the key so each unique combination is queued independently.
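A sketch of the widened deduplication key (names and separator are assumptions; what matters is that depth defaults to 30 and the sorted entityTypes list is folded into the key so distinct parameterizations are not suppressed):

```typescript
// Hypothetical builder for the QStash deduplicationId described above.
// Sorting entityTypes makes the key deterministic regardless of input order.
interface BackfillParams {
  provider: string;
  installationId: string;
  orgId: string;
  depth?: number; // schema-level default of 30 assumed
  entityTypes: string[];
}

function backfillDedupId(p: BackfillParams): string {
  const depth = p.depth ?? 30;
  const entities = [...p.entityTypes].sort().join(",");
  return [p.provider, p.installationId, p.orgId, depth, entities].join(":");
}
```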
Prefer NeonDbError.code "23505" (UNIQUE_VIOLATION) over fragile substring matching on error.message. Retains the substring fallback for safety in case the error is wrapped without a code property.
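The described check can be sketched as follows (the error shape is an assumption following Postgres conventions; `23505` is the standard `unique_violation` SQLSTATE):

```typescript
// Hypothetical guard: prefer the structured Postgres error code, and keep the
// substring fallback for drivers that wrap the error without a `code` field.
const UNIQUE_VIOLATION = "23505";

function isUniqueViolation(err: unknown): boolean {
  if (typeof err !== "object" || err === null) return false;
  const e = err as { code?: unknown; message?: unknown };
  if (e.code === UNIQUE_VIOLATION) return true;
  return (
    typeof e.message === "string" &&
    e.message.includes("duplicate key value violates unique constraint")
  );
}
```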
…backfill The three Hono-based services had error-capture middleware wired up but no Sentry.init() or DSN — all captureException calls were silently dropped. Created Sentry projects via API and added init + env wiring for each app.
The /cli/auth page was the only (app)/ route Next.js attempted to statically prerender, causing the layout's tRPC prefetch to fail with ECONNREFUSED during build. Extract client logic to a separate component and make page.tsx a server component with force-dynamic export.
- Use process.env for provider registration guards (env object under skipValidation omits extended preset vars from @t3-oss/env-core)
- Add @sentry/core stub via resolve alias (sentry-init.ts calls Sentry.init() at module load, unavailable in @sentry/core@10 types)
- Set NODE_ENV=production in test setup to skip gateway dev delays
- Update assertions to use toMatchObject for correlationId tolerance
- Update api-console-connections tests for org-scoped apiKeyProcedure
Replace 34 export * statements across 15 files in @lightfastai/ai-sdk, @repo/ai, and @db/chat with explicit named exports. Eliminates all no-restricted-syntax lint warnings and improves tree-shaking.
Sentry.init() does not exist in @sentry/core — the previous calls were silently failing at runtime. Replace with initAndBind(ServerRuntimeClient) using a fetch-based transport. Update integration test stub to match.
Add vitest to root devDependencies so vitest.shared.ts can resolve its import, exclude vitest.config.ts from Next.js app tsconfigs to prevent build type-checking failures, enable turbo test caching with proper inputs, add missing test script to www, and remove orphan test script from core/mcp.
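For reference, a minimal shared Vitest base of the kind `vitest.shared.ts` implies might look like this (a sketch — the option values are illustrative assumptions, not the repo's actual config):

```typescript
// vitest.shared.ts (hypothetical): a base config each package can extend
// via mergeConfig, keeping Node environment and test globs consistent.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "node",
    include: ["src/**/*.test.ts"],
    passWithNoTests: true,
  },
});
```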
…count-info feat(connections): provider accountInfo, OAuth reliability, and workspace redesign
Summary
Full-stack extraction of the Gateway, Connections, and Backfill services from the Console monolith into three independent Hono/Vercel services. 316 commits across 10 phases taking the system from a tightly-coupled Console-only architecture to an event-driven microservices topology with shared PlanetScale storage, comprehensive test coverage, and org-scoped connection ownership.
355 files changed — +43,937 / −10,244
Architecture
Data flow:
All services share `@db/console` (PlanetScale) + Upstash Redis (`gw:` namespace) + `GATEWAY_API_KEY` for service auth.

Commit-by-Commit Changelog (316 commits, 10 phases)
Phase 1: Gateway Foundation (14 commits)
The initial Gateway service scaffolding — Hono app, provider system, durable webhook delivery pipeline.
- `2e3ddf4`, `5c04b65` — `WebhookProvider` interface with `extractDeliveryId`, `verifySignature`, `extractEventType`
- `5f4b753`, `4c504d6` — `SourceType` enum for provider dispatch
- `d59457f` — `VERCEL_URL` / `VERCEL_BRANCH_URL` fallbacks
- `477f63c`, `59afecb`, `d6677d1` — `connections`, `webhook_endpoints`, `deliveries` tables
- `123b407` — `crypto.subtle` for timing-safe compare
- `5da80b0` — `WebhookProvider`, `SourceType`, `WebhookPayload` types
- `e1b1528`, `a970e15` — `POST /api/webhooks/ingress` — receives Gateway deliveries, resolves workspace, dispatches to Inngest
- `d83f290`, `8035116` — `gatewayInstallationId` column on `workspace_integrations`

Phase 2: Database Migration — Turso → PlanetScale (4 commits)
Three-phase migration eliminating the separate Turso/SQLite database in favor of shared PlanetScale.
- `47065e7` — `gw_installations`, `gw_resources`, `gw_tokens`, `gw_webhook_deliveries` with proper indexes and FKs
- `3349372` — `export *` with explicit named exports — prevent accidental re-exports, enable tree-shaking
- `93d0b21` — `@db/gateway` (Turso) for `@db/console` (PlanetScale), change Vercel runtime to Node.js
- `407f41f`

Phase 3: Architecture Refactoring (8 commits)
Decompose monolithic gateway into clean provider/strategy patterns before service extraction.
- `7352f75`, `3802a5e`, `bd03358` — `ConnectionStrategy` with `create()` and `teardown()` methods, durable teardown via Upstash Workflow
- `79cd9cd` — `apps/backfill` with Inngest workflows
- `075dcff`, `4cda869` — `webhook-receipt` → `webhook-delivery`, deduplicate utilities
- `508ca5b`, `9969a6e`

Phase 4: Service Extraction (7 commits)
Extract Connections and Backfill into independent Hono/Vercel services; remove legacy Console code.
- `704110c` — `apps/connections` with OAuth flows, token vault (AES-256-GCM), webhook registration, tenant middleware
- `007bef9` — `backfill-orchestrator` (per-connection) → `entity-worker` (per-entity) via Inngest events
- `71750cc`, `b9cabe5` — `api/github/*`, `api/vercel/*`, `console-oauth` package, `github-sync-orchestrator`, `files-batch-processor`, `backfill-orchestrator` (XState)
- `b1720d2` — `/authorize` and `/callback` endpoints
- `3629006` — `orgRouter.connections` tRPC router, remove `userRouter.userSources`, `userSources` table
- `1a59f9d`

Phase 5: Test Infrastructure (10 commits)
Comprehensive test suites across all three services plus shared test infrastructure.
- `b67a9e8`, `15e1d36`, `be91cc5`, `3f2d25b`, `ce1854d`, `1a59f9d`, `6687261`, `f321e07`, `895d413`, `b77f2af`

Phase 6: Security Hardening (40+ commits)
Systematic security improvements across all services, many driven by CodeRabbit review.
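The constant-time comparison pattern that recurs throughout these hardening commits can be sketched generically (an illustration, not the repo's exact implementation):

```typescript
// Accumulate differences with bitwise OR so the loop always runs to the end;
// fold the length difference in rather than returning early on a mismatch,
// which is the length-based timing leak the commits below eliminate.
function timingSafeEqual(a: Uint8Array, b: Uint8Array): boolean {
  const len = Math.max(a.length, b.length);
  let diff = a.length ^ b.length;
  for (let i = 0; i < len; i++) {
    diff |= (a[i] ?? 0) ^ (b[i] ?? 0);
  }
  return diff === 0;
}
```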
Cryptography & Authentication:
- `701d84d` — Eliminate timing side-channels in timingSafeEqual — constant-time comparison preventing length-based timing attacks
- `e8fea14` — Constant-time comparison for service API key check in Gateway
- `79aa6a6` — Constant-time comparison for API key validation in Backfill
- `3a120d1` — Timing-safe comparison for API key auth in Connections
- `47cf341` — Harden timing-safe comparisons in crypto and webhook auth
- `067efdc` — Reject short encryption keys and fix Uint8Array return type
- `a21651f` — Validate hex input in hexToBytes to prevent silent NaN-to-zero conversion
- `72b18c9` — Validate hex input in connections hexToBytes
- `3231874` — Chunk bytesToBase64 to avoid argument limit overflow
- `c7d4a74` — Detect PKCS#1 key format in importPrivateKey with actionable error

Input Validation & Error Handling:
- `89f829a` — Validate orgId format in tenant middleware
- `b469956` — Validate orgId and connectedBy before persisting installation
- `44bb337` — Validate installationId is numeric before URL interpolation
- `d89cd14` — Validate depth parameter at runtime in trigger route
- `100183c` — Validate depth parameter before computing since timestamp
- `8aa1859` — Return 400 for malformed JSON in trigger routes
- `82d61f5` — Validate JSON parsing and receivedAt in service auth path
- `04ad39d` — Validate and clamp DLQ query params limit/offset
- `e0339e2` — Validate eventType in service auth webhook guard

Network Resilience:
- `100183c` — Add 10s timeout to Gateway fetch in backfill get-connection step
- `c02d835` — Add 15s fetch timeout to all Linear API calls
- `64efd65` — Add 30s timeout to GitHub API fetch calls
- `bbed278` — Add 30s fetch timeouts to entity worker
- `19014e2` — Add 10s timeout to GitHub installation token fetch

OAuth & Token Security:
- `ef1e52e` — Enforce OAuth state provider match for all providers
- `e7468b0` — Atomic pipeline for OAuth state + TTL in Redis
- `aa43991` — Remove console.log calls leaking sensitive Vercel data
- `c866333` — Avoid logging full payload to prevent data leaks
- `4f2111e` — Preserve expiresAt on token refresh when new expiresIn is absent

Idempotency & Data Integrity:
- `2adafe9` — Unique constraint on `gw_resources` (installationId, providerResourceId)
- `537e2f9` — Unique (provider, delivery_id) on `gw_webhook_deliveries`
- `908a32b` — Unique installation_id on `gw_tokens`
- `6bf4f82` — Idempotent upsert in handleCallback (replace select-then-insert)
- `0ef735d` — Idempotent writeTokenRecord with upsert
- `cb37468` — Add deduplicationId to QStash backfill publish calls

Phase 7: Console Refactors (25+ commits)
Rewire Console UI and tRPC to use the new services instead of legacy code paths.
- `3629006` — `orgRouter.connections` tRPC router with full CRUD
- `b1720d2` — `/authorize` + `/callback`
- `b9cabe5` — `api/github/*` (5 routes), `api/vercel/*` (3 routes)
- `faa1366`, `cdcc9b2`, `f48e627`, `4bf5402`, `067efd6`, `1974d99`, `e7ed4bc`, `abf2691` — `/services` namespace
- `fccc021`, `044d1fb`, `b0ac5f4`, `88217fe`, `2790103`, `bb7db72`

Phase 8: Build System & Deployment (50+ commits)
Getting three new Hono services to deploy correctly on Vercel with proper ESM, TypeScript, and bundling.
Vercel Deployment:
- `999c375` — Generate Vercel Build Output API bundle to fix ERR_MODULE_NOT_FOUND
- `8a46626` — Use tsup to bundle workspace deps at app level
- `d231c0a` — Use turbo build to compile internal dependencies
- `8dc5374` — Align with official Vercel Hono pattern for connections + gateway
- `1ccc639` — Align backfill with Vercel Hono starter pattern
- `0dd866d` — Remove root build scripts for Hono apps (let Vercel handle it)

TypeScript & ESM:
- `0768956` / `86c666b` — Fix Hono type errors + standardize tsconfigs — ESNext/Bundler resolution for Hono apps
- `595c854` — Add shared Hono tsconfig (internal/typescript/hono.json)
- `c9443af` — Add .js extensions to relative imports for ESM compatibility
- `cc5a0fa` — Clean up build scripts and vercel config across all Hono apps
- `31e4169` — Add clientPrefix to all env-core createEnv calls for TS 5.9 compat
- `b73fcc4` — Narrow union type for GitHub OAuth response to fix TS 5.9 build

ESLint:
- `3cf2e59` — Add @hono/eslint-config as `@repo/eslint-config/hono`
- `f69582e` — Switch connections to @hono/eslint-config, fix all lint errors
- `22eb071` — Switch backfill to @hono/eslint-config, fix all lint errors

Environment & Dependencies:
- `51ca163` — Migrate non-Next.js packages from env-nextjs to env-core — backfill, connections, gateway
- `3686dc4` — Switch @vendor/inngest env from env-nextjs to env-core
- `f31b862` — Add Hono adapter getEnv(c) to backfill, connections, and gateway
- `a8b1d57` — Move hono and postgres to pnpm catalog
- `ce909f6` — Use tsup for @vendor/qstash ESM-compatible dist output
- `c06bafa` — Use tsup for @db/console ESM-compatible dist output

Phase 9: Provider Implementations & Bug Fixes (60+ commits)
Detailed provider-specific fixes, edge case handling, and CodeRabbit review responses.
GitHub:
- `50a5fbc` — Use dynamic `accountType` instead of hardcoded "Organization"
- `244b4e4` — Detect reactivated GitHub installations correctly
- `c96e27f` — Fix reactivation detection for Linear, handle GitHub `setup_action=request`
- `8306bd2` — Harden GitHub installation token response handling
- `5a21745` — Add runtime validation and remove double-casts in GitHub connector
- `679bef2` — Validate providerResourceId before numeric coercion

Linear:
- `aa55e09` — Use OAuth access token for Linear webhook API calls
- `c3edf4a` — Use stable externalId and upsert for Linear re-auth
- `81d9dfa` — Deterministic fallback in linear extractDeliveryId for idempotency

Sentry:
- `5c405d5` — Remove dead webhook registration flow from handleCallback
- `1f99c29` — Validate hex input in Sentry token encode/decode

Vercel:
- `28c0ecd` — Prefer `payload.id` over `x-vercel-id` header for dedup key
- `1705cf2` — Validate `deployment.uid` before building deliveryId
- `0aced24` — Extract `deriveExternalId` helper in VercelProvider

Backfill Engine:
- `1dec076` — Fix entity-worker `eventsDispatched` lost on memoized replay
- `898e093` — Make since timestamp deterministic across retries
- `ce66fd9` — Escape backslashes in CEL string interpolation
- `f2cf414` — Bounded concurrency in entity worker dispatch
- `d2fbf09` — Handle non-numeric rate-limit headers in parseGitHubRateLimit

Connections Service:
- `d5b8e4b` — Wrap soft-delete step in transaction for atomicity
- `43602d4` — Wrap cancel-backfill in try/catch for best-effort semantics
- `c8e0166` — Make notifyBackfillService truly fire-and-forget
- `3c3e9d9` — Propagate non-404 errors in detectConfig instead of swallowing
- `752b521` — Add deduplicationId to QStash backfill publish calls

Phase 10: Integration Tests & Final Polish (30+ commits)
Cross-service integration test suites and final typecheck/lint cleanup.
- `b245b78`, `0f03225`, `cce7385`, `b1550f6`, `445c39c`, `fd0d918`, `0dad392`, `f5821b7`, `5235cd8`, `83fccff`, `2088cf2`

New Services
- `apps/gateway`
- `apps/connections`
- `apps/backfill`

New Packages
- `@vendor/qstash`
- `@vendor/related-projects` (`@vercel/related-projects`)
- `@repo/gateway-types` (`WebhookProvider`, `SourceType`, `WebhookPayload`)
- `@repo/console-test-db`
- `@repo/console-linear`
- `@repo/console-sentry`
- `packages/integration-tests`

Deleted Code
- `@db/gateway` (Turso/SQLite) `gw_*` tables
- `@repo/console-oauth`
- `api/github/*` routes (5 files)
- `api/vercel/*` routes (3 files)
- `userRouter.userSources` (942 lines) → `orgRouter.connections` (org-scoped)
- `backfill-orchestrator.ts` (XState, 433 lines)
- `sync-orchestrator.ts` (322 lines)
- `files-batch-processor.ts` (254 lines)
- `github-sync-orchestrator.ts` (270 lines)
- `github-push-handler.ts` (281 lines)
- `console-webhooks/github.ts` (351 lines)
- `console-webhooks/vercel.ts` (416 lines)
- `console-webhooks/common.ts` (222 lines)
- `console-webhooks/types.ts` (77 lines)
- `console-oauth/` (1,175 lines)
- `console-octokit-github/config-detector.ts` (157 lines)
- `console-octokit-github/github-content.ts` (378 lines)
- `console-validation/workflow-io.ts` (165 lines)
- `userSources` DB table
- `gw_installations` + `gw_tokens`
- `scripts/postinstall.js` (dual binary)

Database Changes
New tables (migration 0029, 0030):
- `gw_installations` — Provider installations with org ownership, external IDs, unique constraints
- `gw_resources` — Tracked resources per installation with unique (installationId, providerResourceId)
- `gw_tokens` — Encrypted OAuth tokens (AES-256-GCM) with unique installation_id
- `gw_webhook_deliveries` — Delivery log with unique (provider, delivery_id), status tracking

Modified tables:
- `workspace_integrations` — Added `gw_installation_id` FK, set `onDelete: "set null"` for installation FK
- `gw_*` tables use `$onUpdateFn` for `updatedAt` columns

Client changes:
- Removed `createClient` from barrel export to prevent accidental connection pool creation
- Replaced `export *` with explicit named exports

Testing Summary
Test plan
🤖 Generated with Claude Code