feat(api): handle build server deployment init gracefully for older cli versions #2526
Conversation
This PR adapts the deployment initialization endpoint to handle build server deployments with older CLI versions gracefully.

When we introduced automatic deployments via the build server, we slightly changed the deployment flow, mainly in the initialization and starting step: deployments are now first initialized in the `PENDING` status and updated to `BUILDING` once the build server dequeues the build job. Newer versions of the `deploy` command in the CLI will automatically attach to the existing deployment and continue with the build process. For older versions, we can't change the command's client-side behavior, so we need to handle this case here in the initialization endpoint. As we control the env variables which the git meta is extracted from in the build server, we can use those to pass the existing deployment ID to this endpoint. This doesn't affect the git meta on the deployment, as it is set prior to this step using the /start endpoint. It's a rather hacky solution, but it will do for now, as it enables us to avoid degrading the build server experience for users with older CLI versions. We'll eventually be able to remove this workaround once we stop supporting 3.x CLI versions.
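For illustration, here is a minimal, hedged sketch of the server-side branch this description refers to, pieced together from the diff reviewed later in this thread. The standalone function name and the local `ServiceValidationError` stand-in are illustrative only; the real change lives inside `InitializeDeploymentService` and assumes the repo's generated Prisma client with a `WorkerDeployment` model.

```ts
import { PrismaClient } from "@prisma/client";

// Stand-in for the webapp's ServiceValidationError (assumed shape).
class ServiceValidationError extends Error {}

// Hedged sketch of the legacy early-return path, assuming the payload and model
// names shown in the review below; not a drop-in implementation.
export async function resolveLegacyBuildServerDeployment(
  prisma: PrismaClient,
  environmentId: string,
  gitMeta: { commitSha?: string } | undefined
) {
  // The build server controls the env vars the CLI extracts git metadata from, so for
  // older CLIs it places the already-created deployment's friendly ID in the commit SHA slot.
  if (!gitMeta?.commitSha?.startsWith("deployment_")) {
    return undefined; // not the legacy path; fall through to normal initialization
  }

  const existingDeployment = await prisma.workerDeployment.findFirst({
    where: { environmentId, friendlyId: gitMeta.commitSha },
  });

  if (!existingDeployment) {
    throw new ServiceValidationError(
      "Existing deployment not found during deployment initialization"
    );
  }

  // Reuse the PENDING deployment created earlier via /start instead of creating a new one.
  return {
    deployment: existingDeployment,
    imageRef: existingDeployment.imageReference ?? "",
  };
}
```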
Walkthrough
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10–15 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (3)
apps/webapp/app/v3/services/initializeDeployment.server.ts (3)
22-55: Do not bypass the UNMANAGED restriction in the legacy path.

The early-return branch sidesteps the later `UNMANAGED` check. Validate the existing deployment's type before returning. Apply this diff after fetching `existingDeployment`:

    const existingDeployment = await this._prisma.workerDeployment.findFirst({
      where: {
        environmentId: environment.id,
        friendlyId: existingDeploymentId,
      },
    });

    if (!existingDeployment) {
      throw new ServiceValidationError(
        "Existing deployment not found during deployment initialization"
      );
    }

  + // Keep behavior consistent: we don't support UNMANAGED deployments
  + if (existingDeployment.type !== "MANAGED") {
  +   throw new ServiceValidationError("UNMANAGED deployments are not supported");
  + }
22-55: Avoid returning an empty imageRef; reconstruct it if missing.

Returning `""` can break callers that assume a valid ref. If `imageReference` is null, reconstruct it using the existing deployment's version/shortCode, mirroring the main-path behavior. Apply this diff before the return:

  - return {
  -   deployment: existingDeployment,
  -   imageRef: existingDeployment.imageReference ?? "",
  - };
  + let imageRef = existingDeployment.imageReference ?? "";
  + if (!imageRef) {
  +   const isV4Deployment = existingDeployment.type === "MANAGED";
  +   const registryConfig = getRegistryConfig(isV4Deployment);
  +   const [imageRefError, imageRefResult] = await tryCatch(
  +     getDeploymentImageRef({
  +       registry: registryConfig,
  +       projectRef: environment.project.externalRef,
  +       nextVersion: existingDeployment.version,
  +       environmentType: environment.type,
  +       deploymentShortCode: existingDeployment.shortCode,
  +     })
  +   );
  +   if (imageRefError) {
  +     logger.error("Failed to reconstruct imageRef for existing deployment", {
  +       environmentId: environment.id,
  +       deploymentId: existingDeployment.id,
  +       friendlyId: existingDeployment.friendlyId,
  +       cause: imageRefError.message,
  +     });
  +     throw new ServiceValidationError("Failed to get deployment image ref");
  +   }
  +   imageRef = imageRefResult.imageRef;
  + }
  + return {
  +   deployment: existingDeployment,
  +   imageRef,
  + };

Please confirm `WorkerDeployment` has `version` and `shortCode` fields (used above). If names differ, adjust accordingly.
22-55: Add diagnostics for legacy path usage and not-found cases.

Lightweight logs help us measure rollout and debug unexpected inputs. Apply this diff at the start of the legacy branch and in the not-found case:

  - if (payload.gitMeta?.commitSha?.startsWith("deployment_")) {
  + if (payload.gitMeta?.commitSha?.startsWith("deployment_")) {
  +   const existingDeploymentId = payload.gitMeta.commitSha!;
  +   logger.debug("InitializeDeploymentService: legacy path - reusing existing deployment", {
  +     environmentId: environment.id,
  +     friendlyId: existingDeploymentId,
  +   });
  -   const existingDeploymentId = payload.gitMeta.commitSha;
  +   // existingDeploymentId already defined above
      const existingDeployment = await this._prisma.workerDeployment.findFirst({
        where: {
          environmentId: environment.id,
          friendlyId: existingDeploymentId,
        },
      });

      if (!existingDeployment) {
  +     logger.warn("Legacy path: existing deployment not found", {
  +       environmentId: environment.id,
  +       friendlyId: existingDeploymentId,
  +     });
        throw new ServiceValidationError(
          "Existing deployment not found during deployment initialization"
        );
      }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
apps/webapp/app/v3/services/initializeDeployment.server.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
**/*.{ts,tsx}
: Always prefer using isomorphic code like fetch, ReadableStream, etc. instead of Node.js specific code
For TypeScript, we usually use types over interfaces
Avoid enums
No default exports, use function declarations
Files:
apps/webapp/app/v3/services/initializeDeployment.server.ts
{packages/core,apps/webapp}/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
We use zod a lot in packages/core and in the webapp
Files:
apps/webapp/app/v3/services/initializeDeployment.server.ts
apps/webapp/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
When importing from @trigger.dev/core in the webapp, never import the root package path; always use one of the documented subpath exports from @trigger.dev/core’s package.json
Files:
apps/webapp/app/v3/services/initializeDeployment.server.ts
{apps/webapp/app/**/*.server.{ts,tsx},apps/webapp/app/routes/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Access environment variables only via the env export from app/env.server.ts; do not reference process.env directly
Files:
apps/webapp/app/v3/services/initializeDeployment.server.ts
apps/webapp/app/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/webapp.mdc)
Modules intended for test consumption under apps/webapp/app/**/*.ts must not read environment variables; accept configuration via options instead
Files:
apps/webapp/app/v3/services/initializeDeployment.server.ts
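As a hedged illustration of the guideline above (accept configuration via options rather than reading environment variables in test-consumed modules), here is a generic sketch; the names are made up for illustration and are not from the repository:

```ts
// Illustrative sketch only: configuration is passed in as options, so the module
// never reads process.env and stays easy to exercise in unit tests.
type DeployTimeoutOptions = {
  timeoutMs: number; // caller supplies this; the module does not look it up itself
};

export function isDeploymentTimedOut(
  startedAt: Date,
  options: DeployTimeoutOptions,
  now: Date = new Date()
): boolean {
  return now.getTime() - startedAt.getTime() > options.timeoutMs;
}

// A caller at the edge (e.g. a route or *.server.ts entry point) resolves the value once,
// for example from the app's `env` export, and threads it through as options.
```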
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (23)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (6, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (2, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (7, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (5, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (3, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (4, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (8, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (8, 8)
- GitHub Check: units / internal / 🧪 Unit Tests: Internal (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (7, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (4, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (2, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (5, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (1, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (3, 8)
- GitHub Check: units / webapp / 🧪 Unit Tests: Webapp (6, 8)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - npm)
- GitHub Check: units / packages / 🧪 Unit Tests: Packages (1, 1)
- GitHub Check: e2e / 🧪 CLI v3 tests (windows-latest - pnpm)
- GitHub Check: e2e / 🧪 CLI v3 tests (ubuntu-latest - npm)
- GitHub Check: typecheck / typecheck
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (1)
apps/webapp/app/v3/services/initializeDeployment.server.ts (1)
22-55: Ensure build timeouts are scheduled for the legacy `deployment_` commitSha path.

The early-return branch in apps/webapp/app/v3/services/initializeDeployment.server.ts (legacy commitSha check) bypasses the `TimeoutDeploymentService.enqueue` call later in the same file (apps/webapp/app/v3/services/initializeDeployment.server.ts:174–177). There are enqueue calls elsewhere (e.g. apps/webapp/app/v3/services/createDeploymentBackgroundWorkerV4.server.ts and apps/webapp/app/v3/services/deployment.server.ts), but I couldn't find the /start flow that creates the existing deployment to confirm it already enqueues a timeout. Confirm the /start step schedules a timeout that covers PENDING and BUILDING for deployments created by older CLI flows, or add a `TimeoutDeploymentService.enqueue` before returning the existing deployment.
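If the enqueue option is chosen, a rough sketch of where it could sit in the legacy branch follows. The `TimeoutDeploymentService.enqueue` argument shape and the timeout value are assumptions (declared as stand-ins below) and must be verified against the real service before use:

```ts
// Stand-ins so the sketch type-checks in isolation; in the webapp these come from the
// existing services. The enqueue signature below is an assumption, not the verified API.
declare const TimeoutDeploymentService: {
  enqueue(deploymentId: string, status: string, message: string, expiresAt: Date): Promise<void>;
};
declare const existingDeployment: { id: string; imageReference: string | null };

const BUILD_TIMEOUT_MS = 20 * 60 * 1000; // illustrative value only

async function scheduleLegacyBuildTimeoutAndReturn() {
  // Schedule a timeout covering the BUILDING window before reusing the existing deployment.
  await TimeoutDeploymentService.enqueue(
    existingDeployment.id,
    "BUILDING",
    "Build timed out waiting for the build server",
    new Date(Date.now() + BUILD_TIMEOUT_MS)
  );

  return {
    deployment: existingDeployment,
    imageRef: existingDeployment.imageReference ?? "",
  };
}
```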
    return this.traceWithEnv("call", environment, async () => {
      if (payload.gitMeta?.commitSha?.startsWith("deployment_")) {
        // When we introduced automatic deployments via the build server, we slightly changed the deployment flow
        // mainly in the initialization and starting step: now deployments are first initialized in the `PENDING` status
        // and updated to `BUILDING` once the build server dequeues the build job.
        // Newer versions of the `deploy` command in the CLI will automatically attach to the existing deployment
        // and continue with the build process. For older versions, we can't change the command's client-side behavior,
        // so we need to handle this case here in the initialization endpoint. As we control the env variables which
        // the git meta is extracted from in the build server, we can use those to pass the existing deployment ID
        // to this endpoint. This doesn't affect the git meta on the deployment as it is set prior to this step using the
        // /start endpoint. It's a rather hacky solution, but it will do for now as it enables us to avoid degrading the
        // build server experience for users with older CLI versions. We'll eventually be able to remove this workaround
        // once we stop supporting 3.x CLI versions.

        const existingDeploymentId = payload.gitMeta.commitSha;
        const existingDeployment = await this._prisma.workerDeployment.findFirst({
          where: {
            environmentId: environment.id,
            friendlyId: existingDeploymentId,
          },
        });

        if (!existingDeployment) {
          throw new ServiceValidationError(
            "Existing deployment not found during deployment initialization"
          );
        }

        return {
          deployment: existingDeployment,
          imageRef: existingDeployment.imageReference ?? "",
        };
      }
Legacy path bypasses self-hosted restriction check. Add the same guard as the main path.
Right now, the early-return path skips the `payload.selfHosted && remoteBuildsEnabled()` gate, potentially allowing flows the main path blocks. Mirror that check here to keep policy parity.
Apply this diff inside the legacy branch before querying Prisma:
if (payload.gitMeta?.commitSha?.startsWith("deployment_")) {
+ // Parity with the main path: do not allow self-hosted on this instance
+ if (payload.selfHosted && remoteBuildsEnabled()) {
+ throw new ServiceValidationError("Self-hosted deployments are not supported on this instance");
+ }
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
    return this.traceWithEnv("call", environment, async () => {
      if (payload.gitMeta?.commitSha?.startsWith("deployment_")) {
        // When we introduced automatic deployments via the build server, we slightly changed the deployment flow
        // mainly in the initialization and starting step: now deployments are first initialized in the `PENDING` status
        // and updated to `BUILDING` once the build server dequeues the build job.
        // Newer versions of the `deploy` command in the CLI will automatically attach to the existing deployment
        // and continue with the build process. For older versions, we can't change the command's client-side behavior,
        // so we need to handle this case here in the initialization endpoint. As we control the env variables which
        // the git meta is extracted from in the build server, we can use those to pass the existing deployment ID
        // to this endpoint. This doesn't affect the git meta on the deployment as it is set prior to this step using the
        // /start endpoint. It's a rather hacky solution, but it will do for now as it enables us to avoid degrading the
        // build server experience for users with older CLI versions. We'll eventually be able to remove this workaround
        // once we stop supporting 3.x CLI versions.

        // Parity with the main path: do not allow self-hosted on this instance
        if (payload.selfHosted && remoteBuildsEnabled()) {
          throw new ServiceValidationError("Self-hosted deployments are not supported on this instance");
        }

        const existingDeploymentId = payload.gitMeta.commitSha;
        const existingDeployment = await this._prisma.workerDeployment.findFirst({
          where: {
            environmentId: environment.id,
            friendlyId: existingDeploymentId,
          },
        });

        if (!existingDeployment) {
          throw new ServiceValidationError(
            "Existing deployment not found during deployment initialization"
          );
        }

        return {
          deployment: existingDeployment,
          imageRef: existingDeployment.imageReference ?? "",
        };
      }
🤖 Prompt for AI Agents
In apps/webapp/app/v3/services/initializeDeployment.server.ts around lines 22 to
55, the legacy early-return path that looks up an existing deployment using
payload.gitMeta.commitSha bypasses the same self-hosted vs remote-builds gate
used in the main path; add the identical check before querying Prisma: if
payload.selfHosted is true and remoteBuildsEnabled() is also true, throw a
ServiceValidationError (same message/behavior as the main path) to block the
legacy flow, ensuring parity with the primary initialization branch.
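For orientation only, here is a hedged sketch of what the older-CLI flow effectively sends to the initialization endpoint under this workaround. The URL path, header names, and the payload fields other than `gitMeta.commitSha` and `selfHosted` are placeholders, not the verified API surface:

```ts
// Hypothetical illustration; endpoint path and most fields are assumptions.
async function initializeDeploymentAsOlderCli(apiUrl: string, accessToken: string) {
  const response = await fetch(`${apiUrl}/api/v1/deployments`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // The build server controls the env vars the CLI reads its git metadata from,
      // so it can place the already-created deployment's friendly ID in the commit SHA slot.
      gitMeta: { commitSha: "deployment_abc123" }, // placeholder friendly ID
      selfHosted: false,
    }),
  });

  if (!response.ok) {
    throw new Error(`Deployment initialization failed: ${response.status}`);
  }

  return response.json();
}
```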