use queue for selector compute #506
Walkthrough
This update introduces a major refactor to the computation of resource selectors and release targets within the system. The previous approach, which relied on builder classes and mutex-based concurrency control for selector computations, has been replaced with a set of new event-driven workers. These workers handle the computation and updating of resource associations for environments, deployments, and policy targets using transactional row-level locking and job re-queuing on lock conflicts. The worker registry and event types have been updated accordingly. The API is extended with new endpoints and OpenAPI schema changes, and comprehensive end-to-end tests are added or updated to validate the new flows.
Sequence Diagram(s)
sequenceDiagram
participant API
participant Worker
participant DB
participant Queue
API->>Queue: Enqueue ComputeDeploymentResourceSelector job (deploymentId)
Queue->>Worker: Trigger job (deploymentId)
Worker->>DB: Fetch deployment and system
alt Deployment not found
Worker->>Queue: Fail job
else
Worker->>DB: Begin transaction
Worker->>DB: Lock deployment row (FOR UPDATE NOWAIT)
Worker->>DB: Delete existing computed deployment resources
alt Has resource selector
Worker->>DB: Query matching resources
Worker->>DB: Insert computed deployment resources
end
Worker->>DB: Commit transaction
Worker->>Queue: Enqueue ComputeSystemsReleaseTargets job (systemId)
end
Note over Worker: On lock conflict, re-enqueue job with delay
sequenceDiagram
participant API
participant Worker
participant DB
participant Queue
API->>Queue: Enqueue ComputeEnvironmentResourceSelector job (environmentId)
Queue->>Worker: Trigger job (environmentId)
Worker->>DB: Fetch environment and system
alt Environment not found
Worker->>Queue: Fail job
else
Worker->>DB: Begin transaction
Worker->>DB: Lock environment row (FOR UPDATE NOWAIT)
Worker->>DB: Delete existing computed environment resources
alt Has resource selector
Worker->>DB: Query matching resources
Worker->>DB: Insert computed environment resources
end
Worker->>DB: Commit transaction
Worker->>Queue: Enqueue ComputeSystemsReleaseTargets job (systemId)
end
Note over Worker: On lock conflict, re-enqueue job with delay
sequenceDiagram
participant API
participant Worker
participant DB
participant Queue
API->>Queue: Enqueue ComputePolicyTargetReleaseTargetSelector job (policyTargetId)
Queue->>Worker: Trigger job (policyTargetId)
Worker->>DB: Fetch policy target and policy
alt Not found
Worker->>Queue: Fail job
else
Worker->>DB: Begin transaction
Worker->>DB: Lock release targets (FOR UPDATE NOWAIT)
Worker->>DB: Delete existing computed policy target release targets
Worker->>DB: Query matching release targets
Worker->>DB: Insert computed policy target release targets
Worker->>DB: Commit transaction
Worker->>Queue: Enqueue EvaluateReleaseTarget job(s)
end
Note over Worker: On lock conflict, re-enqueue job with delay
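Taken together, the three diagrams describe one recurring worker shape. The sketch below illustrates it for the deployment case; it is a simplified illustration based on the createWorker/getQueue helpers this PR uses, and the raw FOR UPDATE NOWAIT statement, the 55P03 lock-conflict error-code check, and the import paths are assumptions rather than the PR's literal code.

import ms from "ms";
import { eq, sql } from "drizzle-orm";

import { db } from "@ctrlplane/db/client";
import * as schema from "@ctrlplane/db/schema";
import { Channel, createWorker, getQueue } from "@ctrlplane/events";

// Sketch: lock the deployment row, recompute its resources, then fan out.
export const computeDeploymentResourceSelectorSketch = createWorker(
  Channel.ComputeDeploymentResourceSelector,
  async (job) => {
    const { id } = job.data;
    const deployment = await db.query.deployment.findFirst({
      where: eq(schema.deployment.id, id),
    });
    if (deployment == null) throw new Error(`Deployment not found: ${id}`);

    try {
      await db.transaction(async (tx) => {
        // Row-level lock; NOWAIT makes a concurrent holder fail fast instead of blocking.
        await tx.execute(
          sql`SELECT id FROM deployment WHERE id = ${id} FOR UPDATE NOWAIT`,
        );
        await tx
          .delete(schema.computedDeploymentResource)
          .where(eq(schema.computedDeploymentResource.deploymentId, id));
        // ...query resources matching deployment.resourceSelector and re-insert them here...
      });
    } catch (error: unknown) {
      // 55P03 = lock_not_available; re-enqueue with a delay instead of failing the job.
      if ((error as { code?: string }).code === "55P03") {
        await getQueue(Channel.ComputeDeploymentResourceSelector).add(
          id,
          job.data,
          { delay: ms("500ms"), jobId: id },
        );
        return;
      }
      throw error;
    }

    // Follow-up job so release targets for the owning system get recomputed.
    await getQueue(Channel.ComputeSystemsReleaseTargets).add(
      deployment.systemId,
      { id: deployment.systemId },
    );
  },
);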
Actionable comments posted: 12
♻️ Duplicate comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)
83-90: Retry logic never executes
Because acquired is always true (see first comment), this branch is never taken.
After adding { tryAcquire: true }, the retry logic will start working as intended.
No action needed after that change – just mentioning for clarity.
apps/event-worker/src/workers/compute-environment-resource-selector.ts (1)
77-84: Retry branch currently unreachable
Same root cause as above: without { tryAcquire: true }, acquired is always true, making this branch unreachable.
After the mutex fix, this will work; no additional change needed.
🧹 Nitpick comments (8)
apps/event-worker/src/utils/with-mutex.ts (1)
1-28: Good implementation of a distributed mutex pattern.
The withMutex utility successfully implements a Redis-based distributed lock mechanism that will help prevent race conditions when multiple workers attempt to process the same entity simultaneously. The implementation properly handles both blocking and non-blocking lock acquisition, executes the provided function when the lock is acquired, and ensures the lock is released in all scenarios.
Consider adding a timeout parameter to prevent indefinite waiting in blocking mode:
export const withMutex = async <T>(
  key: string,
  fn: (mutex: RedisMutex) => Promise<T> | T,
- opts: { tryAcquire: boolean } = { tryAcquire: false },
+ opts: { tryAcquire: boolean; timeoutMs?: number } = { tryAcquire: false },
): Promise<[boolean, T | null]> => {
  const mutex = new RedisMutex(redis, key);
  try {
    if (opts.tryAcquire) {
      const acquired = await mutex.tryAcquire();
      if (!acquired) return [false, null];
    } else {
-     await mutex.acquire();
+     await mutex.acquire(opts.timeoutMs);
    }
    const result = await fn(mutex);
    return [true, result];
  } catch (error) {
    console.error("Error executing mutex function", { error, key });
    throw error;
  } finally {
    await mutex.release();
  }
};
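As a usage illustration (a hypothetical call site, not code from this PR), the non-blocking form that the retry branches in the compute workers expect would look roughly like this:

import ms from "ms";

import { Channel, getQueue } from "@ctrlplane/events"; // assumed package entry point
import { withMutex } from "../utils/with-mutex";

// Hypothetical call site: try the lock, and re-enqueue instead of blocking when it is taken.
export const recomputeWithLock = async (environment: { id: string }) => {
  const key = `${Channel.ComputeEnvironmentResourceSelector}:${environment.id}`;
  const [acquired] = await withMutex(
    key,
    async () => {
      // ...delete and re-insert computed environment resources here...
    },
    { tryAcquire: true },
  );
  if (!acquired)
    await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
      environment.id,
      environment,
      { delay: ms("1s"), jobId: environment.id },
    );
};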
apps/event-worker/src/workers/new-environment.ts (1)
3-14: Consider removing commented code instead of leaving it in place
The recomputeReleaseTargets function has been commented out as it's no longer needed with the new queue-based architecture. Consider removing this commented code entirely to improve maintainability, as it can be retrieved from version control if needed in the future.
-// const recomputeReleaseTargets = async (environment: schema.Environment) => {
-//   const computeBuilder = selector().compute();
-//   await computeBuilder.environments([environment]).resourceSelectors();
-//   const { systemId } = environment;
-//   const system = await db
-//     .select()
-//     .from(schema.system)
-//     .where(eq(schema.system.id, systemId))
-//     .then(takeFirst);
-//   const { workspaceId } = system;
-//   return computeBuilder.allResources(workspaceId).releaseTargets();
-// };
apps/event-worker/src/workers/updated-resources/index.ts (1)
1-1: Remove unused lodash import
_ is no longer referenced after the inline logic was deleted, so the import can be dropped to keep the file clean.
-import _ from "lodash";
apps/event-worker/src/workers/update-environment.ts (1)
65-70: Add workspace filter to avoid cross‑workspace exits
The current findMany selects resources purely by selector logic; if multiple workspaces share identical resource structures, you may publish exit hooks for resources that were never part of the system.
- const exitedResources = await db.query.resource.findMany({
-   where: and(
+ const exitedResources = await db.query.resource.findMany({
+   where: and(
+     eq(schema.resource.workspaceId, job.data.workspaceId),
      selector().query().resources().where(oldSelector).sql(),
      not(selector().query().resources().where(resourceSelector).sql()!),
    ),
  });
apps/event-worker/src/workers/new-resource.ts (1)
29-44: Bulk‑enqueue jobs instead of one‑by‑one loops
Iterating with await …add() issues a separate round‑trip to Redis per job and serialises the loop.
BullMQ supports addBulk, which would cut latency dramatically when a workspace contains many environments/deployments.
- for (const environment of environments) {
-   await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
-     environment.id,
-     environment,
-     { jobId: environment.id },
-   );
- }
+ await getQueue(Channel.ComputeEnvironmentResourceSelector).addBulk(
+   environments.map((env) => ({
+     name: env.id,
+     data: env,
+     opts: { jobId: env.id },
+   })),
+ );
Apply the same pattern to the deployment loop below.
apps/event-worker/src/workers/compute-systems-release-targets.ts (1)
118-123: Re‑queue with exponential back‑off to avoid tight retry loops
A fixed 1 s delay can lead to hot‑looping when contention is high.
Consider exponentially increasing the delay or relying on BullMQ's built‑in backoff option:
- { delay: ms("1s"), jobId: job.id },
+ {
+   jobId: job.id,
+   backoff: { type: "exponential", delay: ms("1s") },
+ },
102-106: Duplicated jobId may overwrite jobs
jobId: rt.id is fine for uniqueness per release target, but if the same release target is re‑queued rapidly (e.g., selector churn) the add will throw JobExistsError.
Consider incorporating a timestamp or a deterministic suffix, or rely on BullMQ's default autogenerated IDs.
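One illustrative way to build such a suffix (the revision counter here is hypothetical, not something this PR defines):

// Illustrative only: a deterministic-but-versioned job id, so rapid
// re-enqueues of the same release target do not collide on a bare rt.id.
export const releaseTargetJobId = (
  releaseTargetId: string,
  selectorRevision: number, // hypothetical revision counter
): string => `${releaseTargetId}:${selectorRevision}`;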
apps/event-worker/src/workers/compute-environment-resource-selector.ts (1)
86-90: Validate payload sent to ComputeSystemsReleaseTargets
The job is enqueued with data = environment.system.
Double‑check that the consumer worker indeed expects the entire system object; otherwise pass only the system ID to avoid unnecessary payload bloat.
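If the consumer only needs the identifier, the enqueue could be slimmed down along these lines (sketch only; it assumes the ComputeSystemsReleaseTargets payload is just { id }, as described for the other new channels):

import { Channel, getQueue } from "@ctrlplane/events"; // assumed package entry point

// Sketch: pass only the system id rather than the whole system object.
export const enqueueSystemRecompute = (systemId: string) =>
  getQueue(Channel.ComputeSystemsReleaseTargets).add(
    systemId,
    { id: systemId },
    { jobId: systemId },
  );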
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- apps/event-worker/src/utils/with-mutex.ts (1 hunks)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
- apps/event-worker/src/workers/index.ts (2 hunks)
- apps/event-worker/src/workers/new-deployment.ts (1 hunks)
- apps/event-worker/src/workers/new-environment.ts (1 hunks)
- apps/event-worker/src/workers/new-policy.ts (1 hunks)
- apps/event-worker/src/workers/new-resource.ts (2 hunks)
- apps/event-worker/src/workers/update-deployment.ts (2 hunks)
- apps/event-worker/src/workers/update-environment.ts (2 hunks)
- apps/event-worker/src/workers/updated-resources/index.ts (2 hunks)
- apps/webservice/src/app/api/v1/environments/route.ts (1 hunks)
- e2e/tests/api/environments.spec.ts (2 hunks)
- e2e/tests/api/resource-selectors.spec.ts (1 hunks)
- packages/events/src/types.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- apps/webservice/src/app/api/v1/environments/route.ts
- e2e/tests/api/resource-selectors.spec.ts
- apps/event-worker/src/workers/new-policy.ts
- apps/event-worker/src/workers/new-deployment.ts
- apps/event-worker/src/workers/update-environment.ts
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
- e2e/tests/api/environments.spec.ts
- apps/event-worker/src/utils/with-mutex.ts
- apps/event-worker/src/workers/compute-systems-release-targets.ts
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
- apps/event-worker/src/workers/update-deployment.ts
- apps/event-worker/src/workers/new-environment.ts
- apps/event-worker/src/workers/new-resource.ts
- apps/event-worker/src/workers/index.ts
- apps/event-worker/src/workers/updated-resources/index.ts
- packages/events/src/types.ts
- apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧬 Code Graph Analysis (3)
e2e/tests/api/environments.spec.ts (1)
packages/db/src/schema/environment.ts (1): environment (59-84)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
packages/events/src/index.ts (2): createWorker (10-25), getQueue (28-34)
packages/db/src/schema/policy.ts (1): policyTarget (64-78)
packages/db/src/client.ts (1): db (15-15)
apps/event-worker/src/utils/with-mutex.ts (1): withMutex (5-28)
packages/db/src/selectors/index.ts (1): selector (18-18)
apps/event-worker/src/workers/new-resource.ts (4)
packages/db/src/client.ts (1): db (15-15)
packages/db/src/schema/environment.ts (1): environment (59-84)
packages/events/src/index.ts (1): getQueue (28-34)
packages/db/src/schema/deployment.ts (1): deployment (69-93)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
🔇 Additional comments (19)
e2e/tests/api/resource-selectors.spec.ts (1)
18-18: Consistent spacing improves readability.
This added blank line helps maintain consistent code style by separating setup steps in the beforeAll hook.
packages/events/src/types.ts (2)
32-35: Well-structured channel additions for the queue-based architecture.
The new channels follow a consistent naming pattern and align with the overall objective of introducing a queue-based architecture for computing resource selectors and release targets.
67-70: Appropriate payload structure for the new channel types.
Each new channel type uses a consistent and simple payload structure with just an id property, which is appropriate for these computation tasks where the entity ID is sufficient to perform the necessary operations.
e2e/tests/api/environments.spec.ts (2)
12-13: Improved code organization.
Moving the yamlPath constant to module scope is a good practice as it avoids redefining it on each test run and makes it available to the entire module.
69-73: Enhanced test assertions for better validation.The explicit assertions for response status and data existence improve the test robustness by verifying the API's behavior more thoroughly. The non-null assertion used after verifying that the data is defined ensures type safety while maintaining clear intent.
apps/event-worker/src/workers/new-policy.ts (3)
1-4: Updated imports to support the queue-based architecture
The imports have been updated to include database-related functionality while removing unnecessary imports that were previously used for direct selector computations. This aligns with the architectural shift to queue-based processing.
7-9: LGTM: Replaced inline selector computation with explicit database query
The code now fetches policy targets directly from the database instead of using selector chains, which aligns with the new architecture approach.
11-16:✅ Verification successful
Queue-based delegation implemented correctly
The worker now delegates the computation of release target selectors to a dedicated worker by enqueuing jobs on the
ComputePolicyTargetReleaseTargetSelectorchannel. This follows the PR objective of introducing a modular, queue-based architecture for selector computations.Consider adding error handling for potential database or queue failures.
🏁 Script executed:
#!/bin/bash # Check if the ComputePolicyTargetReleaseTargetSelector channel is properly registered grep -r "ComputePolicyTargetReleaseTargetSelector" --include="*.ts" .Length of output: 1102
Queue-based delegation to ComputePolicyTargetReleaseTargetSelector channel is correct
- Verified that the
ComputePolicyTargetReleaseTargetSelectorchannel is declared inpackages/events/src/types.tsand registered inapps/event-worker/src/workers/index.ts.- The loop in
apps/event-worker/src/workers/new-policy.tscorrectly enqueues eachpolicyTargetfor processing.- Consider wrapping the
getQueue(...).add(...)call in a try/catch or appending a.catch()to handle potential database or queue failures gracefully.apps/webservice/src/app/api/v1/environments/route.ts (5)
34-38: Simplified database query with promise chaining
The environment lookup has been restructured to use promise chaining with then(takeFirstOrNull) instead of await syntax, which aligns with the coding guideline that allows for various error handling patterns including Promise chains.
40-68: Well-structured transaction for environment creation
The transaction logic correctly handles:
- Processing version channels
- Upserting the environment
- Creating jobs for the new environment
- Returning the environment with metadata
This ensures atomic operations when creating an environment.
70-74: Proper event enqueueing for environment updates
When an existing environment is found, an update event is enqueued with both the new environment data and the old selector, which is necessary for proper diff computation in the worker.
76-77: Proper event enqueueing for new environments
When creating a new environment, a new environment event is correctly enqueued with the environment data.
79-79: Clean response handlingThe response correctly returns the environment data as JSON, maintaining a clear API contract.
apps/event-worker/src/workers/new-deployment.ts (2)
1-1: Simplified imports for queue-based architecture
The imports have been streamlined to only include event-related functionality, removing database and schema imports that were previously used for direct computations.
6-10:✅ Verification successful
Replaced inline processing with queue-based delegation
The worker now delegates all computation by enqueuing a job on the ComputeDeploymentResourceSelector channel instead of performing inline database operations. This is aligned with the PR objective of moving to a queue-based architecture.
The jobId is explicitly set to match the deployment ID, which ensures consistent job tracking.
🏁 Script executed:
#!/bin/bash
# Check if the ComputeDeploymentResourceSelector channel and worker are properly registered
grep -r "ComputeDeploymentResourceSelector" --include="*.ts" .
Length of output: 1256
✅ Resolved: Queue-based computation delegation is correctly implemented
All references to ComputeDeploymentResourceSelector are properly registered and the new-deployment worker now enqueues jobs instead of performing inline DB work:
- Location: apps/event-worker/src/workers/new-deployment.ts (lines 6–10)
- Delegates computation by calling await getQueue(Channel.ComputeDeploymentResourceSelector).add(job.data.id, job.data, { jobId: job.data.id })
- jobId explicitly set to the deployment ID for consistent tracking
- Verified registrations in apps/event-worker/src/workers/index.ts and apps/event-worker/src/workers/compute-deployment-resource-selector.ts
- Additional uses in new-resource.ts, update-deployment.ts, and updated-resources/index.ts
No further changes needed.
apps/event-worker/src/workers/new-environment.ts (2)
1-1: Simplified imports for queue-based architecture
Imports have been updated to only include event-related functionality, removing database and utility imports that were previously used for direct computations.
21-25:✅ Verification successful
Replaced inline processing with queue-based delegation
The worker now delegates computation by enqueuing a job on the ComputeEnvironmentResourceSelector channel instead of directly computing release targets. This aligns with the PR objective of introducing a queue-based architecture.
The jobId is explicitly set to match the environment ID, which ensures consistent job tracking.
🏁 Script executed:
#!/bin/bash
# Check if the ComputeEnvironmentResourceSelector channel and worker are properly registered
grep -r "ComputeEnvironmentResourceSelector" --include="*.ts" .
Length of output: 1131
Queue-based delegation for ComputeEnvironmentResourceSelector verified
The ComputeEnvironmentResourceSelector channel and its worker are correctly registered in:
- packages/events/src/types.ts
- apps/event-worker/src/workers/index.ts
- apps/event-worker/src/workers/compute-environment-resource-selector.ts
The enqueue call in apps/event-worker/src/workers/new-environment.ts is accurate and requires no further changes.
apps/event-worker/src/workers/index.ts (1)
47-55: Registration looks good
All newly introduced workers are properly registered and typed. No further action required.
apps/event-worker/src/workers/compute-systems-release-targets.ts (1)
138-144: Verify column alias emitted by the join
drizzle (and other SQL builders) will alias joined columns based on table names.
The destructuring for (const { policy_target: policyTarget } of policyTargets) assumes the alias is policy_target. If the driver instead produces { policyTarget: … }, the loop will silently produce undefined.
Double‑check the generated column naming or use an explicit select({ policyTarget: schema.policyTarget.* }) to make this deterministic.
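A deterministic alternative along the lines the comment suggests (sketch only; the policyId join column and the import paths are assumed, not taken from this PR):

import { eq } from "drizzle-orm";

import { db } from "@ctrlplane/db/client";
import * as schema from "@ctrlplane/db/schema";

// Sketch: select the joined table under an explicit key so the property name
// in the result does not depend on the driver's table-name aliasing.
export const getPolicyTargetsForWorkspace = (workspaceId: string) =>
  db
    .select({ policyTarget: schema.policyTarget })
    .from(schema.policyTarget)
    .innerJoin(
      schema.policy,
      eq(schema.policyTarget.policyId, schema.policy.id),
    )
    .where(eq(schema.policy.workspaceId, workspaceId));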
await getQueue(Channel.ComputeDeploymentResourceSelector).add(
  resource.id,
  resource,
  { jobId: resource.id },
);
Wrong queue/channel + payload combination will crash the downstream worker
computeDeploymentResourceSelectorWorkerEvent expects a payload containing a deployment (or at minimum { id: <deploymentId> }).
Here, we enqueue a resource object; the worker will look up a deployment whose id === resource.id, fail the query, and throw “Deployment not found”, causing the job to retry forever.
- await getQueue(Channel.ComputeDeploymentResourceSelector).add(
- resource.id,
- resource,
- { jobId: resource.id },
- );
+ // Option A – enqueue one job per deployment that might be affected
+ // const deployments = await db.query.deployment.findMany({ …filter by workspace/system… });
+ // for (const d of deployments) {
+ // await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+ // d.id,
+ // d,
+ // { jobId: d.id },
+ // );
+ // }
+
+ // Option B – if the intent was system‑level recomputation, target that specific queue instead:
+ // await getQueue(Channel.ComputeSystemsReleaseTargets).add(
+ // resource.workspaceId,
+ // { workspaceId: resource.workspaceId },
+ // { jobId: `ws:${resource.workspaceId}` },
+ // );Please adopt whichever option matches the design, but sending a Resource to ComputeDeploymentResourceSelector is definitely incorrect.
getQueue(Channel.ComputeDeploymentResourceSelector).add(data.id, data, {
  jobId: data.id,
});
if (deployment == null)
  throw new Error(`Deployment not found: ${data.id}`);

const { releaseTargets } = deployment;
const currentResources = releaseTargets.map((rt) => rt.resource);
🛠️ Refactor suggestion
Await the enqueue operation to surface queue‑level failures
queue.add() returns a promise. By not awaiting it, any network / Redis error will be swallowed and the worker will ACK the job even though the follow‑up job was never scheduled, silently breaking the pipeline.
- getQueue(Channel.ComputeDeploymentResourceSelector).add(data.id, data, {
- jobId: data.id,
- });
+ await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+ data.id,
+ data,
+ { jobId: data.id },
+ );
const { deployments, environments } = system;
const deploymentIds = deployments.map((d) => d.id);
const environmentIds = environments.map((e) => e.id);
const { workspaceId } = system;

const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>
  db.transaction(async (tx) => {
    const previousReleaseTargets = await tx
      .delete(schema.releaseTarget)
      .where(
        or(
          inArray(schema.releaseTarget.deploymentId, deploymentIds),
          inArray(schema.releaseTarget.environmentId, environmentIds),
        ),
      )
      .returning();
🛠️ Refactor suggestion
Handle empty deployment / environment lists to avoid IN () SQL
If a system is created without deployments or environments the inArray() calls will yield an invalid IN () clause or (depending on the dialect) match everything.
Add an early return or guard:
+ if (deploymentIds.length === 0 && environmentIds.length === 0) {
+ return;
+ }
.then((rt) =>
  rt.map((rt) => ({
    policyTargetId: policyTarget.id,
    releaseTargetId: rt.release_target.id,
  })),
);
Incorrect property name on joined result
rt.release_target.id assumes Drizzle returns an underscored alias.
Joins in Drizzle return an object shaped like { releaseTarget: { … } } by default, so this will be undefined, causing all releaseTargetId values to be undefined.
- releaseTargetId: rt.release_target.id,
+ releaseTargetId: rt.releaseTarget.id,
Please verify the actual shape returned by drizzle and adjust accordingly.
and(
  isNull(schema.resource.deletedAt),
  selector()
    .query()
    .resources()
    .where(policyTarget.resourceSelector)
    .sql(),
  selector()
    .query()
    .deployments()
    .where(policyTarget.deploymentSelector)
    .sql(),
  selector()
    .query()
    .environments()
    .where(policyTarget.environmentSelector)
    .sql(),
💡 Verification agent
🧩 Analysis chain
Potential null‑selector crash
policyTarget.resourceSelector, .deploymentSelector, and .environmentSelector can be null (they are nullable in the schema).
If any of them is null, the builder’s where() will likely throw because it expects a condition object.
Guard the call or fall back to .sql(true) / a tautology when the selector is missing.
🏁 Script executed:
#!/bin/bash
# Search for PolicyTarget type definitions
rg -n "interface PolicyTarget" -A5
rg -n "type PolicyTarget" -A5
# Inspect usages of resourceSelector, deploymentSelector, and environmentSelector
rg -n "\.resourceSelector" -A3
rg -n "\.deploymentSelector" -A3
rg -n "\.environmentSelector" -A3Length of output: 69330
🏁 Script executed:
#!/bin/bash
# Locate the target file
file_path=$(fd --full-path compute-policy-target-release-target-selector.ts)
# Display lines 1-200 for context
sed -n '1,200p' "$file_path"
Length of output: 3548
Prevent null‑selector crash in computePolicyTargetReleaseTargetSelectorWorkerEvent
Guard each .where(policyTarget.*Selector) call so that a missing selector doesn’t cause the query builder to throw. If a selector is null, fall back to a no‑op filter (e.g. .sql(true)).
Locations to update:
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts lines ~54–60
Suggested diff:
- .where(
- and(
- isNull(schema.resource.deletedAt),
- selector()
- .query()
- .resources()
- .where(policyTarget.resourceSelector)
- .sql(),
- selector()
- .query()
- .deployments()
- .where(policyTarget.deploymentSelector)
- .sql(),
- selector()
- .query()
- .environments()
- .where(policyTarget.environmentSelector)
- .sql(),
- ),
- )
+ .where(
+ and(
+ isNull(schema.resource.deletedAt),
+ policyTarget.resourceSelector
+ ? selector()
+ .query()
+ .resources()
+ .where(policyTarget.resourceSelector)
+ .sql()
+ : selector().query().resources().sql(true),
+ policyTarget.deploymentSelector
+ ? selector()
+ .query()
+ .deployments()
+ .where(policyTarget.deploymentSelector)
+ .sql()
+ : selector().query().deployments().sql(true),
+ policyTarget.environmentSelector
+ ? selector()
+ .query()
+ .environments()
+ .where(policyTarget.environmentSelector)
+ .sql()
+ : selector().query().environments().sql(true),
+ ),
+ )This ensures that when any of the selectors is null, the query will include a tautology rather than throwing.
const [acquired] = await withMutex(key, () => {
  db.transaction(async (tx) => {
    await tx
      .delete(schema.computedEnvironmentResource)
      .where(
        eq(
          schema.computedEnvironmentResource.environmentId,
          environment.id,
        ),
      );

    if (environment.resourceSelector == null) return;

    const resources = await tx.query.resource.findMany({
      where: and(
        eq(schema.resource.workspaceId, workspaceId),
        selector()
          .query()
          .resources()
          .where(environment.resourceSelector)
          .sql(),
        isNull(schema.resource.deletedAt),
      ),
    });

    const computedEnvironmentResources = resources.map((r) => ({
      environmentId: environment.id,
      resourceId: r.id,
    }));

    await tx
      .insert(schema.computedEnvironmentResource)
      .values(computedEnvironmentResources);
  });
});
db.transaction never awaited – main work is skipped
The function passed to withMutex uses a block body without a return, so it returns undefined.
As a result, the transaction executes asynchronously and withMutex resolves immediately, releasing the mutex before the DB work completes (and acquired is always true, see below).
- const [acquired] = await withMutex(key, () => {
- db.transaction(async (tx) => {
+ const [acquired] = await withMutex(key, () => {
+ return db.transaction(async (tx) => {
…
});
});Also pass { tryAcquire: true } so the retry branch becomes functional:
- const [acquired] = await withMutex(key, () => {
+ const [acquired] = await withMutex(
+ key,
+ () => {
+ return db.transaction(async (tx) => { … });
+ },
+ { tryAcquire: true },
+ );
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/event-worker/src/workers/update-resource-variable.ts (1)
1-1: Remove the unused lodash import
_ is never referenced in this file after the refactor, so the import increases bundle size and trips most linters.
-import _ from "lodash";
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (13)
- apps/event-worker/src/workers/new-environment.ts (1 hunks)
- apps/event-worker/src/workers/update-resource-variable.ts (1 hunks)
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1 hunks)
- apps/webservice/src/app/api/v1/resources/route.ts (2 hunks)
- packages/db/src/selectors/compute/compute.ts (0 hunks)
- packages/db/src/selectors/compute/deployment-builder.ts (0 hunks)
- packages/db/src/selectors/compute/environment-builder.ts (0 hunks)
- packages/db/src/selectors/compute/mutex.ts (0 hunks)
- packages/db/src/selectors/compute/policy-builder.ts (0 hunks)
- packages/db/src/selectors/compute/resource-builder.ts (0 hunks)
- packages/db/src/selectors/index.ts (0 hunks)
- packages/rule-engine/src/db/create-policy.ts (1 hunks)
- packages/rule-engine/src/db/update-policy.ts (1 hunks)
💤 Files with no reviewable changes (7)
- packages/db/src/selectors/index.ts
- packages/db/src/selectors/compute/mutex.ts
- packages/db/src/selectors/compute/compute.ts
- packages/db/src/selectors/compute/resource-builder.ts
- packages/db/src/selectors/compute/deployment-builder.ts
- packages/db/src/selectors/compute/environment-builder.ts
- packages/db/src/selectors/compute/policy-builder.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/event-worker/src/workers/new-environment.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts
- packages/rule-engine/src/db/create-policy.ts
- apps/event-worker/src/workers/update-resource-variable.ts
- packages/rule-engine/src/db/update-policy.ts
- apps/webservice/src/app/api/v1/resources/route.ts
🧬 Code Graph Analysis (1)
apps/webservice/src/app/api/v1/resources/route.ts (1)
packages/events/src/index.ts (1)
getQueue(28-34)
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
🔇 Additional comments (10)
apps/webservice/src/app/api/v1/resources/route.ts (2)
5-5: Simplified import statement aligns with architectural changes.
The removal of the selector import aligns with the architectural shift to queue-based selector computation rather than inline processing.
75-78: Well-implemented queue-based approach for selector computation.
The code has been improved by replacing inline selector computation with a direct queue job. This change:
- Decreases API response time by offloading computation to background workers
- Improves separation of concerns in the codebase
- Creates a more consistent pattern for handling resource processing
- Makes the code more maintainable by reducing complexity
The use of jobId: insertedResource.id ensures idempotency, preventing duplicate processing if the job is requeued.
packages/rule-engine/src/db/update-policy.ts (2)
4-4: Refactored imports align with queue-based architecture
The removal of selector-related imports matches the architectural shift to a queue-based system for computing resource selectors and release targets.
This change is part of the broader refactoring to delegate selector computations to dedicated worker events rather than performing them inline. The previous call to selector().compute().policies([policy.id]).releaseTargetSelectors() has been removed from this file, and the computation will now be handled by workers triggered through queue events.
139-192: LGTM: Function maintains core functionality while delegating selector computations
The updatePolicyInTx function now focuses solely on data persistence operations, with the release target selector computation delegated to worker events. This approach improves separation of concerns by:
- Keeping database operations focused on CRUD functionality
- Moving potentially expensive selector computations to asynchronous workers
- Allowing better scaling and failure isolation for computation-heavy operations
The transaction pattern with Promise.all for concurrent updates is maintained, which is a good practice.
packages/rule-engine/src/db/create-policy.ts (2)
4-4: Refactored imports align with queue-based architecture
The removal of selector-related imports matches the architectural shift to a queue-based system for computing resource selectors and release targets.
This change is part of the broader refactoring to delegate selector computations to dedicated worker events rather than performing them inline. The previous computations related to release target selectors have been removed from this file, aligning with the strategy to handle these operations in dedicated workers.
88-192: LGTM: Function maintains core functionality while delegating selector computations
The createPolicyInTx function correctly focuses on data persistence while relying on the queue event at line 181 to trigger worker-based selector computations. This architecture improvement:
- Maintains the core policy creation functionality
- Delegates complex selector computations to specialized workers
- Uses distributed mutexes (implemented elsewhere) to ensure exclusive processing
- Provides better isolation of concerns, making the code more maintainable
The retained queue event on line 181 (getQueue(Channel.NewPolicy).add(policy.id, policy)) serves as the trigger for the new worker-based computation of selectors.
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (4)
5-5: Import change aligns with architecture updates
The removal of the selector import from @ctrlplane/db is consistent with the architectural shift to delegate selector computations to dedicated background workers instead of performing them inline within API handlers.
106-116: Consistent response format maintained
The endpoint still constructs a properly formatted response after the resource update, maintaining the API contract even after removing the selector computation. The metadata formatting is preserved correctly.
118-125: Error handling remains unchanged
The error handling remains comprehensive, catching all errors and converting them to appropriate API responses with logging. This ensures the API behavior remains consistent and predictable even with architectural changes.
92-126:❓ Verification inconclusive
Confirm queue job is being properly enqueued elsewhere
The PATCH handler no longer computes resource selectors or adds the resource to the UpdatedResource queue after updates. This implementation correctly focuses the API endpoint on its core responsibility of updating the resource. However, we should verify that the resource selector computation and necessary queue jobs are now being handled by the new dedicated worker system.
🏁 Script executed:
#!/bin/bash
# Search for code that handles resource updates in the worker system
echo "Checking for UpdatedResource worker implementation..."
rg -A 15 "Channel\.UpdatedResource.*add" --glob "*.ts"
echo "Checking for resource selector computation workers..."
rg -A 10 "ComputeEnvironmentResourceSelector|ComputeDeploymentResourceSelector" --glob "*.ts"
Length of output: 16475
🏁 Script executed:
#!/bin/bash echo "Locate all references to upsertResources across the repo:" rg -n "upsertResources" --glob "*.ts" --glob "*.js" echo -e "\nShow import and context in route.ts:" rg -n "upsertResources" -C3 apps/webservice/src/app/api/v1/resources/\[resourceId\]/route.ts echo -e "\nDisplay the implementation of upsertResources (first 200 lines):" # Find the file defining upsertResources file=$(rg -l "export function upsertResources" -g "*.ts") if [ -n "$file" ]; then echo "Found in $file" sed -n '1,200p' "$file" else echo "Definition not found by name; searching for function signature..." rg -n "function upsertResources" -C5 -g "*.ts" fiLength of output: 2461
🏁 Script executed:
#!/bin/bash echo "Searching for Channel.UpdatedResource in event-worker worker mapping..." rg -n "UpdatedResource" apps/event-worker/src/workers/index.ts || echo "No Channel.UpdatedResource entry in index.ts" echo -e "\nDisplaying the first 50 lines of the updated-resources worker..." if [ -f "apps/event-worker/src/workers/updated-resources/index.ts" ]; then sed -n '1,50p' apps/event-worker/src/workers/updated-resources/index.ts else echo "Worker file not found: apps/event-worker/src/workers/updated-resources/index.ts" fiLength of output: 2098
🏁 Script executed:
#!/bin/bash echo "Printing first 200 lines of packages/db/src/upsert-resources.ts to inspect upsertResources implementation..." sed -n '1,200p' packages/db/src/upsert-resources.ts echo -e "\nSearching within upsert-resources.ts for queue interactions (getQueue or Channel.UpdatedResource)..." rg -n "getQueue|UpdatedResource" -C3 packages/db/src/upsert-resources.tsLength of output: 5218
Verify UpdatedResource jobs are enqueued for API‐driven updates
The PATCH handler now focuses solely on persisting changes via upsertResources and no longer computes selectors or enqueues Channel.UpdatedResource. We’ve confirmed:
- upsertResources (and the API route) do not call getQueue(Channel.UpdatedResource)
- An updatedResourceWorker is registered in apps/event-worker to process Channel.UpdatedResource
- Other subsystems (provider‐scan, agent‐socket) do enqueue UpdatedResource jobs
Please ensure that updates coming through this API still publish to the UpdatedResource queue—for example, by adding a queue call in this handler, wiring it into your job‐dispatch layer, or via a DB trigger—so selector recomputation continues as expected.
async () => {
  // todo:?
},
💡 Verification agent
🧩 Analysis chain
No‑op worker will silently swallow UpdateResourceVariable events
The worker is still subscribed to Channel.UpdateResourceVariable, but the handler body is empty.
Current behaviour:
- Job is dequeued and immediately acknowledged (the promise resolves).
- Nothing is published to the new compute queues.
- Any downstream logic that relied on the selector / release‑target recomputation is now skipped.
Unless this channel is completely deprecated, leaving the worker in place will create hard‑to‑trace data loss.
Two safer options:
- Delete the worker and stop registering it, so producers receive “no consumer” alerts.
- Delegate the event to the new compute workers instead of ignoring it.
Example delegation stub (pseudo‑code ‑ adjust API names accordingly):
- async () => {
- // todo:?
+ async (payload, ctx) => {
+ // TEMP: delegate to new compute flow until this channel is removed
+ await ctx.producer.produce(
+ Channel.ComputeEnvironmentResourceSelector,
+ payload
+ );
},
If the channel is obsolete, remove the worker entirely to avoid confusion.
Run the following to confirm no other worker handles UpdateResourceVariable anymore:
🏁 Script executed:
#!/bin/bash
# Search for remaining consumers of the channel
rg --context 2 'Channel\.UpdateResourceVariable'
Length of output: 1758
Empty handler for UpdateResourceVariable will silently swallow events
The updateResourceVariableWorker in apps/event-worker/src/workers/update-resource-variable.ts is still subscribed to Channel.UpdateResourceVariable but its handler is a no‑op. A quick search confirms no other worker consumes this channel, so every event will be dequeued and acknowledged without any downstream processing—likely causing invisible data loss.
Two remediation paths:
-
Remove the worker
IfUpdateResourceVariableis deprecated, unregister it inapps/event-worker/src/workers/index.ts:- [Channel.UpdateResourceVariable]: updateResourceVariableWorker,and delete
update-resource-variable.ts. -
Delegate to the new compute flow
Forward incoming payloads to your compute queue until the old channel is removed:export const updateResourceVariableWorker = createWorker( Channel.UpdateResourceVariable, - async () => { - // todo:? + async (payload, ctx) => { + // TEMP: delegate to new compute flow until this channel is removed + await ctx.producer.produce( + Channel.ComputeEnvironmentResourceSelector, + payload + ); },
Please apply one of these fixes to avoid hard‑to‑trace data loss.
Actionable comments posted: 2
♻️ Duplicate comments (5)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (3)
21-22: Add tryAcquire: true option to mutex acquisition
The current mutex call will always return acquired = true because it doesn't use the tryAcquire option, making the condition check at line 84 redundant.
Apply this fix to correctly implement non-blocking mutex acquisition:
- const [acquired, rts] = await withMutex(key, () =>
+ const [acquired, rts] = await withMutex(
+   key,
+   () => db.transaction(async (tx) => { ... }),
+   { tryAcquire: true },
+ );
49-65: Prevent null-selector crash in where conditions
The current implementation doesn't handle cases where selectors are null, which can cause the query builder to throw errors since policyTarget.resourceSelector, .deploymentSelector, and .environmentSelector are nullable in the schema.
Add null checks to prevent potential crashes:
.where( and( isNull(schema.resource.deletedAt), selector() .query() .resources() - .where(policyTarget.resourceSelector) + .where(policyTarget.resourceSelector ?? {}) .sql(), selector() .query() .deployments() - .where(policyTarget.deploymentSelector) + .where(policyTarget.deploymentSelector ?? {}) .sql(), selector() .query() .environments() - .where(policyTarget.environmentSelector) + .where(policyTarget.environmentSelector ?? {}) .sql(), ), )Or use a more robust approach with conditional checks:
.where( and( isNull(schema.resource.deletedAt), - selector() - .query() - .resources() - .where(policyTarget.resourceSelector) - .sql(), + policyTarget.resourceSelector + ? selector() + .query() + .resources() + .where(policyTarget.resourceSelector) + .sql() + : selector().query().resources().sql(true), - selector() - .query() - .deployments() - .where(policyTarget.deploymentSelector) - .sql(), + policyTarget.deploymentSelector + ? selector() + .query() + .deployments() + .where(policyTarget.deploymentSelector) + .sql() + : selector().query().deployments().sql(true), - selector() - .query() - .environments() - .where(policyTarget.environmentSelector) - .sql(), + policyTarget.environmentSelector + ? selector() + .query() + .environments() + .where(policyTarget.environmentSelector) + .sql() + : selector().query().environments().sql(true), ), )
68-73: Fix property access on joined result
The current implementation uses rt.release_target.id which assumes Drizzle returns underscored aliases for joined tables. By default, Drizzle uses camelCase property names.
Apply this fix:
.then((rt) =>
  rt.map((rt) => ({
    policyTargetId: policyTarget.id,
-   releaseTargetId: rt.release_target.id,
+   releaseTargetId: rt.releaseTarget.id,
  })),
);
apps/event-worker/src/workers/compute-systems-release-targets.ts (2)
70-74: Handle empty deployment/environment lists to avoid invalid SQL
When a system has no deployments or environments, the inArray() calls on lines 82-83 could generate invalid SQL statements with empty IN () clauses.
Add an early return when both lists are empty:
const { deployments, environments } = system; const deploymentIds = deployments.map((d) => d.id); const environmentIds = environments.map((e) => e.id); const { workspaceId } = system; + if (deploymentIds.length === 0 && environmentIds.length === 0) { + return; + }
88-98: Ensure uniqueness before bulk-insert to prevent primary-key violations
The findMatchingEnvironmentDeploymentPairs function can generate duplicate rows when a resource matches multiple computed-deployment rows through the LEFT JOIN, which could lead to primary key violations during insertion.
Add deduplication before insertion:
const releaseTargets = await findMatchingEnvironmentDeploymentPairs( tx, workspaceId, ); + // Deduplicate release targets to prevent primary key violations + const uniqueReleaseTargets = releaseTargets.filter( + (rt, index, self) => + index === + self.findIndex( + (t) => + t.deploymentId === rt.deploymentId && + t.environmentId === rt.environmentId && + t.resourceId === rt.resourceId + ) + ); if (releaseTargets.length > 0) await tx .insert(schema.releaseTarget) - .values(releaseTargets) + .values(uniqueReleaseTargets) .onConflictDoNothing();
🧹 Nitpick comments (3)
apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1)
73-73: Authorization pattern updated.
The destructuring pattern has changed from ({ can, extra: { params } }) to ({ can, params }) directly. Note that this creates an inconsistency with the GET endpoint (line 16) which still uses the old pattern. Consider updating all endpoints to use the same pattern for consistency.
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)
12-19: Add try/catch block around critical operations
While the worker framework might handle exceptions at a high level, it's better to have explicit error handling around critical operations to provide more context-specific error messages and recovery strategies.
async (job) => { + try { const { id } = job.data; const policyTarget = await db.query.policyTarget.findFirst({ where: eq(schema.policyTarget.id, id), }); if (policyTarget == null) throw new Error("Policy target not found"); + } catch (error) { + console.error(`Error processing policy target ${job.data.id}:`, error); + throw error; + }apps/event-worker/src/workers/compute-systems-release-targets.ts (1)
11-58: Consider improving the resource-matching conditions for better readability
The findMatchingEnvironmentDeploymentPairs function uses complex join conditions and filtering logic. Consider extracting these conditions into more descriptive named variables for better readability and maintainability. For example:
const findMatchingEnvironmentDeploymentPairs = ( tx: Tx, workspaceId: string, ) => { const isResourceMatchingEnvironment = eq( schema.computedEnvironmentResource.resourceId, schema.resource.id, ); const isResourceMatchingDeployment = or( isNull(schema.deployment.resourceSelector), eq(schema.computedDeploymentResource.resourceId, schema.resource.id), ); + const isResourceInWorkspace = eq(schema.resource.workspaceId, workspaceId); + const isResourceNotDeleted = isNull(schema.resource.deletedAt); + const isDeploymentInSameSystem = eq(schema.deployment.systemId, schema.environment.systemId); return tx .select({ environmentId: schema.environment.id, deploymentId: schema.deployment.id, resourceId: schema.resource.id, }) .from(schema.resource) .innerJoin( schema.computedEnvironmentResource, eq(schema.computedEnvironmentResource.resourceId, schema.resource.id), ) .innerJoin( schema.environment, eq( schema.computedEnvironmentResource.environmentId, schema.environment.id, ), ) .innerJoin( schema.deployment, - eq(schema.deployment.systemId, schema.environment.systemId), + isDeploymentInSameSystem, ) .leftJoin( schema.computedDeploymentResource, eq(schema.computedDeploymentResource.deploymentId, schema.deployment.id), ) .where( and( isResourceMatchingEnvironment, isResourceMatchingDeployment, - eq(schema.resource.workspaceId, workspaceId), - isNull(schema.resource.deletedAt), + isResourceInWorkspace, + isResourceNotDeleted, ), ); };
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
- apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1 hunks)
- e2e/tests/api/environments.spec.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- e2e/tests/api/environments.spec.ts
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
- apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- apps/webservice/src/app/api/v1/systems/[systemId]/route.ts
- apps/event-worker/src/workers/compute-systems-release-targets.ts
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
🧬 Code Graph Analysis (2)
apps/event-worker/src/workers/compute-systems-release-targets.ts (5)
- packages/db/src/common.ts (1): Tx (22-22)
- packages/events/src/index.ts (2): createWorker (10-25), getQueue (28-34)
- packages/db/src/client.ts (1): db (15-15)
- apps/event-worker/src/utils/with-mutex.ts (1): withMutex (5-28)
- packages/db/src/schema/policy.ts (1): policyTarget (64-78)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
- packages/events/src/index.ts (2): createWorker (10-25), getQueue (28-34)
- packages/db/src/schema/policy.ts (1): policyTarget (64-78)
- packages/db/src/client.ts (1): db (15-15)
- apps/event-worker/src/utils/with-mutex.ts (1): withMutex (5-28)
- packages/db/src/selectors/index.ts (1): selector (13-13)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
🔇 Additional comments (1)
apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1)
76-76: Added null safety with fallback empty string.Good defensive programming practice adding the nullish coalescing operator (
??). This ensures a defined value is always passed to the permission check, preventing potential runtime errors ifsystemIdis null or undefined.
```ts
    )
    .where(eq(schema.policy.workspaceId, workspaceId));

  for (const { policy_target: policyTarget } of policyTargets) {
```
Fix property access on joined result
The current implementation uses policy_target with an underscore, which assumes Drizzle returns underscored aliases for joined tables. By default, Drizzle uses camelCase property names.
Apply this fix:
```diff
- for (const { policy_target: policyTarget } of policyTargets) {
+ for (const { policyTarget } of policyTargets) {
    getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
      policyTarget.id,
      policyTarget,
      { delay: ms("500ms"), jobId: policyTarget.id },
    );
  }
```
```ts
const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>
```
🛠️ Refactor suggestion
Add tryAcquire: true option to mutex acquisition
The current mutex call will always return acquired = true because it doesn't use the tryAcquire option, making the condition check at line 121 redundant.
Apply this fix to correctly implement non-blocking mutex acquisition:
```diff
  const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
- const [acquired, createdReleaseTargets] = await withMutex(key, () =>
+ const [acquired, createdReleaseTargets] = await withMutex(
+   key,
+   () =>
    db.transaction(async (tx) => {
      ...
    }),
+   { tryAcquire: true },
+ );
```
Actionable comments posted: 1
🧹 Nitpick comments (1)
e2e/tests/api/environments.spec.ts (1)
89-186: Consider implementing a polling mechanism instead of fixed timeouts.
The test is comprehensive in verifying the selector update workflow. However, it relies on fixed timeouts (`waitForTimeout(10_000)` and `waitForTimeout(30_000)`) which could lead to flaky tests if the computation takes longer than expected.

A polling approach would be more reliable:

```diff
- // Wait longer for selector compute to complete (30 seconds)
- await page.waitForTimeout(30_000);
+ // Poll for resources until they're updated or timeout
+ const startTime = Date.now();
+ const timeout = 30_000;
+ let resourcesUpdated = false;
+
+ while (!resourcesUpdated && Date.now() - startTime < timeout) {
+   const pollResponse = await api.GET(
+     "/v1/environments/{environmentId}/resources",
+     { params: { path: { environmentId: updatedEnvironmentId } } },
+   );
+
+   if (
+     pollResponse.response.status === 200 &&
+     pollResponse.data?.resources?.length === 1 &&
+     pollResponse.data?.resources[0]?.identifier ===
+       importedEntities.resources.find((r) => r.metadata?.env === "prod")
+         ?.identifier
+   ) {
+     resourcesUpdated = true;
+   } else {
+     await page.waitForTimeout(1000); // Wait 1 second between polls
+   }
+ }
+
+ expect(resourcesUpdated).toBe(true, "Resources were not updated within the timeout period");
```

Otherwise, the test structure and assertions are well-implemented.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- e2e/tests/api/environments.spec.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (2)
- packages/db/src/schema/environment.ts (1): environment (59-84)
- e2e/tests/fixtures.ts (1): test (11-26)
🔇 Additional comments (4)
e2e/tests/api/environments.spec.ts (4)
12-13: Good refactoring of the `yamlPath` constant.
Moving the constant to the module scope improves code reusability across the test suite.

69-73: Great improvement using explicit Playwright assertions.
Using proper `expect()` assertions instead of manual error checks improves test clarity and provides better error messages on failure.
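For illustration, a hedged before/after of the pattern being praised here (variable names are assumed, not copied from the test file):

```ts
// Before: a manual check that surfaces only a generic error message
if (environmentResponse.response.status !== 200)
  throw new Error("Failed to create environment");

// After: explicit assertions with descriptive failure output
expect(environmentResponse.response.status).toBe(200);
expect(environmentResponse.data?.id).toBeDefined();
```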
188-238: Well-structured test with good documentation of API behavior.
This test thoroughly validates the environment metadata update flow and clearly documents that POST creates a new environment version rather than updating in place. Good practice to verify both the update response and the subsequent GET response.

89-274: Great implementation of e2e tests for the queue-based selector computation.
These comprehensive tests effectively validate the new asynchronous queue-based architecture for computing environment selectors. The tests properly:
- Create environments with different selectors
- Wait for background computation to complete
- Verify the correct resources are matched
- Validate the environment update and deletion workflows
This demonstrates that the refactoring from inline computation to queue-based worker processing is functioning correctly.
| test("should delete an environment", async ({ api }) => { | ||
| // First create an environment | ||
| const environmentName = faker.string.alphanumeric(10); | ||
| const environmentResponse = await api.POST("/v1/environments", { | ||
| body: { | ||
| name: environmentName, | ||
| systemId: importedEntities.system.id, | ||
| }, | ||
| }); | ||
|
|
||
| expect(environmentResponse.response.status).toBe(200); | ||
| expect(environmentResponse.data?.id).toBeDefined(); | ||
|
|
||
| const environmentId = environmentResponse.data!.id; | ||
|
|
||
| // Delete the environment | ||
| const deleteResponse = await api.DELETE( | ||
| "/v1/environments/{environmentId}", | ||
| { | ||
| params: { path: { environmentId } }, | ||
| }, | ||
| ); | ||
|
|
||
| expect(deleteResponse.response.status).toBe(200); | ||
|
|
||
| // Try to get the deleted environment - should fail (either 404 or 500) | ||
| // Note: The API appears to return 500 instead of 404 when an environment is not found | ||
| const getResponse = await api.GET("/v1/environments/{environmentId}", { | ||
| params: { path: { environmentId } }, | ||
| }); | ||
|
|
||
| // Accept either 404 (not found) or 500 (internal server error) as valid responses | ||
| // since the API implementation may be returning 500 for deleted resources | ||
| expect([404, 500]).toContain(getResponse.response.status); | ||
| }); |
💡 Verification agent
🧩 Analysis chain
The test adapts well to current API behavior, but consider investigating the 500 error.
This test correctly validates the environment deletion flow. However, it's worth noting that the API returns 500 (server error) instead of the standard 404 (not found) response when requesting a deleted environment. While the test correctly handles this situation, consider:
The RESTful convention would be to return 404 for resources that don't exist, while 500 indicates a server-side issue. This might be worth addressing in a future improvement.
🏁 Script executed:
#!/bin/bash
# Check if there are other endpoints that also return 500 instead of 404
echo "Checking for HTTP 500 error patterns in the codebase..."
rg -B 2 -A 2 "throw new Error|return Response.json.*status.*500" --type=ts apps/Length of output: 16545
🏁 Script executed:
#!/bin/bash
echo "Locating environment API route handlers under apps/webservice/src/app/api/v1/environments..."
# Find all route.ts files for environments API
ROUTE_FILES=$(rg -l --glob "apps/webservice/src/app/api/v1/environments/**/route.ts" "")
if [ -z "$ROUTE_FILES" ]; then
echo "No route.ts files found under environments API."
exit 0
fi
echo "Found route files:"
echo "$ROUTE_FILES"
echo
# For each route.ts, print the GET handler and its error handling
for FILE in $ROUTE_FILES; do
echo "---- $FILE ----"
# Print lines around the GET export
rg -B 5 -A 20 "export async function GET" "$FILE" || echo "No GET handler in $FILE"
echo
done

Length of output: 2638
🏁 Script executed:
#!/bin/bash
echo "Displaying environment API route handler implementations..."
for FILE in apps/webservice/src/app/api/v1/environments/**/route.ts; do
echo "---- $FILE ----"
sed -n '1,200p' "$FILE"
echo
done

Length of output: 2486
Update test to expect only 404 for deleted environments
The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn’t found, so we can remove the fallback for a 500. In e2e/tests/api/environments.spec.ts around lines 270–274:
```diff
- // Accept either 404 (not found) or 500 (internal server error) as valid responses
- expect([404, 500]).toContain(getResponse.response.status);
+ // Expect 404 (not found) response for deleted environment
+ expect(getResponse.response.status).toBe(404);
```

- File: e2e/tests/api/environments.spec.ts
- Replace the array-based assertion with a single 404 check.
- Remove the comment about 500 since the API now correctly returns 404.
Actionable comments posted: 0
🧹 Nitpick comments (1)
e2e/tests/api/deployments.spec.ts (1)
29-43: Well-structured test case with appropriate assertions.
The test is well-implemented with:
- Random data generation using faker to prevent test flakiness
- Clear API request structure
- Multiple assertions that verify both the HTTP status code and response data properties
Consider expanding the test suite in the future to cover additional scenarios such as the following (a sketch of one such error-case test follows this list):
- Updating existing deployments
- Deleting deployments
- Error cases with invalid inputs
- Fetching deployment details
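As a sketch of the error-case direction (the status codes are assumptions; the handler's actual validation response would need to be confirmed):

```ts
test("should reject deployment creation with missing required fields", async ({
  api,
}) => {
  // Intentionally omit name, slug and systemId; the cast bypasses the typed
  // client so the invalid body actually reaches the API.
  const response = await api.POST("/v1/deployments", {
    body: {} as never,
  });

  // Assumed: the API answers with a client error rather than 500.
  expect([400, 422]).toContain(response.response.status);
});
```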
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- e2e/tests/api/deployments.spec.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- e2e/tests/api/deployments.spec.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
🔇 Additional comments (3)
e2e/tests/api/deployments.spec.ts (3)
1-11: Clean import structure.
The imports are well-organized with standard libraries first, followed by custom API helpers. The explicit importing of specific functions rather than entire modules is good practice for maintaining clear dependencies.

12-12: Well-defined test data path.
Using `path.join` to create a platform-independent file path is the right approach, ensuring the tests will work consistently across different operating systems.

14-27: Good test isolation with proper setup and teardown.
The test suite follows best practices by importing test entities before all tests and cleaning them up afterward. This ensures test isolation and prevents test data pollution between test runs.
Actionable comments posted: 1
♻️ Duplicate comments (1)
e2e/tests/api/environments.spec.ts (1)
271-273: ⚠️ Potential issue: Update test to expect only 404 for deleted environments
This test assertion contradicts previous review feedback. The GET handler for `/v1/environments/{environmentId}` explicitly returns a 404 when the environment isn't found, so we should remove the fallback for a 500.

```diff
- // Accept either 404 (not found) or 500 (internal server error) as valid responses
- // since the API implementation may be returning 500 for deleted resources
- expect([404, 500]).toContain(getResponse.response.status);
+ // Expect 404 (not found) response for deleted environment
+ expect(getResponse.response.status).toBe(404);
```
🧹 Nitpick comments (1)
e2e/tests/api/environments.spec.ts (1)
172-172: Consider parameterizing wait times
The wait time has been increased from 10 seconds in previous tests to 30 seconds here. Consider extracting these values to named constants to make test adjustment easier in the future and document why different operations need different wait times.

```diff
- // Wait longer for selector compute to complete (30 seconds)
- await page.waitForTimeout(30_000);
+ // Selector compute operations need longer than resource matching
+ const SELECTOR_COMPUTE_WAIT_MS = 30_000;
+ await page.waitForTimeout(SELECTOR_COMPUTE_WAIT_MS);
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1 hunks)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3 hunks)
- e2e/api/schema.ts (1 hunks)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- e2e/tests/api/environments.spec.ts (3 hunks)
- openapi.v1.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- e2e/tests/api/deployments.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
- e2e/api/schema.ts
- e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (2)
apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3)
- packages/db/src/common.ts (2): Tx (22-22), takeFirstOrNull (15-20)
- packages/db/src/schema/deployment.ts (1): deployment (69-93)
- packages/logger/src/index.ts (1): logger (48-48)
e2e/tests/api/environments.spec.ts (2)
- packages/db/src/schema/environment.ts (1): environment (59-84)
- e2e/tests/fixtures.ts (1): test (11-26)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Lint
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (12)
e2e/tests/api/environments.spec.ts (5)
12-13: Good refactoring!
Moving the `yamlPath` constant to module scope improves code organization and eliminates potential duplication.

69-72: Improved error assertions
The explicit Playwright expect assertions provide better readability and error reporting compared to manual error checking.

89-186: Well-structured test for environment selector updates
This test comprehensively covers the update of environment selectors and resource matching. The explicit wait times and clear assertions ensure proper testing of the asynchronous selector computation pipeline.

Note the documented behavior on line 167: "The API creates a new environment rather than updating the existing one". This is an important behavior to be aware of and is correctly handled in the test.

188-238: Good test coverage for environment metadata updates
This test thoroughly covers the editing of environment name and description, with clear setup, execution, and verification steps.

240-274: Good test coverage for environment deletion
This test properly covers the full lifecycle of environment deletion, which is an important addition to the test suite.
apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1)
106-129: Schema definition change improves clarity and alignment with queue-based architecture.
This change replaces the previous reference to an external UpdateDeployment component with an explicit inline schema. The inline definition clearly shows all possible properties that can be updated, which aligns well with the new queue-based architecture for handling resource selector computations.
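For reference, a rough sketch of what such an inline request-body schema could look like; the property list is inferred from the fields exercised elsewhere in this review (name, slug, description, retryCount, timeout, resourceSelector) and may not match `openapi.ts` exactly:

```ts
// Hypothetical inline schema for the PATCH /v1/deployments/{deploymentId} body.
const updateDeploymentRequestBody = {
  required: false,
  content: {
    "application/json": {
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          slug: { type: "string" },
          description: { type: "string" },
          retryCount: { type: "number" },
          timeout: { type: "number", nullable: true },
          resourceSelector: { type: "object", nullable: true },
        },
      },
    },
  },
} as const;
```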
openapi.v1.json (1)
617-652: Schema consistency maintained across API specifications.
The inline schema definition here mirrors the changes in the TypeScript implementation, ensuring consistency in the API contract. This explicit property enumeration makes it clear what fields can be updated through the PATCH endpoint, which is important for API consumers.
e2e/api/schema.ts (1)
1470-1486: Schema update provides better type safety for deployment updates.
The explicit inline schema definition for the `updateDeployment` operation provides clearer type safety and better documentation than the previous generic schema reference. This change aligns with the new queue-based architecture for computing resource selectors asynchronously.

The schema now properly includes the `resourceSelector` field (lines 1483-1485), which is necessary for the new async processing model while maintaining all existing deployment properties.

apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (4)
2-2: Appropriate type import for schema validation.
Adding the `z` type import from Zod enables proper type inference when using the schema validation middleware.

12-12: Good use of middleware for request validation.
Using the `parseBody` middleware centralizes validation logic and improves separation of concerns.

92-96: Improved request processing with middleware-based validation.
The code now uses the `parseBody` middleware with the schema definition, which:

- Ensures consistent validation
- Provides proper typing for the request body
- Simplifies the handler implementation

This is a cleaner pattern that separates validation concerns from business logic.

112-118: Simplified database update with validated body.
The database update operation is now cleaner, directly using the validated `body` object that matches the database schema. This approach:

- Reduces code duplication
- Ensures type safety
- Aligns with the new architecture where resource selector computation is handled asynchronously

The use of `returning()` with `takeFirstOrNull` properly handles the response.
The PATCH handler after this change:

```ts
try {
  const { deploymentId } = await params;

  const deployment = await db
    .select()
    .from(SCHEMA.deployment)
    .where(eq(SCHEMA.deployment.id, deploymentId))
    .then(takeFirstOrNull);

  if (deployment == null)
    return NextResponse.json(
      { error: "Deployment not found" },
      { status: httpStatus.NOT_FOUND },
    );

  const updatedDeployment = await db
    .update(SCHEMA.deployment)
    .set(body)
    .where(eq(SCHEMA.deployment.id, deploymentId))
    .returning()
    .then(takeFirstOrNull);

  return NextResponse.json(updatedDeployment);
} catch (error) {
  logger.error("Failed to update deployment", { error: error });
  return NextResponse.json(
    { error: "Failed to update deployment" },
    { status: httpStatus.INTERNAL_SERVER_ERROR },
  );
}
```
💡 Verification agent
🧩 Analysis chain
Verify that resource selector changes trigger computation events.
While the code for updating the deployment is correct, we should verify that changes to the resourceSelector field properly trigger the new async computation workers.
Run the following script to check if relevant event producers are properly set up to handle deployment updates:
🏁 Script executed:
#!/bin/bash
# Search for event producers that handle deployment updates and resource selector computation
# Look for event producers that might be triggered on deployment updates
echo "=== Checking deployment update event producers ==="
rg "Deployment.*Update" -A 10 -B 5 --glob "**/*.ts" | grep -E "(produce|enqueue|emit)"
# Check for the new compute deployment resource selector event handlers
echo "=== Checking resource selector computation handlers ==="
rg "ComputeDeploymentResourceSelector" -A 3 -B 3 --glob "**/*.ts"Length of output: 6583
Add event enqueuing for resource‑selector updates
The update handler never emits or enqueues the new compute job when resourceSelector changes. We need to detect that change and push a ComputeDeploymentResourceSelector event. For example, in apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts:
```diff
@@ try {
     const { deploymentId } = await params;

     const deployment = await db
       .select()
       .from(SCHEMA.deployment)
       .where(eq(SCHEMA.deployment.id, deploymentId))
       .then(takeFirstOrNull);

     if (deployment == null)
@@
     const updatedDeployment = await db
       .update(SCHEMA.deployment)
       .set(body)
       .where(eq(SCHEMA.deployment.id, deploymentId))
       .returning()
       .then(takeFirstOrNull);

+    // enqueue a compute job if resourceSelector changed
+    if (!isEqual(deployment.resourceSelector, body.resourceSelector)) {
+      await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+        deploymentId,
+        {
+          id: deploymentId,
+          oldSelector: deployment.resourceSelector,
+          resourceSelector: body.resourceSelector,
+        },
+        { jobId: deploymentId },
+      );
+    }

     return NextResponse.json(updatedDeployment);
```

- Import isEqual (from lodash), getQueue and Channel as needed.
- Ensure this logic runs before sending the response.
Actionable comments posted: 0
🧹 Nitpick comments (1)
e2e/tests/api/deployments.spec.ts (1)
1-147: Consider adding error case tests.
The test suite thoroughly tests the happy path scenarios but doesn't include tests for error cases such as invalid inputs, unauthorized access, or duplicate slugs. Consider adding these to verify proper error handling.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (4 hunks)
- e2e/tests/api/deployments.spec.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Lint
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (6)
e2e/tests/api/deployments.spec.ts (6)
1-13: Module imports and setup look good.
The necessary imports are correctly defined, and the YAML path is properly set up for importing test entities.

14-28: Well-structured test setup with proper cleanup.
Good practice to import test entities before all tests and clean them up afterward, ensuring test isolation and preventing test pollution.

29-43: Create deployment test is correctly implemented.
The test properly generates a random deployment name, submits it to the API, and validates both the response status code and expected data properties.

45-70: Get deployment test is thorough and complete.
The test creates a deployment, retrieves it by ID, and performs appropriate assertions. The error check on line 56 ensures proper handling when deployment creation fails.

72-108: Update deployment test covers all required fields.
The test successfully creates a deployment, updates multiple fields (name, slug, description, retryCount, timeout), and verifies each updated field with individual assertions.

110-146: Delete deployment test includes comprehensive verification.
The test confirms both successful deletion and verifies that the deployment can no longer be retrieved, providing complete validation of the delete functionality.
Actionable comments posted: 1
♻️ Duplicate comments (2)
apps/event-worker/src/workers/update-environment.ts (1)
72-72: ⚠️ Potential issue: Fix incorrect ID passed to dispatchExitHooks
The function `dispatchExitHooks` expects a system ID, but you're passing the environment ID (`job.data.id`). This will cause exit hooks to never be dispatched because the deployment query looks for `deployment.systemId === systemId`.

```diff
- await dispatchExitHooks(db, job.data.id, exitedResources);
+ await dispatchExitHooks(db, job.data.systemId, exitedResources);
```

e2e/tests/api/environments.spec.ts (1)
238-272: ⚠️ Potential issue: Update test to expect only 404 for deleted environments.
The GET handler for `/v1/environments/{environmentId}` explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

```diff
- // Accept either 404 (not found) or 500 (internal server error) as valid responses
- // since the API implementation may be returning 500 for deleted resources
- expect([404, 500]).toContain(getResponse.response.status);
+ // Expect 404 (not found) response for deleted environment
+ expect(getResponse.response.status).toBe(404);
```
🧹 Nitpick comments (4)
apps/event-worker/src/workers/update-environment.ts (1)
58-63: Remove debugging console.log statement
This console.log statement appears to be for debugging purposes and should be removed before merging to production.

```diff
- console.log("updateEnvironmentWorker");
```

apps/event-worker/src/workers/new-environment.ts (1)

7-7: Remove debugging console.log statement
This console.log statement appears to be for debugging purposes and should be removed before merging to production.

```diff
- console.log("newEnvironmentWorker");
```

8-11: Consider adding deduplication to environment queue job
Unlike the new-deployment.ts worker, this job doesn't have deduplication configuration. Consider adding it for consistency and to prevent duplicate processing in case of rapid successive events.

```diff
 await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
   environment.id,
   environment,
+  { deduplication: { id: environment.id, ttl: 500 } },
 );
```

e2e/tests/api/environments.spec.ts (1)

74-74: Consider a more deterministic approach instead of fixed delay.
The 10-second fixed delay may be too long in some cases and too short in others. Consider implementing a polling mechanism that checks for the completion of the background processing instead.

```diff
- await new Promise((resolve) => setTimeout(resolve, 10000));
+ // Poll for resource matching completion or use a more deterministic approach
+ let attempts = 0;
+ while (attempts < 10) {
+   const statusResponse = await api.GET("/v1/environments/{environmentId}",
+     { params: { path: { environmentId: environment.id } } }
+   );
+   if (statusResponse.data?.resourceSelectorProcessed) {
+     break;
+   }
+   await new Promise((resolve) => setTimeout(resolve, 1000));
+   attempts++;
+ }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (11)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
- apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
- apps/event-worker/src/workers/new-deployment.ts (1 hunks)
- apps/event-worker/src/workers/new-environment.ts (1 hunks)
- apps/event-worker/src/workers/new-resource.ts (2 hunks)
- apps/event-worker/src/workers/update-deployment.ts (2 hunks)
- apps/event-worker/src/workers/update-environment.ts (2 hunks)
- apps/event-worker/src/workers/updated-resources/index.ts (2 hunks)
- e2e/tests/api/environments.spec.ts (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
- apps/event-worker/src/workers/updated-resources/index.ts
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
- apps/event-worker/src/workers/new-resource.ts
- apps/event-worker/src/workers/update-deployment.ts
- apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
- apps/event-worker/src/workers/compute-systems-release-targets.ts
- apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
- apps/event-worker/src/workers/update-environment.ts
- apps/event-worker/src/workers/new-environment.ts
- apps/event-worker/src/workers/new-deployment.ts
- e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (2)
- e2e/tests/fixtures.ts (1): test (11-26)
- packages/db/src/schema/environment.ts (1): environment (59-84)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
apps/event-worker/src/workers/new-deployment.ts (1)
1-12: Properly implements queue-based deployment resource computation
The worker has been refactored to delegate the resource computation to a dedicated worker through the queue system, which aligns with the PR's objective of using queues for selector computation. This improves modularity and separation of concerns.
The deduplication configuration with a 500ms TTL is a good practice to avoid processing the same deployment multiple times in rapid succession.
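A minimal sketch of the hand-off pattern described here, assembled from the snippets quoted in this review (the import path is an assumption):

```ts
import { Channel, getQueue } from "@ctrlplane/events"; // assumed import path

// Hand selector computation to the dedicated worker; the deduplication
// id/ttl keeps rapid repeat events for the same deployment from enqueuing
// duplicate jobs within the 500ms window.
export const enqueueDeploymentSelectorCompute = (deployment: { id: string }) =>
  getQueue(Channel.ComputeDeploymentResourceSelector).add(
    deployment.id,
    deployment,
    { deduplication: { id: deployment.id, ttl: 500 } },
  );
```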
apps/event-worker/src/workers/update-environment.ts (1)
65-70: Good approach for finding exited resources using SQL selectors
The approach to find exited resources directly through SQL selectors is efficient, using a combination of `and` and `not` operators to identify resources that matched the old selector but don't match the new one.

e2e/tests/api/environments.spec.ts (5)
12-13: Clean refactoring of `yamlPath` to module scope.
Moving the `yamlPath` to module scope improves reusability across tests and avoids duplication. This is a good practice.

43-75: Improved test structure with explicit assertions.
The test has been refactored to use explicit assertions with `expect` instead of throwing errors manually, which improves readability and error reporting.

89-184: Good addition of test for environment selector updates
This test properly validates the queue-based selector compute functionality by:
- Creating an environment with QA resource selector
- Verifying initial resource matching
- Updating to prod resource selector
- Verifying updated resource matching
The test structure follows a clear setup-action-verify pattern which enhances readability.
186-236: Good test coverage for environment metadata updates
This test properly verifies that environment name and description can be updated, with appropriate assertions for the updated values.

166-168: ❌ Incorrect review comment
Consider adding a test for direct environment updates.
The comment "The API creates a new environment rather than updating the existing one" suggests unexpected behavior. If this is intended behavior, the test is correct. However, if updates are supposed to modify the existing environment, this might indicate a bug in the API implementation.
🏁 Script executed:
```bash
#!/bin/bash
# Check for environment update implementation
echo "Looking for environment update implementation..."
rg -A 15 "export.*function (PUT|PATCH|POST).*environments" --type=ts apps/webservice/src/app/api/v1/environments || echo "No environment update endpoint found"

# Check for environment creation vs update logic
echo -e "\nChecking for environment creation vs update logic..."
rg -A 10 "if \(.*id\)" --type=ts apps/webservice/src/app/api/v1/environments
```

Length of output: 504

🏁 Script executed:

```bash
#!/bin/bash
echo "Listing environment API files..."
find apps/webservice/src/app/api/v1/environments -type f 2>/dev/null

echo -e "\nSearching for HTTP method decorators in environment controllers..."
rg -n "@(Post|Put|Patch)" -g '*.ts' apps/webservice/src/app/api/v1/environments || echo "No decorators found"

echo -e "\nExtracting test snippet around updateResponse..."
rg -n -C3 "updateResponse" e2e/tests/api/environments.spec.ts || echo "No occurrences of updateResponse found"
```

Length of output: 2483

```bash
#!/bin/bash
echo "Inspecting base POST handler in route.ts..."
rg -n -C5 "export async function POST" apps/webservice/src/app/api/v1/environments/route.ts || echo "No base POST handler found"

echo -e "\nInspecting dynamic [environmentId] route handlers..."
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts
```

```bash
#!/bin/bash
echo "Searching for update handlers in dynamic environment route..."
rg -n "export async function (PATCH|PUT)" -g 'apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts' || echo "No update handler found"

echo -e "\nShowing dynamic route implementation (first 200 lines):"
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts
```

🏁 Script executed:

```bash
#!/bin/bash
echo "Inspecting dynamic environment route for update handlers..."
rg -n "export async function (PUT|PATCH)" -g 'apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts' || echo "No PUT/PATCH handlers found"

echo -e "\nShowing dynamic route file (first 200 lines)..."
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts
```

Length of output: 2537
No direct update endpoint exists—tests reflect current API behavior.
I checked apps/webservice/src/app/api/v1/environments/route.ts and the dynamic [environmentId]/route.ts; there are only GET, DELETE, and POST handlers (no PUT or PATCH). Since the API doesn't support updating an existing environment via a dedicated endpoint, the tests' use of POST (which always creates) is correct. If you intend to add environment-update functionality, please implement a PUT/PATCH handler first; otherwise, you can dismiss the suggestion to add a "direct update" test.

Likely an incorrect or invalid review comment.
```ts
const environment = environmentResponse.data!;

// Verify initial resources (should only be the QA resource)
await new Promise((resolve) => setTimeout(resolve, 10_000));
```
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Standardize time delays across all tests.
Multiple instances of 10-second delays exist throughout the tests. Consider:
- Creating a helper function to standardize these delays
- Implementing a more reliable approach like polling for status
```diff
 // Add at top of file
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  const start = Date.now();
+  while (Date.now() - start < maxWaitMs) {
+    // Poll every second
+    await new Promise(resolve => setTimeout(resolve, 1000));
+    // Add status check here if an endpoint is available
+  }
+};

 // Then replace all instances of:
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environmentId);
```

Also applies to: 170-170, 324-324
…ctrlplane into use-queue-for-selector-compute
Actionable comments posted: 0
🧹 Nitpick comments (3)
e2e/tests/api/deployments.spec.ts (3)
179-179: Consider more robust waiting mechanism
The hard-coded wait time might be problematic in different environments. Consider implementing a polling mechanism that waits for a specific condition rather than using a fixed timeout.

```diff
- await page.waitForTimeout(10_000);
+ // Poll for resource matching completion
+ let attempts = 0;
+ const maxAttempts = 10;
+ while (attempts < maxAttempts) {
+   const checkResources = await api.GET(`/v1/deployments/${deploymentId}/resources`, {
+     params: { path: { deploymentId } }
+   });
+   if (checkResources.data?.resources?.length > 0) break;
+   await page.waitForTimeout(1000);
+   attempts++;
+ }
```

235-235: Consider consistent wait times
There's a discrepancy between wait times (5 seconds here vs 10 seconds elsewhere). Consider standardizing wait times or implementing a more reliable waiting approach.

```diff
- await page.waitForTimeout(5_000);
+ await page.waitForTimeout(10_000); // Make consistent with other waits
```

268-268: Consider more robust waiting mechanism
Similar to previous comment, using a hard-coded wait time might be problematic in different environments.

```diff
- await page.waitForTimeout(10_000);
+ // Poll for resource matching completion
+ let attempts = 0;
+ const maxAttempts = 10;
+ while (attempts < maxAttempts) {
+   const checkResources = await api.GET(`/v1/deployments/${deploymentId}/resources`, {
+     params: { path: { deploymentId } }
+   });
+   // Poll until we find resources with env=prod
+   const hasProdResources = checkResources.data?.resources?.some(
+     r => r.metadata?.env === "prod"
+   );
+   if (hasProdResources) break;
+   await page.waitForTimeout(1000);
+   attempts++;
+ }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3 hunks)
- apps/webservice/src/app/api/v1/deployments/openapi.ts (1 hunks)
- e2e/api/schema.ts (2 hunks)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- openapi.v1.json (2 hunks)
✅ Files skipped from review due to trivial changes (1)
- apps/webservice/src/app/api/v1/deployments/openapi.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
- e2e/api/schema.ts
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
- openapi.v1.json
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/deployments.spec.ts (7)
1-13: Well-structured imports and test setup
The imports are well-organized, using appropriate modules for testing (Playwright, faker) and file path handling. The YAML-based test fixtures approach is clean and maintainable.

14-28: Good test lifecycle management
The beforeAll/afterAll hooks properly handle test fixture setup and cleanup, ensuring that tests have necessary data and the environment is cleaned afterward.

29-43: LGTM: Create deployment test
This test case properly validates the deployment creation endpoint with appropriate assertions on both status code and response data.

45-71: Appropriate error handling in GET test
Good implementation of the GET deployment test with proper error handling for undefined deployment IDs.

73-110: Comprehensive field update testing
The update test effectively covers multiple field updates in a single request, thoroughly validating the PATCH functionality.

112-149: Thorough deletion verification
The deletion test properly validates both the DELETE operation and confirms removal by attempting to retrieve the deleted deployment.

14-288: Thorough test suite implementation
This test suite comprehensively covers all CRUD operations and resource matching functionality for deployments. It aligns well with the PR objectives of implementing a queue-based architecture for computing resource selectors. The test validations properly check both API responses and the end results of asynchronous resource matching.
Actionable comments posted: 0
🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)
179-179: Consider using polling instead of fixed timeouts.
Hard-coded timeouts could make tests flaky in different environments. A polling mechanism with a reasonable timeout would be more robust.

```diff
- await page.waitForTimeout(10_000);
+ // Poll for resource matching completion with timeout
+ const maxWaitTime = 30_000;
+ const startTime = Date.now();
+ let resourcesMatched = false;
+
+ while (!resourcesMatched && Date.now() - startTime < maxWaitTime) {
+   const resourceCheck = await api.GET("/v1/deployments/{deploymentId}/resources", {
+     params: { path: { deploymentId } },
+   });
+
+   if (resourceCheck.response.status === 200 &&
+       resourceCheck.data?.resources?.length > 0) {
+     resourcesMatched = true;
+   } else {
+     await page.waitForTimeout(1000); // Wait 1 second between checks
+   }
+ }
```

235-235: Inconsistent wait times between operations.
The test uses 5 seconds wait after creation but 10 seconds after update. Consider standardizing these wait times or implementing a polling approach as suggested earlier.
Also applies to: 268-268
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
e2e/tests/api/deployments.spec.ts(1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build (linux/amd64)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
🔇 Additional comments (10)
e2e/tests/api/deployments.spec.ts (10)
1-13: LGTM - Good test setup with proper imports and utilities.
The imports and test setup are well organized, with clear separation between internal utilities and test frameworks.

14-27: LGTM - Well-structured test lifecycle hooks.
The test setup and teardown logic properly handles importing test fixtures from YAML and cleaning up afterward to prevent test pollution.

29-43: LGTM - Clean deployment creation test.
This test case properly validates the deployment creation API endpoint with appropriate assertions for status code and response properties.

45-71: LGTM - Thorough deployment retrieval test.
The test correctly validates the GET endpoint functionality with appropriate error handling when the deployment ID is undefined.

73-110: LGTM - Comprehensive update test.
The test properly validates updating various deployment fields including name, slug, description, and operational parameters.

112-149: LGTM - Complete deletion test flow.
The test correctly verifies both the deletion operation and confirms the deployment is no longer retrievable afterward.

151-202: LGTM - Good resource selector matching test.
The test effectively verifies that the deployment resource selector correctly matches resources based on metadata filters.

204-287: LGTM - Well-structured resource selector update test.
This test effectively validates that changing a deployment's resource selector correctly updates the matched resources.

289-353: LGTM - Good edge case testing for deleted resources.
The test correctly verifies that deleted resources are no longer matched by deployment selectors, which is an important edge case.

1-354: Overall well-structured test suite for the new queue-based architecture.
This test suite provides excellent coverage for the Deployments API including CRUD operations and the new queue-based resource selector computation. The tests are thorough and align well with the PR objectives of implementing a queue for selector computation.
Actionable comments posted: 0
♻️ Duplicate comments (1)
apps/event-worker/src/workers/update-environment.ts (1)
75-75: Fixed systemId parameter as noted in previous review.
This change correctly passes `job.data.systemId` to the `dispatchExitHooks` function instead of `job.data.id`, addressing the issue noted in a previous review. This ensures that exit hooks will be properly dispatched.
🧹 Nitpick comments (1)
apps/event-worker/src/workers/update-environment.ts (1)
58-58: Remove debugging log statement before merging.
This console.log statement appears to be for debugging purposes and should be removed before merging to production.

```diff
- console.log("updateEnvironmentWorker");
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- apps/event-worker/src/workers/update-deployment.ts (2 hunks)
- apps/event-worker/src/workers/update-environment.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/event-worker/src/workers/update-deployment.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
apps/event-worker/src/workers/update-environment.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (2)
apps/event-worker/src/workers/update-environment.ts (2)
59-63: LGTM! Effective delegation to specialized worker. The code now properly delegates the computation to a specialized worker by enqueuing a job on the ComputeEnvironmentResourceSelector queue instead of performing inline processing. The deduplication configuration will prevent redundant processing of the same environment.
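A minimal sketch of what that hand-off can look like, assuming a BullMQ-backed queue where reusing the environment id as the job id collapses duplicate enqueues while a job for that environment is still pending (the import path is an assumption; `getQueue` and `Channel` are referenced elsewhere in this PR):

// Import path is assumed, not taken from the PR.
import { Channel, getQueue } from "@ctrlplane/events";

// Hand the heavy selector computation to the dedicated worker instead of doing it inline.
await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
  environment.id, // job name
  { environmentId: environment.id }, // worker payload
  { jobId: environment.id }, // reusing the id dedupes pending jobs for the same environment
);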
65-73: Good improvement in exited resources detection. The new approach for detecting exited resources using SQL conditions and database queries is more efficient than the previous in-memory filtering approach. It properly combines old and new selectors using logical operators to find resources that are no longer selected.
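Roughly, the query described here selects resources matched by the old selector but not by the new one; this is a sketch, with `selectorToSql` standing in as a hypothetical helper for whatever selector-to-SQL translation the codebase actually uses:

import { and, not, type SQL } from "drizzle-orm";

// Hypothetical helper: turns a resource selector JSON object into a SQL condition.
declare function selectorToSql(selector: object): SQL;

// Resources that the environment used to select but no longer does ("exited" resources).
const exitedResources = await db
  .select()
  .from(SCHEMA.resource)
  .where(and(selectorToSql(oldSelector), not(selectorToSql(newSelector))));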
Actionable comments posted: 1
🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)
151-202: Consider implementing a polling mechanism instead of fixed waits. Using a fixed 10-second wait time (line 179) might lead to flaky tests if the asynchronous processing takes longer under load. Consider implementing a polling mechanism that waits for the condition to be met with a timeout.

- await page.waitForTimeout(10_000);
+ // Poll until resources are matched or timeout
+ let matchedResources = { data: { resources: [] } };
+ const startTime = Date.now();
+ const timeout = 30_000; // 30 seconds max wait
+
+ while (Date.now() - startTime < timeout) {
+   matchedResources = await api.GET(
+     "/v1/deployments/{deploymentId}/resources",
+     {
+       params: {
+         path: {
+           deploymentId,
+         },
+       },
+     },
+   );
+
+   if (matchedResources.data?.resources?.length > 0) {
+     break;
+   }
+
+   // Short wait between attempts
+   await page.waitForTimeout(1000);
+ }

204-287: Consider extracting resource selector definitions to reduce repetition. The resource selector conditions are repeated across multiple tests. Consider extracting these into helper functions or constants for better maintainability.

+ // Helper function to create a resource selector for a specific environment
+ const createEnvResourceSelector = (env: string, systemPrefix: string) => ({
+   type: "comparison",
+   operator: "and",
+   conditions: [
+     {
+       type: "metadata",
+       operator: "equals",
+       key: "env",
+       value: env,
+     },
+     {
+       type: "identifier",
+       operator: "starts-with",
+       value: systemPrefix,
+     },
+   ],
+ });

Then in the test:

- resourceSelector: {
-   type: "comparison",
-   operator: "and",
-   conditions: [
-     {
-       type: "metadata",
-       operator: "equals",
-       key: "env",
-       value: "qa",
-     },
-     {
-       type: "identifier",
-       operator: "starts-with",
-       value: systemPrefix,
-     },
-   ],
- },
+ resourceSelector: createEnvResourceSelector("qa", systemPrefix),
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- e2e/tests/api/resource-selectors.spec.ts (0 hunks)
- e2e/tests/api/resource-selectors.spec.yaml (0 hunks)
💤 Files with no reviewable changes (2)
- e2e/tests/api/resource-selectors.spec.ts
- e2e/tests/api/resource-selectors.spec.yaml
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/deployments.spec.ts (7)
1-13: LGTM: Imports and setup are well-structured. The imports are appropriate for an E2E test file, including path utilities, faker for generating test data, and test fixtures.
14-27: Well-structured test suite setup and teardown. The beforeAll/afterAll hooks properly import test entities from YAML and clean them up afterward, ensuring test isolation.
29-43: LGTM: Create deployment test is properly implemented. This test correctly validates the deployment creation API, including response status code and expected data fields.
45-71: LGTM: Get deployment test follows best practices. The test properly handles the deployment ID existence check before proceeding and validates the API response.
73-110: LGTM: Update deployment test covers all relevant fields. This test thoroughly verifies that all fields (name, slug, description, retryCount, timeout) can be updated correctly.
112-149: LGTM: Delete deployment test verifies complete removal. Good practice to verify that the deleted deployment cannot be retrieved afterward.
355-406: LGTM: Good test for resource selector clearing. This test correctly verifies that setting the resourceSelector to undefined removes all matched resources.
Actionable comments posted: 0
🧹 Nitpick comments (3)
e2e/tests/api/release-targets.spec.ts (1)
29-38: Consider reducing timeout duration. While the test correctly waits for background processing to complete, the 5-second timeout may be excessive. Consider implementing a more dynamic approach like polling for completion or using a shorter timeout if possible.

- await page.waitForTimeout(5_000);
+ await page.waitForTimeout(2_000);

Or better yet, consider implementing a polling mechanism if the API provides a way to check for completion status.
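If re-querying the endpoint is enough of a "completion check", a small generic poller keeps the tests fast on quick runs and tolerant on slow ones. This is only a sketch, not code from the PR; `api` and `resource` are assumed to come from the surrounding test:

// Re-run `fetchOnce` until `isReady` accepts the result or the deadline passes.
const pollUntil = async <T>(
  fetchOnce: () => Promise<T>,
  isReady: (value: T) => boolean,
  { timeoutMs = 30_000, intervalMs = 1_000 } = {},
): Promise<T> => {
  const deadline = Date.now() + timeoutMs;
  let last = await fetchOnce();
  while (!isReady(last) && Date.now() < deadline) {
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
    last = await fetchOnce();
  }
  return last;
};

// Example: wait until the resource has at least one computed release target.
const releaseTargetsResponse = await pollUntil(
  () =>
    api.GET("/v1/resources/{resourceId}/release-targets", {
      params: { path: { resourceId: resource.id } },
    }),
  (res) => (res.data?.length ?? 0) > 0,
);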
apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (2)
38-46: Consider adding more context to the response. While the response mapping includes essential data, it might be helpful to include additional metadata such as creation timestamps or additional attributes that could be useful for API consumers.
Consider enhancing the response with additional useful fields:

  const releaseTargets = releaseTargetsRows.map((rt) => ({
    id: rt.id,
    resource: rt.resource,
    environment: rt.environment,
    deployment: rt.deployment,
+   createdAt: rt.createdAt,
+   updatedAt: rt.updatedAt
  }));

46-54: Error handling could be more specific. While the current error handling catches all errors, consider adding more specific error handling for different error types to provide more detailed feedback to API consumers.
Consider implementing more specific error handling:

  try {
    // ... existing code
    return NextResponse.json(releaseTargets);
  } catch (error) {
    log.error(error);
+   if (error instanceof SomeSpecificError) {
+     return NextResponse.json(
+       { error: "Specific error message" },
+       { status: 400 }
+     );
+   }
    return NextResponse.json(
      { error: "Internal server error" },
      { status: 500 },
    );
  }
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/openapi.ts (1 hunks)
- apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (1 hunks)
- e2e/api/schema.ts (5 hunks)
- e2e/package.json (1 hunks)
- e2e/tests/api/release-targets.spec.ts (1 hunks)
- e2e/tests/api/release-targets.spec.yaml (1 hunks)
- openapi.v1.json (4 hunks)
✅ Files skipped from review due to trivial changes (3)
- e2e/package.json
- apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/openapi.ts
- e2e/tests/api/release-targets.spec.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
- openapi.v1.json
- e2e/api/schema.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/release-targets.spec.ts
apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Lint
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
🔇 Additional comments (8)
e2e/tests/api/release-targets.spec.ts (5)
1-13: Imports and setup look good. The imports are well-organized and the test file is properly set up with the required dependencies for testing the API. The YAML path is correctly defined using `path.join` to ensure cross-platform compatibility.
14-28: Test fixtures properly initialized and cleaned up. The test suite correctly uses `beforeAll` and `afterAll` hooks to import entities from YAML before tests run and clean them up afterward. This ensures tests start with a known state and don't leave test data behind.
39-54: Resource retrieval logic is robust. The resource retrieval logic is well-implemented with proper error handling and assertions. The test correctly throws descriptive errors if expected data is missing, which will make test failures easier to diagnose.
55-69: Release targets API testing is thorough. The test correctly calls the new API endpoint and validates both the response status and the presence of release targets. Error handling is properly implemented with descriptive error messages.
70-83: Relationship validation is comprehensive. The test thoroughly validates the relationships between release targets and their associated resources, environments, and deployments. This ensures that the computed selectors maintain proper references between entities.
apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (3)
1-14: Imports and logging setup look good. The imports are well-organized and necessary dependencies are included. The logger setup with a child logger using a specific module name follows good practices for structured logging.
15-24: Authentication and authorization correctly implemented. The endpoint properly uses middleware for authentication and authorization. The authorization check correctly verifies the `ResourceGet` permission on the specific resource ID from the request parameters.
25-37: Database query is efficient and includes related entities. The implementation correctly retrieves the resource ID from route parameters and queries the database for release targets. The query efficiently includes related entities (resource, environment, deployment) in a single operation using the `with` option.
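For reference, a query of that shape using Drizzle's relational API is sketched below; the relation names mirror the response mapping quoted in this review and are assumptions about the schema definitions:

// Fetch release targets for one resource and eagerly load the related rows in a single query.
const releaseTargetsRows = await db.query.releaseTarget.findMany({
  where: (rt, { eq }) => eq(rt.resourceId, resourceId),
  with: { resource: true, environment: true, deployment: true },
});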
Actionable comments posted: 0
♻️ Duplicate comments (4)
e2e/tests/api/environments.spec.ts (4)
74-74: Standardize time delays across all tests. Multiple instances of similar delays exist throughout this file. Consider creating a standardized helper function for consistency, as suggested in previous reviews.

// Add at top of file after imports
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  await new Promise(resolve => setTimeout(resolve, maxWaitMs));
+};

// Then replace all timeout instances with
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environment.id);

145-145: Standardize time delays across all tests. Another instance of hardcoded delay. Consider using the helper function suggested earlier.
426-426: Standardize time delays across all tests. Another instance of hardcoded delay. Consider using the helper function suggested earlier.
371-373: 🛠️ Refactor suggestion: Update test to expect only 404 for deleted environments
The GET handler for `/v1/environments/{environmentId}` explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

- // Accept either 404 (not found) or 500 (internal server error) as valid responses
- // since the API implementation may be returning 500 for deleted resources
- expect([404, 500]).toContain(getResponse.response.status);
+ // Expect 404 (not found) response for deleted environment
+ expect(getResponse.response.status).toBe(404);
🧹 Nitpick comments (6)
e2e/tests/api/environments.spec.ts (6)
85-89: Simplify validation with a more concise assertion pattern. The current pattern of defining a variable, checking if it's defined, then throwing an error can be simplified.

- const receivedResource = resourcesResponse.data?.resources?.[0];
- expect(receivedResource).toBeDefined();
- if (!receivedResource) throw new Error("No resource found");
- expect(receivedResource.identifier).toBe(
-   importedEntities.resources.find((r) => r.metadata?.env === "qa")
-     ?.identifier,
- );
+ const receivedResource = resourcesResponse.data?.resources?.[0];
+ expect(receivedResource).toBeDefined();
+ expect(receivedResource?.identifier).toBe(
+   importedEntities.resources.find((r) => r.metadata?.env === "qa")
+     ?.identifier,
+ );

100-107: Simplify release target validation with a more concise assertion pattern. Similar to the previous comment, this validation can be simplified.

- const releaseTarget = releaseTargetsResponse.data?.[0];
- expect(releaseTarget).toBeDefined();
- if (!releaseTarget) throw new Error("No release target found");
- expect(releaseTarget.environment.id).toBe(environment.id);
- const deploymentMatch = importedEntities.deployments.find(
-   (d) => d.id === releaseTarget.deployment.id,
- );
- expect(deploymentMatch).toBeDefined();
- if (!deploymentMatch) throw new Error("No deployment match found");
- expect(deploymentMatch.id).toBe(releaseTarget.deployment.id);
+ const releaseTarget = releaseTargetsResponse.data?.[0];
+ expect(releaseTarget).toBeDefined();
+ expect(releaseTarget?.environment.id).toBe(environment.id);
+
+ const deploymentMatch = importedEntities.deployments.find(
+   (d) => d.id === releaseTarget?.deployment.id,
+ );
+ expect(deploymentMatch).toBeDefined();
+ expect(deploymentMatch?.id).toBe(releaseTarget?.deployment.id);

187-189: Add a verification for environment ID consistency. The comment notes that the API creates a new environment rather than updating an existing one. It would be valuable to verify that the new ID is actually different from the original.

  // Note: The API creates a new environment rather than updating the existing one
  const updatedEnvironmentId = updateResponse.data!.id;
  expect(updatedEnvironmentId).toBeDefined();
+ expect(updatedEnvironmentId).not.toBe(environment.id);

203-207: Apply the same simplification pattern for resource validation. Consistent with earlier suggestions, apply the same pattern simplification here.

- const receivedResource = updatedResourcesResponse.data?.resources?.[0];
- expect(receivedResource).toBeDefined();
- if (!receivedResource) throw new Error("No resource found");
- expect(receivedResource.identifier).toBe(
-   importedEntities.resources.find((r) => r.metadata?.env === "prod")
-     ?.identifier,
- );
+ const receivedResource = updatedResourcesResponse.data?.resources?.[0];
+ expect(receivedResource).toBeDefined();
+ expect(receivedResource?.identifier).toBe(
+   importedEntities.resources.find((r) => r.metadata?.env === "prod")
+     ?.identifier,
+ );

259-259: Consider using consistent timeout values. This test uses a 5 second timeout while others use 10 seconds. Unless there's a specific reason for this difference, consider standardizing on a single timeout value.
323-324: Add verification for different behaviors between creating and updating. Similar to the earlier test, it would be helpful to explicitly verify that updating creates a new environment rather than modifying the existing one.

  // Note: It appears that the POST endpoint creates a new environment rather than updating the existing one
  const updatedEnvironmentId = updateResponse.data!.id;
  expect(updatedEnvironmentId).toBeDefined();
+ expect(updatedEnvironmentId).not.toBe(environment.id);
+
+ // Verify the original environment still exists with original values
+ const originalEnvResponse = await api.GET("/v1/environments/{environmentId}", {
+   params: { path: { environmentId: environment.id } },
+ });
+ expect(originalEnvResponse.response.status).toBe(200);
+ expect(originalEnvResponse.data?.name).toBe(originalName);
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- e2e/tests/api/environments.spec.ts (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (3)
e2e/tests/fixtures.ts (1)
- test (11-26)
packages/db/src/schema/environment.ts (1)
- environment (59-84)
packages/db/src/schema/release.ts (1)
- releaseTarget (20-42)
⏰ Context from checks skipped due to timeout of 90000ms (6)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/environments.spec.ts (1)
12-13: LGTM: Moving yamlPath outside beforeAll is a clean refactoring. This change improves code organization by making the constant available at module scope rather than being defined inside the beforeAll hook.
Actionable comments posted: 0
♻️ Duplicate comments (5)
e2e/tests/api/deployments.spec.ts (5)
254-254: Consider replacing fixed wait with retry or polling. As with other locations, fixed wait times can lead to flaky tests depending on the environment speed.
287-287: Consider replacing fixed wait with retry or polling. Fixed timeouts can cause unnecessary delays or flaky tests depending on environment speed.
384-384: Consider replacing fixed wait with retry or polling. Fixed timeouts can make tests slow or flaky depending on environment performance.
437-437: Consider replacing fixed wait with retry or polling. Fixed timeouts like this can make tests unnecessarily slow or flaky in different environments.
448-448: Consider replacing fixed wait with retry or polling. Fixed timeouts may not be reliable in different environments.
🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)
179-179: Consider replacing fixed wait with retry or polling. A fixed 10-second wait may cause flaky tests - it might be too short in slow environments or unnecessarily long in fast ones.
Consider implementing a polling mechanism that checks for the expected state:

- await page.waitForTimeout(10_000);
+ // Wait for resource selector computation to complete with exponential backoff
+ let retries = 0;
+ const maxRetries = 5;
+ let resources;
+
+ while (retries < maxRetries) {
+   resources = await api.GET("/v1/deployments/{deploymentId}/resources", {
+     params: { path: { deploymentId } },
+   });
+
+   if (resources.response.status === 200 && resources.data?.resources?.length > 0) {
+     break;
+   }
+
+   await page.waitForTimeout(1000 * Math.pow(2, retries));
+   retries++;
+ }

463-463: Remove console.log statement. Debug statements should not be committed in test code.
- console.log(resource);
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1 hunks)
- e2e/api/schema.ts (7 hunks)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- e2e/tests/api/deployments.spec.yaml (1 hunks)
- openapi.v1.json (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts
- e2e/tests/api/deployments.spec.yaml
- openapi.v1.json
- e2e/api/schema.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
🔇 Additional comments (3)
e2e/tests/api/deployments.spec.ts (3)
327-403: Fixed previous test isolation issue. Good job addressing the previous review comment. The test now creates and deletes a new resource instead of using imported entities, which prevents interference with subsequent tests.
405-495: The comment seems to contradict the test. The comment on lines 458-461 suggests that with a null selector, all resources should have release targets. If that's the case, shouldn't the endpoint return a 200 with all resources instead of a 400?
This appears to be testing a behavior where:
- A deployment with a null selector can't directly list its resources (error 400)
- But all resources in the system will show this deployment in their release targets
Is this the intended behavior? Please confirm.
455-455: ✅ Verification successful
Ensure this is the expected behavior
The test expects a 400 status code when getting resources for a deployment with a null selector. This seems unusual as most APIs would return a 200 with an empty array in this case.
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation or implementation details about this behavior

# Check API implementation for the GET /deployments/{id}/resources endpoint
rg -A 10 "deployments/\[deploymentId\]/resources" --glob "apps/webservice/src/app/api/**/*.ts"

Length of output: 96
🏁 Script executed:
#!/bin/bash
# List all API files containing “resources” to locate the route implementation
rg -l "resources" --glob "apps/webservice/src/app/api/**/*.{ts,js}"

echo
echo "---- deployments.spec.ts (lines 440–480) ----"

# Show the test snippet to see how the endpoint is being called
sed -n '440,480p' e2e/tests/api/deployments.spec.ts

Length of output: 3037
🏁 Script executed:
#!/bin/bash
# Inspect the implementation of GET /v1/deployments/{deploymentId}/resources
sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/\[deploymentId\]/resources/route.ts

Length of output: 1816
No changes needed—400 response is by design
The GET `/v1/deployments/{deploymentId}/resources` handler explicitly returns a 400 status when `deployment.resourceSelector` is `null`, so the test’s expectation is correct.
Actionable comments posted: 0
♻️ Duplicate comments (4)
e2e/tests/api/environments.spec.ts (4)
74-74: Standardize time delays across all tests. Multiple instances of hardcoded delays (10000ms) exist throughout the tests. This was previously flagged in a review but hasn't been addressed.

// Add at top of file
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  const start = Date.now();
+  while (Date.now() - start < maxWaitMs) {
+    // Poll every second
+    await new Promise(resolve => setTimeout(resolve, 1000));
+    // Add status check here if an endpoint is available
+  }
+};

// Then replace all hardcoded waits with:
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environmentId);
145-145: Standardize time delays across all tests. Another instance of hardcoded delays. Consider implementing the helper function as suggested previously.
340-374: Update test to expect only 404 for deleted environments. The GET handler for `/v1/environments/{environmentId}` explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

- // Accept either 404 (not found) or 500 (internal server error) as valid responses
- expect([404, 500]).toContain(getResponse.response.status);
+ // Expect 404 (not found) response for deleted environment
+ expect(getResponse.response.status).toBe(404);
433-433: Standardize time delays across all tests. Another instance of hardcoded delays. Consider implementing the helper function as suggested previously.
🧹 Nitpick comments (2)
e2e/tests/api/environments.spec.ts (2)
85-89: Remove redundant error handling after assertions. The throw statement after the expect assertion is redundant since the test would already fail if the assertion doesn't pass.

- expect(receivedResource).toBeDefined();
- if (!receivedResource) throw new Error("No resource found");
+ expect(receivedResource).toBeDefined();

Apply this pattern to other similar instances in the file (lines 100-101, 203-204, 218-219, 224-225).
426-430: Use consistent assertion patterns. This block uses a conditional throw instead of the expect assertions used elsewhere in the file. For consistency:

- if (
-   environmentResponse.response.status !== 200 ||
-   environmentResponse.data == null
- )
-   throw new Error("Failed to create environment");
+ expect(environmentResponse.response.status).toBe(200);
+ expect(environmentResponse.data).toBeDefined();
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- e2e/tests/api/deployments.spec.ts (1 hunks)
- e2e/tests/api/environments.spec.ts (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- e2e/tests/api/deployments.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/environments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/environments.spec.ts (7)
12-13: Code refactoring improves reusability.Moving the
yamlPathconstant to module scope improves organization and allows it to be potentially reused across different tests without duplication.
43-74: Test refactoring aligns with modern practices.The refactored test removes Playwright page dependency, adds explicit assertions, and uses native Promise-based timeout, making the test more maintainable and framework-agnostic.
75-108: Thorough validation of matched resources and release targets.The additional assertions for resource identifiers, release target environment IDs, and deployment IDs provide comprehensive validation of the selector computation results, which is important for verifying the new queue-based architecture.
110-226: New test validates selector update functionality.This comprehensive test ensures that environment selector updates work correctly with the new queue-based computation system. The test properly validates:
- Initial resource matching with QA selector
- Updating to a new selector targeting prod resources
- Verification of the updated selector's effects
- Release target validation
These validations are critical for ensuring that the queue-based selector computation behaves correctly when selectors are updated.
228-286: Test for null selector validates important edge case. This test ensures that environment selectors can be unset, which is an important capability that might otherwise be overlooked. It correctly verifies:
- The API behavior when a selector is set to undefined (becomes null)
- The appropriate error response (400) when trying to get resources for an environment without a selector
This is particularly important given the architectural change to queue-based selector computation.
288-338: Test for environment metadata updates is comprehensive. The test correctly validates that environment name and description can be updated, ensuring the API maintains this basic functionality while implementing the queue-based architecture changes.
376-442: Test for deleted resources correctly validates edge case. This test ensures that deleted resources are not matched by environment selectors, which is important for maintaining data integrity in the queue-based architecture. The test creates a resource, deletes it, and verifies that an environment selector targeting that resource matches no resources.
Actionable comments posted: 4
🧹 Nitpick comments (4)
apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1)
56-57: Consider more specific error responses. Replacing the 409 conflict response with a generic 500 error reduces API specificity. While this may be necessary for the queue-based architecture, consider whether more granular error types could be maintained or added to help API consumers handle different error scenarios.
You might want to retain conflict scenarios as distinct from other server errors, for example:
"500": { description: "Internal server error", content: { "application/json": { schema: { type: "object", properties: { error: { type: "string" }, + code: { type: "string", enum: ["INTERNAL_ERROR", "VERSION_CONFLICT", "QUEUE_ERROR"] }, }, }, }, }, },e2e/tests/api/deployment-version.spec.ts (1)
29-57: Reduce duplication & tighten assertions. The `toBeDefined()` + manual `throw` guard pattern is repeated across every test. Playwright’s expect already fails the test when the value is `undefined`, so the explicit check & `throw` are redundant noise.
A concise, DRY approach is:

-const deploymentVersion = deploymentVersionResponse.data;
-expect(deploymentVersion).toBeDefined();
-if (!deploymentVersion) throw new Error("Deployment version not found");
+const { data: deploymentVersion } = deploymentVersionResponse;
+expect(deploymentVersion).toBeTruthy(); // fails test automatically when falsy

You could extract a tiny helper (`assertPresent<T>(value): value is T`) once and reuse it, which will both read better and remove six-plus duplicated blocks.

apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (2)
73-99: Metadata replacement causes loss of unknown columns. The delete → insert pattern correctly replaces metadata, but it discards rows that might have been concurrently written by another request in the same moment – leading to lost updates.
Safer pattern:
- `SELECT ... FOR UPDATE` the existing keys.
- Upsert only the diff (insert new / update existing / delete removed).
If full replacement is intended, document that the API is “last-write-wins” to avoid surprises. A sketch of the diff-based upsert follows below.
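A minimal sketch of that upsert, assuming a unique constraint on the (versionId, key) pair; the column names are assumptions based on the snippets quoted in this review, and deletion of keys dropped from the payload is omitted for brevity:

import { sql } from "drizzle-orm";

const entries = Object.entries(body.metadata ?? {});
if (entries.length > 0)
  await tx
    .insert(SCHEMA.deploymentVersionMetadata)
    .values(
      entries.map(([key, value]) => ({
        versionId: deploymentVersionId, // column names are assumptions
        key,
        value,
      })),
    )
    .onConflictDoUpdate({
      target: [
        SCHEMA.deploymentVersionMetadata.versionId,
        SCHEMA.deploymentVersionMetadata.key,
      ],
      set: { value: sql`excluded.value` },
    });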
136-136: HTTP status constant mismatch
`INTERNAL_SERVER_ERROR` from `http-status` is the numeric value `500`, whereas `NextResponse.json(..., { status })` expects the number. This is fine, but be consistent – elsewhere the code uses the literal `500`. Consider a shared `HttpStatus` enum to avoid mixing styles.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts (1 hunks)
- apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (4 hunks)
- apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1 hunks)
- apps/webservice/src/app/api/v1/openapi.ts (1 hunks)
- e2e/api/schema.ts (10 hunks)
- e2e/tests/api/deployment-version.spec.ts (1 hunks)
- e2e/tests/api/deployment-version.spec.yaml (1 hunks)
- openapi.v1.json (10 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts
apps/webservice/src/app/api/v1/openapi.ts
apps/webservice/src/app/api/v1/deployment-versions/openapi.ts
e2e/tests/api/deployment-version.spec.ts
e2e/api/schema.ts
🧬 Code Graph Analysis (1)
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (3)
packages/db/src/common.ts (1)
- takeFirstOrNull (15-20)
packages/db/src/schema/deployment-version.ts (2)
- deploymentVersion (109-136)
- deploymentVersionMetadata (160-176)
packages/events/src/index.ts (1)
- getQueue (28-34)
🔇 Additional comments (21)
e2e/api/schema.ts (11)
529-545: New API endpoint for resource release targets looks good. This adds a new API path `/v1/resources/{resourceId}/release-targets` with a GET operation to support the queue-based architecture for computing release targets.
920-921: Properly typed metadata dictionary in DeploymentVersion. The metadata property now has a more specific type definition where values are constrained to strings, which provides better type safety.
922-923: Added deployment version status tracking. The new status enum field with values "building", "ready", and "failed" enables tracking the lifecycle state of deployment versions in the new queue-based architecture.
1294-1315: Enhanced error handling for updateDeploymentVersion. Adding explicit 404 and 500 error responses improves the API contract by clearly documenting possible error scenarios when updating deployment versions.
1350-1350: Corrected response code for upsertDeploymentVersion. Changed success response code from 200 to 201, which is more semantically appropriate for resource creation operations according to REST conventions.
1358-1368: Updated error handling for upsertDeploymentVersion. Removed the conflict (409) response and replaced it with a more general internal server error (500) response. This aligns with how conflicts are now handled in the queue-based processing approach.
1517-1533: Enhanced updateDeployment request body schema. The request body now uses an explicit inline object with optional fields, including a resourceSelector that allows arbitrary properties. This provides more flexibility for the queue-based selector computation.
1647-1647: Updated resource response types for environment and deployment queries. The response types for getResourcesForDeployment and getResourcesForEnvironment now return arrays of full Resource schemas instead of partial inline objects, providing more complete information to clients.
Also applies to: 1842-1842
1112-1118: Added ReleaseTarget schema. This new schema properly defines the structure of release targets with references to associated resources, environments, and deployments, supporting the new queue-based release target computation.
3082-3104: Implemented getReleaseTargets operation. The operation provides a clean API surface for retrieving release targets associated with a resource, which is essential for the new queue-based architecture.
1725-1727: Updated resourceSelector in createDeployment. The resourceSelector field now allows for arbitrary properties, which provides greater flexibility when specifying resource selectors during deployment creation.
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts (1)
61-86: Good addition of explicit error response schemas. The addition of 404 and 500 error responses with clear schemas improves the API documentation and aligns with best practices for RESTful APIs. This change supports the new queue-based architecture by properly documenting potential failure modes during asynchronous processing.
apps/webservice/src/app/api/v1/openapi.ts (1)
132-136: Improved metadata type safety and added deployment version status. These changes enhance the DeploymentVersion schema by:
- Restricting metadata values to strings, which provides better type safety
- Adding a status enum property that reflects the deployment version lifecycle ("building", "ready", "failed")
The status field is particularly important for supporting the new queue-based selector computation architecture, as it allows tracking the processing state of deployment versions.
e2e/tests/api/deployment-version.spec.yaml (1)
1-30: Well-structured test specification for deployment versions. This YAML specification provides a complete test environment with interconnected system, resource, deployment, and environment entities. The resource selector configuration in the environment properly tests the core functionality being implemented in this PR.
The use of template variables with `{{ prefix }}` ensures test isolation and reusability.
apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1)
48-48: Appropriate status code change for resource creation. Changing the success response from 200 to 201 is more semantically correct for resource creation operations, following HTTP standards where 201 specifically indicates a resource was created.
e2e/tests/api/deployment-version.spec.ts (2)
59-80: Missing negative-path tests. The suite only verifies the happy paths. Consider adding cases for:
- 409 conflict on duplicate `{deploymentId, tag}`
- 400 validation errors (e.g. invalid status)
- 404 when `deploymentId` is unknown
These will protect against future regressions in the new queue-based flow; a sketch of one such case follows below.
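For example, a hypothetical unknown-deployment case in the same style as the suite; the 404 expectation is an assumption about how the handler reports a missing deployment:

test("should reject a version for an unknown deployment", async ({ api }) => {
  const response = await api.POST("/v1/deployment-versions", {
    body: {
      tag: faker.string.alphanumeric(10),
      deploymentId: "00000000-0000-0000-0000-000000000000", // no such deployment
      status: "building",
    },
  });

  // Assumed behaviour: the handler rejects unknown deployment ids rather than creating the version.
  expect(response.response.status).toBe(404);
});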
106-153: Assert that side-effects happened. Updating name / tag / metadata implicitly triggers the new selector-computation queues. The test currently checks only HTTP response fields. Add an assertion that the relevant BullMQ job exists (or that an event was emitted), otherwise the asynchronous flow could silently break.
Example (pseudo):

await expect(
  getQueue(Channel.ComputeDeploymentResourceSelector).getJob(deploymentVersion.id),
).resolves.not.toBeNull();

This keeps the E2E test aligned with the PR’s primary objective of “use queue for selector compute”.
openapi.v1.json (4)
297-327: Spec and implementation diverge on 409 vs 500. The PATCH `/deployment-versions/{id}` handler can still raise a uniqueness violation (e.g. duplicate tag) which the DB surfaces as HTTP 409 in other endpoints. The spec now lists only 404/500. Either:
- Keep 409 in the spec and map the DB error → 409 in the handler, or
- Ensure the handler translates duplicates to 500 (less ideal).
Synchronise to keep clients predictable.
390-415: Good: 201 for resource creation. Changing POST `/v1/deployment-versions` from 200 to 201 aligns with HTTP semantics.
4521-4533: metadata value type narrowed
`DeploymentVersion.metadata` is now string-only. Confirm that callers weren’t persisting JSON blobs before; otherwise this is a backwards-incompatible API change.
If the backend still stores `text` values only, 👍; otherwise reconsider typing as `string | number | boolean`.
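If mixed value types do need to stay valid, the schema could widen the value type instead of narrowing it; a sketch expressed as a TS object literal, not the actual openapi.ts content:

const metadataSchema = {
  type: "object",
  additionalProperties: {
    oneOf: [{ type: "string" }, { type: "number" }, { type: "boolean" }],
  },
} as const;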
5027-5050: ReleaseTarget schema looks solid. The added reusable schema is well-structured and referenced correctly by the new endpoint.
| test("should update deployment version status to ready", async ({ api }) => { | ||
| const deploymentVersionResponse = await api.POST( | ||
| "/v1/deployment-versions", | ||
| { | ||
| body: { | ||
| tag: faker.string.alphanumeric(10), | ||
| deploymentId: importedEntities.deployments[0].id, | ||
| status: "building", | ||
| }, | ||
| }, | ||
| ); | ||
|
|
||
| expect(deploymentVersionResponse.response.status).toBe(201); | ||
| const deploymentVersion = deploymentVersionResponse.data; | ||
| expect(deploymentVersion).toBeDefined(); | ||
| if (!deploymentVersion) throw new Error("Deployment version not found"); | ||
|
|
||
| expect(deploymentVersion.status).toBe("building"); | ||
|
|
||
| const updatedDeploymentVersionResponse = await api.PATCH( | ||
| `/v1/deployment-versions/{deploymentVersionId}`, | ||
| { | ||
| params: { | ||
| path: { deploymentVersionId: deploymentVersion.id }, | ||
| }, | ||
| body: { | ||
| status: "ready", | ||
| }, | ||
| }, | ||
| ); | ||
|
|
||
| expect(updatedDeploymentVersionResponse.response.status).toBe(200); | ||
| const updatedDeploymentVersion = updatedDeploymentVersionResponse.data; | ||
| expect(updatedDeploymentVersion).toBeDefined(); | ||
| if (!updatedDeploymentVersion) | ||
| throw new Error("Deployment version not found"); | ||
|
|
||
| expect(updatedDeploymentVersion.status).toBe("ready"); | ||
| }); | ||
|
|
||
| test("should update deployment version status to failed", async ({ api }) => { | ||
| const deploymentVersionResponse = await api.POST( | ||
| "/v1/deployment-versions", | ||
| { | ||
| body: { | ||
| tag: faker.string.alphanumeric(10), | ||
| deploymentId: importedEntities.deployments[0].id, | ||
| status: "building", | ||
| }, | ||
| }, | ||
| ); | ||
|
|
||
| expect(deploymentVersionResponse.response.status).toBe(201); | ||
| const deploymentVersion = deploymentVersionResponse.data; | ||
| expect(deploymentVersion).toBeDefined(); | ||
| if (!deploymentVersion) throw new Error("Deployment version not found"); | ||
|
|
||
| expect(deploymentVersion.status).toBe("building"); | ||
|
|
||
| const updatedDeploymentVersionResponse = await api.PATCH( | ||
| `/v1/deployment-versions/{deploymentVersionId}`, | ||
| { | ||
| params: { | ||
| path: { deploymentVersionId: deploymentVersion.id }, | ||
| }, | ||
| body: { | ||
| status: "failed", | ||
| }, | ||
| }, | ||
| ); | ||
|
|
||
| expect(updatedDeploymentVersionResponse.response.status).toBe(200); | ||
| const updatedDeploymentVersion = updatedDeploymentVersionResponse.data; | ||
| expect(updatedDeploymentVersion).toBeDefined(); | ||
| if (!updatedDeploymentVersion) | ||
| throw new Error("Deployment version not found"); | ||
|
|
||
| expect(updatedDeploymentVersion.status).toBe("failed"); | ||
| }); |
🛠️ Refactor suggestion
Race condition risk when polling status
Both status-transition tests (building → ready/failed) immediately PATCH and then assert the returned status value. In the live system, a concurrent worker could change the status between the POST and PATCH causing flaky tests.
Mitigation:
- Capture the `updatedAt` (or version row lock) before the PATCH and assert it increased.
- Or poll until the PATCH completes successfully and the queue finishes processing.
This will make the suite deterministic when workers run in parallel CI.
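A sketch of the first option, assuming the deployment-version payload exposes an `updatedAt` timestamp (not confirmed by the schema excerpts quoted here):

const created = deploymentVersionResponse.data!;
const createdStamp = new Date(created.updatedAt ?? 0).getTime(); // updatedAt is assumed to exist

const patched = await api.PATCH("/v1/deployment-versions/{deploymentVersionId}", {
  params: { path: { deploymentVersionId: created.id } },
  body: { status: "ready" },
});

const patchedStamp = new Date(patched.data?.updatedAt ?? 0).getTime();
expect(patched.data?.status).toBe("ready");
// The row we read back was rewritten at (or after) our PATCH, not a stale snapshot.
expect(patchedStamp).toBeGreaterThanOrEqual(createdStamp);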
if (deploymentVersion == null)
  return NextResponse.json(
    { error: "Deployment version not found" },
    { status: NOT_FOUND },
  );
🛠️ Refactor suggestion
Early return returns the wrong type
return NextResponse.json(...) inside the transaction returns a NextResponse, but the surrounding function later treats the result as a DeploymentVersion object. This will surface as a runtime error when the branch is taken (even if currently unreachable).
Prefer throwing a typed error and handling it outside the transaction, or simply remove this duplicate 404 as the pre-query already guards for “not found”.
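A sketch of the first alternative, reusing the identifiers from the quoted handler: throw a typed error inside the transaction and translate it to a 404 outside, so the callback never returns a `NextResponse`.

class DeploymentVersionNotFoundError extends Error {}

try {
  const deploymentVersion = await ctx.db.transaction(async (tx) => {
    const updated = await tx
      .update(SCHEMA.deploymentVersion)
      .set(body)
      .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
      .returning()
      .then(takeFirstOrNull);
    // Surface "not found" as a typed error instead of an HTTP response object.
    if (updated == null) throw new DeploymentVersionNotFoundError();
    return updated;
  });
  return NextResponse.json(deploymentVersion);
} catch (error) {
  if (error instanceof DeploymentVersionNotFoundError)
    return NextResponse.json(
      { error: "Deployment version not found" },
      { status: NOT_FOUND },
    );
  throw error;
}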
  try {
-   const deploymentVersion = await ctx.db
-     .update(SCHEMA.deploymentVersion)
-     .set(body)
-     .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
-     .returning()
-     .then(takeFirst);
-
-   if (Object.keys(body.metadata ?? {}).length > 0)
-     await ctx.db
+   const deploymentVersion = await ctx.db.transaction(async (tx) => {
+     const deploymentVersion = await ctx.db
+       .update(SCHEMA.deploymentVersion)
+       .set(body)
+       .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
+       .returning()
+       .then(takeFirstOrNull);
Use the transaction handle for all DB writes
Inside the ctx.db.transaction callback you call ctx.db.update(...) instead of tx.update(...).
If the update succeeds but later tx.insert(...) fails, the update will still be committed because it is executed outside the transaction boundary – violating atomicity.
-const deploymentVersion = await ctx.db
+const deploymentVersion = await tx
  .update(SCHEMA.deploymentVersion)

Repeat for any additional statements in this block.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
  const deploymentVersion = await ctx.db
    .update(SCHEMA.deploymentVersion)
    .set(body)
    .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
    .returning()
    .then(takeFirst);

  if (Object.keys(body.metadata ?? {}).length > 0)
    await ctx.db

  const deploymentVersion = await ctx.db.transaction(async (tx) => {
    const deploymentVersion = await ctx.db
      .update(SCHEMA.deploymentVersion)
      .set(body)
      .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
      .returning()
      .then(takeFirstOrNull);

try {
  const deploymentVersion = await ctx.db.transaction(async (tx) => {
    const deploymentVersion = await tx
      .update(SCHEMA.deploymentVersion)
      .set(body)
      .where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
      .returning()
      .then(takeFirstOrNull);
const shouldTrigger =
  deploymentVersion.status === DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);
Avoid duplicate Ready events
shouldTrigger fires whenever the resulting status is Ready, even if it was already Ready.
This will enqueue the same deploymentVersion on every PATCH that touches any other field.
-const shouldTrigger = deploymentVersion.status === DeploymentVersionStatus.Ready;
+const shouldTrigger =
+ deploymentVersion.status === DeploymentVersionStatus.Ready &&
+  prevDeploymentVersion.status !== DeploymentVersionStatus.Ready;

Prevents unnecessary queue traffic and downstream work.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const shouldTrigger =
  deploymentVersion.status === DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);

const shouldTrigger =
  deploymentVersion.status === DeploymentVersionStatus.Ready &&
  prevDeploymentVersion.status !== DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);
Actionable comments posted: 1
🧹 Nitpick comments (2)
e2e/tests/api/workspace.spec.ts (2)
8-18: Workspace retrieval by ID test looks good, consider adding more assertions. The test correctly verifies fetching a workspace by ID and checks the response status and workspace ID. Consider expanding assertions to validate other workspace properties like `name` and `slug` for more comprehensive validation.

  expect(workspaceResponse.response.status).toBe(200);
  expect(workspaceResponse.data?.id).toBe(workspace.id);
+ expect(workspaceResponse.data?.name).toBe(workspace.name);
+ expect(workspaceResponse.data?.slug).toBe(workspace.slug);
8-33: Consider adding tests for error cases. The current tests only cover the happy path. Consider adding tests for error scenarios such as:
- Fetching a non-existent workspace ID
- Fetching with an invalid workspace slug
- Testing authorization failures
This would make your test suite more robust and comprehensive.
test("should return 404 for non-existent workspace ID", async ({ api }) => { const nonExistentId = "00000000-0000-0000-0000-000000000000"; const response = await api.GET("/v1/workspaces/{workspaceId}", { params: { path: { workspaceId: nonExistentId }, }, }); expect(response.response.status).toBe(404); }); test("should return 404 for non-existent workspace slug", async ({ api }) => { const nonExistentSlug = "non-existent-workspace"; const response = await api.GET("/v1/workspaces/slug/{workspaceSlug}", { params: { path: { workspaceSlug: nonExistentSlug }, }, }); expect(response.response.status).toBe(404); });
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- e2e/tests/api/workspace.spec.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/workspace.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/workspace.spec.ts (2)
e2e/tests/fixtures.ts (1)
- test (11-26)
packages/db/src/schema/workspace.ts (1)
- workspace (18-27)
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: build (linux/amd64)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/workspace.spec.ts (1)
20-33: Workspace retrieval by slug test is well implemented. The test for fetching a workspace by slug is correctly implemented with proper assertions for status code and matching workspace ID.
import { test } from "../fixtures";

const yamlPath = path.join(__dirname, "workspace.spec.yaml");
Unused variable should be removed or utilized.
The yamlPath constant is defined but never used in the test file. Either remove it if it's unnecessary or utilize it for its intended purpose.
-const yamlPath = path.join(__dirname, "workspace.spec.yaml");

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const yamlPath = path.join(__dirname, "workspace.spec.yaml");
Actionable comments posted: 1
🧹 Nitpick comments (1)
apps/event-worker/src/utils/with-mutex.ts (1)
57-64: Expose TTL / expiry options to avoid indefinitely held locks
`redis-semaphore` supports custom `lockTimeout` and `acquireTimeout`.
Without overriding the defaults, long-running jobs risk exceeding the default TTL, leading to double-execution, or—conversely—jobs that crash after the lock is obtained may leave the lock dangling until the TTL expires.
Consider extending `MutexOptions` with optional `ttlMs` / `acquireTimeoutMs` and pass them to the `RedisMutex` constructor:

-type MutexOptions = {
+type MutexOptions = {
   getDependentLockKeys?: () => Promise<string[]>;
+  /** How long the lock should live in Redis (ms). Defaults to library default. */
+  ttlMs?: number;
+  /** How long to wait when calling `acquire` (ms). Defaults to 0 for tryAcquire. */
+  acquireTimeoutMs?: number;
 };
 ...
-  const primaryMutex = new RedisMutex(redis, key);
+  const primaryMutex = new RedisMutex(redis, key, {
+    lockTimeout: opts.ttlMs,
+    acquireTimeout: opts.acquireTimeoutMs,
+  });
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- apps/event-worker/src/utils/with-mutex.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
apps/event-worker/src/utils/with-mutex.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
apps/event-worker/src/utils/with-mutex.ts (1)
71-78: ❓ Verification inconclusive
Sort dependent lock keys to guarantee deterministic lock-ordering
The comment states that lock ordering prevents dead-locks, but the current implementation keeps the order returned by `getDependentLockKeys()`.
If two workers ask for the same lock set in different orders they can still livelock (A → B vs B → A). Sorting the keys before acquisition guarantees a global ordering and eliminates this class of dead-locks.

- const dependentLockKeys = await opts.getDependentLockKeys();
+ // Ensure a deterministic acquisition order to avoid dead-locks
+ const dependentLockKeys = (await opts.getDependentLockKeys()).sort();

I’ll inspect how `getDependentLockKeys` and its return type are defined to confirm whether sorting is applicable.
const acquiredLocks = await Promise.all(
  dependentMutexes.map((m) => m.tryAcquire()),
);

// If any dependent lock couldn't be acquired, release primary and return
if (acquiredLocks.some((acquired) => !acquired)) {
  await primaryMutex.release();
  return [false, null];
}
}
Release only the locks that were actually acquired & keep true LIFO order
`dependentMutexes` may contain locks that failed `tryAcquire()`.
Calling `release()` on an un-acquired mutex can throw (`redis-semaphore` rejects with “Lock not owned”).
The code advertises “LIFO lock release”, but the `Promise.all` call keeps the original order and releases in parallel.
- const acquiredLocks = await Promise.all(
- dependentMutexes.map((m) => m.tryAcquire()),
- );
+ const acquiredStates = await Promise.all(
+ dependentMutexes.map((m) => m.tryAcquire()),
+ );
+
+ // Filter the actually-acquired mutexes for proper cleanup later
+ dependentMutexes = dependentMutexes.filter((_, idx) => acquiredStates[idx]);
@@
- // Release all locks in reverse order
- await Promise.all(dependentMutexes.map((m) => m.release()));
- await primaryMutex.release();
+ // Release dependents last-in-first-out to respect nested critical sections
+ for (let i = dependentMutexes.length - 1; i >= 0; i--) {
+ try {
+ await dependentMutexes[i].release();
+ } catch (e) {
+ console.error("Failed to release dependent mutex", e);
+ }
+ }
+ await primaryMutex.release();Also applies to: 93-95
Actionable comments posted: 3
🧹 Nitpick comments (3)
e2e/tests/api/deployments.spec.ts (3)
151-221: Consider uncommenting release target verification
The test creates a deployment and verifies resource matching, but the release target verification code (lines 206-221) is commented out. Since the PR introduces a queue-based architecture for release-target computation, enabling these verifications would provide more comprehensive test coverage.
If the release target API endpoints are ready, consider uncommenting and updating these verification steps to validate the queue-based release target computation.
459-459: Remove debug console.log statement
The console.log statement will clutter test output logs.

```diff
- console.log(resource);
```
254-254: Inconsistent timeout values for asynchronous operations
The test uses different timeout values (5000ms vs 10000ms) when waiting for the same type of asynchronous operation to complete.

Standardize timeout values by using a constant:

```diff
+// Define a constant at the top of the file
+const SELECTOR_COMPUTATION_TIMEOUT = 10_000;

 // Then use it consistently throughout the tests
-  await page.waitForTimeout(5_000);
+  await page.waitForTimeout(SELECTOR_COMPUTATION_TIMEOUT);
```

Better yet, implement the polling approach suggested earlier for all wait operations.
Also applies to: 287-287, 380-380, 433-433, 444-444
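If the goal is to remove fixed sleeps entirely, Playwright's built-in `expect.poll` covers the retry loop without hand-rolled counters. The sketch below assumes the `api` client and `deploymentId` variable already present in this spec, and the 30-second budget is an arbitrary choice:

```ts
import { expect } from "@playwright/test";

// Sketch: poll the deployment resources endpoint until the selector
// computation has produced matches, instead of sleeping a fixed 5-10 seconds.
await expect
  .poll(
    async () => {
      const res = await api.GET("/v1/deployments/{deploymentId}/resources", {
        params: { path: { deploymentId } },
      });
      return res.response.status === 200 ? (res.data?.resources?.length ?? 0) : 0;
    },
    { timeout: 30_000, intervals: [1_000] },
  )
  .toBeGreaterThan(0);
```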
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- `apps/event-worker/src/workers/compute-deployment-resource-selector.ts` (1 hunks)
- `e2e/tests/api/deployments.spec.ts` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- apps/event-worker/src/workers/compute-deployment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
e2e/tests/api/deployments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/deployments.spec.ts (4)
e2e/tests/fixtures.ts (1)
- `test` (11-26)

e2e/api/yaml-loader.ts (3)
- `ImportedEntities` (54-84)
- `importEntitiesFromYaml` (94-252)
- `cleanupImportedEntities` (259-307)

packages/db/src/schema/workspace.ts (1)
- `workspace` (18-27)

packages/db/src/schema/resource.ts (1)
- `resource` (59-87)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Typecheck
- GitHub Check: Lint
- GitHub Check: build (linux/amd64)
- GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/deployments.spec.ts (1)
327-399: Good test isolation implementation
This test properly addresses the previous review comment by creating a dedicated test resource and deleting it, rather than affecting shared test resources. This approach ensures proper test isolation.
```ts
  },
});

await page.waitForTimeout(10_000);
```
🛠️ Refactor suggestion
Consider using a polling approach instead of fixed timeout
Using a fixed timeout of 10 seconds could make tests flaky if the queue processing takes longer or shorter than expected.
Replace the fixed timeout with a polling approach that checks for the expected condition:
```diff
- await page.waitForTimeout(10_000);
+ // Poll until resources are matched or timeout after 30 seconds
+ let attempts = 0;
+ let resourcesMatched = false;
+ while (!resourcesMatched && attempts < 30) {
+   const checkResources = await api.GET(
+     "/v1/deployments/{deploymentId}/resources",
+     { params: { path: { deploymentId } } }
+   );
+
+   if (checkResources.response.status === 200 &&
+       checkResources.data?.resources?.length > 0) {
+     resourcesMatched = true;
+   } else {
+     attempts++;
+     await page.waitForTimeout(1000);
+   }
+ }
```
```ts
expect(resources.response.status).toBe(400);

/**
 * Since the deployment selector is set to null
 * all resources should actually have release targets now
 * since the deployment no longer has a specific scope
 */
```
💡 Verification agent
🧩 Analysis chain
Expected 400 error seems inconsistent with the comment
The test expects a 400 status code when querying for resources, but the comment (lines 454-457) indicates resources should have release targets, suggesting the deployment still exists but simply doesn't match any resources.
Verify if status code 400 is actually the expected response when a deployment has a null resource selector. Could this instead be a status 200 with an empty resources array?
🏁 Script executed:
```bash
#!/bin/bash
# Check how null selector is handled in the API implementation

# Check deployment resources API route handling
rg -A 10 'deployments.*resources' --glob "apps/webservice/src/**/*.ts"
```

Length of output: 1277
🏁 Script executed:
```bash
#!/bin/bash
set -e
echo "Files under apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources:"
ls -1 apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources || true

echo -e "\n---- openapi.ts (first 200 lines) ----"
sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/openapi.ts || true

echo -e "\nChecking for route.ts and its content (first 200 lines) ----"
if [ -f apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/route.ts ]; then
  echo "Found route.ts:"
  sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/route.ts
else
  echo "route.ts not found"
fi
```

Length of output: 4073
Clarify test comment for null resource selector error
The test correctly expects a 400 response when resourceSelector is null, but the existing comment suggests otherwise. Update the comment to reflect that the 400 error is intentional:
• File: e2e/tests/api/deployments.spec.ts
• Lines: 451–457
```diff
- /**
-  * Since the deployment selector is set to null
-  * all resources should actually have release targets now
-  * since the deployment no longer has a specific scope
-  */
+ // Expect 400 because the deployment’s `resourceSelector` is null,
+ // which the API treats as a bad request indicating all workspace resources apply.
```
Actionable comments posted: 0
♻️ Duplicate comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (2)
50-50: Incorrect property name on joined result
`rt.release_target.id` assumes Drizzle returns an underscored alias. Joins in Drizzle return an object shaped like `{ releaseTarget: { … } }` by default, so this will be `undefined`, causing all `releaseTargetId` values to be `undefined`.

```diff
- releaseTargetId: rt.release_target.id,
+ releaseTargetId: rt.releaseTarget.id,
```
31-44: ⚠️ Potential issue

Prevent null-selector crash in selector queries

The `policyTarget.resourceSelector`, `deploymentSelector`, and `environmentSelector` fields can be null (from the schema definition), but the code doesn't handle this case. If any selector is null, the query will likely throw an error.

```diff
- selector()
-   .query()
-   .resources()
-   .where(policyTarget.resourceSelector)
-   .sql(),
- selector()
-   .query()
-   .deployments()
-   .where(policyTarget.deploymentSelector)
-   .sql(),
- selector()
-   .query()
-   .environments()
-   .where(policyTarget.environmentSelector)
-   .sql(),
+ policyTarget.resourceSelector
+   ? selector()
+       .query()
+       .resources()
+       .where(policyTarget.resourceSelector)
+       .sql()
+   : selector().query().resources().sql(true),
+ policyTarget.deploymentSelector
+   ? selector()
+       .query()
+       .deployments()
+       .where(policyTarget.deploymentSelector)
+       .sql()
+   : selector().query().deployments().sql(true),
+ policyTarget.environmentSelector
+   ? selector()
+       .query()
+       .environments()
+       .where(policyTarget.environmentSelector)
+       .sql()
+   : selector().query().environments().sql(true),
```
🧹 Nitpick comments (1)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)
119-131: Consider adding more detailed error logging
While row-lock errors are properly handled with job requeuing, other errors are simply propagated without additional context. Adding more detailed logging would help with debugging.

```diff
   } catch (e: any) {
     const isRowLocked = e.code === "55P03";
     if (isRowLocked) {
       await getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
         job.name,
         job.data,
         { deduplication: { id: job.data.id, ttl: 500 } },
       );
       return;
     }
+    console.error(
+      `Error computing policy target release target selector: ${e.message}`,
+      { policyTargetId: job.data.id, error: e },
+    );
     throw e;
   }
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts` (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
🧬 Code Graph Analysis (1)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
packages/db/src/common.ts (1)
- `Tx` (22-22)

packages/db/src/schema/policy.ts (3)
- `policyTarget` (64-78)
- `PolicyTarget` (186-186)
- `policy` (42-62)

packages/db/src/selectors/index.ts (1)
- `selector` (13-13)

packages/events/src/index.ts (2)
- `createWorker` (10-25)
- `getQueue` (28-34)

packages/db/src/client.ts (1)
- `db` (15-15)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: build (linux/amd64)
- GitHub Check: Typecheck
- GitHub Check: build (linux/amd64)
- GitHub Check: Lint
🔇 Additional comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (2)
70-102: Clean transaction with proper row locking
The transaction uses row-level locking with `FOR UPDATE NOWAIT`, which properly handles concurrent updates by failing fast if rows are already locked. This approach, combined with job requeuing on lock conflicts, is a robust pattern for ensuring data consistency.
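A condensed sketch of that pattern for reference; the table name, the SQL placement, and the helper wrapper are assumptions rather than the worker's literal code, while the requeue options mirror the diff shown earlier in this review:

```ts
import { sql } from "drizzle-orm";
// db, getQueue, and Channel are assumed to come from the repo's packages/db
// and packages/events modules listed in the code graph above.

const recompute = async (
  policyTargetId: string,
  job: { name: string; data: { id: string } },
) => {
  try {
    await db.transaction(async (tx) => {
      // Fail fast if another worker already holds the row lock (NOWAIT -> 55P03)
      await tx.execute(
        sql`SELECT id FROM policy_target WHERE id = ${policyTargetId} FOR UPDATE NOWAIT`,
      );
      // ...delete stale computed rows, re-run the selector queries, insert fresh rows...
    });
  } catch (e) {
    if ((e as { code?: string }).code === "55P03") {
      // Lock held elsewhere: requeue instead of failing the job
      await getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
        job.name,
        job.data,
        { deduplication: { id: job.data.id, ttl: 500 } },
      );
      return;
    }
    throw e;
  }
};
```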
113-118: Good use of job deduplication
Using deduplication with a short TTL (500ms) for the downstream evaluation jobs is an effective way to prevent duplicate work while ensuring timely processing.
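A minimal illustration of that enqueue call; the channel name, job name, payload shape, and `insertedTargets` variable are assumptions for the sketch, while the `deduplication` option matches the pattern shown in this review:

```ts
// Sketch: a burst of selector recomputes within 500 ms collapses into a
// single evaluation per release target thanks to the dedup id + ttl.
for (const { releaseTargetId } of insertedTargets) {
  await getQueue(Channel.EvaluateReleaseTarget).add(
    releaseTargetId,
    { releaseTargetId },
    { deduplication: { id: releaseTargetId, ttl: 500 } },
  );
}
```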
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Tests
Documentation
Chores