
Conversation

@jsbroks
Member

@jsbroks jsbroks commented Apr 23, 2025

Summary by CodeRabbit

  • New Features

    • Added an API endpoint to fetch release targets for a specific resource.
    • Enhanced deployment and environment APIs to support flexible resource selectors with arbitrary properties.
    • Improved OpenAPI schemas and API responses for deployments, environments, and deployment versions, including new status and metadata fields.
    • Introduced background workers for dynamic computation of resource associations and release targets.
    • Expanded end-to-end test coverage for deployments, environments, deployment versions, and release targets.
  • Bug Fixes

    • Improved error handling and status codes for deployment and deployment version update endpoints.
  • Refactor

    • Simplified and modularized background job processing for resource and policy selector computations.
    • Removed unused or redundant internal resource selector computation utilities.
  • Tests

    • Added comprehensive end-to-end test suites and YAML specifications for deployments, environments, deployment versions, and release targets.
    • Updated existing tests for improved assertions and reliability.
  • Documentation

    • Updated OpenAPI specifications to reflect new and modified API endpoints and schemas.
  • Chores

    • Added new test scripts for targeted API testing.

@coderabbitai
Contributor

coderabbitai bot commented Apr 23, 2025

Walkthrough

This update introduces a major refactor to the computation of resource selectors and release targets within the system. The previous approach, which relied on builder classes and mutex-based concurrency control for selector computations, has been replaced with a set of new event-driven workers. These workers handle the computation and updating of resource associations for environments, deployments, and policy targets using transactional row-level locking and job re-queuing on lock conflicts. The worker registry and event types have been updated accordingly. The API is extended with new endpoints and OpenAPI schema changes, and comprehensive end-to-end tests are added or updated to validate the new flows.
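
As a rough sketch of the lock-and-requeue worker pattern described above (all module, table, channel, and column names here are illustrative assumptions rather than the repository's actual code), such a worker might look like this:

import { Pool } from "pg";
import { drizzle } from "drizzle-orm/node-postgres";
import { sql } from "drizzle-orm";
import { Queue, Worker } from "bullmq";
import ms from "ms";

const connection = { host: "localhost", port: 6379 }; // assumed Redis config
const db = drizzle(new Pool()); // assumed Postgres connection via env vars

// Assumed channel names; the real ones live in packages/events/src/types.ts.
const selectorQueue = new Queue("compute-deployment-resource-selector", { connection });
const releaseTargetQueue = new Queue("compute-systems-release-targets", { connection });

new Worker(
  "compute-deployment-resource-selector",
  async (job) => {
    const { id: deploymentId } = job.data as { id: string };
    try {
      await db.transaction(async (tx) => {
        // FOR UPDATE NOWAIT fails immediately if another worker holds the row lock.
        await tx.execute(
          sql`SELECT id FROM deployment WHERE id = ${deploymentId} FOR UPDATE NOWAIT`,
        );
        // Recompute the association rows atomically while the lock is held.
        await tx.execute(
          sql`DELETE FROM computed_deployment_resource WHERE deployment_id = ${deploymentId}`,
        );
        // ...query matching resources and insert fresh rows here...
      });
      // Hand off to the next stage once this deployment's rows are current.
      await releaseTargetQueue.add(deploymentId, { id: deploymentId });
    } catch (error) {
      // 55P03 = Postgres lock_not_available: re-enqueue with a delay instead of failing.
      if ((error as { code?: string }).code === "55P03") {
        await selectorQueue.add(job.name, job.data, { delay: ms("1s") });
        return;
      }
      throw error;
    }
  },
  { connection },
);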

Changes

File(s) / Path(s) Change Summary
apps/event-worker/src/workers/compute-deployment-resource-selector.ts,
compute-environment-resource-selector.ts,
compute-policy-target-release-target-selector.ts,
compute-systems-release-targets.ts
New worker modules added for computing resource selectors and release targets for deployments, environments, policy targets, and systems. Workers use transactional row-level locking, deduplication, and downstream job enqueuing.
apps/event-worker/src/workers/index.ts Registered new workers in the worker registry for new event channels.
apps/event-worker/src/workers/new-deployment.ts,
new-environment.ts,
new-policy.ts,
new-resource.ts,
update-deployment.ts,
update-environment.ts,
updated-resources/index.ts,
update-resource-variable.ts
Refactored existing workers to delegate resource selector and release target computations to the new event-driven workers, removing inline logic and previous builder usage. Some workers are now no-ops or simplified.
packages/events/src/types.ts Added new event channels and updated the channel map for new worker types.
apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/openapi.ts,
route.ts
Added new GET endpoint and OpenAPI schema for fetching release targets associated with a resource.
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts,
resources/route.ts
Removed selector computation and event dispatch after resource updates/creation. Simplified queue job addition.
apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts,
route.ts,
openapi.ts
Changed PATCH handler to use schema-validated body and updated OpenAPI request/response schemas.
apps/webservice/src/app/api/v1/environments/route.ts Removed try-catch error handling and logging from environment creation.
apps/webservice/src/app/api/v1/systems/[systemId]/route.ts Adjusted authorization parameter destructuring and fallback.
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts,
route.ts,
openapi.ts
Improved PATCH handler for deployment versions with pre-checks, transactional updates, and OpenAPI error responses.
apps/webservice/src/app/api/v1/deployment-versions/openapi.ts Changed POST response codes and error schemas.
apps/webservice/src/app/api/v1/deployments/openapi.ts,
[deploymentId]/resources/openapi.ts
Updated OpenAPI schemas for resourceSelector property and response resource objects.
apps/webservice/src/app/api/v1/environments/[environmentId]/resources/openapi.ts Updated OpenAPI schema to reference Resource component for response items.
apps/webservice/src/app/api/v1/openapi.ts Refined DeploymentVersion schema for metadata and status fields.
e2e/tests/api/deployments.spec.ts,
deployments.spec.yaml,
deployment-version.spec.ts,
deployment-version.spec.yaml,
release-targets.spec.ts,
release-targets.spec.yaml,
environments.spec.ts,
environments.spec.yaml,
workspace.spec.ts
Added or updated end-to-end test suites and YAML specs to validate new resource selector, deployment, and release target behaviors.
e2e/tests/api/resource-selectors.spec.ts,
resource-selectors.spec.yaml
Deleted obsolete resource selectors API test suite and YAML spec.
e2e/api/schema.ts,
openapi.v1.json
Updated API schema: new endpoints, refined types, updated request/response schemas, and added ReleaseTarget.
packages/db/src/selectors/compute/compute.ts,
environment-builder.ts,
deployment-builder.ts,
policy-builder.ts,
resource-builder.ts,
mutex.ts
Deleted all compute builder classes and Redis mutex logic for selector computations.
packages/db/src/selectors/index.ts Removed compute() method and ComputeBuilder import from selector module.
packages/rule-engine/src/db/create-policy.ts,
update-policy.ts
Removed selector computation logic and logging from policy create/update flows.
e2e/package.json Added script to run release targets API tests.

Sequence Diagram(s)

sequenceDiagram
  participant API
  participant Worker
  participant DB
  participant Queue

  API->>Queue: Enqueue ComputeDeploymentResourceSelector job (deploymentId)
  Queue->>Worker: Trigger job (deploymentId)
  Worker->>DB: Fetch deployment and system
  alt Deployment not found
    Worker->>Queue: Fail job
  else
    Worker->>DB: Begin transaction
    Worker->>DB: Lock deployment row (FOR UPDATE NOWAIT)
    Worker->>DB: Delete existing computed deployment resources
    alt Has resource selector
      Worker->>DB: Query matching resources
      Worker->>DB: Insert computed deployment resources
    end
    Worker->>DB: Commit transaction
    Worker->>Queue: Enqueue ComputeSystemsReleaseTargets job (systemId)
  end
  Note over Worker: On lock conflict, re-enqueue job with delay
sequenceDiagram
  participant API
  participant Worker
  participant DB
  participant Queue

  API->>Queue: Enqueue ComputeEnvironmentResourceSelector job (environmentId)
  Queue->>Worker: Trigger job (environmentId)
  Worker->>DB: Fetch environment and system
  alt Environment not found
    Worker->>Queue: Fail job
  else
    Worker->>DB: Begin transaction
    Worker->>DB: Lock environment row (FOR UPDATE NOWAIT)
    Worker->>DB: Delete existing computed environment resources
    alt Has resource selector
      Worker->>DB: Query matching resources
      Worker->>DB: Insert computed environment resources
    end
    Worker->>DB: Commit transaction
    Worker->>Queue: Enqueue ComputeSystemsReleaseTargets job (systemId)
  end
  Note over Worker: On lock conflict, re-enqueue job with delay
sequenceDiagram
  participant API
  participant Worker
  participant DB
  participant Queue

  API->>Queue: Enqueue ComputePolicyTargetReleaseTargetSelector job (policyTargetId)
  Queue->>Worker: Trigger job (policyTargetId)
  Worker->>DB: Fetch policy target and policy
  alt Not found
    Worker->>Queue: Fail job
  else
    Worker->>DB: Begin transaction
    Worker->>DB: Lock release targets (FOR UPDATE NOWAIT)
    Worker->>DB: Delete existing computed policy target release targets
    Worker->>DB: Query matching release targets
    Worker->>DB: Insert computed policy target release targets
    Worker->>DB: Commit transaction
    Worker->>Queue: Enqueue EvaluateReleaseTarget job(s)
  end
  Note over Worker: On lock conflict, re-enqueue job with delay

Possibly related PRs

  • ctrlplanedev/ctrlplane#505: Modifies the same environment creation endpoint, focusing on transaction and queue insertion order.
  • ctrlplanedev/ctrlplane#492: Refactors release target upsert logic and modifies related workers, directly involving deployment resource selector computations.
  • ctrlplanedev/ctrlplane#504: Adds mutex-based concurrency control to compute builder classes, which are replaced by the new worker-driven approach in this PR.

Suggested reviewers

  • jsbroks

Poem

In the warren of code, selectors once nested,
Now workers hop in, their logic well-tested.
With queues and with locks, they bound through the night,
Matching resources with transactional might.
Builders and mutexes, now gone from the scene—
The garden grows tidier, the carrots more green!
🐇✨


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

♻️ Duplicate comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)

83-90: Retry logic never executes

Because acquired is always true (see first comment), this branch is never taken.
After adding { tryAcquire: true }, the retry logic will start working as intended.
No action needed after that change – just mentioning for clarity.

apps/event-worker/src/workers/compute-environment-resource-selector.ts (1)

77-84: Retry branch currently unreachable

Same root cause as above: without { tryAcquire: true }, acquired is always true, making this branch unreachable.
After the mutex fix, this will work; no additional change needed.

🧹 Nitpick comments (8)
apps/event-worker/src/utils/with-mutex.ts (1)

1-28: Good implementation of a distributed mutex pattern.

The withMutex utility successfully implements a Redis-based distributed lock mechanism that will help prevent race conditions when multiple workers attempt to process the same entity simultaneously. The implementation properly handles both blocking and non-blocking lock acquisition, executes the provided function when the lock is acquired, and ensures the lock is released in all scenarios.

Consider adding a timeout parameter to prevent indefinite waiting in blocking mode:

export const withMutex = async <T>(
  key: string,
  fn: (mutex: RedisMutex) => Promise<T> | T,
-  opts: { tryAcquire: boolean } = { tryAcquire: false },
+  opts: { tryAcquire: boolean; timeoutMs?: number } = { tryAcquire: false },
): Promise<[boolean, T | null]> => {
  const mutex = new RedisMutex(redis, key);

  try {
    if (opts.tryAcquire) {
      const acquired = await mutex.tryAcquire();
      if (!acquired) return [false, null];
    } else {
-      await mutex.acquire();
+      await mutex.acquire(opts.timeoutMs);
    }

    const result = await fn(mutex);
    return [true, result];
  } catch (error) {
    console.error("Error executing mutex function", { error, key });
    throw error;
  } finally {
    await mutex.release();
  }
};
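
For illustration, a call site using the proposed option could look like the following (timeoutMs is the suggested addition above, not an existing parameter, and the key and function names are hypothetical):

// Hypothetical usage of the suggested timeout option: block for at most 30 s
// before giving up on the lock instead of waiting indefinitely.
const [acquired, result] = await withMutex(
  `compute-environment:${environmentId}`,
  () => recomputeEnvironmentResources(environmentId),
  { tryAcquire: false, timeoutMs: 30_000 },
);
if (!acquired) {
  // The caller decides whether to re-enqueue the job or fail it.
}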
apps/event-worker/src/workers/new-environment.ts (1)

3-14: Consider removing commented code instead of leaving it in place

The recomputeReleaseTargets function has been commented out as it's no longer needed with the new queue-based architecture. Consider removing this commented code entirely to improve maintainability, as it can be retrieved from version control if needed in the future.

-// const recomputeReleaseTargets = async (environment: schema.Environment) => {
-//   const computeBuilder = selector().compute();
-//   await computeBuilder.environments([environment]).resourceSelectors();
-//   const { systemId } = environment;
-//   const system = await db
-//     .select()
-//     .from(schema.system)
-//     .where(eq(schema.system.id, systemId))
-//     .then(takeFirst);
-//   const { workspaceId } = system;
-//   return computeBuilder.allResources(workspaceId).releaseTargets();
-// };
apps/event-worker/src/workers/updated-resources/index.ts (1)

1-1: Remove unused lodash import

_ is no longer referenced after the inline logic was deleted, so the import can be dropped to keep the file clean.

-import _ from "lodash";
apps/event-worker/src/workers/update-environment.ts (1)

65-70: Add a workspace filter to avoid cross‑workspace exit events

The current findMany selects resources purely by selector logic; if multiple workspaces share identical resource structures, you may publish exit hooks for resources that were never part of the system.

-      const exitedResources = await db.query.resource.findMany({
-        where: and(
+      const exitedResources = await db.query.resource.findMany({
+        where: and(
+          eq(schema.resource.workspaceId, job.data.workspaceId),
           selector().query().resources().where(oldSelector).sql(),
           not(selector().query().resources().where(resourceSelector).sql()!),
         ),
       });
apps/event-worker/src/workers/new-resource.ts (1)

29-44: Bulk‑enqueue jobs instead of one‑by‑one loops

Iterating with await …add() issues a separate round‑trip to Redis per job and serialises the loop.
BullMQ supports addBulk, which would cut latency dramatically when a workspace contains many environments/deployments.

-    for (const environment of environments) {
-      await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
-        environment.id,
-        environment,
-        { jobId: environment.id },
-      );
-    }
+    await getQueue(Channel.ComputeEnvironmentResourceSelector).addBulk(
+      environments.map((env) => ({
+        name: env.id,
+        data: env,
+        opts: { jobId: env.id },
+      })),
+    );

Apply the same pattern to the deployment loop below.

apps/event-worker/src/workers/compute-systems-release-targets.ts (1)

118-123: Re‑queue with exponential back‑off to avoid tight retry loops

A fixed 1 s delay can lead to hot‑looping when contention is high.
Consider exponentially increasing the delay or relying on BullMQ’s built‑in backoff option:

-        { delay: ms("1s"), jobId: job.id },
+        {
+          jobId: job.id,
+          backoff: { type: "exponential", delay: ms("1s") },
+        },
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)

102-106: Duplicated jobId may overwrite jobs

jobId: rt.id is fine for uniqueness per release target, but if the same release target is re‑queued rapidly (e.g., selector churn) the add will throw JobExistsError.
Consider incorporating a timestamp or a deterministic suffix, or rely on BullMQ’s default autogenerated IDs.
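
One possible shape for a collision-resistant id, shown only as an illustration (the EvaluateReleaseTarget channel name comes from the sequence diagram above; the suffix scheme is an assumption):

// Illustrative: append a timestamp suffix so rapid re-enqueues of the same
// release target do not collide on jobId.
await getQueue(Channel.EvaluateReleaseTarget).add(rt.id, rt, {
  jobId: `${rt.id}:${Date.now()}`,
});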

apps/event-worker/src/workers/compute-environment-resource-selector.ts (1)

86-90: Validate payload sent to ComputeSystemsReleaseTargets

The job is enqueued with data = environment.system.
Double‑check that the consumer worker indeed expects the entire system object; otherwise pass only the system ID to avoid unnecessary payload bloat.
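
If the consumer only needs the identifier, the enqueue could be trimmed to something like this sketch (property names assumed from the surrounding comments):

// Illustrative: pass only the system id instead of the full system object.
await getQueue(Channel.ComputeSystemsReleaseTargets).add(
  environment.system.id,
  { id: environment.system.id },
  { jobId: environment.system.id },
);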

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3b3d187 and 5dd2f3a.

📒 Files selected for processing (17)
  • apps/event-worker/src/utils/with-mutex.ts (1 hunks)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
  • apps/event-worker/src/workers/index.ts (2 hunks)
  • apps/event-worker/src/workers/new-deployment.ts (1 hunks)
  • apps/event-worker/src/workers/new-environment.ts (1 hunks)
  • apps/event-worker/src/workers/new-policy.ts (1 hunks)
  • apps/event-worker/src/workers/new-resource.ts (2 hunks)
  • apps/event-worker/src/workers/update-deployment.ts (2 hunks)
  • apps/event-worker/src/workers/update-environment.ts (2 hunks)
  • apps/event-worker/src/workers/updated-resources/index.ts (2 hunks)
  • apps/webservice/src/app/api/v1/environments/route.ts (1 hunks)
  • e2e/tests/api/environments.spec.ts (2 hunks)
  • e2e/tests/api/resource-selectors.spec.ts (1 hunks)
  • packages/events/src/types.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/webservice/src/app/api/v1/environments/route.ts
  • e2e/tests/api/resource-selectors.spec.ts
  • apps/event-worker/src/workers/new-policy.ts
  • apps/event-worker/src/workers/new-deployment.ts
  • apps/event-worker/src/workers/update-environment.ts
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
  • e2e/tests/api/environments.spec.ts
  • apps/event-worker/src/utils/with-mutex.ts
  • apps/event-worker/src/workers/compute-systems-release-targets.ts
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
  • apps/event-worker/src/workers/update-deployment.ts
  • apps/event-worker/src/workers/new-environment.ts
  • apps/event-worker/src/workers/new-resource.ts
  • apps/event-worker/src/workers/index.ts
  • apps/event-worker/src/workers/updated-resources/index.ts
  • packages/events/src/types.ts
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧬 Code Graph Analysis (3)
e2e/tests/api/environments.spec.ts (1)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
packages/events/src/index.ts (2)
  • createWorker (10-25)
  • getQueue (28-34)
packages/db/src/schema/policy.ts (1)
  • policyTarget (64-78)
packages/db/src/client.ts (1)
  • db (15-15)
apps/event-worker/src/utils/with-mutex.ts (1)
  • withMutex (5-28)
packages/db/src/selectors/index.ts (1)
  • selector (18-18)
apps/event-worker/src/workers/new-resource.ts (4)
packages/db/src/client.ts (1)
  • db (15-15)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
packages/events/src/index.ts (1)
  • getQueue (28-34)
packages/db/src/schema/deployment.ts (1)
  • deployment (69-93)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (19)
e2e/tests/api/resource-selectors.spec.ts (1)

18-18: Consistent spacing improves readability.

This added blank line helps maintain consistent code style by separating setup steps in the beforeAll hook.

packages/events/src/types.ts (2)

32-35: Well-structured channel additions for the queue-based architecture.

The new channels follow a consistent naming pattern and align with the overall objective of introducing a queue-based architecture for computing resource selectors and release targets.


67-70: Appropriate payload structure for the new channel types.

Each new channel type uses a consistent and simple payload structure with just an id property, which is appropriate for these computation tasks where the entity ID is sufficient to perform the necessary operations.
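
A sketch of what such additions typically look like (the channel names come from this PR; the string values and type shapes are assumptions):

// Sketch of id-only payloads for the new channels.
export enum Channel {
  // ...existing channels...
  ComputeDeploymentResourceSelector = "compute-deployment-resource-selector",
  ComputeEnvironmentResourceSelector = "compute-environment-resource-selector",
  ComputePolicyTargetReleaseTargetSelector = "compute-policy-target-release-target-selector",
  ComputeSystemsReleaseTargets = "compute-systems-release-targets",
}

export type ChannelMap = {
  [Channel.ComputeDeploymentResourceSelector]: { id: string };
  [Channel.ComputeEnvironmentResourceSelector]: { id: string };
  [Channel.ComputePolicyTargetReleaseTargetSelector]: { id: string };
  [Channel.ComputeSystemsReleaseTargets]: { id: string };
};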

e2e/tests/api/environments.spec.ts (2)

12-13: Improved code organization.

Moving the yamlPath constant to module scope is a good practice as it avoids redefining it on each test run and makes it available to the entire module.


69-73: Enhanced test assertions for better validation.

The explicit assertions for response status and data existence improve the test robustness by verifying the API's behavior more thoroughly. The non-null assertion used after verifying that the data is defined ensures type safety while maintaining clear intent.

apps/event-worker/src/workers/new-policy.ts (3)

1-4: Updated imports to support the queue-based architecture

The imports have been updated to include database-related functionality while removing unnecessary imports that were previously used for direct selector computations. This aligns with the architectural shift to queue-based processing.


7-9: LGTM: Replaced inline selector computation with explicit database query

The code now fetches policy targets directly from the database instead of using selector chains, which aligns with the new architecture approach.


11-16:

✅ Verification successful

Queue-based delegation implemented correctly

The worker now delegates the computation of release target selectors to a dedicated worker by enqueuing jobs on the ComputePolicyTargetReleaseTargetSelector channel. This follows the PR objective of introducing a modular, queue-based architecture for selector computations.

Consider adding error handling for potential database or queue failures.


🏁 Script executed:

#!/bin/bash
# Check if the ComputePolicyTargetReleaseTargetSelector channel is properly registered
grep -r "ComputePolicyTargetReleaseTargetSelector" --include="*.ts" .

Length of output: 1102


Queue-based delegation to ComputePolicyTargetReleaseTargetSelector channel is correct

  • Verified that the ComputePolicyTargetReleaseTargetSelector channel is declared in packages/events/src/types.ts and registered in apps/event-worker/src/workers/index.ts.
  • The loop in apps/event-worker/src/workers/new-policy.ts correctly enqueues each policyTarget for processing.
  • Consider wrapping the getQueue(...).add(...) call in a try/catch or appending a .catch() to handle potential database or queue failures gracefully.
apps/webservice/src/app/api/v1/environments/route.ts (5)

34-38: Simplified database query with promise chaining

The environment lookup has been restructured to use promise chaining with then(takeFirstOrNull) instead of await syntax, which aligns with the coding guideline that allows for various error handling patterns including Promise chains.


40-68: Well-structured transaction for environment creation

The transaction logic correctly handles:

  1. Processing version channels
  2. Upserting the environment
  3. Creating jobs for the new environment
  4. Returning the environment with metadata

This ensures atomic operations when creating an environment.


70-74: Proper event enqueueing for environment updates

When an existing environment is found, an update event is enqueued with both the new environment data and the old selector, which is necessary for proper diff computation in the worker.


76-77: Proper event enqueueing for new environments

When creating a new environment, a new environment event is correctly enqueued with the environment data.


79-79: Clean response handling

The response correctly returns the environment data as JSON, maintaining a clear API contract.

apps/event-worker/src/workers/new-deployment.ts (2)

1-1: Simplified imports for queue-based architecture

The imports have been streamlined to only include event-related functionality, removing database and schema imports that were previously used for direct computations.


6-10:

✅ Verification successful

Replaced inline processing with queue-based delegation

The worker now delegates all computation by enqueuing a job on the ComputeDeploymentResourceSelector channel instead of performing inline database operations. This is aligned with the PR objective of moving to a queue-based architecture.

The jobId is explicitly set to match the deployment ID, which ensures consistent job tracking.


🏁 Script executed:

#!/bin/bash
# Check if the ComputeDeploymentResourceSelector channel and worker are properly registered
grep -r "ComputeDeploymentResourceSelector" --include="*.ts" .

Length of output: 1256


✅ Resolved: Queue-based computation delegation is correctly implemented

All references to ComputeDeploymentResourceSelector are properly registered and the new-deployment worker now enqueues jobs instead of performing inline DB work:

  • Location: apps/event-worker/src/workers/new-deployment.ts (lines 6–10)
  • Delegates computation by calling
    await getQueue(Channel.ComputeDeploymentResourceSelector).add(
      job.data.id,
      job.data,
      { jobId: job.data.id },
    );
  • jobId explicitly set to the deployment ID for consistent tracking
  • Verified registrations in:
    • apps/event-worker/src/workers/index.ts
    • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
    • Additional uses in new-resource.ts, update-deployment.ts, and updated-resources/index.ts

No further changes needed.

apps/event-worker/src/workers/new-environment.ts (2)

1-1: Simplified imports for queue-based architecture

Imports have been updated to only include event-related functionality, removing database and utility imports that were previously used for direct computations.


21-25:

✅ Verification successful

Replaced inline processing with queue-based delegation

The worker now delegates computation by enqueuing a job on the ComputeEnvironmentResourceSelector channel instead of directly computing release targets. This aligns with the PR objective of introducing a queue-based architecture.

The jobId is explicitly set to match the environment ID, which ensures consistent job tracking.


🏁 Script executed:

#!/bin/bash
# Check if the ComputeEnvironmentResourceSelector channel and worker are properly registered
grep -r "ComputeEnvironmentResourceSelector" --include="*.ts" .

Length of output: 1131


Queue-based delegation for ComputeEnvironmentResourceSelector verified

The ComputeEnvironmentResourceSelector channel and its worker are correctly registered in:

  • packages/events/src/types.ts
  • apps/event-worker/src/workers/index.ts
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts

The enqueue call in apps/event-worker/src/workers/new-environment.ts is accurate and requires no further changes.

apps/event-worker/src/workers/index.ts (1)

47-55: Registration looks good

All newly introduced workers are properly registered and typed.
No further action required.

apps/event-worker/src/workers/compute-systems-release-targets.ts (1)

138-144: Verify column alias emitted by the join

drizzle (and other SQL builders) will alias joined columns based on table names.
The destructuring for (const { policy_target: policyTarget } of policyTargets) assumes the alias is policy_target.

If the driver instead produces { policyTarget: … } the loop will silently produce undefined.
Double‑check the generated column naming or use an explicit select({ policyTarget: schema.policyTarget.* }) to make this deterministic.
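
One way to make the result shape deterministic, sketched under the assumption that db and schema are the repository's existing exports and with a placeholder join condition:

import { eq } from "drizzle-orm";

// Sketch: name the joined tables explicitly so the result keys do not depend
// on the driver's default aliasing.
const rows = await db
  .select({ policyTarget: schema.policyTarget })
  .from(schema.policyTarget)
  // Placeholder join condition; the real worker joins on its own columns.
  .innerJoin(schema.policy, eq(schema.policy.id, schema.policyTarget.policyId));

for (const { policyTarget } of rows) {
  // policyTarget is typed and deterministically named here.
}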

Comment on lines 27 to 32
await getQueue(Channel.ComputeDeploymentResourceSelector).add(
resource.id,
resource,
{ jobId: resource.id },
);


⚠️ Potential issue

Wrong queue/channel + payload combination will crash the downstream worker

computeDeploymentResourceSelectorWorkerEvent expects a payload containing a deployment (or at minimum { id: <deploymentId> }).
Here, we enqueue a resource object; the worker will look up a deployment whose id === resource.id, fail the query, and throw “Deployment not found”, causing the job to retry forever.

-    await getQueue(Channel.ComputeDeploymentResourceSelector).add(
-      resource.id,
-      resource,
-      { jobId: resource.id },
-    );
+    // Option A – enqueue one job per deployment that might be affected
+    // const deployments = await db.query.deployment.findMany({ …filter by workspace/system… });
+    // for (const d of deployments) {
+    //   await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+    //     d.id,
+    //     d,
+    //     { jobId: d.id },
+    //   );
+    // }
+
+    // Option B – if the intent was system‑level recomputation, target that specific queue instead:
+    // await getQueue(Channel.ComputeSystemsReleaseTargets).add(
+    //   resource.workspaceId,
+    //   { workspaceId: resource.workspaceId },
+    //   { jobId: `ws:${resource.workspaceId}` },
+    // );

Please adopt whichever option matches the design, but sending a Resource to ComputeDeploymentResourceSelector is definitely incorrect.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
await getQueue(Channel.ComputeDeploymentResourceSelector).add(
resource.id,
resource,
{ jobId: resource.id },
);
// Option A – enqueue one job per deployment that might be affected
// const deployments = await db.query.deployment.findMany({ …filter by workspace/system… });
// for (const d of deployments) {
// await getQueue(Channel.ComputeDeploymentResourceSelector).add(
// d.id,
// d,
// { jobId: d.id },
// );
// }
// Option B – if the intent was system‑level recomputation, target that specific queue instead:
// await getQueue(Channel.ComputeSystemsReleaseTargets).add(
// resource.workspaceId,
// { workspaceId: resource.workspaceId },
// { jobId: `ws:${resource.workspaceId}` },
// );

Comment on lines 40 to 43
getQueue(Channel.ComputeDeploymentResourceSelector).add(data.id, data, {
jobId: data.id,
});
if (deployment == null)
throw new Error(`Deployment not found: ${data.id}`);

const { releaseTargets } = deployment;
const currentResources = releaseTargets.map((rt) => rt.resource);


🛠️ Refactor suggestion

Await the enqueue operation to surface queue‑level failures

queue.add() returns a promise. By not awaiting it, any network / Redis error will be swallowed and the worker will ACK the job even though the follow‑up job was never scheduled, silently breaking the pipeline.

-      getQueue(Channel.ComputeDeploymentResourceSelector).add(data.id, data, {
-        jobId: data.id,
-      });
+      await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+        data.id,
+        data,
+        { jobId: data.id },
+      );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
getQueue(Channel.ComputeDeploymentResourceSelector).add(data.id, data, {
jobId: data.id,
});
if (deployment == null)
throw new Error(`Deployment not found: ${data.id}`);
const { releaseTargets } = deployment;
const currentResources = releaseTargets.map((rt) => rt.resource);
await getQueue(Channel.ComputeDeploymentResourceSelector).add(
data.id,
data,
{ jobId: data.id },
);

Comment on lines 70 to 86
const { deployments, environments } = system;
const deploymentIds = deployments.map((d) => d.id);
const environmentIds = environments.map((e) => e.id);
const { workspaceId } = system;

const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>
db.transaction(async (tx) => {
const previousReleaseTargets = await tx
.delete(schema.releaseTarget)
.where(
or(
inArray(schema.releaseTarget.deploymentId, deploymentIds),
inArray(schema.releaseTarget.environmentId, environmentIds),
),
)
.returning();

🛠️ Refactor suggestion

Handle empty deployment / environment lists to avoid IN () SQL

If a system is created without deployments or environments, the inArray() calls will yield an invalid IN () clause or (depending on the dialect) match everything.
Add an early return or guard:

+    if (deploymentIds.length === 0 && environmentIds.length === 0) {
+      return;
+    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const { deployments, environments } = system;
const deploymentIds = deployments.map((d) => d.id);
const environmentIds = environments.map((e) => e.id);
const { workspaceId } = system;
const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>
db.transaction(async (tx) => {
const previousReleaseTargets = await tx
.delete(schema.releaseTarget)
.where(
or(
inArray(schema.releaseTarget.deploymentId, deploymentIds),
inArray(schema.releaseTarget.environmentId, environmentIds),
),
)
.returning();
const { deployments, environments } = system;
const deploymentIds = deployments.map((d) => d.id);
const environmentIds = environments.map((e) => e.id);
+ if (deploymentIds.length === 0 && environmentIds.length === 0) {
+ return;
+ }
const { workspaceId } = system;
const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>
db.transaction(async (tx) => {
const previousReleaseTargets = await tx
.delete(schema.releaseTarget)
.where(
or(
inArray(schema.releaseTarget.deploymentId, deploymentIds),
inArray(schema.releaseTarget.environmentId, environmentIds),
),
)
.returning();

Comment on lines 68 to 73
.then((rt) =>
rt.map((rt) => ({
policyTargetId: policyTarget.id,
releaseTargetId: rt.release_target.id,
})),
);

⚠️ Potential issue

Incorrect property name on joined result

rt.release_target.id assumes Drizzle returns an underscored alias.
Joins in Drizzle return an object shaped like { releaseTarget: { … } } by default, so this will be undefined, causing all releaseTargetId values to be undefined.

-              releaseTargetId: rt.release_target.id,
+              releaseTargetId: rt.releaseTarget.id,

Please verify the actual shape returned by drizzle and adjust accordingly.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
.then((rt) =>
rt.map((rt) => ({
policyTargetId: policyTarget.id,
releaseTargetId: rt.release_target.id,
})),
);
.then((rt) =>
rt.map((rt) => ({
policyTargetId: policyTarget.id,
releaseTargetId: rt.releaseTarget.id,
})),
);

Comment on lines 49 to 65
and(
isNull(schema.resource.deletedAt),
selector()
.query()
.resources()
.where(policyTarget.resourceSelector)
.sql(),
selector()
.query()
.deployments()
.where(policyTarget.deploymentSelector)
.sql(),
selector()
.query()
.environments()
.where(policyTarget.environmentSelector)
.sql(),

💡 Verification agent

🧩 Analysis chain

Potential null‑selector crash

policyTarget.resourceSelector, .deploymentSelector, and .environmentSelector can be null (they are nullable in the schema).
If any of them is null, the builder’s where() will likely throw because it expects a condition object.

Guard the call or fall back to .sql(true) / a tautology when the selector is missing.


🏁 Script executed:

#!/bin/bash
# Search for PolicyTarget type definitions
rg -n "interface PolicyTarget" -A5
rg -n "type PolicyTarget" -A5

# Inspect usages of resourceSelector, deploymentSelector, and environmentSelector
rg -n "\.resourceSelector" -A3
rg -n "\.deploymentSelector" -A3
rg -n "\.environmentSelector" -A3

Length of output: 69330


🏁 Script executed:

#!/bin/bash
# Locate the target file
file_path=$(fd --full-path compute-policy-target-release-target-selector.ts)

# Display lines 1-200 for context
sed -n '1,200p' "$file_path"

Length of output: 3548


Prevent null‑selector crash in computePolicyTargetReleaseTargetSelectorWorkerEvent

Guard each .where(policyTarget.*Selector) call so that a missing selector doesn’t cause the query builder to throw. If a selector is null, fall back to a no‑op filter (e.g. .sql(true)).

Locations to update:

  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts lines ~54–60

Suggested diff:

-         .where(
-           and(
-             isNull(schema.resource.deletedAt),
-             selector()
-               .query()
-               .resources()
-               .where(policyTarget.resourceSelector)
-               .sql(),
-             selector()
-               .query()
-               .deployments()
-               .where(policyTarget.deploymentSelector)
-               .sql(),
-             selector()
-               .query()
-               .environments()
-               .where(policyTarget.environmentSelector)
-               .sql(),
-           ),
-         )
+         .where(
+           and(
+             isNull(schema.resource.deletedAt),
+             policyTarget.resourceSelector
+               ? selector()
+                   .query()
+                   .resources()
+                   .where(policyTarget.resourceSelector)
+                   .sql()
+               : selector().query().resources().sql(true),
+             policyTarget.deploymentSelector
+               ? selector()
+                   .query()
+                   .deployments()
+                   .where(policyTarget.deploymentSelector)
+                   .sql()
+               : selector().query().deployments().sql(true),
+             policyTarget.environmentSelector
+               ? selector()
+                   .query()
+                   .environments()
+                   .where(policyTarget.environmentSelector)
+                   .sql()
+               : selector().query().environments().sql(true),
+           ),
+         )

This ensures that when any of the selectors is null, the query will include a tautology rather than throwing.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
and(
isNull(schema.resource.deletedAt),
selector()
.query()
.resources()
.where(policyTarget.resourceSelector)
.sql(),
selector()
.query()
.deployments()
.where(policyTarget.deploymentSelector)
.sql(),
selector()
.query()
.environments()
.where(policyTarget.environmentSelector)
.sql(),
.where(
and(
isNull(schema.resource.deletedAt),
policyTarget.resourceSelector
? selector()
.query()
.resources()
.where(policyTarget.resourceSelector)
.sql()
: selector()
.query()
.resources()
.sql(true),
policyTarget.deploymentSelector
? selector()
.query()
.deployments()
.where(policyTarget.deploymentSelector)
.sql()
: selector()
.query()
.deployments()
.sql(true),
policyTarget.environmentSelector
? selector()
.query()
.environments()
.where(policyTarget.environmentSelector)
.sql()
: selector()
.query()
.environments()
.sql(true),
),
)

Comment on lines 41 to 75
const [acquired] = await withMutex(key, () => {
db.transaction(async (tx) => {
await tx
.delete(schema.computedEnvironmentResource)
.where(
eq(
schema.computedEnvironmentResource.environmentId,
environment.id,
),
);

if (environment.resourceSelector == null) return;

const resources = await tx.query.resource.findMany({
where: and(
eq(schema.resource.workspaceId, workspaceId),
selector()
.query()
.resources()
.where(environment.resourceSelector)
.sql(),
isNull(schema.resource.deletedAt),
),
});

const computedEnvironmentResources = resources.map((r) => ({
environmentId: environment.id,
resourceId: r.id,
}));

await tx
.insert(schema.computedEnvironmentResource)
.values(computedEnvironmentResources);
});
});

⚠️ Potential issue

db.transaction never awaited – main work is skipped

The function passed to withMutex uses a block body without a return, so it returns undefined.
As a result, the transaction executes asynchronously and withMutex resolves immediately, releasing the mutex before the DB work completes (and acquired is always true, see below).

-    const [acquired] = await withMutex(key, () => {
-      db.transaction(async (tx) => {
+    const [acquired] = await withMutex(key, () => {
+      return db.transaction(async (tx) => {
         …
       });
     });

Also pass { tryAcquire: true } so the retry branch becomes functional:

-    const [acquired] = await withMutex(key, () => {
+    const [acquired] = await withMutex(
+      key,
+      () => {
+        return db.transaction(async (tx) => { … });
+      },
+      { tryAcquire: true },
+    );

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
apps/event-worker/src/workers/update-resource-variable.ts (1)

1-1: Remove the unused lodash import

_ is never referenced in this file after the refactor, so the import increases bundle size and trips most linters.

-import _ from "lodash";
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5dd2f3a and dcc9168.

📒 Files selected for processing (13)
  • apps/event-worker/src/workers/new-environment.ts (1 hunks)
  • apps/event-worker/src/workers/update-resource-variable.ts (1 hunks)
  • apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1 hunks)
  • apps/webservice/src/app/api/v1/resources/route.ts (2 hunks)
  • packages/db/src/selectors/compute/compute.ts (0 hunks)
  • packages/db/src/selectors/compute/deployment-builder.ts (0 hunks)
  • packages/db/src/selectors/compute/environment-builder.ts (0 hunks)
  • packages/db/src/selectors/compute/mutex.ts (0 hunks)
  • packages/db/src/selectors/compute/policy-builder.ts (0 hunks)
  • packages/db/src/selectors/compute/resource-builder.ts (0 hunks)
  • packages/db/src/selectors/index.ts (0 hunks)
  • packages/rule-engine/src/db/create-policy.ts (1 hunks)
  • packages/rule-engine/src/db/update-policy.ts (1 hunks)
💤 Files with no reviewable changes (7)
  • packages/db/src/selectors/index.ts
  • packages/db/src/selectors/compute/mutex.ts
  • packages/db/src/selectors/compute/compute.ts
  • packages/db/src/selectors/compute/resource-builder.ts
  • packages/db/src/selectors/compute/deployment-builder.ts
  • packages/db/src/selectors/compute/environment-builder.ts
  • packages/db/src/selectors/compute/policy-builder.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/event-worker/src/workers/new-environment.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts
  • packages/rule-engine/src/db/create-policy.ts
  • apps/event-worker/src/workers/update-resource-variable.ts
  • packages/rule-engine/src/db/update-policy.ts
  • apps/webservice/src/app/api/v1/resources/route.ts
🧬 Code Graph Analysis (1)
apps/webservice/src/app/api/v1/resources/route.ts (1)
packages/events/src/index.ts (1)
  • getQueue (28-34)
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (10)
apps/webservice/src/app/api/v1/resources/route.ts (2)

5-5: Simplified import statement aligns with architectural changes.

The removal of the selector import aligns with the architectural shift to queue-based selector computation rather than inline processing.


75-78: Well-implemented queue-based approach for selector computation.

The code has been improved by replacing inline selector computation with a direct queue job. This change:

  1. Decreases API response time by offloading computation to background workers
  2. Improves separation of concerns in the codebase
  3. Creates a more consistent pattern for handling resource processing
  4. Makes the code more maintainable by reducing complexity

The use of jobId: insertedResource.id ensures idempotency, preventing duplicate processing if the job is requeued.

packages/rule-engine/src/db/update-policy.ts (2)

4-4: Refactored imports align with queue-based architecture

The removal of selector-related imports matches the architectural shift to a queue-based system for computing resource selectors and release targets.

This change is part of the broader refactoring to delegate selector computations to dedicated worker events rather than performing them inline. The previous call to selector().compute().policies([policy.id]).releaseTargetSelectors() has been removed from this file, and the computation will now be handled by workers triggered through queue events.


139-192: LGTM: Function maintains core functionality while delegating selector computations

The updatePolicyInTx function now focuses solely on data persistence operations, with the release target selector computation delegated to worker events.

This approach improves separation of concerns by:

  1. Keeping database operations focused on CRUD functionality
  2. Moving potentially expensive selector computations to asynchronous workers
  3. Allowing better scaling and failure isolation for computation-heavy operations

The transaction pattern with Promise.all for concurrent updates is maintained, which is a good practice.
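
For reference, the pattern being praised is roughly the following (an illustrative shape only; the helper names and fields are assumptions, not the repository's code):

// Illustrative: run the independent child-table writes concurrently inside a
// single transaction so the whole policy update commits or rolls back together.
await db.transaction(async (tx) => {
  await tx
    .update(schema.policy)
    .set({ name: input.name, description: input.description })
    .where(eq(schema.policy.id, policyId));

  await Promise.all([
    upsertPolicyTargets(tx, policyId, input.targets), // assumed helper
    upsertDenyWindows(tx, policyId, input.denyWindows), // assumed helper
  ]);
});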

packages/rule-engine/src/db/create-policy.ts (2)

4-4: Refactored imports align with queue-based architecture

The removal of selector-related imports matches the architectural shift to a queue-based system for computing resource selectors and release targets.

This change is part of the broader refactoring to delegate selector computations to dedicated worker events rather than performing them inline. The previous computations related to release target selectors have been removed from this file, aligning with the strategy to handle these operations in dedicated workers.


88-192: LGTM: Function maintains core functionality while delegating selector computations

The createPolicyInTx function correctly focuses on data persistence while relying on the queue event at line 181 to trigger worker-based selector computations.

This architecture improvement:

  1. Maintains the core policy creation functionality
  2. Delegates complex selector computations to specialized workers
  3. Uses distributed mutexes (implemented elsewhere) to ensure exclusive processing
  4. Provides better isolation of concerns, making the code more maintainable

The retained queue event on line 181 (getQueue(Channel.NewPolicy).add(policy.id, policy)) serves as the trigger for the new worker-based computation of selectors.

apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (4)

5-5: Import change aligns with architecture updates

The removal of the selector import from @ctrlplane/db is consistent with the architectural shift to delegate selector computations to dedicated background workers instead of performing them inline within API handlers.


106-116: Consistent response format maintained

The endpoint still constructs a properly formatted response after the resource update, maintaining the API contract even after removing the selector computation. The metadata formatting is preserved correctly.


118-125: Error handling remains unchanged

The error handling remains comprehensive, catching all errors and converting them to appropriate API responses with logging. This ensures the API behavior remains consistent and predictable even with architectural changes.


92-126:

❓ Verification inconclusive

Confirm queue job is being properly enqueued elsewhere

The PATCH handler no longer computes resource selectors or adds the resource to the UpdatedResource queue after updates. This implementation correctly focuses the API endpoint on its core responsibility of updating the resource. However, we should verify that the resource selector computation and necessary queue jobs are now being handled by the new dedicated worker system.


🏁 Script executed:

#!/bin/bash
# Search for code that handles resource updates in the worker system
echo "Checking for UpdatedResource worker implementation..."
rg -A 15 "Channel\.UpdatedResource.*add" --glob "*.ts"

echo "Checking for resource selector computation workers..."
rg -A 10 "ComputeEnvironmentResourceSelector|ComputeDeploymentResourceSelector" --glob "*.ts" 

Length of output: 16475


🏁 Script executed:

#!/bin/bash
echo "Locate all references to upsertResources across the repo:"
rg -n "upsertResources" --glob "*.ts" --glob "*.js"

echo -e "\nShow import and context in route.ts:"
rg -n "upsertResources" -C3 apps/webservice/src/app/api/v1/resources/\[resourceId\]/route.ts

echo -e "\nDisplay the implementation of upsertResources (first 200 lines):"
# Find the file defining upsertResources
file=$(rg -l "export function upsertResources" -g "*.ts")
if [ -n "$file" ]; then
  echo "Found in $file"
  sed -n '1,200p' "$file"
else
  echo "Definition not found by name; searching for function signature..."
  rg -n "function upsertResources" -C5 -g "*.ts"
fi

Length of output: 2461


🏁 Script executed:

#!/bin/bash
echo "Searching for Channel.UpdatedResource in event-worker worker mapping..."
rg -n "UpdatedResource" apps/event-worker/src/workers/index.ts || echo "No Channel.UpdatedResource entry in index.ts"

echo -e "\nDisplaying the first 50 lines of the updated-resources worker..."
if [ -f "apps/event-worker/src/workers/updated-resources/index.ts" ]; then
  sed -n '1,50p' apps/event-worker/src/workers/updated-resources/index.ts
else
  echo "Worker file not found: apps/event-worker/src/workers/updated-resources/index.ts"
fi

Length of output: 2098


🏁 Script executed:

#!/bin/bash
echo "Printing first 200 lines of packages/db/src/upsert-resources.ts to inspect upsertResources implementation..."
sed -n '1,200p' packages/db/src/upsert-resources.ts

echo -e "\nSearching within upsert-resources.ts for queue interactions (getQueue or Channel.UpdatedResource)..."
rg -n "getQueue|UpdatedResource" -C3 packages/db/src/upsert-resources.ts

Length of output: 5218


Verify UpdatedResource jobs are enqueued for API‐driven updates

The PATCH handler now focuses solely on persisting changes via upsertResources and no longer computes selectors or enqueues Channel.UpdatedResource. We’ve confirmed:

  • upsertResources (and the API route) do not call getQueue(Channel.UpdatedResource)
  • An updatedResourceWorker is registered in apps/event-worker to process Channel.UpdatedResource
  • Other subsystems (provider‐scan, agent‐socket) do enqueue UpdatedResource jobs

Please ensure that updates coming through this API still publish to the UpdatedResource queue—for example, by adding a queue call in this handler, wiring it into your job‐dispatch layer, or via a DB trigger—so selector recomputation continues as expected.
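
A minimal sketch of the first option, assuming the handler's existing getQueue and Channel.UpdatedResource exports and that the updated resource is already in scope:

// Sketch: after the resource has been persisted, publish it so the
// updated-resources worker still recomputes selectors for API-driven edits.
await getQueue(Channel.UpdatedResource).add(resource.id, resource, {
  jobId: resource.id,
});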

Comment on lines +7 to 9
async () => {
// todo:?
},

💡 Verification agent

🧩 Analysis chain

No‑op worker will silently swallow UpdateResourceVariable events

The worker is still subscribed to Channel.UpdateResourceVariable, but the handler body is empty.
Current behaviour:

  1. Job is dequeued and immediately acknowledged (the promise resolves).
  2. Nothing is published to the new compute queues.
  3. Any downstream logic that relied on the selector / release‑target recomputation is now skipped.

Unless this channel is completely deprecated, leaving the worker in place will create hard‑to‑trace data loss.
Two safer options:

  1. Delete the worker and stop registering it, so producers receive “no consumer” alerts.
  2. Delegate the event to the new compute workers instead of ignoring it.

Example delegation stub (pseudo‑code ‑ adjust API names accordingly):

-  async () => {
-    // todo:?
+  async (payload, ctx) => {
+    // TEMP: delegate to new compute flow until this channel is removed
+    await ctx.producer.produce(
+      Channel.ComputeEnvironmentResourceSelector,
+      payload
+    );
   },

If the channel is obsolete, remove the worker entirely to avoid confusion.

Run the following to confirm no other worker handles UpdateResourceVariable anymore:


🏁 Script executed:

#!/bin/bash
# Search for remaining consumers of the channel
rg --context 2 'Channel\.UpdateResourceVariable'

Length of output: 1758


Empty handler for UpdateResourceVariable will silently swallow events

The updateResourceVariableWorker in apps/event-worker/src/workers/update-resource-variable.ts is still subscribed to Channel.UpdateResourceVariable but its handler is a no‑op. A quick search confirms no other worker consumes this channel, so every event will be dequeued and acknowledged without any downstream processing—likely causing invisible data loss.

Two remediation paths:

  • Remove the worker
    If UpdateResourceVariable is deprecated, unregister it in apps/event-worker/src/workers/index.ts:

    -  [Channel.UpdateResourceVariable]: updateResourceVariableWorker,

    and delete update-resource-variable.ts.

  • Delegate to the new compute flow
    Forward incoming payloads to your compute queue until the old channel is removed:

     export const updateResourceVariableWorker = createWorker(
       Channel.UpdateResourceVariable,
    -  async () => {
    -    // todo:?
    +  async (payload, ctx) => {
    +    // TEMP: delegate to new compute flow until this channel is removed
    +    await ctx.producer.produce(
    +      Channel.ComputeEnvironmentResourceSelector,
    +      payload
    +    );
     },

Please apply one of these fixes to avoid hard‑to‑trace data loss.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original:

  async () => {
    // todo:?
  },

Suggested:

export const updateResourceVariableWorker = createWorker(
  Channel.UpdateResourceVariable,
  async (payload, ctx) => {
    // TEMP: delegate to new compute flow until this channel is removed
    await ctx.producer.produce(
      Channel.ComputeEnvironmentResourceSelector,
      payload
    );
  },
);

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (5)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (3)

21-22: Add tryAcquire: true option to mutex acquisition

The current mutex call will always return acquired = true because it doesn't use the tryAcquire option, making the condition check at line 84 redundant.

Apply this fix to correctly implement non-blocking mutex acquisition:

-    const [acquired, rts] = await withMutex(key, () =>
+    const [acquired, rts] = await withMutex(
+      key,
+      () =>
       db.transaction(async (tx) => {
         ...
       }),
+      { tryAcquire: true },
+    );

49-65: Prevent null-selector crash in where conditions

The current implementation doesn't handle cases where selectors are null, which can cause the query builder to throw errors since policyTarget.resourceSelector, .deploymentSelector, and .environmentSelector are nullable in the schema.

Add null checks to prevent potential crashes:

          .where(
            and(
              isNull(schema.resource.deletedAt),
              selector()
                .query()
                .resources()
-               .where(policyTarget.resourceSelector)
+               .where(policyTarget.resourceSelector ?? {})
                .sql(),
              selector()
                .query()
                .deployments()
-               .where(policyTarget.deploymentSelector)
+               .where(policyTarget.deploymentSelector ?? {})
                .sql(),
              selector()
                .query()
                .environments()
-               .where(policyTarget.environmentSelector)
+               .where(policyTarget.environmentSelector ?? {})
                .sql(),
            ),
          )

Or use a more robust approach with conditional checks:

          .where(
            and(
              isNull(schema.resource.deletedAt),
-             selector()
-               .query()
-               .resources()
-               .where(policyTarget.resourceSelector)
-               .sql(),
+             policyTarget.resourceSelector
+               ? selector()
+                   .query()
+                   .resources()
+                   .where(policyTarget.resourceSelector)
+                   .sql()
+               : selector().query().resources().sql(true),
-             selector()
-               .query()
-               .deployments()
-               .where(policyTarget.deploymentSelector)
-               .sql(),
+             policyTarget.deploymentSelector
+               ? selector()
+                   .query()
+                   .deployments()
+                   .where(policyTarget.deploymentSelector)
+                   .sql()
+               : selector().query().deployments().sql(true),
-             selector()
-               .query()
-               .environments()
-               .where(policyTarget.environmentSelector)
-               .sql(),
+             policyTarget.environmentSelector
+               ? selector()
+                   .query()
+                   .environments()
+                   .where(policyTarget.environmentSelector)
+                   .sql()
+               : selector().query().environments().sql(true),
            ),
          )

68-73: Fix property access on joined result

The current implementation uses rt.release_target.id which assumes Drizzle returns underscored aliases for joined tables. By default, Drizzle uses camelCase property names.

Apply this fix:

          .then((rt) =>
            rt.map((rt) => ({
              policyTargetId: policyTarget.id,
-              releaseTargetId: rt.release_target.id,
+              releaseTargetId: rt.releaseTarget.id,
            })),
          );
apps/event-worker/src/workers/compute-systems-release-targets.ts (2)

70-74: Handle empty deployment/environment lists to avoid invalid SQL

When a system has no deployments or environments, the inArray() calls on lines 82-83 could generate invalid SQL statements with empty IN () clauses.

Add an early return when both lists are empty:

    const { deployments, environments } = system;
    const deploymentIds = deployments.map((d) => d.id);
    const environmentIds = environments.map((e) => e.id);
    const { workspaceId } = system;

+   if (deploymentIds.length === 0 && environmentIds.length === 0) {
+     return;
+   }

88-98: Ensure uniqueness before bulk-insert to prevent primary-key violations

The findMatchingEnvironmentDeploymentPairs function can generate duplicate rows when a resource matches multiple computed-deployment rows through the LEFT JOIN, which could lead to primary key violations during insertion.

Add deduplication before insertion:

        const releaseTargets = await findMatchingEnvironmentDeploymentPairs(
          tx,
          workspaceId,
        );

+       // Deduplicate release targets to prevent primary key violations
+       const uniqueReleaseTargets = releaseTargets.filter(
+         (rt, index, self) =>
+           index ===
+           self.findIndex(
+             (t) =>
+               t.deploymentId === rt.deploymentId &&
+               t.environmentId === rt.environmentId &&
+               t.resourceId === rt.resourceId
+           )
+       );

        if (releaseTargets.length > 0)
          await tx
            .insert(schema.releaseTarget)
-           .values(releaseTargets)
+           .values(uniqueReleaseTargets)
            .onConflictDoNothing();
🧹 Nitpick comments (3)
apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1)

73-73: Authorization pattern updated.

The destructuring pattern has changed from ({ can, extra: { params } }) to ({ can, params }) directly. Note that this creates an inconsistency with the GET endpoint (line 16) which still uses the old pattern. Consider updating all endpoints to use the same pattern for consistency.
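For reference, a schematic of the two handler shapes (the context type is assumed for illustration and is not taken from the route file):

type AuthzHandlerCtx = {
  can: unknown;
  params: Record<string, string>;
  extra: { params: Record<string, string> };
};

// Old pattern (still used by the GET endpoint): params nested under `extra`
const oldStyle = ({ can, extra: { params } }: AuthzHandlerCtx) => ({ can, params });

// New pattern used here: params destructured directly
const newStyle = ({ can, params }: AuthzHandlerCtx) => ({ can, params });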

apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)

12-19: Add try/catch block around critical operations

While the worker framework might handle exceptions at a high level, it's better to have explicit error handling around critical operations to provide more context-specific error messages and recovery strategies.

  async (job) => {
+   try {
      const { id } = job.data;

      const policyTarget = await db.query.policyTarget.findFirst({
        where: eq(schema.policyTarget.id, id),
      });

      if (policyTarget == null) throw new Error("Policy target not found");
+   } catch (error) {
+     console.error(`Error processing policy target ${job.data.id}:`, error);
+     throw error;
+   }
apps/event-worker/src/workers/compute-systems-release-targets.ts (1)

11-58: Consider improving the resource-matching conditions for better readability

The findMatchingEnvironmentDeploymentPairs function uses complex join conditions and filtering logic. Consider extracting these conditions into more descriptive named variables for better readability and maintainability.

For example:

  const findMatchingEnvironmentDeploymentPairs = (
    tx: Tx,
    workspaceId: string,
  ) => {
    const isResourceMatchingEnvironment = eq(
      schema.computedEnvironmentResource.resourceId,
      schema.resource.id,
    );
    const isResourceMatchingDeployment = or(
      isNull(schema.deployment.resourceSelector),
      eq(schema.computedDeploymentResource.resourceId, schema.resource.id),
    );
+   const isResourceInWorkspace = eq(schema.resource.workspaceId, workspaceId);
+   const isResourceNotDeleted = isNull(schema.resource.deletedAt);
+   const isDeploymentInSameSystem = eq(schema.deployment.systemId, schema.environment.systemId);

    return tx
      .select({
        environmentId: schema.environment.id,
        deploymentId: schema.deployment.id,
        resourceId: schema.resource.id,
      })
      .from(schema.resource)
      .innerJoin(
        schema.computedEnvironmentResource,
        eq(schema.computedEnvironmentResource.resourceId, schema.resource.id),
      )
      .innerJoin(
        schema.environment,
        eq(
          schema.computedEnvironmentResource.environmentId,
          schema.environment.id,
        ),
      )
      .innerJoin(
        schema.deployment,
-       eq(schema.deployment.systemId, schema.environment.systemId),
+       isDeploymentInSameSystem,
      )
      .leftJoin(
        schema.computedDeploymentResource,
        eq(schema.computedDeploymentResource.deploymentId, schema.deployment.id),
      )
      .where(
        and(
          isResourceMatchingEnvironment,
          isResourceMatchingDeployment,
-         eq(schema.resource.workspaceId, workspaceId),
-         isNull(schema.resource.deletedAt),
+         isResourceInWorkspace,
+         isResourceNotDeleted,
        ),
      );
  };
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between dcc9168 and b7c5489.

📒 Files selected for processing (6)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
  • apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1 hunks)
  • e2e/tests/api/environments.spec.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • e2e/tests/api/environments.spec.ts
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/webservice/src/app/api/v1/systems/[systemId]/route.ts
  • apps/event-worker/src/workers/compute-systems-release-targets.ts
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
🧬 Code Graph Analysis (2)
apps/event-worker/src/workers/compute-systems-release-targets.ts (5)
packages/db/src/common.ts (1)
  • Tx (22-22)
packages/events/src/index.ts (2)
  • createWorker (10-25)
  • getQueue (28-34)
packages/db/src/client.ts (1)
  • db (15-15)
apps/event-worker/src/utils/with-mutex.ts (1)
  • withMutex (5-28)
packages/db/src/schema/policy.ts (1)
  • policyTarget (64-78)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
packages/events/src/index.ts (2)
  • createWorker (10-25)
  • getQueue (28-34)
packages/db/src/schema/policy.ts (1)
  • policyTarget (64-78)
packages/db/src/client.ts (1)
  • db (15-15)
apps/event-worker/src/utils/with-mutex.ts (1)
  • withMutex (5-28)
packages/db/src/selectors/index.ts (1)
  • selector (13-13)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
🔇 Additional comments (1)
apps/webservice/src/app/api/v1/systems/[systemId]/route.ts (1)

76-76: Added null safety with fallback empty string.

Good defensive programming practice adding the nullish coalescing operator (??). This ensures a defined value is always passed to the permission check, preventing potential runtime errors if systemId is null or undefined.
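A one-line illustration of the pattern (names are placeholders, not the route's actual variables):

const params: { systemId?: string } = {};
// `??` falls back only when systemId is null or undefined, so the permission
// check always receives a defined string.
const systemIdForCheck: string = params.systemId ?? "";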

)
.where(eq(schema.policy.workspaceId, workspaceId));

for (const { policy_target: policyTarget } of policyTargets) {

⚠️ Potential issue

Fix property access on joined result

The current implementation uses policy_target with an underscore, which assumes Drizzle returns underscored aliases for joined tables. By default, Drizzle uses camelCase property names.

Apply this fix:

-   for (const { policy_target: policyTarget } of policyTargets) {
+   for (const { policyTarget } of policyTargets) {
      getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
        policyTarget.id,
        policyTarget,
        { delay: ms("500ms"), jobId: policyTarget.id },
      );
    }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-   for (const { policy_target: policyTarget } of policyTargets) {
+   for (const { policyTarget } of policyTargets) {
      getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
        policyTarget.id,
        policyTarget,
        { delay: ms("500ms"), jobId: policyTarget.id },
      );
    }

Comment on lines 75 to 76
const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
const [acquired, createdReleaseTargets] = await withMutex(key, () =>

🛠️ Refactor suggestion

Add tryAcquire: true option to mutex acquisition

The current mutex call will always return acquired = true because it doesn't use the tryAcquire option, making the condition check at line 121 redundant.

Apply this fix to correctly implement non-blocking mutex acquisition:

    const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
-   const [acquired, createdReleaseTargets] = await withMutex(key, () =>
+   const [acquired, createdReleaseTargets] = await withMutex(
+     key,
+     () =>
      db.transaction(async (tx) => {
        ...
      }),
+     { tryAcquire: true },
+   );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original:

    const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
    const [acquired, createdReleaseTargets] = await withMutex(key, () =>

Suggested:

    const key = `${Channel.ComputeSystemsReleaseTargets}:${system.id}`;
    const [acquired, createdReleaseTargets] = await withMutex(
      key,
      () =>
        db.transaction(async (tx) => {
          ...
        }),
      { tryAcquire: true },
    );

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
e2e/tests/api/environments.spec.ts (1)

89-186: Consider implementing a polling mechanism instead of fixed timeouts.

The test is comprehensive in verifying the selector update workflow. However, it relies on fixed timeouts (waitForTimeout(10_000) and waitForTimeout(30_000)) which could lead to flaky tests if the computation takes longer than expected.

A polling approach would be more reliable:

- // Wait longer for selector compute to complete (30 seconds)
- await page.waitForTimeout(30_000);
+ // Poll for resources until they're updated or timeout
+ const startTime = Date.now();
+ const timeout = 30_000;
+ let resourcesUpdated = false;
+ 
+ while (!resourcesUpdated && Date.now() - startTime < timeout) {
+   const pollResponse = await api.GET(
+     "/v1/environments/{environmentId}/resources",
+     { params: { path: { environmentId: updatedEnvironmentId } } },
+   );
+   
+   if (pollResponse.response.status === 200 && 
+       pollResponse.data?.resources?.length === 1 &&
+       pollResponse.data?.resources[0]?.identifier === 
+         importedEntities.resources.find((r) => r.metadata?.env === "prod")?.identifier) {
+     resourcesUpdated = true;
+   } else {
+     await page.waitForTimeout(1000); // Wait 1 second between polls
+   }
+ }
+ 
+ expect(resourcesUpdated, "Resources were not updated within the timeout period").toBe(true);

Otherwise, the test structure and assertions are well-implemented.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b7c5489 and b23ba29.

📒 Files selected for processing (1)
  • e2e/tests/api/environments.spec.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (2)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
e2e/tests/fixtures.ts (1)
  • test (11-26)
🔇 Additional comments (4)
e2e/tests/api/environments.spec.ts (4)

12-13: Good refactoring of the yamlPath constant.

Moving the constant to the module scope improves code reusability across the test suite.


69-73: Great improvement using explicit Playwright assertions.

Using proper expect() assertions instead of manual error checks improves test clarity and provides better error messages on failure.
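As a sketch of the difference (the response shape mirrors the openapi-fetch client used in these specs):

import { expect } from "@playwright/test";

// Stand-in for the result of api.POST("/v1/environments", ...)
declare const environmentResponse: {
  response: { status: number };
  data?: { id?: string };
};

// Old style: manual check with a generic error
//   if (environmentResponse.response.status !== 200) throw new Error("Failed to create environment");

// New style: explicit assertions with precise failure output
expect(environmentResponse.response.status).toBe(200);
expect(environmentResponse.data?.id).toBeDefined();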


188-238: Well-structured test with good documentation of API behavior.

This test thoroughly validates the environment metadata update flow and clearly documents that POST creates a new environment version rather than updating in place. Good practice to verify both the update response and the subsequent GET response.


89-274: Great implementation of e2e tests for the queue-based selector computation.

These comprehensive tests effectively validate the new asynchronous queue-based architecture for computing environment selectors. The tests properly:

  1. Create environments with different selectors
  2. Wait for background computation to complete
  3. Verify the correct resources are matched
  4. Validate the environment update and deletion workflows

This demonstrates that the refactoring from inline computation to queue-based worker processing is functioning correctly.

Comment on lines +240 to +274
test("should delete an environment", async ({ api }) => {
// First create an environment
const environmentName = faker.string.alphanumeric(10);
const environmentResponse = await api.POST("/v1/environments", {
body: {
name: environmentName,
systemId: importedEntities.system.id,
},
});

expect(environmentResponse.response.status).toBe(200);
expect(environmentResponse.data?.id).toBeDefined();

const environmentId = environmentResponse.data!.id;

// Delete the environment
const deleteResponse = await api.DELETE(
"/v1/environments/{environmentId}",
{
params: { path: { environmentId } },
},
);

expect(deleteResponse.response.status).toBe(200);

// Try to get the deleted environment - should fail (either 404 or 500)
// Note: The API appears to return 500 instead of 404 when an environment is not found
const getResponse = await api.GET("/v1/environments/{environmentId}", {
params: { path: { environmentId } },
});

// Accept either 404 (not found) or 500 (internal server error) as valid responses
// since the API implementation may be returning 500 for deleted resources
expect([404, 500]).toContain(getResponse.response.status);
});

💡 Verification agent

🧩 Analysis chain

The test adapts well to current API behavior, but consider investigating the 500 error.

This test correctly validates the environment deletion flow. However, it's worth noting that the API returns 500 (server error) instead of the standard 404 (not found) response when requesting a deleted environment. While the test correctly handles this situation, consider the following:

The RESTful convention would be to return 404 for resources that don't exist, while 500 indicates a server-side issue. This might be worth addressing in a future improvement.
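A minimal sketch of the 404 path, following the NextResponse/httpStatus pattern used in the other route handlers in this PR:

if (environment == null)
  return NextResponse.json(
    { error: "Environment not found" },
    { status: httpStatus.NOT_FOUND },
  );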


🏁 Script executed:

#!/bin/bash
# Check if there are other endpoints that also return 500 instead of 404
echo "Checking for HTTP 500 error patterns in the codebase..."
rg -B 2 -A 2 "throw new Error|return Response.json.*status.*500" --type=ts apps/

Length of output: 16545


🏁 Script executed:

#!/bin/bash
echo "Locating environment API route handlers under apps/webservice/src/app/api/v1/environments..."
# Find all route.ts files for environments API
ROUTE_FILES=$(rg -l --glob "apps/webservice/src/app/api/v1/environments/**/route.ts" "")
if [ -z "$ROUTE_FILES" ]; then
  echo "No route.ts files found under environments API."
  exit 0
fi

echo "Found route files:"
echo "$ROUTE_FILES"
echo

# For each route.ts, print the GET handler and its error handling
for FILE in $ROUTE_FILES; do
  echo "---- $FILE ----"
  # Print lines around the GET export
  rg -B 5 -A 20 "export async function GET" "$FILE" || echo "No GET handler in $FILE"
  echo
done

Length of output: 2638


🏁 Script executed:

#!/bin/bash
echo "Displaying environment API route handler implementations..."
for FILE in apps/webservice/src/app/api/v1/environments/**/route.ts; do
  echo "---- $FILE ----"
  sed -n '1,200p' "$FILE"
  echo
done

Length of output: 2486


Update test to expect only 404 for deleted environments

The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn’t found, so we can remove the fallback for a 500. In e2e/tests/api/environments.spec.ts around lines 270–274:

-    // Accept either 404 (not found) or 500 (internal server error) as valid responses
-    expect([404, 500]).toContain(getResponse.response.status);
+    // Expect 404 (not found) response for deleted environment
+    expect(getResponse.response.status).toBe(404);
  • File: e2e/tests/api/environments.spec.ts
  • Replace the array‐based assertion with a single 404 check.
  • Remove the comment about 500 since the API now correctly returns 404.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original:
test("should delete an environment", async ({ api }) => {
// First create an environment
const environmentName = faker.string.alphanumeric(10);
const environmentResponse = await api.POST("/v1/environments", {
body: {
name: environmentName,
systemId: importedEntities.system.id,
},
});
expect(environmentResponse.response.status).toBe(200);
expect(environmentResponse.data?.id).toBeDefined();
const environmentId = environmentResponse.data!.id;
// Delete the environment
const deleteResponse = await api.DELETE(
"/v1/environments/{environmentId}",
{
params: { path: { environmentId } },
},
);
expect(deleteResponse.response.status).toBe(200);
// Try to get the deleted environment - should fail (either 404 or 500)
// Note: The API appears to return 500 instead of 404 when an environment is not found
const getResponse = await api.GET("/v1/environments/{environmentId}", {
params: { path: { environmentId } },
});
// Accept either 404 (not found) or 500 (internal server error) as valid responses
// since the API implementation may be returning 500 for deleted resources
expect([404, 500]).toContain(getResponse.response.status);
});
test("should delete an environment", async ({ api }) => {
// First create an environment
const environmentName = faker.string.alphanumeric(10);
const environmentResponse = await api.POST("/v1/environments", {
body: {
name: environmentName,
systemId: importedEntities.system.id,
},
});
expect(environmentResponse.response.status).toBe(200);
expect(environmentResponse.data?.id).toBeDefined();
const environmentId = environmentResponse.data!.id;
// Delete the environment
const deleteResponse = await api.DELETE(
"/v1/environments/{environmentId}",
{
params: { path: { environmentId } },
},
);
expect(deleteResponse.response.status).toBe(200);
// Try to get the deleted environment - should fail (either 404 or 500)
// Note: The API appears to return 500 instead of 404 when an environment is not found
const getResponse = await api.GET("/v1/environments/{environmentId}", {
params: { path: { environmentId } },
});
// Expect 404 (not found) response for deleted environment
expect(getResponse.response.status).toBe(404);
});

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
e2e/tests/api/deployments.spec.ts (1)

29-43: Well-structured test case with appropriate assertions.

The test is well-implemented with:

  • Random data generation using faker to prevent test flakiness
  • Clear API request structure
  • Multiple assertions that verify both the HTTP status code and response data properties

Consider expanding the test suite in the future to cover additional scenarios such as:

  • Updating existing deployments
  • Deleting deployments
  • Error cases with invalid inputs (see the sketch after this list)
  • Fetching deployment details
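As an example of the error-case direction, a hedged sketch that would sit alongside the existing tests in this spec (the required body fields and the exact error status are assumptions and should be checked against the route's validation):

test("should not create a deployment for a non-existent system", async ({
  api,
}) => {
  const response = await api.POST("/v1/deployments", {
    body: {
      name: faker.string.alphanumeric(10),
      slug: faker.string.alphanumeric(10),
      systemId: "00000000-0000-0000-0000-000000000000",
    },
  });
  // The precise status (400/404/500) depends on the handler; it should not be 200.
  expect(response.response.status).not.toBe(200);
});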
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b23ba29 and fe320d6.

📒 Files selected for processing (3)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/deployments.spec.yaml (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • e2e/tests/api/deployments.spec.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
🔇 Additional comments (3)
e2e/tests/api/deployments.spec.ts (3)

1-11: Clean import structure.

The imports are well-organized with standard libraries first, followed by custom API helpers. The explicit importing of specific functions rather than entire modules is good practice for maintaining clear dependencies.


12-12: Well-defined test data path.

Using path.join to create a platform-independent file path is the right approach, ensuring the tests will work consistently across different operating systems.


14-27: Good test isolation with proper setup and teardown.

The test suite follows best practices by importing test entities before all tests and cleaning them up afterward. This ensures test isolation and prevents test data pollution between test runs.
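Schematically, the shape being praised (the import path and helper names are placeholders declared only for this sketch; the real spec uses its own YAML import/cleanup utilities and fixtures):

import { test } from "../fixtures";

declare const yamlPath: string;
declare function importEntitiesFromYaml(yamlPath: string): Promise<unknown>;
declare function cleanupImportedEntities(entities: unknown): Promise<void>;

let importedEntities: unknown;

test.beforeAll(async () => {
  // Import the shared test entities once for the whole suite
  importedEntities = await importEntitiesFromYaml(yamlPath);
});

test.afterAll(async () => {
  // Remove them so later runs start from a clean slate
  await cleanupImportedEntities(importedEntities);
});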

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
e2e/tests/api/environments.spec.ts (1)

271-273: Update test to expect only 404 for deleted environments

This test assertion contradicts previous review feedback. The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn't found, so we should remove the fallback for a 500.

-    // Accept either 404 (not found) or 500 (internal server error) as valid responses
-    // since the API implementation may be returning 500 for deleted resources
-    expect([404, 500]).toContain(getResponse.response.status);
+    // Expect 404 (not found) response for deleted environment
+    expect(getResponse.response.status).toBe(404);
🧹 Nitpick comments (1)
e2e/tests/api/environments.spec.ts (1)

172-172: Consider parameterizing wait times

The wait time has been increased from 10 seconds in previous tests to 30 seconds here. Consider extracting these values to named constants to make test adjustment easier in the future and document why different operations need different wait times.

-  // Wait longer for selector compute to complete (30 seconds)
-  await page.waitForTimeout(30_000);
+  // Selector compute operations need longer than resource matching
+  const SELECTOR_COMPUTE_WAIT_MS = 30_000;
+  await page.waitForTimeout(SELECTOR_COMPUTE_WAIT_MS);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fe320d6 and 6072088.

📒 Files selected for processing (6)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1 hunks)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3 hunks)
  • e2e/api/schema.ts (1 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/environments.spec.ts (3 hunks)
  • openapi.v1.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • e2e/tests/api/deployments.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
  • e2e/api/schema.ts
  • e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (2)
apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3)
packages/db/src/common.ts (2)
  • Tx (22-22)
  • takeFirstOrNull (15-20)
packages/db/src/schema/deployment.ts (1)
  • deployment (69-93)
packages/logger/src/index.ts (1)
  • logger (48-48)
e2e/tests/api/environments.spec.ts (2)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
e2e/tests/fixtures.ts (1)
  • test (11-26)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Lint
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (12)
e2e/tests/api/environments.spec.ts (5)

12-13: Good refactoring!

Moving the yamlPath constant to module scope improves code organization and eliminates potential duplication.


69-72: Improved error assertions

The explicit Playwright expect assertions provide better readability and error reporting compared to manual error checking.


89-186: Well-structured test for environment selector updates

This test comprehensively covers the update of environment selectors and resource matching. The explicit wait times and clear assertions ensure proper testing of the asynchronous selector computation pipeline.

Note the documented behavior on line 167: "The API creates a new environment rather than updating the existing one". This is an important behavior to be aware of and is correctly handled in the test.


188-238: Good test coverage for environment metadata updates

This test thoroughly covers the editing of environment name and description, with clear setup, execution, and verification steps.


240-274: Good test coverage for environment deletion

This test properly covers the full lifecycle of environment deletion, which is an important addition to the test suite.

apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1)

106-129: Schema definition change improves clarity and alignment with queue-based architecture.

This change replaces the previous reference to an external UpdateDeployment component with an explicit inline schema. The inline definition clearly shows all possible properties that can be updated, which aligns well with the new queue-based architecture for handling resource selector computations.

openapi.v1.json (1)

617-652: Schema consistency maintained across API specifications.

The inline schema definition here mirrors the changes in the TypeScript implementation, ensuring consistency in the API contract. This explicit property enumeration makes it clear what fields can be updated through the PATCH endpoint, which is important for API consumers.

e2e/api/schema.ts (1)

1470-1486: Schema update provides better type safety for deployment updates.

The explicit inline schema definition for the updateDeployment operation provides clearer type safety and better documentation than the previous generic schema reference. This change aligns with the new queue-based architecture for computing resource selectors asynchronously.

The schema now properly includes the resourceSelector field (lines 1483-1485), which is necessary for the new async processing model while maintaining all existing deployment properties.

apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (4)

2-2: Appropriate type import for schema validation.

Adding the z type import from Zod enables proper type inference when using the schema validation middleware.


12-12: Good use of middleware for request validation.

Using the parseBody middleware centralizes validation logic and improves separation of concerns.


92-96: Improved request processing with middleware-based validation.

The code now uses the parseBody middleware with the schema definition, which:

  1. Ensures consistent validation
  2. Provides proper typing for the request body
  3. Simplifies the handler implementation

This is a cleaner pattern that separates validation concerns from business logic.
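For illustration, a possible shape of that schema (field names follow the deployment columns discussed in this review; the real definition lives next to the route and may differ):

import { z } from "zod";

// Hypothetical PATCH body schema handed to the parseBody middleware.
const patchDeploymentBody = z.object({
  name: z.string().optional(),
  slug: z.string().optional(),
  description: z.string().optional(),
  retryCount: z.number().int().optional(),
  timeout: z.number().int().nullable().optional(),
  resourceSelector: z.record(z.any()).nullable().optional(),
});

// The handler then receives `body` typed as z.infer<typeof patchDeploymentBody>.
type PatchDeploymentBody = z.infer<typeof patchDeploymentBody>;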


112-118: Simplified database update with validated body.

The database update operation is now cleaner, directly using the validated body object that matches the database schema. This approach:

  1. Reduces code duplication
  2. Ensures type safety
  3. Aligns with the new architecture where resource selector computation is handled asynchronously

The use of returning() with takeFirstOrNull properly handles the response.

Comment on lines 97 to 126
try {
const { deploymentId } = await params;

const updatedDeployment = await db
.update(SCHEMA.deployment)
.set(body)
.where(eq(SCHEMA.deployment.id, deploymentId))
.returning()
.then(takeFirstOrNull);
const deployment = await db
.select()
.from(SCHEMA.deployment)
.where(eq(SCHEMA.deployment.id, deploymentId))
.then(takeFirstOrNull);

return NextResponse.json(updatedDeployment);
} catch (error) {
logger.error("Failed to update deployment", { error: error });
if (deployment == null)
return NextResponse.json(
{ error: "Failed to update deployment" },
{ status: httpStatus.INTERNAL_SERVER_ERROR },
{ error: "Deployment not found" },
{ status: httpStatus.NOT_FOUND },
);
}
},
);

const updatedDeployment = await db
.update(SCHEMA.deployment)
.set(body)
.where(eq(SCHEMA.deployment.id, deploymentId))
.returning()
.then(takeFirstOrNull);

return NextResponse.json(updatedDeployment);
} catch (error) {
logger.error("Failed to update deployment", { error: error });
return NextResponse.json(
{ error: "Failed to update deployment" },
{ status: httpStatus.INTERNAL_SERVER_ERROR },
);
}

💡 Verification agent

🧩 Analysis chain

Verify that resource selector changes trigger computation events.

While the code for updating the deployment is correct, we should verify that changes to the resourceSelector field properly trigger the new async computation workers.

Run the following script to check if relevant event producers are properly set up to handle deployment updates:


🏁 Script executed:

#!/bin/bash
# Search for event producers that handle deployment updates and resource selector computation

# Look for event producers that might be triggered on deployment updates
echo "=== Checking deployment update event producers ==="
rg "Deployment.*Update" -A 10 -B 5 --glob "**/*.ts" | grep -E "(produce|enqueue|emit)"

# Check for the new compute deployment resource selector event handlers
echo "=== Checking resource selector computation handlers ==="
rg "ComputeDeploymentResourceSelector" -A 3 -B 3 --glob "**/*.ts"

Length of output: 6583


Add event enqueuing for resource‑selector updates

The update handler never emits or enqueues the new compute job when resourceSelector changes. We need to detect that change and push a ComputeDeploymentResourceSelector event. For example, in apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts:

@@ try {
   const { deploymentId } = await params;

   const deployment = await db
     .select()
     .from(SCHEMA.deployment)
     .where(eq(SCHEMA.deployment.id, deploymentId))
     .then(takeFirstOrNull);

   if (deployment == null)
@@
   const updatedDeployment = await db
     .update(SCHEMA.deployment)
     .set(body)
     .where(eq(SCHEMA.deployment.id, deploymentId))
     .returning()
     .then(takeFirstOrNull);

+  // enqueue a compute job if resourceSelector changed
+  if (!isEqual(deployment.resourceSelector, body.resourceSelector)) {
+    await getQueue(Channel.ComputeDeploymentResourceSelector).add(
+      deploymentId,
+      {
+        id: deploymentId,
+        oldSelector: deployment.resourceSelector,
+        resourceSelector: body.resourceSelector,
+      },
+      { jobId: deploymentId }
+    );
+  }
 
   return NextResponse.json(updatedDeployment);

• Import isEqual (from lodash), getQueue and Channel as needed.
• Ensure this logic runs before sending the response.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Original (lines 97 to 126 as rendered in the diff):
try {
const { deploymentId } = await params;
const updatedDeployment = await db
.update(SCHEMA.deployment)
.set(body)
.where(eq(SCHEMA.deployment.id, deploymentId))
.returning()
.then(takeFirstOrNull);
const deployment = await db
.select()
.from(SCHEMA.deployment)
.where(eq(SCHEMA.deployment.id, deploymentId))
.then(takeFirstOrNull);
return NextResponse.json(updatedDeployment);
} catch (error) {
logger.error("Failed to update deployment", { error: error });
if (deployment == null)
return NextResponse.json(
{ error: "Failed to update deployment" },
{ status: httpStatus.INTERNAL_SERVER_ERROR },
{ error: "Deployment not found" },
{ status: httpStatus.NOT_FOUND },
);
}
},
);
const updatedDeployment = await db
.update(SCHEMA.deployment)
.set(body)
.where(eq(SCHEMA.deployment.id, deploymentId))
.returning()
.then(takeFirstOrNull);
return NextResponse.json(updatedDeployment);
} catch (error) {
logger.error("Failed to update deployment", { error: error });
return NextResponse.json(
{ error: "Failed to update deployment" },
{ status: httpStatus.INTERNAL_SERVER_ERROR },
);
}
Suggested:

try {
  const { deploymentId } = await params;

  const deployment = await db
    .select()
    .from(SCHEMA.deployment)
    .where(eq(SCHEMA.deployment.id, deploymentId))
    .then(takeFirstOrNull);

  if (deployment == null)
    return NextResponse.json(
      { error: "Deployment not found" },
      { status: httpStatus.NOT_FOUND },
    );

  const updatedDeployment = await db
    .update(SCHEMA.deployment)
    .set(body)
    .where(eq(SCHEMA.deployment.id, deploymentId))
    .returning()
    .then(takeFirstOrNull);

  // enqueue a compute job if resourceSelector changed
  if (!isEqual(deployment.resourceSelector, body.resourceSelector)) {
    await getQueue(Channel.ComputeDeploymentResourceSelector).add(
      deploymentId,
      {
        id: deploymentId,
        oldSelector: deployment.resourceSelector,
        resourceSelector: body.resourceSelector,
      },
      { jobId: deploymentId }
    );
  }

  return NextResponse.json(updatedDeployment);
} catch (error) {
  logger.error("Failed to update deployment", { error: error });
  return NextResponse.json(
    { error: "Failed to update deployment" },
    { status: httpStatus.INTERNAL_SERVER_ERROR },
  );
}

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
e2e/tests/api/deployments.spec.ts (1)

1-147: Consider adding error case tests.

The test suite thoroughly tests the happy path scenarios but doesn't include tests for error cases such as invalid inputs, unauthorized access, or duplicate slugs. Consider adding these to verify proper error handling.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6072088 and a40d14f.

📒 Files selected for processing (2)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (4 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Lint
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (6)
e2e/tests/api/deployments.spec.ts (6)

1-13: Module imports and setup look good.

The necessary imports are correctly defined, and the YAML path is properly set up for importing test entities.


14-28: Well-structured test setup with proper cleanup.

Good practice to import test entities before all tests and clean them up afterward, ensuring test isolation and preventing test pollution.


29-43: Create deployment test is correctly implemented.

The test properly generates a random deployment name, submits it to the API, and validates both the response status code and expected data properties.


45-70: Get deployment test is thorough and complete.

The test creates a deployment, retrieves it by ID, and performs appropriate assertions. The error check on line 56 ensures proper handling when deployment creation fails.


72-108: Update deployment test covers all required fields.

The test successfully creates a deployment, updates multiple fields (name, slug, description, retryCount, timeout), and verifies each updated field with individual assertions.


110-146: Delete deployment test includes comprehensive verification.

The test confirms both successful deletion and verifies that the deployment can no longer be retrieved, providing complete validation of the delete functionality.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (2)
apps/event-worker/src/workers/update-environment.ts (1)

72-72: ⚠️ Potential issue

Fix incorrect ID passed to dispatchExitHooks

The function dispatchExitHooks expects a system ID, but you're passing the environment ID (job.data.id). This will cause exit hooks to never be dispatched because the deployment query looks for deployment.systemId === systemId.

-      await dispatchExitHooks(db, job.data.id, exitedResources);
+      await dispatchExitHooks(db, job.data.systemId, exitedResources);
e2e/tests/api/environments.spec.ts (1)

238-272: ⚠️ Potential issue

Update test to expect only 404 for deleted environments.

The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

-    // Accept either 404 (not found) or 500 (internal server error) as valid responses
-    // since the API implementation may be returning 500 for deleted resources
-    expect([404, 500]).toContain(getResponse.response.status);
+    // Expect 404 (not found) response for deleted environment
+    expect(getResponse.response.status).toBe(404);
🧹 Nitpick comments (4)
apps/event-worker/src/workers/update-environment.ts (1)

58-63: Remove debugging console.log statement

This console.log statement appears to be for debugging purposes and should be removed before merging to production.

-      console.log("updateEnvironmentWorker");
apps/event-worker/src/workers/new-environment.ts (2)

7-7: Remove debugging console.log statement

This console.log statement appears to be for debugging purposes and should be removed before merging to production.

-    console.log("newEnvironmentWorker");

8-11: Consider adding deduplication to environment queue job

Unlike the new-deployment.ts worker, this job doesn't have deduplication configuration. Consider adding it for consistency and to prevent duplicate processing in case of rapid successive events.

     await getQueue(Channel.ComputeEnvironmentResourceSelector).add(
       environment.id,
       environment,
+      { deduplication: { id: environment.id, ttl: 500 } },
     );
e2e/tests/api/environments.spec.ts (1)

74-74: Consider a more deterministic approach instead of fixed delay.

The 10-second fixed delay may be too long in some cases and too short in others. Consider implementing a polling mechanism that checks for the completion of the background processing instead.

-    await new Promise((resolve) => setTimeout(resolve, 10000));
+    // Poll for resource matching completion or use a more deterministic approach
+    let attempts = 0;
+    while (attempts < 10) {
+      const statusResponse = await api.GET("/v1/environments/{environmentId}", 
+        { params: { path: { environmentId: environment.id } } }
+      );
+      if (statusResponse.data?.resourceSelectorProcessed) {
+        break;
+      }
+      await new Promise((resolve) => setTimeout(resolve, 1000));
+      attempts++;
+    }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a40d14f and 754a2c1.

📒 Files selected for processing (11)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
  • apps/event-worker/src/workers/compute-systems-release-targets.ts (1 hunks)
  • apps/event-worker/src/workers/new-deployment.ts (1 hunks)
  • apps/event-worker/src/workers/new-environment.ts (1 hunks)
  • apps/event-worker/src/workers/new-resource.ts (2 hunks)
  • apps/event-worker/src/workers/update-deployment.ts (2 hunks)
  • apps/event-worker/src/workers/update-environment.ts (2 hunks)
  • apps/event-worker/src/workers/updated-resources/index.ts (2 hunks)
  • e2e/tests/api/environments.spec.ts (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (7)
  • apps/event-worker/src/workers/updated-resources/index.ts
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
  • apps/event-worker/src/workers/new-resource.ts
  • apps/event-worker/src/workers/update-deployment.ts
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
  • apps/event-worker/src/workers/compute-systems-release-targets.ts
  • apps/event-worker/src/workers/compute-environment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/event-worker/src/workers/update-environment.ts
  • apps/event-worker/src/workers/new-environment.ts
  • apps/event-worker/src/workers/new-deployment.ts
  • e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (2)
e2e/tests/fixtures.ts (1)
  • test (11-26)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
apps/event-worker/src/workers/new-deployment.ts (1)

1-12: Properly implements queue-based deployment resource computation

The worker has been refactored to delegate the resource computation to a dedicated worker through the queue system, which aligns with the PR's objective of using queues for selector computation. This improves modularity and separation of concerns.

The deduplication configuration with a 500ms TTL is a good practice to avoid processing the same deployment multiple times in rapid succession.
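For reference, the enqueue-with-deduplication shape being described (mirroring the option object shown elsewhere in this review, not the literal file contents):

await getQueue(Channel.ComputeDeploymentResourceSelector).add(
  deployment.id,
  deployment,
  // Jobs with the same deduplication id that arrive within the 500ms TTL collapse into one.
  { deduplication: { id: deployment.id, ttl: 500 } },
);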

apps/event-worker/src/workers/update-environment.ts (1)

65-70: Good approach for finding exited resources using SQL selectors

The approach to find exited resources directly through SQL selectors is efficient, using a combination of and and not operators to identify resources that matched the old selector but don't match the new one.
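Roughly, the query shape being described, sketched with the selector and Drizzle helpers that appear elsewhere in this PR (oldResourceSelector and newResourceSelector are placeholders for the environment's previous and updated selectors):

// Resources that matched the old selector but do not match the new one.
const exitedResources = await db
  .select()
  .from(schema.resource)
  .where(
    and(
      isNull(schema.resource.deletedAt),
      selector().query().resources().where(oldResourceSelector).sql(),
      not(selector().query().resources().where(newResourceSelector).sql()),
    ),
  );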

e2e/tests/api/environments.spec.ts (5)

12-13: Clean refactoring of yamlPath to module scope.

Moving the yamlPath to module scope improves reusability across tests and avoids duplication. This is a good practice.


43-75: Improved test structure with explicit assertions.

The test has been refactored to use explicit assertions with expect instead of throwing errors manually, which improves readability and error reporting.


89-184: Good addition of test for environment selector updates.

This test properly validates the queue-based selector compute functionality by:

  1. Creating an environment with QA resource selector
  2. Verifying initial resource matching
  3. Updating to prod resource selector
  4. Verifying updated resource matching

The test structure follows a clear setup-action-verify pattern which enhances readability.


186-236: Good test coverage for environment metadata updates.

This test properly verifies that environment name and description can be updated, with appropriate assertions for the updated values.


166-168:

❌ Incorrect review comment

Consider adding a test for direct environment updates.

The comment "The API creates a new environment rather than updating the existing one" suggests unexpected behavior. If this is intended behavior, the test is correct. However, if updates are supposed to modify the existing environment, this might indicate a bug in the API implementation.


🏁 Script executed:

#!/bin/bash
# Check for environment update implementation
echo "Looking for environment update implementation..."
rg -A 15 "export.*function (PUT|PATCH|POST).*environments" --type=ts apps/webservice/src/app/api/v1/environments || echo "No environment update endpoint found"

# Check for environment creation vs update logic
echo -e "\nChecking for environment creation vs update logic..."
rg -A 10 "if \(.*id\)" --type=ts apps/webservice/src/app/api/v1/environments

Length of output: 504


🏁 Script executed:

#!/bin/bash
echo "Listing environment API files..."
find apps/webservice/src/app/api/v1/environments -type f 2>/dev/null

echo -e "\nSearching for HTTP method decorators in environment controllers..."
rg -n "@(Post|Put|Patch)" -g '*.ts' apps/webservice/src/app/api/v1/environments || echo "No decorators found"

echo -e "\nExtracting test snippet around updateResponse..."
rg -n -C3 "updateResponse" e2e/tests/api/environments.spec.ts || echo "No occurrences of updateResponse found"

Length of output: 2483


#!/bin/bash
echo "Inspecting base POST handler in route.ts..."
rg -n -C5 "export async function POST" apps/webservice/src/app/api/v1/environments/route.ts || echo "No base POST handler found"

echo -e "\nInspecting dynamic [environmentId] route handlers..."
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts

#!/bin/bash
echo "Searching for update handlers in dynamic environment route..."
rg -n "export async function (PATCH|PUT)" -g 'apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts' || echo "No update handler found"

echo -e "\nShowing dynamic route implementation (first 200 lines):"
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts

🏁 Script executed:

#!/bin/bash
echo "Inspecting dynamic environment route for update handlers..."
rg -n "export async function (PUT|PATCH)" -g 'apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts' || echo "No PUT/PATCH handlers found"

echo -e "\nShowing dynamic route file (first 200 lines)..."
sed -n '1,200p' apps/webservice/src/app/api/v1/environments/\[environmentId\]/route.ts

Length of output: 2537


No direct update endpoint exists—tests reflect current API behavior.

I checked apps/webservice/src/app/api/v1/environments/route.ts and the dynamic [environmentId]/route.ts; there are only GET, DELETE, and POST handlers (no PUT or PATCH). Since the API doesn’t support updating an existing environment via a dedicated endpoint, the tests’ use of POST (which always creates) is correct. If you intend to add environment‐update functionality, please implement a PUT/PATCH handler first; otherwise, you can dismiss the suggestion to add a “direct update” test.

Likely an incorrect or invalid review comment.

const environment = environmentResponse.data!;

// Verify initial resources (should only be the QA resource)
await new Promise((resolve) => setTimeout(resolve, 10_000));

🛠️ Refactor suggestion

Standardize time delays across all tests.

Multiple instances of 10-second delays exist throughout the tests. Consider:

  1. Creating a helper function to standardize these delays
  2. Implementing a more reliable approach like polling for status
// Add at top of file
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  const start = Date.now();
+  while (Date.now() - start < maxWaitMs) {
+    // Poll every second
+    await new Promise(resolve => setTimeout(resolve, 1000));
+    // Add status check here if an endpoint is available
+  }
+};

// Then replace all instances of:
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environmentId);

Also applies to: 170-170, 324-324

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
e2e/tests/api/deployments.spec.ts (3)

179-179: Consider more robust waiting mechanism

The hard-coded wait time might be problematic in different environments. Consider implementing a polling mechanism that waits for a specific condition rather than using a fixed timeout.

- await page.waitForTimeout(10_000);
+ // Poll for resource matching completion
+ let attempts = 0;
+ const maxAttempts = 10;
+ while (attempts < maxAttempts) {
+   const checkResources = await api.GET(`/v1/deployments/${deploymentId}/resources`, {
+     params: { path: { deploymentId } }
+   });
+   if (checkResources.data?.resources?.length > 0) break;
+   await page.waitForTimeout(1000);
+   attempts++;
+ }

235-235: Consider consistent wait times

There's a discrepancy between wait times (5 seconds here vs 10 seconds elsewhere). Consider standardizing wait times or implementing a more reliable waiting approach.

- await page.waitForTimeout(5_000);
+ await page.waitForTimeout(10_000); // Make consistent with other waits

268-268: Consider more robust waiting mechanism

Similar to previous comment, using a hard-coded wait time might be problematic in different environments.

- await page.waitForTimeout(10_000);
+ // Poll for resource matching completion
+ let attempts = 0;
+ const maxAttempts = 10;
+ while (attempts < maxAttempts) {
+   const checkResources = await api.GET(`/v1/deployments/${deploymentId}/resources`, {
+     params: { path: { deploymentId } }
+   });
+   // Poll until we find resources with env=prod
+   const hasProdResources = checkResources.data?.resources?.some(
+     r => r.metadata?.env === "prod"
+   );
+   if (hasProdResources) break;
+   await page.waitForTimeout(1000);
+   attempts++;
+ }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 754a2c1 and 17bccd4.

📒 Files selected for processing (6)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts (3 hunks)
  • apps/webservice/src/app/api/v1/deployments/openapi.ts (1 hunks)
  • e2e/api/schema.ts (2 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • openapi.v1.json (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • apps/webservice/src/app/api/v1/deployments/openapi.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
  • e2e/api/schema.ts
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/route.ts
  • openapi.v1.json
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/deployments.spec.ts (7)

1-13: Well-structured imports and test setup

The imports are well-organized, using appropriate modules for testing (Playwright, faker) and file path handling. The YAML-based test fixtures approach is clean and maintainable.


14-28: Good test lifecycle management

The beforeAll/afterAll hooks properly handle test fixture setup and cleanup, ensuring that tests have necessary data and the environment is cleaned afterward.


29-43: LGTM: Create deployment test

This test effectively verifies the deployment creation endpoint with appropriate assertions on both status code and response data.


45-71: Appropriate error handling in GET test

Good implementation of the GET deployment test with proper error handling for undefined deployment IDs.


73-110: Comprehensive field update testing

The update test effectively covers multiple field updates in a single request, thoroughly validating the PATCH functionality.


112-149: Thorough deletion verification

The deletion test properly validates both the DELETE operation and confirms removal by attempting to retrieve the deleted deployment.


14-288: Thorough test suite implementation

This test suite comprehensively covers all CRUD operations and resource matching functionality for deployments. It aligns well with the PR objectives of implementing a queue-based architecture for computing resource selectors. The test validations properly check both API responses and the end results of asynchronous resource matching.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)

179-179: Consider using polling instead of fixed timeouts.

Hard-coded timeouts could make tests flaky in different environments. A polling mechanism with a reasonable timeout would be more robust.

-    await page.waitForTimeout(10_000);
+    // Poll for resource matching completion with timeout
+    const maxWaitTime = 30_000;
+    const startTime = Date.now();
+    let resourcesMatched = false;
+    
+    while (!resourcesMatched && Date.now() - startTime < maxWaitTime) {
+      const resourceCheck = await api.GET("/v1/deployments/{deploymentId}/resources", {
+        params: { path: { deploymentId } },
+      });
+      
+      if (resourceCheck.response.status === 200 && 
+          resourceCheck.data?.resources?.length > 0) {
+        resourcesMatched = true;
+      } else {
+        await page.waitForTimeout(1000); // Wait 1 second between checks
+      }
+    }

235-235: Inconsistent wait times between operations.

The test uses 5 seconds wait after creation but 10 seconds after update. Consider standardizing these wait times or implementing a polling approach as suggested earlier.

Also applies to: 268-268

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 17bccd4 and fb39db2.

📒 Files selected for processing (1)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (10)
e2e/tests/api/deployments.spec.ts (10)

1-13: LGTM - Good test setup with proper imports and utilities.

The imports and test setup are well organized, with clear separation between internal utilities and test frameworks.


14-27: LGTM - Well-structured test lifecycle hooks.

The test setup and teardown logic properly handles importing test fixtures from YAML and cleaning up afterward to prevent test pollution.


29-43: LGTM - Clean deployment creation test.

This test case properly validates the deployment creation API endpoint with appropriate assertions for status code and response properties.


45-71: LGTM - Thorough deployment retrieval test.

The test correctly validates the GET endpoint functionality with appropriate error handling when the deployment ID is undefined.


73-110: LGTM - Comprehensive update test.

The test properly validates updating various deployment fields including name, slug, description, and operational parameters.


112-149: LGTM - Complete deletion test flow.

The test correctly verifies both the deletion operation and confirms the deployment is no longer retrievable afterward.


151-202: LGTM - Good resource selector matching test.

The test effectively verifies that the deployment resource selector correctly matches resources based on metadata filters.


204-287: LGTM - Well-structured resource selector update test.

This test effectively validates that changing a deployment's resource selector correctly updates the matched resources.


289-353: LGTM - Good edge case testing for deleted resources.

The test correctly verifies that deleted resources are no longer matched by deployment selectors, which is an important edge case.


1-354: Overall well-structured test suite for the new queue-based architecture.

This test suite provides excellent coverage for the Deployments API including CRUD operations and the new queue-based resource selector computation. The tests are thorough and align well with the PR objectives of implementing a queue for selector computation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
apps/event-worker/src/workers/update-environment.ts (1)

75-75: Fixed systemId parameter as noted in previous review.

This change correctly passes job.data.systemId to the dispatchExitHooks function instead of job.data.id, addressing the issue noted in a previous review. This ensures that exit hooks will be properly dispatched.

🧹 Nitpick comments (1)
apps/event-worker/src/workers/update-environment.ts (1)

58-58: Remove debugging log statement before merging.

This console.log statement appears to be for debugging purposes and should be removed before merging to production.

-      console.log("updateEnvironmentWorker");
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb39db2 and b19a9f3.

📒 Files selected for processing (2)
  • apps/event-worker/src/workers/update-deployment.ts (2 hunks)
  • apps/event-worker/src/workers/update-environment.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/event-worker/src/workers/update-deployment.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/event-worker/src/workers/update-environment.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (2)
apps/event-worker/src/workers/update-environment.ts (2)

59-63: LGTM! Effective delegation to specialized worker.

The code now properly delegates the computation to a specialized worker by enqueuing a job on the ComputeEnvironmentResourceSelector queue instead of performing inline processing. The deduplication configuration will prevent redundant processing of the same environment.
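
For context, the enqueue call being described would look roughly like this, assuming the Channel/getQueue API from packages/events and BullMQ-style deduplication options (the import path and exact option shape are assumptions):

import { Channel, getQueue } from "@ctrlplane/events"; // import path assumed

// Delegate the selector computation to the dedicated worker; deduplicating on
// the environment id collapses back-to-back updates into a single job.
const enqueueEnvironmentSelectorCompute = (environmentId: string) =>
  getQueue(Channel.ComputeEnvironmentResourceSelector).add(
    environmentId,
    { id: environmentId },
    { deduplication: { id: environmentId } },
  );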


65-73: Good improvement in exited resources detection.

The new approach for detecting exited resources using SQL conditions and database queries is more efficient than the previous in-memory filtering approach. It properly combines old and new selectors using logical operators to find resources that are no longer selected.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)

151-202: Consider implementing a polling mechanism instead of fixed waits.

Using a fixed 10-second wait time (line 179) might lead to flaky tests if the asynchronous processing takes longer under load. Consider implementing a polling mechanism that waits for the condition to be met with a timeout.

-    await page.waitForTimeout(10_000);
+    // Poll until resources are matched or timeout
+    let matchedResources = { data: { resources: [] } };
+    const startTime = Date.now();
+    const timeout = 30_000; // 30 seconds max wait
+    
+    while (Date.now() - startTime < timeout) {
+      matchedResources = await api.GET(
+        "/v1/deployments/{deploymentId}/resources",
+        {
+          params: {
+            path: {
+              deploymentId,
+            },
+          },
+        },
+      );
+      
+      if (matchedResources.data?.resources?.length > 0) {
+        break;
+      }
+      
+      // Short wait between attempts
+      await page.waitForTimeout(1000);
+    }

204-287: Consider extracting resource selector definitions to reduce repetition.

The resource selector conditions are repeated across multiple tests. Consider extracting these into helper functions or constants for better maintainability.

+  // Helper function to create a resource selector for a specific environment
+  const createEnvResourceSelector = (env: string, systemPrefix: string) => ({
+    type: "comparison",
+    operator: "and",
+    conditions: [
+      {
+        type: "metadata",
+        operator: "equals",
+        key: "env",
+        value: env,
+      },
+      {
+        type: "identifier",
+        operator: "starts-with",
+        value: systemPrefix,
+      },
+    ],
+  });
+

Then in the test:

-        resourceSelector: {
-          type: "comparison",
-          operator: "and",
-          conditions: [
-            {
-              type: "metadata",
-              operator: "equals",
-              key: "env",
-              value: "qa",
-            },
-            {
-              type: "identifier",
-              operator: "starts-with",
-              value: systemPrefix,
-            },
-          ],
-        },
+        resourceSelector: createEnvResourceSelector("qa", systemPrefix),
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b19a9f3 and e75186f.

📒 Files selected for processing (3)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/resource-selectors.spec.ts (0 hunks)
  • e2e/tests/api/resource-selectors.spec.yaml (0 hunks)
💤 Files with no reviewable changes (2)
  • e2e/tests/api/resource-selectors.spec.ts
  • e2e/tests/api/resource-selectors.spec.yaml
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/deployments.spec.ts (7)

1-13: LGTM: Imports and setup are well-structured.

The imports are appropriate for an E2E test file, including path utilities, faker for generating test data, and test fixtures.


14-27: Well-structured test suite setup and teardown.

The beforeAll/afterAll hooks properly import test entities from YAML and clean them up afterward, ensuring test isolation.


29-43: LGTM: Create deployment test is properly implemented.

This test correctly validates the deployment creation API, including response status code and expected data fields.


45-71: LGTM: Get deployment test follows best practices.

The test properly handles the deployment ID existence check before proceeding and validates the API response.


73-110: LGTM: Update deployment test covers all relevant fields.

This test thoroughly verifies that all fields (name, slug, description, retryCount, timeout) can be updated correctly.


112-149: LGTM: Delete deployment test verifies complete removal.

Good practice to verify that the deleted deployment cannot be retrieved afterward.


355-406: LGTM: Good test for resource selector clearing.

This test correctly verifies that setting the resourceSelector to undefined removes all matched resources.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (3)
e2e/tests/api/release-targets.spec.ts (1)

29-38: Consider reducing timeout duration

While the test correctly waits for background processing to complete, the 5-second timeout may be excessive. Consider implementing a more dynamic approach like polling for completion or using a shorter timeout if possible.

-    await page.waitForTimeout(5_000);
+    await page.waitForTimeout(2_000);

Or better yet, consider implementing a polling mechanism if the API provides a way to check for completion status.
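
A rough sketch of what such a polling helper could look like for this endpoint (the helper itself is hypothetical; the endpoint path and array-shaped response follow the tests in this file):

// Hypothetical helper: re-query the release-targets endpoint until it returns
// rows or the timeout elapses, instead of sleeping for a fixed duration.
const waitForReleaseTargets = async (
  api: { GET: (path: string, opts: unknown) => Promise<any> },
  resourceId: string,
  timeoutMs = 10_000,
  intervalMs = 500,
) => {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const res = await api.GET("/v1/resources/{resourceId}/release-targets", {
      params: { path: { resourceId } },
    });
    if (res.response.status === 200 && (res.data?.length ?? 0) > 0) return res;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("Timed out waiting for release targets");
};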

apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (2)

38-46: Consider adding more context to the response

While the response mapping includes essential data, it might be helpful to include additional metadata such as creation timestamps or additional attributes that could be useful for API consumers.

Consider enhancing the response with additional useful fields:

 const releaseTargets = releaseTargetsRows.map((rt) => ({
   id: rt.id,
   resource: rt.resource,
   environment: rt.environment,
   deployment: rt.deployment,
+  createdAt: rt.createdAt,
+  updatedAt: rt.updatedAt
 }));

46-54: Error handling could be more specific

While the current error handling catches all errors, consider adding more specific error handling for different error types to provide more detailed feedback to API consumers.

Consider implementing more specific error handling:

 try {
   // ... existing code
   return NextResponse.json(releaseTargets);
 } catch (error) {
   log.error(error);
+  if (error instanceof SomeSpecificError) {
+    return NextResponse.json(
+      { error: "Specific error message" },
+      { status: 400 }
+    );
+  }
   return NextResponse.json(
     { error: "Internal server error" },
     { status: 500 },
   );
 }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e75186f and fea089b.

📒 Files selected for processing (7)
  • apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/openapi.ts (1 hunks)
  • apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (1 hunks)
  • e2e/api/schema.ts (5 hunks)
  • e2e/package.json (1 hunks)
  • e2e/tests/api/release-targets.spec.ts (1 hunks)
  • e2e/tests/api/release-targets.spec.yaml (1 hunks)
  • openapi.v1.json (4 hunks)
✅ Files skipped from review due to trivial changes (3)
  • e2e/package.json
  • apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/openapi.ts
  • e2e/tests/api/release-targets.spec.yaml
🚧 Files skipped from review as they are similar to previous changes (2)
  • openapi.v1.json
  • e2e/api/schema.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/release-targets.spec.ts
  • apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Lint
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (8)
e2e/tests/api/release-targets.spec.ts (5)

1-13: Imports and setup look good

The imports are well-organized and the test file is properly set up with the required dependencies for testing the API. The YAML path is correctly defined using path.join to ensure cross-platform compatibility.


14-28: Test fixtures properly initialized and cleaned up

The test suite correctly uses beforeAll and afterAll hooks to import entities from YAML before tests run and clean them up afterward. This ensures tests start with a known state and don't leave test data behind.


39-54: Resource retrieval logic is robust

The resource retrieval logic is well-implemented with proper error handling and assertions. The test correctly throws descriptive errors if expected data is missing, which will make test failures easier to diagnose.


55-69: Release targets API testing is thorough

The test correctly calls the new API endpoint and validates both the response status and the presence of release targets. Error handling is properly implemented with descriptive error messages.


70-83: Relationship validation is comprehensive

The test thoroughly validates the relationships between release targets and their associated resources, environments, and deployments. This ensures that the computed selectors maintain proper references between entities.

apps/webservice/src/app/api/v1/resources/[resourceId]/release-targets/route.ts (3)

1-14: Imports and logging setup look good

The imports are well-organized and necessary dependencies are included. The logger setup with a child logger using a specific module name follows good practices for structured logging.


15-24: Authentication and authorization correctly implemented

The endpoint properly uses middleware for authentication and authorization. The authorization check correctly verifies the ResourceGet permission on the specific resource ID from the request parameters.


25-37: Database query is efficient and includes related entities

The implementation correctly retrieves the resource ID from route parameters and queries the database for release targets. The query efficiently includes related entities (resource, environment, deployment) in a single operation using the with option.
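
The query shape being praised here is roughly the following (a sketch using Drizzle's relational query API; the table and relation names come from the review text, not from the schema itself):

import { eq } from "drizzle-orm";

// Sketch: one relational query that returns each release target for the
// resource together with its resource, environment, and deployment rows.
const getReleaseTargetsForResource = (db: any, schema: any, resourceId: string) =>
  db.query.releaseTarget.findMany({
    where: eq(schema.releaseTarget.resourceId, resourceId),
    with: { resource: true, environment: true, deployment: true },
  });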

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
e2e/tests/api/environments.spec.ts (4)

74-74: Standardize time delays across all tests.

Multiple instances of similar delays exist throughout this file. Consider creating a standardized helper function for consistency, as suggested in previous reviews.

// Add at top of file after imports
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  await new Promise(resolve => setTimeout(resolve, maxWaitMs));
+};

// Then replace all timeout instances with
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environment.id);

145-145: Standardize time delays across all tests.

Another instance of hardcoded delay. Consider using the helper function suggested earlier.


426-426: Standardize time delays across all tests.

Another instance of hardcoded delay. Consider using the helper function suggested earlier.


371-373: 🛠️ Refactor suggestion

Update test to expect only 404 for deleted environments

The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

-    // Accept either 404 (not found) or 500 (internal server error) as valid responses
-    // since the API implementation may be returning 500 for deleted resources
-    expect([404, 500]).toContain(getResponse.response.status);
+    // Expect 404 (not found) response for deleted environment
+    expect(getResponse.response.status).toBe(404);
🧹 Nitpick comments (6)
e2e/tests/api/environments.spec.ts (6)

85-89: Simplify validation with a more concise assertion pattern.

The current pattern of defining a variable, checking if it's defined, then throwing an error can be simplified.

-    const receivedResource = resourcesResponse.data?.resources?.[0];
-    expect(receivedResource).toBeDefined();
-    if (!receivedResource) throw new Error("No resource found");
-    expect(receivedResource.identifier).toBe(
-      importedEntities.resources.find((r) => r.metadata?.env === "qa")
-        ?.identifier,
-    );
+    const receivedResource = resourcesResponse.data?.resources?.[0];
+    expect(receivedResource).toBeDefined();
+    expect(receivedResource?.identifier).toBe(
+      importedEntities.resources.find((r) => r.metadata?.env === "qa")
+        ?.identifier,
+    );

100-107: Simplify release target validation with a more concise assertion pattern.

Similar to the previous comment, this validation can be simplified.

-    const releaseTarget = releaseTargetsResponse.data?.[0];
-    expect(releaseTarget).toBeDefined();
-    if (!releaseTarget) throw new Error("No release target found");
-    expect(releaseTarget.environment.id).toBe(environment.id);
-    const deploymentMatch = importedEntities.deployments.find(
-      (d) => d.id === releaseTarget.deployment.id,
-    );
-    expect(deploymentMatch).toBeDefined();
-    if (!deploymentMatch) throw new Error("No deployment match found");
-    expect(deploymentMatch.id).toBe(releaseTarget.deployment.id);
+    const releaseTarget = releaseTargetsResponse.data?.[0];
+    expect(releaseTarget).toBeDefined();
+    expect(releaseTarget?.environment.id).toBe(environment.id);
+    
+    const deploymentMatch = importedEntities.deployments.find(
+      (d) => d.id === releaseTarget?.deployment.id,
+    );
+    expect(deploymentMatch).toBeDefined();
+    expect(deploymentMatch?.id).toBe(releaseTarget?.deployment.id);

187-189: Add a verification for environment ID consistency.

The comment notes that the API creates a new environment rather than updating an existing one. It would be valuable to verify that the new ID is actually different from the original.

    // Note: The API creates a new environment rather than updating the existing one
    const updatedEnvironmentId = updateResponse.data!.id;
    expect(updatedEnvironmentId).toBeDefined();
+   expect(updatedEnvironmentId).not.toBe(environment.id);

203-207: Apply the same simplification pattern for resource validation.

Consistent with earlier suggestions, apply the same pattern simplification here.

-    const receivedResource = updatedResourcesResponse.data?.resources?.[0];
-    expect(receivedResource).toBeDefined();
-    if (!receivedResource) throw new Error("No resource found");
-    expect(receivedResource.identifier).toBe(
-      importedEntities.resources.find((r) => r.metadata?.env === "prod")
-        ?.identifier,
-    );
+    const receivedResource = updatedResourcesResponse.data?.resources?.[0];
+    expect(receivedResource).toBeDefined();
+    expect(receivedResource?.identifier).toBe(
+      importedEntities.resources.find((r) => r.metadata?.env === "prod")
+        ?.identifier,
+    );

259-259: Consider using consistent timeout values.

This test uses a 5 second timeout while others use 10 seconds. Unless there's a specific reason for this difference, consider standardizing on a single timeout value.


323-324: Add verification for different behaviors between creating and updating.

Similar to the earlier test, it would be helpful to explicitly verify that updating creates a new environment rather than modifying the existing one.

    // Note: It appears that the POST endpoint creates a new environment rather than updating the existing one
    const updatedEnvironmentId = updateResponse.data!.id;
    expect(updatedEnvironmentId).toBeDefined();
+   expect(updatedEnvironmentId).not.toBe(environment.id);
+   
+   // Verify the original environment still exists with original values
+   const originalEnvResponse = await api.GET("/v1/environments/{environmentId}", {
+     params: { path: { environmentId: environment.id } },
+   });
+   expect(originalEnvResponse.response.status).toBe(200);
+   expect(originalEnvResponse.data?.name).toBe(originalName);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e416e65 and d9bb587.

📒 Files selected for processing (1)
  • e2e/tests/api/environments.spec.ts (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/environments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/environments.spec.ts (3)
e2e/tests/fixtures.ts (1)
  • test (11-26)
packages/db/src/schema/environment.ts (1)
  • environment (59-84)
packages/db/src/schema/release.ts (1)
  • releaseTarget (20-42)
⏰ Context from checks skipped due to timeout of 90000ms (6)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/environments.spec.ts (1)

12-13: LGTM: Moving yamlPath outside beforeAll is a clean refactoring

This change improves code organization by making the constant available at module scope rather than being defined inside the beforeAll hook.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (5)
e2e/tests/api/deployments.spec.ts (5)

254-254: Consider replacing fixed wait with retry or polling

As with other locations, fixed wait times can lead to flaky tests depending on the environment speed.


287-287: Consider replacing fixed wait with retry or polling

Fixed timeouts can cause unnecessary delays or flaky tests depending on environment speed.


384-384: Consider replacing fixed wait with retry or polling

Fixed timeouts can make tests slow or flaky depending on environment performance.


437-437: Consider replacing fixed wait with retry or polling

Fixed timeouts like this can make tests unnecessarily slow or flaky in different environments.


448-448: Consider replacing fixed wait with retry or polling

Fixed timeouts may not be reliable in different environments.

🧹 Nitpick comments (2)
e2e/tests/api/deployments.spec.ts (2)

179-179: Consider replacing fixed wait with retry or polling

A fixed 10-second wait may cause flaky tests - it might be too short in slow environments or unnecessarily long in fast ones.

Consider implementing a polling mechanism that checks for the expected state:

-    await page.waitForTimeout(10_000);
+    // Wait for resource selector computation to complete with exponential backoff
+    let retries = 0;
+    const maxRetries = 5;
+    let resources;
+    
+    while (retries < maxRetries) {
+      resources = await api.GET("/v1/deployments/{deploymentId}/resources", {
+        params: { path: { deploymentId } },
+      });
+      
+      if (resources.response.status === 200 && resources.data?.resources?.length > 0) {
+        break;
+      }
+      
+      await page.waitForTimeout(1000 * Math.pow(2, retries));
+      retries++;
+    }

463-463: Remove console.log statement

Debug statements should not be committed in test code.

-      console.log(resource);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d9bb587 and fb12454.

📒 Files selected for processing (5)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts (1 hunks)
  • e2e/api/schema.ts (7 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/deployments.spec.yaml (1 hunks)
  • openapi.v1.json (6 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
  • apps/webservice/src/app/api/v1/deployments/[deploymentId]/openapi.ts
  • e2e/tests/api/deployments.spec.yaml
  • openapi.v1.json
  • e2e/api/schema.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (3)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (3)
e2e/tests/api/deployments.spec.ts (3)

327-403: Fixed previous test isolation issue

Good job addressing the previous review comment. The test now creates and deletes a new resource instead of using imported entities, which prevents interference with subsequent tests.


405-495: The comment seems to contradict the test

The comment on lines 458-461 suggests that with a null selector, all resources should have release targets. If that's the case, shouldn't the endpoint return a 200 with all resources instead of a 400?

This appears to be testing a behavior where:

  1. A deployment with a null selector can't directly list its resources (error 400)
  2. But all resources in the system will show this deployment in their release targets

Is this the intended behavior? Please confirm.


455-455:

✅ Verification successful

Ensure this is the expected behavior

The test expects a 400 status code when getting resources for a deployment with a null selector. This seems unusual as most APIs would return a 200 with an empty array in this case.


🏁 Script executed:

#!/bin/bash
# Check if there's any documentation or implementation details about this behavior

# Check API implementation for the GET /deployments/{id}/resources endpoint
rg -A 10 "deployments/\[deploymentId\]/resources" --glob "apps/webservice/src/app/api/**/*.ts"

Length of output: 96


🏁 Script executed:

#!/bin/bash
# List all API files containing “resources” to locate the route implementation
rg -l "resources" --glob "apps/webservice/src/app/api/**/*.{ts,js}"

echo
echo "---- deployments.spec.ts (lines 440–480) ----"
# Show the test snippet to see how the endpoint is being called
sed -n '440,480p' e2e/tests/api/deployments.spec.ts

Length of output: 3037


🏁 Script executed:

#!/bin/bash
# Inspect the implementation of GET /v1/deployments/{deploymentId}/resources
sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/\[deploymentId\]/resources/route.ts

Length of output: 1816


No changes needed—400 response is by design
The GET /v1/deployments/{deploymentId}/resources handler explicitly returns a 400 status when deployment.resourceSelector is null, so the test’s expectation is correct.
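
For reference, the guard presumably reads along these lines (a simplified sketch of the behaviour described, not the verbatim handler):

import { NextResponse } from "next/server";

// Simplified sketch: a deployment without a resourceSelector cannot enumerate
// matched resources, so the handler answers 400 instead of an empty list.
const selectorGuard = (deployment: { resourceSelector: object | null }) =>
  deployment.resourceSelector == null
    ? NextResponse.json(
        { error: "Deployment has no resource selector" }, // message illustrative
        { status: 400 },
      )
    : null;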

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (4)
e2e/tests/api/environments.spec.ts (4)

74-74: Standardize time delays across all tests.

Multiple instances of hardcoded delays (10000ms) exist throughout the tests. This was previously flagged in a review but hasn't been addressed.

// Add at top of file
+const waitForSelectorProcessing = async (api, environmentId, maxWaitMs = 10000) => {
+  const start = Date.now();
+  while (Date.now() - start < maxWaitMs) {
+    // Poll every second
+    await new Promise(resolve => setTimeout(resolve, 1000));
+    // Add status check here if an endpoint is available
+  }
+};

// Then replace all hardcoded waits with:
-await new Promise((resolve) => setTimeout(resolve, 10000));
+await waitForSelectorProcessing(api, environmentId);

145-145: Standardize time delays across all tests.

Another instance of hardcoded delays. Consider implementing the helper function as suggested previously.


340-374: Update test to expect only 404 for deleted environments.

The GET handler for /v1/environments/{environmentId} explicitly returns a 404 when the environment isn't found, so we can remove the fallback for a 500.

-    // Accept either 404 (not found) or 500 (internal server error) as valid responses
-    expect([404, 500]).toContain(getResponse.response.status);
+    // Expect 404 (not found) response for deleted environment
+    expect(getResponse.response.status).toBe(404);

433-433: Standardize time delays across all tests.

Another instance of hardcoded delays. Consider implementing the helper function as suggested previously.

🧹 Nitpick comments (2)
e2e/tests/api/environments.spec.ts (2)

85-89: Remove redundant error handling after assertions.

The throw statement after the expect assertion is redundant since the test would already fail if the assertion doesn't pass.

-    expect(receivedResource).toBeDefined();
-    if (!receivedResource) throw new Error("No resource found");
+    expect(receivedResource).toBeDefined();

Apply this pattern to other similar instances in the file (lines 100-101, 203-204, 218-219, 224-225).


426-430: Use consistent assertion patterns.

This block uses a conditional throw instead of the expect assertions used elsewhere in the file. For consistency:

-    if (
-      environmentResponse.response.status !== 200 ||
-      environmentResponse.data == null
-    )
-      throw new Error("Failed to create environment");
+    expect(environmentResponse.response.status).toBe(200);
+    expect(environmentResponse.data).toBeDefined();
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between fb12454 and 37f918f.

📒 Files selected for processing (2)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
  • e2e/tests/api/environments.spec.ts (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • e2e/tests/api/deployments.spec.ts
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/environments.spec.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (7)
e2e/tests/api/environments.spec.ts (7)

12-13: Code refactoring improves reusability.

Moving the yamlPath constant to module scope improves organization and allows it to be potentially reused across different tests without duplication.


43-74: Test refactoring aligns with modern practices.

The refactored test removes Playwright page dependency, adds explicit assertions, and uses native Promise-based timeout, making the test more maintainable and framework-agnostic.


75-108: Thorough validation of matched resources and release targets.

The additional assertions for resource identifiers, release target environment IDs, and deployment IDs provide comprehensive validation of the selector computation results, which is important for verifying the new queue-based architecture.


110-226: New test validates selector update functionality.

This comprehensive test ensures that environment selector updates work correctly with the new queue-based computation system. The test properly validates:

  1. Initial resource matching with QA selector
  2. Updating to a new selector targeting prod resources
  3. Verification of the updated selector's effects
  4. Release target validation

These validations are critical for ensuring that the queue-based selector computation behaves correctly when selectors are updated.


228-286: Test for null selector validates important edge case.

This test ensures that environment selectors can be unset, which is an important capability that might otherwise be overlooked. It correctly verifies:

  1. The API behavior when a selector is set to undefined (becomes null)
  2. The appropriate error response (400) when trying to get resources for an environment without a selector

This is particularly important given the architectural change to queue-based selector computation.


288-338: Test for environment metadata updates is comprehensive.

The test correctly validates that environment name and description can be updated, ensuring the API maintains this basic functionality while implementing the queue-based architecture changes.


376-442: Test for deleted resources correctly validates edge case.

This test ensures that deleted resources are not matched by environment selectors, which is important for maintaining data integrity in the queue-based architecture. The test creates a resource, deletes it, and verifies that an environment selector targeting that resource matches no resources.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (4)
apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1)

56-57: Consider more specific error responses.

Replacing the 409 conflict response with a generic 500 error reduces API specificity. While this may be necessary for the queue-based architecture, consider whether more granular error types could be maintained or added to help API consumers handle different error scenarios.

You might want to retain conflict scenarios as distinct from other server errors, for example:

  "500": {
    description: "Internal server error",
    content: {
      "application/json": {
        schema: {
          type: "object",
          properties: {
            error: { type: "string" },
+           code: { type: "string", enum: ["INTERNAL_ERROR", "VERSION_CONFLICT", "QUEUE_ERROR"] },
          },
        },
      },
    },
  },
e2e/tests/api/deployment-version.spec.ts (1)

29-57: Reduce duplication & tighten assertions

The toBeDefined() + manual throw guard pattern is repeated across every test. Playwright’s expect already fails the test when the value is undefined, so the explicit check & throw are redundant noise.

A concise, DRY approach is:

-const deploymentVersion = deploymentVersionResponse.data;
-expect(deploymentVersion).toBeDefined();
-if (!deploymentVersion) throw new Error("Deployment version not found");
+const { data: deploymentVersion } = deploymentVersionResponse;
+expect(deploymentVersion).toBeTruthy();              // fails test automatically when falsy

You could extract a tiny helper (assertPresent<T>(value): value is T) once and reuse, which will both read better and remove six-plus duplicated blocks.
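
A possible shape for that helper, sketched here (name and message are suggestions, not existing code in the repo):

import { expect } from "@playwright/test";

// Narrows T | null | undefined to T and fails the test with a clear message
// when the value is missing, replacing the expect + manual-throw pairs.
export function assertPresent<T>(
  value: T | null | undefined,
  message = "Expected value to be present",
): asserts value is T {
  expect(value, message).toBeTruthy();
}

// Usage:
//   const deploymentVersion = deploymentVersionResponse.data;
//   assertPresent(deploymentVersion, "Deployment version not found");
//   expect(deploymentVersion.status).toBe("building");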

apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (2)

73-99: Metadata replacement risks losing concurrent writes

The delete → insert pattern correctly replaces metadata, but it discards rows that another request may have written concurrently, leading to lost updates.

Safer pattern:

  1. SELECT ... FOR UPDATE the existing keys.
  2. Upsert only the diff (insert new / update existing / delete removed).

If full replacement is intended, document that the API is “last-write-wins” to avoid surprises.
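
A rough sketch of the diff-based approach inside the existing transaction (column names and the conflict target are assumptions; the point is the SELECT ... FOR UPDATE plus a keyed diff rather than delete-all + insert-all):

import { and, eq, inArray, sql } from "drizzle-orm";

// Sketch: lock the current metadata rows, delete only removed keys, and upsert
// the rest, instead of deleting and re-inserting everything.
const replaceMetadataByDiff = async (
  tx: any,
  SCHEMA: any,
  deploymentVersionId: string,
  incoming: Record<string, string>,
) => {
  const existing = await tx
    .select()
    .from(SCHEMA.deploymentVersionMetadata)
    .where(eq(SCHEMA.deploymentVersionMetadata.versionId, deploymentVersionId))
    .for("update");

  const staleKeys = existing
    .map((row: { key: string }) => row.key)
    .filter((key: string) => !(key in incoming));

  if (staleKeys.length > 0)
    await tx
      .delete(SCHEMA.deploymentVersionMetadata)
      .where(
        and(
          eq(SCHEMA.deploymentVersionMetadata.versionId, deploymentVersionId),
          inArray(SCHEMA.deploymentVersionMetadata.key, staleKeys),
        ),
      );

  if (Object.keys(incoming).length > 0)
    await tx
      .insert(SCHEMA.deploymentVersionMetadata)
      .values(
        Object.entries(incoming).map(([key, value]) => ({
          versionId: deploymentVersionId,
          key,
          value,
        })),
      )
      .onConflictDoUpdate({
        target: [
          SCHEMA.deploymentVersionMetadata.versionId,
          SCHEMA.deploymentVersionMetadata.key,
        ],
        set: { value: sql`excluded.value` },
      });
};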


136-136: HTTP status constant mismatch

INTERNAL_SERVER_ERROR from http-status is already the numeric value 500, which is what NextResponse.json(..., { status }) expects, so this works. Be consistent, though: elsewhere the code uses the literal 500. Consider a shared HttpStatus enum to avoid mixing styles.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 37f918f and 8b27c32.

📒 Files selected for processing (8)
  • apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts (1 hunks)
  • apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (4 hunks)
  • apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1 hunks)
  • apps/webservice/src/app/api/v1/openapi.ts (1 hunks)
  • e2e/api/schema.ts (10 hunks)
  • e2e/tests/api/deployment-version.spec.ts (1 hunks)
  • e2e/tests/api/deployment-version.spec.yaml (1 hunks)
  • openapi.v1.json (10 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
`**/*.{ts,tsx}`: **Note on Error Handling:** Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error...

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts
  • apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts
  • apps/webservice/src/app/api/v1/openapi.ts
  • apps/webservice/src/app/api/v1/deployment-versions/openapi.ts
  • e2e/tests/api/deployment-version.spec.ts
  • e2e/api/schema.ts
🧬 Code Graph Analysis (1)
apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/route.ts (3)
packages/db/src/common.ts (1)
  • takeFirstOrNull (15-20)
packages/db/src/schema/deployment-version.ts (2)
  • deploymentVersion (109-136)
  • deploymentVersionMetadata (160-176)
packages/events/src/index.ts (1)
  • getQueue (28-34)
🔇 Additional comments (21)
e2e/api/schema.ts (11)

529-545: New API endpoint for resource release targets looks good.

This adds a new API path /v1/resources/{resourceId}/release-targets with a GET operation to support the queue-based architecture for computing release targets.


920-921: Properly typed metadata dictionary in DeploymentVersion.

The metadata property now has a more specific type definition where values are constrained to strings, which provides better type safety.


922-923: Added deployment version status tracking.

The new status enum field with values "building", "ready", and "failed" enables tracking the lifecycle state of deployment versions in the new queue-based architecture.


1294-1315: Enhanced error handling for updateDeploymentVersion.

Adding explicit 404 and 500 error responses improves the API contract by clearly documenting possible error scenarios when updating deployment versions.


1350-1350: Corrected response code for upsertDeploymentVersion.

Changed success response code from 200 to 201, which is more semantically appropriate for resource creation operations according to REST conventions.


1358-1368: Updated error handling for upsertDeploymentVersion.

Removed the conflict (409) response and replaced it with a more general internal server error (500) response. This aligns with how conflicts are now handled in the queue-based processing approach.


1517-1533: Enhanced updateDeployment request body schema.

The request body now uses an explicit inline object with optional fields, including a resourceSelector that allows arbitrary properties. This provides more flexibility for the queue-based selector computation.


1647-1647: Updated resource response types for environment and deployment queries.

The response types for getResourcesForDeployment and getResourcesForEnvironment now return arrays of full Resource schemas instead of partial inline objects, providing more complete information to clients.

Also applies to: 1842-1842


1112-1118: Added ReleaseTarget schema.

This new schema properly defines the structure of release targets with references to associated resources, environments, and deployments, supporting the new queue-based release target computation.


3082-3104: Implemented getReleaseTargets operation.

The operation provides a clean API surface for retrieving release targets associated with a resource, which is essential for the new queue-based architecture.


1725-1727: Updated resourceSelector in createDeployment.

The resourceSelector field now allows for arbitrary properties, which provides greater flexibility when specifying resource selectors during deployment creation.

apps/webservice/src/app/api/v1/deployment-versions/[deploymentVersionId]/openapi.ts (1)

61-86: Good addition of explicit error response schemas.

The addition of 404 and 500 error responses with clear schemas improves the API documentation and aligns with best practices for RESTful APIs. This change supports the new queue-based architecture by properly documenting potential failure modes during asynchronous processing.

apps/webservice/src/app/api/v1/openapi.ts (1)

132-136: Improved metadata type safety and added deployment version status.

These changes enhance the DeploymentVersion schema by:

  1. Restricting metadata values to strings, which provides better type safety
  2. Adding a status enum property that reflects the deployment version lifecycle ("building", "ready", "failed")

The status field is particularly important for supporting the new queue-based selector computation architecture, as it allows tracking the processing state of deployment versions.
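
For context, the described additions would look roughly like this in the openapi.ts schema objects (a sketch based on this summary, not the exact diff):

// Sketch of the DeploymentVersion schema additions described above.
const deploymentVersionSchemaAdditions = {
  // metadata values are narrowed to strings
  metadata: { type: "object", additionalProperties: { type: "string" } },
  // lifecycle state used by the queue-based processing
  status: { type: "string", enum: ["building", "ready", "failed"] },
} as const;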

e2e/tests/api/deployment-version.spec.yaml (1)

1-30: Well-structured test specification for deployment versions.

This YAML specification provides a complete test environment with interconnected system, resource, deployment, and environment entities. The resource selector configuration in the environment properly tests the core functionality being implemented in this PR.

The use of template variables with {{ prefix }} ensures test isolation and reusability.

apps/webservice/src/app/api/v1/deployment-versions/openapi.ts (1)

48-48: Appropriate status code change for resource creation.

Changing the success response from 200 to 201 is more semantically correct for resource creation operations, following HTTP standards where 201 specifically indicates a resource was created.

e2e/tests/api/deployment-version.spec.ts (2)

59-80: Missing negative-path tests

The suite only verifies the happy paths. Consider adding cases for:

  • 409 conflict on duplicate {deploymentId, tag}
  • 400 validation errors (e.g. invalid status)
  • 404 when deploymentId is unknown

These will protect against future regressions in the new queue-based flow.


106-153: Assert that side-effects happened

Updating name / tag / metadata implicitly triggers the new selector-computation queues. The test currently checks only HTTP response fields. Add an assertion that the relevant BullMQ job exists (or that an event was emitted), otherwise the asynchronous flow could silently break.

Example (pseudo):

await expect(
  getQueue(Channel.ComputeDeploymentResourceSelector).getJob(deploymentVersion.id),
).resolves.not.toBeNull();

This keeps the E2E test aligned with the PR’s primary objective of “use queue for selector compute”.

openapi.v1.json (4)

297-327: Spec and implementation diverge on 409 vs 500

The PATCH /deployment-versions/{id} handler can still raise a uniqueness violation (e.g. duplicate tag) which the DB surfaces as HTTP 409 in other endpoints. The spec now lists only 404/500. Either:

  1. Keep 409 in the spec and map the DB error → 409 in the handler, or
  2. Ensure the handler translates duplicates to 500 (less ideal).

Synchronise to keep clients predictable.
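
If the first option is taken, the handler-side mapping could be as simple as the sketch below ("23505" is Postgres's standard unique_violation code; the code property on the thrown error depends on the driver and is an assumption):

import { NextResponse } from "next/server";

// Sketch: translate a duplicate {deploymentId, tag} into a 409 so the handler
// matches a spec that still documents the conflict case.
const toConflictResponse = (error: unknown) =>
  (error as { code?: string } | null)?.code === "23505"
    ? NextResponse.json(
        { error: "A deployment version with this tag already exists" },
        { status: 409 },
      )
    : null;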


390-415: Good: 201 for resource creation

Changing POST /v1/deployment-versions from 200 to 201 aligns with HTTP semantics.


4521-4533: metadata value type narrowed

DeploymentVersion.metadata is now string-only. Confirm that callers weren’t persisting JSON blobs before; otherwise this is a backwards-incompatible API change.

If the backend still stores text values only, 👍; otherwise reconsider typing as string | number | boolean.


5027-5050: ReleaseTarget schema looks solid

The added reusable schema is well-structured and referenced correctly by the new endpoint.

Comment on lines +155 to +233
test("should update deployment version status to ready", async ({ api }) => {
const deploymentVersionResponse = await api.POST(
"/v1/deployment-versions",
{
body: {
tag: faker.string.alphanumeric(10),
deploymentId: importedEntities.deployments[0].id,
status: "building",
},
},
);

expect(deploymentVersionResponse.response.status).toBe(201);
const deploymentVersion = deploymentVersionResponse.data;
expect(deploymentVersion).toBeDefined();
if (!deploymentVersion) throw new Error("Deployment version not found");

expect(deploymentVersion.status).toBe("building");

const updatedDeploymentVersionResponse = await api.PATCH(
`/v1/deployment-versions/{deploymentVersionId}`,
{
params: {
path: { deploymentVersionId: deploymentVersion.id },
},
body: {
status: "ready",
},
},
);

expect(updatedDeploymentVersionResponse.response.status).toBe(200);
const updatedDeploymentVersion = updatedDeploymentVersionResponse.data;
expect(updatedDeploymentVersion).toBeDefined();
if (!updatedDeploymentVersion)
throw new Error("Deployment version not found");

expect(updatedDeploymentVersion.status).toBe("ready");
});

test("should update deployment version status to failed", async ({ api }) => {
const deploymentVersionResponse = await api.POST(
"/v1/deployment-versions",
{
body: {
tag: faker.string.alphanumeric(10),
deploymentId: importedEntities.deployments[0].id,
status: "building",
},
},
);

expect(deploymentVersionResponse.response.status).toBe(201);
const deploymentVersion = deploymentVersionResponse.data;
expect(deploymentVersion).toBeDefined();
if (!deploymentVersion) throw new Error("Deployment version not found");

expect(deploymentVersion.status).toBe("building");

const updatedDeploymentVersionResponse = await api.PATCH(
`/v1/deployment-versions/{deploymentVersionId}`,
{
params: {
path: { deploymentVersionId: deploymentVersion.id },
},
body: {
status: "failed",
},
},
);

expect(updatedDeploymentVersionResponse.response.status).toBe(200);
const updatedDeploymentVersion = updatedDeploymentVersionResponse.data;
expect(updatedDeploymentVersion).toBeDefined();
if (!updatedDeploymentVersion)
throw new Error("Deployment version not found");

expect(updatedDeploymentVersion.status).toBe("failed");
});

🛠️ Refactor suggestion

Race condition risk when polling status

Both status-transition tests (building → ready/failed) immediately PATCH and then assert the returned status value. In the live system, a concurrent worker could change the status between the POST and the PATCH, causing flaky tests.

Mitigation:

  1. Capture the updatedAt (or version row lock) before the PATCH and assert it increased.
  2. Or poll until the PATCH completes successfully and the queue finishes processing.

This will make the suite deterministic when workers run in parallel CI.
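
One way to implement the polling variant without assuming any particular read endpoint is a small helper in the spec file; the helper below is a sketch, not part of the existing suite:

// Retry an async check until it passes or the deadline expires, instead of
// asserting immediately after the PATCH returns.
const waitFor = async (
  check: () => Promise<boolean>,
  { timeoutMs = 30_000, intervalMs = 1_000 } = {},
) => {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await check()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("condition not met within timeout");
};

Each status assertion can then be wrapped in waitFor, re-reading the version through whichever read endpoint the API exposes until the expected status (and, where relevant, the expected queue side-effects) are observed.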

Comment on lines +67 to +72
if (deploymentVersion == null)
return NextResponse.json(
{ error: "Deployment version not found" },
{ status: NOT_FOUND },
);


🛠️ Refactor suggestion

Early return returns the wrong type

return NextResponse.json(...) inside the transaction returns a NextResponse, but the surrounding function later treats the result as a DeploymentVersion object. This will surface as a runtime error when the branch is taken (even if currently unreachable).

Prefer throwing a typed error and handling it outside the transaction, or simply remove this duplicate 404 as the pre-query already guards for “not found”.
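
A minimal sketch of the typed-error approach; handlePatch and runUpdateTransaction are placeholders standing in for the existing handler and its transaction block:

import { NextResponse } from "next/server";

const NOT_FOUND = 404;

// Thrown inside the transaction when the row is missing, so the callback's
// return type stays DeploymentVersion rather than DeploymentVersion | NextResponse.
class NotFoundError extends Error {}

const handlePatch = async (runUpdateTransaction: () => Promise<unknown>) => {
  try {
    const deploymentVersion = await runUpdateTransaction();
    return NextResponse.json(deploymentVersion);
  } catch (e) {
    // The 404 mapping happens once, outside the transaction.
    if (e instanceof NotFoundError)
      return NextResponse.json({ error: e.message }, { status: NOT_FOUND });
    throw e;
  }
};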

Comment on lines 58 to +66
try {
const deploymentVersion = await ctx.db
.update(SCHEMA.deploymentVersion)
.set(body)
.where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
.returning()
.then(takeFirst);

if (Object.keys(body.metadata ?? {}).length > 0)
await ctx.db
const deploymentVersion = await ctx.db.transaction(async (tx) => {
const deploymentVersion = await ctx.db
.update(SCHEMA.deploymentVersion)
.set(body)
.where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
.returning()
.then(takeFirstOrNull);


⚠️ Potential issue

Use the transaction handle for all DB writes

Inside the ctx.db.transaction callback you call ctx.db.update(...) instead of tx.update(...).
If the update succeeds but later tx.insert(...) fails, the update will still be committed because it is executed outside the transaction boundary – violating atomicity.

-const deploymentVersion = await ctx.db
+const deploymentVersion = await tx
   .update(SCHEMA.deploymentVersion)

Repeat for any additional statements in this block.

📝 Committable suggestion


Suggested change
try {
const deploymentVersion = await ctx.db
.update(SCHEMA.deploymentVersion)
.set(body)
.where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
.returning()
.then(takeFirst);
if (Object.keys(body.metadata ?? {}).length > 0)
await ctx.db
const deploymentVersion = await ctx.db.transaction(async (tx) => {
const deploymentVersion = await ctx.db
.update(SCHEMA.deploymentVersion)
.set(body)
.where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
.returning()
.then(takeFirstOrNull);
try {
const deploymentVersion = await ctx.db.transaction(async (tx) => {
const deploymentVersion = await tx
.update(SCHEMA.deploymentVersion)
.set(body)
.where(eq(SCHEMA.deploymentVersion.id, deploymentVersionId))
.returning()
.then(takeFirstOrNull);

Comment on lines +102 to +104
const shouldTrigger =
deploymentVersion.status === DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);

⚠️ Potential issue

Avoid duplicate Ready events

shouldTrigger fires whenever the resulting status is Ready, even if it was already Ready.
This will enqueue the same deploymentVersion on every PATCH that touches any other field.

-const shouldTrigger = deploymentVersion.status === DeploymentVersionStatus.Ready;
+const shouldTrigger =
+  deploymentVersion.status === DeploymentVersionStatus.Ready &&
+  prevDeploymentVersion.status !== DeploymentVersionStatus.Ready;

Prevents unnecessary queue traffic and downstream work.

📝 Committable suggestion


Suggested change
const shouldTrigger =
deploymentVersion.status === DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);
const shouldTrigger =
deploymentVersion.status === DeploymentVersionStatus.Ready &&
prevDeploymentVersion.status !== DeploymentVersionStatus.Ready;
if (!shouldTrigger) return NextResponse.json(deploymentVersion);


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
e2e/tests/api/workspace.spec.ts (2)

8-18: Workspace retrieval by ID test looks good, consider adding more assertions.

The test correctly verifies fetching a workspace by ID and checks the response status and workspace ID. Consider expanding assertions to validate other workspace properties like name and slug for more comprehensive validation.

  expect(workspaceResponse.response.status).toBe(200);
  expect(workspaceResponse.data?.id).toBe(workspace.id);
+ expect(workspaceResponse.data?.name).toBe(workspace.name);
+ expect(workspaceResponse.data?.slug).toBe(workspace.slug);

8-33: Consider adding tests for error cases.

The current tests only cover the happy path. Consider adding tests for error scenarios such as:

  1. Fetching a non-existent workspace ID
  2. Fetching with an invalid workspace slug
  3. Testing authorization failures

This would make your test suite more robust and comprehensive.

test("should return 404 for non-existent workspace ID", async ({ api }) => {
  const nonExistentId = "00000000-0000-0000-0000-000000000000";
  const response = await api.GET("/v1/workspaces/{workspaceId}", {
    params: {
      path: { workspaceId: nonExistentId },
    },
  });

  expect(response.response.status).toBe(404);
});

test("should return 404 for non-existent workspace slug", async ({ api }) => {
  const nonExistentSlug = "non-existent-workspace";
  const response = await api.GET("/v1/workspaces/slug/{workspaceSlug}", {
    params: {
      path: { workspaceSlug: nonExistentSlug },
    },
  });

  expect(response.response.status).toBe(404);
});
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8b27c32 and 30277b7.

📒 Files selected for processing (1)
  • e2e/tests/api/workspace.spec.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/workspace.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/workspace.spec.ts (2)
e2e/tests/fixtures.ts (1)
  • test (11-26)
packages/db/src/schema/workspace.ts (1)
  • workspace (18-27)
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/workspace.spec.ts (1)

20-33: Workspace retrieval by slug test is well implemented.

The test for fetching a workspace by slug is correctly implemented with proper assertions for status code and matching workspace ID.


import { test } from "../fixtures";

const yamlPath = path.join(__dirname, "workspace.spec.yaml");

⚠️ Potential issue

Unused variable should be removed or utilized.

The yamlPath constant is defined but never used in the test file. Either remove it if it's unnecessary or utilize it for its intended purpose.

-const yamlPath = path.join(__dirname, "workspace.spec.yaml");
📝 Committable suggestion


Suggested change
const yamlPath = path.join(__dirname, "workspace.spec.yaml");


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
apps/event-worker/src/utils/with-mutex.ts (1)

57-64: Expose TTL / expiry options to avoid indefinitely held locks

redis-semaphore supports custom lockTimeout and acquireTimeout.
Without overriding the defaults, long-running jobs risk exceeding the default TTL, leading to double-execution, or—conversely—jobs that crash after the lock is obtained may leave the lock dangling until the TTL expires.

Consider extending MutexOptions with optional ttlMs / acquireTimeoutMs and pass them to the RedisMutex constructor:

-type MutexOptions = {
+type MutexOptions = {
   getDependentLockKeys?: () => Promise<string[]>;
+  /** How long the lock should live in Redis (ms). Defaults to library default. */
+  ttlMs?: number;
+  /** How long to wait when calling `acquire` (ms). Defaults to 0 for tryAcquire. */
+  acquireTimeoutMs?: number;
 };
 ...
-  const primaryMutex = new RedisMutex(redis, key);
+  const primaryMutex = new RedisMutex(redis, key, {
+    lockTimeout: opts.ttlMs,
+    acquireTimeout: opts.acquireTimeoutMs,
+  });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 30277b7 and e64e6d5.

📒 Files selected for processing (1)
  • apps/event-worker/src/utils/with-mutex.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/event-worker/src/utils/with-mutex.ts
⏰ Context from checks skipped due to timeout of 90000ms (5)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
apps/event-worker/src/utils/with-mutex.ts (1)

71-78:

❓ Verification inconclusive

Sort dependent lock keys to guarantee deterministic lock-ordering

The comment states that lock ordering prevents deadlocks, but the current implementation keeps the order returned by getDependentLockKeys().
If two workers ask for the same lock set in different orders (A → B vs B → A), they can still contend indefinitely. Sorting the keys before acquisition guarantees a global ordering and eliminates this failure mode.

-      const dependentLockKeys = await opts.getDependentLockKeys();
+      // Ensure a deterministic acquisition order to avoid dead-locks
+      const dependentLockKeys = (await opts.getDependentLockKeys()).sort();

I’ll inspect how getDependentLockKeys and its return type are defined to confirm whether sorting is applicable.

Comment on lines 76 to 85
      const acquiredLocks = await Promise.all(
        dependentMutexes.map((m) => m.tryAcquire()),
      );

      // If any dependent lock couldn't be acquired, release primary and return
      if (acquiredLocks.some((acquired) => !acquired)) {
        await primaryMutex.release();
        return [false, null];
      }
    }

⚠️ Potential issue

Release only the locks that were actually acquired & keep true LIFO order

  1. dependentMutexes may contain locks that failed tryAcquire().
    Calling release() on an un-acquired mutex can throw (redis-semaphore rejects with “Lock not owned”).
  2. The code advertises “LIFO lock release”, but the Promise.all call keeps the original order and releases in parallel.

Track the subset that succeeded and release them sequentially in reverse order to match the contract:

-      const acquiredLocks = await Promise.all(
-        dependentMutexes.map((m) => m.tryAcquire()),
-      );
+      const acquiredStates = await Promise.all(
+        dependentMutexes.map((m) => m.tryAcquire()),
+      );
+
+      // Filter the actually-acquired mutexes for proper cleanup later
+      dependentMutexes = dependentMutexes.filter((_, idx) => acquiredStates[idx]);

@@
-    // Release all locks in reverse order
-    await Promise.all(dependentMutexes.map((m) => m.release()));
-    await primaryMutex.release();
+    // Release dependents last-in-first-out to respect nested critical sections
+    for (let i = dependentMutexes.length - 1; i >= 0; i--) {
+      try {
+        await dependentMutexes[i].release();
+      } catch (e) {
+        console.error("Failed to release dependent mutex", e);
+      }
+    }
+    await primaryMutex.release();

Also applies to: 93-95


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
e2e/tests/api/deployments.spec.ts (3)

151-221: Consider uncommenting release target verification

The test creates a deployment and verifies resource matching, but the release target verification code (lines 206-221) is commented out. Since the PR introduces queue-based architecture for release targets computation, having these verifications active would provide more comprehensive test coverage.

If the release target API endpoints are ready, consider uncommenting and updating these verification steps to validate the queue-based release target computation.
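
A hedged sketch of what that verification could look like inside the existing test, assuming the new GET /v1/resources/{resourceId}/release-targets endpoint added in this PR, with api, expect, resource, and deployment already in scope; the exact response shape is an assumption:

// Assumed shape: an array of release targets, each referencing its deployment.
const releaseTargets = await api.GET(
  "/v1/resources/{resourceId}/release-targets",
  { params: { path: { resourceId: resource.id } } },
);

expect(releaseTargets.response.status).toBe(200);
expect(
  (releaseTargets.data ?? []).some(
    (rt: { deployment?: { id: string } }) =>
      rt.deployment?.id === deployment.id,
  ),
).toBe(true);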


459-459: Remove debug console.log statement

The console.log statement will clutter test output logs.

-      console.log(resource);

254-254: Inconsistent timeout values for asynchronous operations

The test uses different timeout values (5000ms vs 10000ms) for waiting for the same type of asynchronous operation to complete.

Standardize timeout values by using a constant:

+// Define a constant at the top of the file
+const SELECTOR_COMPUTATION_TIMEOUT = 10_000;

// Then use it consistently throughout the tests
-    await page.waitForTimeout(5_000);
+    await page.waitForTimeout(SELECTOR_COMPUTATION_TIMEOUT);

Better yet, implement the polling approach suggested earlier for all wait operations.

Also applies to: 287-287, 380-380, 433-433, 444-444

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e64e6d5 and 5008174.

📒 Files selected for processing (2)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts (1 hunks)
  • e2e/tests/api/deployments.spec.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • apps/event-worker/src/workers/compute-deployment-resource-selector.ts
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • e2e/tests/api/deployments.spec.ts
🧬 Code Graph Analysis (1)
e2e/tests/api/deployments.spec.ts (4)
e2e/tests/fixtures.ts (1)
  • test (11-26)
e2e/api/yaml-loader.ts (3)
  • ImportedEntities (54-84)
  • importEntitiesFromYaml (94-252)
  • cleanupImportedEntities (259-307)
packages/db/src/schema/workspace.ts (1)
  • workspace (18-27)
packages/db/src/schema/resource.ts (1)
  • resource (59-87)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: Typecheck
  • GitHub Check: Lint
  • GitHub Check: build (linux/amd64)
  • GitHub Check: build (linux/amd64)
🔇 Additional comments (1)
e2e/tests/api/deployments.spec.ts (1)

327-399: Good test isolation implementation

This test properly addresses the previous review comment by creating a dedicated test resource and deleting it, rather than affecting shared test resources. This approach ensures proper test isolation.

},
});

await page.waitForTimeout(10_000);

🛠️ Refactor suggestion

Consider using a polling approach instead of fixed timeout

A fixed 10-second timeout makes the test flaky when queue processing takes longer than expected, and wastes time when it finishes sooner.

Replace the fixed timeout with a polling approach that checks for the expected condition:

-    await page.waitForTimeout(10_000);
+    // Poll until resources are matched or timeout after 30 seconds
+    let attempts = 0;
+    let resourcesMatched = false;
+    while (!resourcesMatched && attempts < 30) {
+      const checkResources = await api.GET(
+        "/v1/deployments/{deploymentId}/resources",
+        { params: { path: { deploymentId } } }
+      );
+      
+      if (checkResources.response.status === 200 && 
+          checkResources.data?.resources?.length > 0) {
+        resourcesMatched = true;
+      } else {
+        attempts++;
+        await page.waitForTimeout(1000);
+      }
+    }
📝 Committable suggestion


Suggested change
await page.waitForTimeout(10_000);
// Poll until resources are matched or timeout after 30 seconds
let attempts = 0;
let resourcesMatched = false;
while (!resourcesMatched && attempts < 30) {
const checkResources = await api.GET(
"/v1/deployments/{deploymentId}/resources",
{ params: { path: { deploymentId } } }
);
if (checkResources.response.status === 200 &&
checkResources.data?.resources?.length > 0) {
resourcesMatched = true;
} else {
attempts++;
await page.waitForTimeout(1000);
}
}

Comment on lines +451 to +457
expect(resources.response.status).toBe(400);

/**
* Since the deployment selector is set to null
* all resources should actually have release targets now
* since the deployment no longer has a specific scope
*/

💡 Verification agent

🧩 Analysis chain

Expected 400 error seems inconsistent with the comment

The test expects a 400 status code when querying for resources, but then the comment (lines 454-457) indicates resources should have release targets - suggesting the deployment still exists but just doesn't match resources.

Verify if status code 400 is actually the expected response when a deployment has a null resource selector. Could this instead be a status 200 with an empty resources array?


🏁 Script executed:

#!/bin/bash
# Check how null selector is handled in the API implementation

# Check deployment resources API route handling
rg -A 10 'deployments.*resources' --glob "apps/webservice/src/**/*.ts"

Length of output: 1277


🏁 Script executed:

#!/bin/bash
set -e

echo "Files under apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources:"
ls -1 apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources || true

echo -e "\n---- openapi.ts (first 200 lines) ----"
sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/openapi.ts || true

echo -e "\nChecking for route.ts and its content (first 200 lines) ----"
if [ -f apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/route.ts ]; then
  echo "Found route.ts:"
  sed -n '1,200p' apps/webservice/src/app/api/v1/deployments/[deploymentId]/resources/route.ts
else
  echo "route.ts not found"
fi

Length of output: 4073


Clarify test comment for null resource selector error

The test correctly expects a 400 response when resourceSelector is null, but the existing comment suggests otherwise. Update the comment to reflect that the 400 error is intentional:

• File: e2e/tests/api/deployments.spec.ts
• Lines: 451–457

-    /**
-     * Since the deployment selector is set to null
-     * all resources should actually have release targets now
-     * since the deployment no longer has a specific scope
-     */
+    // Expect 400 because the deployment’s `resourceSelector` is null,
+    // which the API treats as a bad request indicating all workspace resources apply.
📝 Committable suggestion


Suggested change
expect(resources.response.status).toBe(400);
/**
* Since the deployment selector is set to null
* all resources should actually have release targets now
* since the deployment no longer has a specific scope
*/
expect(resources.response.status).toBe(400);
// Expect 400 because the deployment’s `resourceSelector` is null,
// which the API treats as a bad request indicating all workspace resources apply.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (2)

50-50: Incorrect property name on joined result

rt.release_target.id assumes Drizzle returns an underscored alias. Joins in Drizzle return an object shaped like { releaseTarget: { … } } by default, so this will be undefined, causing all releaseTargetId values to be undefined.

-        releaseTargetId: rt.release_target.id,
+        releaseTargetId: rt.releaseTarget.id,

31-44: ⚠️ Potential issue

Prevent null-selector crash in selector queries

The policyTarget.resourceSelector, deploymentSelector, and environmentSelector fields can be null (from schema definition), but the code doesn't handle this case. If any selector is null, the query will likely throw an error.

-        selector()
-          .query()
-          .resources()
-          .where(policyTarget.resourceSelector)
-          .sql(),
-        selector()
-          .query()
-          .deployments()
-          .where(policyTarget.deploymentSelector)
-          .sql(),
-        selector()
-          .query()
-          .environments()
-          .where(policyTarget.environmentSelector)
-          .sql(),
+        policyTarget.resourceSelector
+          ? selector()
+              .query()
+              .resources()
+              .where(policyTarget.resourceSelector)
+              .sql()
+          : selector().query().resources().sql(true),
+        policyTarget.deploymentSelector
+          ? selector()
+              .query()
+              .deployments()
+              .where(policyTarget.deploymentSelector)
+              .sql()
+          : selector().query().deployments().sql(true),
+        policyTarget.environmentSelector
+          ? selector()
+              .query()
+              .environments()
+              .where(policyTarget.environmentSelector)
+              .sql()
+          : selector().query().environments().sql(true),
🧹 Nitpick comments (1)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1)

119-131: Consider adding more detailed error logging

While row-lock errors are properly handled with job requeuing, other errors are simply propagated without additional context. Adding more detailed logging would help with debugging.

    } catch (e: any) {
      const isRowLocked = e.code === "55P03";
      if (isRowLocked) {
        await getQueue(Channel.ComputePolicyTargetReleaseTargetSelector).add(
          job.name,
          job.data,
          { deduplication: { id: job.data.id, ttl: 500 } },
        );
        return;
      }

+     console.error(`Error computing policy target release target selector: ${e.message}`, {
+       policyTargetId: job.data.id,
+       error: e,
+     });
      throw e;
    }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5f4d32a and 6fe0ad9.

📒 Files selected for processing (1)
  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.{ts,tsx}: Note on Error Handling:
Avoid strict enforcement of try/catch blocks. Code may use early returns, Promise chains (.then().catch()), or other patterns for error handling. These are acceptable as long as they maintain clarity and predictability.

  • apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts
🧬 Code Graph Analysis (1)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (5)
packages/db/src/common.ts (1)
  • Tx (22-22)
packages/db/src/schema/policy.ts (3)
  • policyTarget (64-78)
  • PolicyTarget (186-186)
  • policy (42-62)
packages/db/src/selectors/index.ts (1)
  • selector (13-13)
packages/events/src/index.ts (2)
  • createWorker (10-25)
  • getQueue (28-34)
packages/db/src/client.ts (1)
  • db (15-15)
⏰ Context from checks skipped due to timeout of 90000ms (4)
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Typecheck
  • GitHub Check: build (linux/amd64)
  • GitHub Check: Lint
🔇 Additional comments (2)
apps/event-worker/src/workers/compute-policy-target-release-target-selector.ts (2)

70-102: Clean transaction with proper row locking

The transaction uses row-level locking with FOR UPDATE NOWAIT, which handles concurrent updates by failing fast when rows are already locked. Combined with job re-queuing on lock conflicts, this is a robust pattern for ensuring data consistency.
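
For readers skimming the review, the core of the pattern looks roughly like this (a condensed sketch, not the actual worker; import paths and table names are illustrative, and it assumes the installed Drizzle version supports the noWait lock config):

import { eq } from "drizzle-orm";

import { db } from "@ctrlplane/db/client"; // illustrative import path
import * as schema from "@ctrlplane/db/schema"; // illustrative import path

export const recomputePolicyTargetReleaseTargets = (policyTargetId: string) =>
  db.transaction(async (tx) => {
    // Lock the rows about to be rewritten; NOWAIT makes a concurrent worker fail
    // immediately with SQLSTATE 55P03 instead of blocking on the lock.
    await tx
      .select()
      .from(schema.computedPolicyTargetReleaseTarget) // illustrative table name
      .where(
        eq(
          schema.computedPolicyTargetReleaseTarget.policyTargetId,
          policyTargetId,
        ),
      )
      .for("update", { noWait: true });

    // ...delete the stale associations and insert the recomputed set with tx...
  });

The worker's catch block then inspects the error code and re-enqueues the job on 55P03, as shown in the diff above.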


113-118: Good use of job deduplication

Using deduplication with a short TTL (500ms) for the downstream evaluation jobs is an effective way to prevent duplicate work while ensuring timely processing.

@adityachoudhari26 adityachoudhari26 merged commit cce1648 into main Apr 25, 2025
9 of 10 checks passed
@adityachoudhari26 adityachoudhari26 deleted the use-queue-for-selector-compute branch April 25, 2025 02:55