
Conversation

@jsbroks (Member) commented Nov 22, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced functionality for managing resource metadata and variables, including insertion and updating processes.
    • Added methods for dispatching jobs related to added and removed resources.
    • Implemented a new resource deletion process that handles associated events.
    • Enhanced querying capabilities for retrieving resources based on provider and workspace identifiers.
    • Added new utility functions for resource management.
    • Updated response structure for resource count in the API.
  • Bug Fixes

    • Enhanced logic for handling resource filters during updates to ensure accurate resource state reflection.
  • Refactor

    • Renamed methods for clarity in resource management.
    • Consolidated export statements for better accessibility of functionalities across modules.
    • Streamlined the codebase by removing outdated modules.
    • Modified internal logic for resource updates to improve data handling.

@coderabbitai bot (Contributor) commented Nov 22, 2024

Walkthrough

The pull request implements extensive modifications to the resource management system, including renaming functions for clarity, enhancing mutation procedures for resource updates and deletions, and introducing new functionalities for inserting resource metadata and variables. It also adds new methods for managing job dispatching related to resource additions and removals, while several files have been added or deleted to streamline the overall structure and functionality of resource handling.

Changes

  • packages/api/src/router/environment.ts: Renamed dispatchJobsForNewResources to dispatchJobsForAddedResources; enhanced the update and delete mutation procedures for better resource filtering and event handling.
  • packages/job-dispatch/src/events/triggers/resource-deleted.ts: Removed a cautionary comment from the getEventsForResourceDeleted function.
  • packages/job-dispatch/src/index.ts: Removed exports for ./new-resource.js and ./resource.js; added an export for ./resource/index.js.
  • packages/job-dispatch/src/new-resource.ts: Deleted file containing the dispatchJobsForNewResources function.
  • packages/job-dispatch/src/resource.ts: Deleted file containing resource management functions (e.g., upsertResources, deleteResources).
  • packages/job-dispatch/src/resource/delete.ts: Introduced the deleteResources function for resource deletion, including event handling.
  • packages/job-dispatch/src/resource/dispatch-resource.ts: Added the dispatchJobsForAddedResources and dispatchEventsForRemovedResources functions for job dispatching.
  • packages/job-dispatch/src/resource/index.ts: Added exports for all entities from upsert.js, delete.js, and dispatch-resource.js.
  • packages/job-dispatch/src/resource/insert-resource-metadata.ts: Introduced the insertResourceMetadata function for inserting resource metadata.
  • packages/job-dispatch/src/resource/insert-resource-variables.ts: Introduced the insertResourceVariables function for inserting resource variables with encryption.
  • packages/job-dispatch/src/resource/insert-resources.ts: Added functions for retrieving and inserting resources with conflict resolution.
  • packages/job-dispatch/src/resource/upsert.ts: Introduced the upsertResources function for inserting and updating resources, including job dispatching.
  • packages/job-dispatch/src/resource/utils.ts: Added utility functions for retrieving resources by provider and workspace ID.
  • apps/webservice/src/app/api/v1/resources/route.ts: Modified the POST handler's response structure to report the resource count using resources.all.length.
  • apps/pty-proxy/src/controller/agent-socket.ts: Updated the updateResource method to destructure an object containing an updated property from upsertResources.
  • apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts: Enhanced error handling in the PATCH handler after updating a resource, checking for null results.

Possibly related PRs

Warning

Rate limit exceeded

@adityachoudhari26 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 5 minutes and 41 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.


📥 Commits

Reviewing files that changed from the base of the PR and between 7ae4a38 and 959d44f.



@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 14

🧹 Outside diff range and nitpick comments (16)
packages/job-dispatch/src/resource/insert-resource-metadata.ts (1)

24-24: Consider batching large metadata insertions

For resources with large amounts of metadata, consider implementing batch processing to avoid potential performance issues or database constraints.

Consider implementing a batch size limit:

const BATCH_SIZE = 1000;
const batches = _.chunk(resourceMetadataValues, BATCH_SIZE);
for (const batch of batches) {
  await tx.insert(schema.resourceMetadata).values(batch);
}
packages/job-dispatch/src/index.ts (1)

25-25: Consider documenting breaking changes

The restructuring of resource-related exports (new-resource.js → resource/index.js) and function renaming (dispatchJobsForNewResources → dispatchJobsForAddedResources) constitute breaking changes that may affect package consumers.

Consider:

  1. Adding a CHANGELOG.md entry if not already present
  2. Bumping the major version number following semver
  3. Providing migration guidelines for users
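
For instance, a CHANGELOG entry covering these changes (function and path names are taken from this PR; the version heading is a placeholder) might read:

```markdown
## [Unreleased]

### Breaking changes

- Renamed `dispatchJobsForNewResources` to `dispatchJobsForAddedResources`.
- Removed the `./new-resource.js` and `./resource.js` entry points; import from
  `./resource/index.js` instead, which re-exports `upsert.js`, `delete.js`,
  and `dispatch-resource.js`.
```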
packages/job-dispatch/src/resource/insert-resource-variables.ts (2)

9-11: Consider strengthening type safety for variable values

The any type for the value field reduces type safety. Consider creating a more specific type or using generics to better type-check variable values.

+type VariableValue = string | number | boolean | object;
 export type ResourceWithVariables = Resource & {
-  variables?: Array<{ key: string; value: any; sensitive: boolean }>;
+  variables?: Array<{ key: string; value: VariableValue; sensitive: boolean }>;
 };

28-38: Consider batch size limits for database operations

The function processes all resources in a single transaction without considering batch size limits. This could lead to performance issues with large datasets.

Consider implementing batch processing:

const BATCH_SIZE = 1000;
const batches = _.chunk(resourceVariablesValues, BATCH_SIZE);
for (const batch of batches) {
  await tx
    .insert(schema.resourceVariable)
    .values(batch)
    .onConflictDoUpdate({
      target: [schema.resourceVariable.key, schema.resourceVariable.resourceId],
      set: buildConflictUpdateColumns(schema.resourceVariable, ["value", "sensitive"]),
    });
}
packages/api/src/router/environment.ts (2)

Line range hint 200-326: Consider extracting complex resource filter logic into separate functions

The resource filter update logic is complex with multiple nested conditions and database operations. Consider breaking it down into smaller, focused functions for better maintainability and testability:

  • handleResourceFilterUpdate(oldFilter, newFilter)
  • findAffectedResources(oldFilter, newFilter)
  • processRemovedResources(resources, otherEnvFilters)

Example refactor for the resource filter handling:

async function handleResourceFilterUpdate(
  db: Tx,
  oldEnv: Environment,
  newFilter: ResourceCondition,
) {
  if (!hasResourceFiltersChanged(oldEnv.resourceFilter, newFilter)) {
    return;
  }
  
  const { addedResources, removedResources } = await findAffectedResources(
    db,
    oldEnv,
    newFilter
  );
  
  await processRemovedResources(db, removedResources, oldEnv.system);
  await processAddedResources(db, addedResources, oldEnv.id);
}

Line range hint 327-333: Enhance error handling and logging

Two improvements needed:

  1. Replace console.log with proper structured logging
  2. Add error handling for the dispatch operation
-            await dispatchJobsForAddedResources(
-              ctx.db,
-              newResources.map((r) => r.id),
-              input.id,
-            );
-            console.log(
-              `Found ${newResources.length} new resources for environment ${input.id}`,
-            );
+            try {
+              await dispatchJobsForAddedResources(
+                ctx.db,
+                newResources.map((r) => r.id),
+                input.id,
+              );
+              ctx.logger.info('Resources added to environment', {
+                count: newResources.length,
+                environmentId: input.id
+              });
+            } catch (error) {
+              ctx.logger.error('Failed to dispatch jobs for added resources', {
+                error,
+                resourceCount: newResources.length,
+                environmentId: input.id
+              });
+              throw error;
+            }
packages/job-dispatch/src/resource/delete.ts (1)

34-39: Update the @param documentation to match the function signature

The JSDoc comment for deleteResources mentions @param resourceIds, but the function actually accepts resources: Resource[] as a parameter. Please update the documentation to accurately reflect the parameter name and description.
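
A sketch of the corrected doc comment is below; the `Resource` shape and the function body are illustrative stand-ins, not the real implementation in delete.ts:

```typescript
// Stand-in Resource type for illustration; the real type lives in @ctrlplane/db/schema.
type Resource = { id: string; identifier: string };

/**
 * Soft-deletes the given resources and dispatches the associated deletion events.
 * @param resources - The resource rows to delete (full objects, not just their IDs).
 * @returns The IDs of the deleted resources.
 */
const deleteResources = async (resources: Resource[]): Promise<string[]> =>
  resources.map((r) => r.id);
```

The key point is that the `@param` name and description now match the actual `resources: Resource[]` parameter.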

packages/job-dispatch/src/resource/utils.ts (1)

24-29: Extract common resource filtering logic into a helper function

The logic for filtering resources based on the deleted flag is repeated across multiple functions. Extracting this logic into a helper function will improve code maintainability and reduce duplication.

Consider adding a utility function like this:

const deletedFilter = (deleted: boolean) =>
  deleted ? isNotNull(schema.resource.deletedAt) : isNull(schema.resource.deletedAt);

Then update the functions accordingly:

 where(
   and(
     eq(schema.resource.providerId, providerId),
-    options.deleted
-      ? isNotNull(schema.resource.deletedAt)
-      : isNull(schema.resource.deletedAt),
+    deletedFilter(options.deleted),
   ),
 );
packages/job-dispatch/src/resource/upsert.ts (3)

75-79: Use try-catch instead of .catch() with await for consistency

While inside an async function, it's preferable to use try-catch blocks over chaining .catch() with await. This enhances readability and maintains consistent error-handling patterns throughout the codebase.

Refactor the code as follows:

 if (resources.deleted.length > 0) {
-  await deleteResources(tx, resources.deleted).catch((err) => {
-    log.error("Error deleting resources", { error: err });
-    throw err;
-  });
+  try {
+    await deleteResources(tx, resources.deleted);
+  } catch (err) {
+    log.error("Error deleting resources", { error: err });
+    throw err;
+  }
 }

102-104: Enhance error logging with additional context

When logging errors during the upsert operation, consider adding more contextual information such as workspaceId and resource identifiers. This will facilitate debugging and provide clearer insights into the error's origin.

Update the log statement:

 log.error("Error upserting resources", {
   error: err,
+  workspaceId,
+  resourceIdentifiers: resourcesToInsert.map((r) => r.identifier),
 });

61-72: Optimize resource difference calculation using Sets

The current method of finding added and removed resources uses array filtering, which can be inefficient for large datasets. Using Set data structures can improve performance by providing faster lookup times.

Refactor the code as follows:

 const changedEnvs = envsAfterInsert.map((env) => {
   const beforeEnv = envsBeforeInsert.find((e) => e.id === env.id);
-  const beforeResources = beforeEnv?.resources ?? [];
-  const afterResources = env.resources;
-
-  const removedResources = beforeResources.filter(
-    (br) => !afterResources.some((ar) => ar.id === br.id),
-  );
-  const addedResources = afterResources.filter(
-    (ar) => !beforeResources.some((br) => br.id === ar.id),
-  );
+  const beforeResourceIds = new Set(
+    (beforeEnv?.resources ?? []).map((r) => r.id)
+  );
+  const afterResourceIds = new Set(env.resources.map((r) => r.id));
+
+  const removedResourceIds = [...beforeResourceIds].filter(
+    (id) => !afterResourceIds.has(id)
+  );
+  const addedResourceIds = [...afterResourceIds].filter(
+    (id) => !beforeResourceIds.has(id)
+  );
+
+  const removedResources = (beforeEnv?.resources ?? []).filter((r) =>
+    removedResourceIds.includes(r.id)
+  );
+  const addedResources = env.resources.filter((r) =>
+    addedResourceIds.includes(r.id)
+  );

   return { ...env, removedResources, addedResources };
 });
packages/job-dispatch/src/resource/insert-resources.ts (1)

18-35: Add missing JSDoc comment for getResourcesByWorkspaceIdAndIdentifier.

The function getResourcesByWorkspaceIdAndIdentifier lacks a JSDoc comment, which reduces code readability and maintainability.

Consider adding a JSDoc comment like:

/**
 * Retrieves resources based on workspace ID and identifier pairs.
 * @param tx - Database transaction.
 * @param resources - Array of objects containing `workspaceId` and `identifier`.
 * @returns Promise resolving to an array of resources matching the given pairs.
 */
packages/job-dispatch/src/resource/dispatch-resource.ts (4)

32-34: Consider logging when environment is not found

Currently, if the environment is null, the function returns silently. Adding a log statement can help with debugging and tracking issues when an environment with the given envId is not found.

 if (environment == null) {
+  console.warn(`Environment with id ${envId} not found.`);
   return;
 }

72-80: Add handling for empty releases before creating job triggers

If releases is empty, the subsequent calls to create job triggers may not behave as expected. Ensure that you handle this scenario appropriately.

Consider adding logging or throwing an error if this is an unexpected state.

 if (releases.length === 0) {
+  console.warn('No releases found for deployments. Job triggers will not be created.');
   return;
 }

27-87: Ensure input validation for resourceIds

If resourceIds is empty, the function may proceed unnecessarily. Validate inputs early to avoid redundant processing.

 export async function dispatchJobsForAddedResources(
   db: Tx,
   resourceIds: string[],
   envId: string,
 ): Promise<void> {
+  if (resourceIds.length === 0) {
+    console.warn('No resource IDs provided. Exiting function.');
+    return;
+  }
   const environment = await getEnvironmentWithReleaseChannels(db, envId);
   // Rest of the code...

13-26: Ensure consistency in function declarations

Currently, some functions are defined using arrow functions assigned to constants, and others use the function keyword. For consistency and readability, consider standardizing function declarations.

For example, change getEnvironmentWithReleaseChannels to a named function:

-const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) =>
-  db.query.environment.findFirst({
+function getEnvironmentWithReleaseChannels(db: Tx, envId: string) {
+  return db.query.environment.findFirst({
     // Function body...
   });
+}

Similarly, adjust getEnvironmentDeployments accordingly.

Also applies to: 89-99

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between cd5c075 and 5511f21.

📒 Files selected for processing (13)
  • packages/api/src/router/environment.ts (2 hunks)
  • packages/job-dispatch/src/events/triggers/resource-deleted.ts (0 hunks)
  • packages/job-dispatch/src/index.ts (1 hunks)
  • packages/job-dispatch/src/new-resource.ts (0 hunks)
  • packages/job-dispatch/src/resource.ts (0 hunks)
  • packages/job-dispatch/src/resource/delete.ts (1 hunks)
  • packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
  • packages/job-dispatch/src/resource/index.ts (1 hunks)
  • packages/job-dispatch/src/resource/insert-resource-metadata.ts (1 hunks)
  • packages/job-dispatch/src/resource/insert-resource-variables.ts (1 hunks)
  • packages/job-dispatch/src/resource/insert-resources.ts (1 hunks)
  • packages/job-dispatch/src/resource/upsert.ts (1 hunks)
  • packages/job-dispatch/src/resource/utils.ts (1 hunks)
💤 Files with no reviewable changes (3)
  • packages/job-dispatch/src/events/triggers/resource-deleted.ts
  • packages/job-dispatch/src/new-resource.ts
  • packages/job-dispatch/src/resource.ts
✅ Files skipped from review due to trivial changes (1)
  • packages/job-dispatch/src/resource/index.ts
🔇 Additional comments (7)
packages/job-dispatch/src/resource/insert-resource-metadata.ts (2)

7-9: LGTM! Well-defined type with proper TypeScript practices

The type definition is clear, properly extends the base Resource type, and uses appropriate TypeScript constructs for the metadata field.


11-14: Consider adding input validation

The function should validate the metadata values before insertion. Consider adding checks for:

  • Maximum key/value string lengths
  • Invalid characters in keys
  • Maximum number of metadata entries per resource
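
A minimal validation sketch along those lines; the limits here are illustrative placeholders, not values from the codebase:

```typescript
// Illustrative limits only; these constants are not from the Ctrlplane codebase.
const MAX_KEY_LENGTH = 255;
const MAX_VALUE_LENGTH = 8192;
const MAX_ENTRIES_PER_RESOURCE = 100;

function validateMetadata(metadata: Record<string, string>): string[] {
  const errors: string[] = [];
  const entries = Object.entries(metadata);
  if (entries.length > MAX_ENTRIES_PER_RESOURCE)
    errors.push(`too many metadata entries (${entries.length})`);
  for (const [key, value] of entries) {
    if (key.length === 0 || key.length > MAX_KEY_LENGTH)
      errors.push(`invalid key length for "${key}"`);
    else if (/[^\w./-]/.test(key))
      errors.push(`invalid characters in key "${key}"`);
    if (value.length > MAX_VALUE_LENGTH)
      errors.push(`value too long for key "${key}"`);
  }
  return errors;
}
```

Returning the full list of errors (rather than throwing on the first one) lets the caller report every problem in a batch at once.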
packages/job-dispatch/src/index.ts (1)

25-25: Verify the migration of resource management functionality

The addition of ./resource/index.js export replaces the removed exports (./new-resource.js and ./resource.js). Let's verify that all previously exported functionality is still available through the new path.

✅ Verification successful

Migration of resource functionality appears complete and consistent

The verification shows that:

  • All resource functionality is now properly organized under resource/ directory with clear modular exports (upsert.js, delete.js, dispatch-resource.js)
  • The function rename from dispatchJobsForNewResources to dispatchJobsForAddedResources is consistently applied across the codebase
  • References to resource.js in db schema files are part of a different package (packages/db) and unrelated to this refactoring
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify that all resource-related functionality is properly exported
# through the new resource/index.js file

# Check the exports in the new resource/index.js
echo "Checking new exports in resource/index.js:"
ast-grep --pattern 'export $_' packages/job-dispatch/src/resource/index.ts

# Look for any remaining references to old paths
echo -e "\nChecking for references to old paths:"
rg -l "new-resource|/resource\.js" --type ts

Length of output: 485


Script:

#!/bin/bash
# Let's check the content of the old files in git history to compare with new structure

# First, get the content of resource/index.ts to see current exports
echo "Current exports in resource/index.ts:"
cat packages/job-dispatch/src/resource/index.ts

# Check references in db schema files to understand usage
echo -e "\nChecking usage in schema files:"
rg "new-resource|/resource\.js" --type ts -C 2 packages/db/src/schema/

# Look for dispatchJobsForNewResources references to verify rename
echo -e "\nChecking for function rename references:"
rg "dispatchJobsForNewResources|dispatchJobsForAddedResources" --type ts

Length of output: 2577

packages/job-dispatch/src/resource/insert-resource-variables.ts (1)

13-16: Verify resource existence before inserting variables

The function assumes that all resource IDs are valid. Consider adding a verification step to ensure resources exist before inserting their variables.

✅ Verification successful

Based on the gathered information, I can now provide a final response:

Resource existence is already verified in the upsert flow

The code is actually safe as the insertResourceVariables function is called within a transaction where resources are first inserted via insertResources. The insertResources function includes validation through findExistingResources, and the variables are only inserted after the resources are confirmed to exist in the same transaction. This transactional approach ensures data integrity.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check if the Resource table has a foreign key constraint on resourceVariable
# This helps verify if database-level integrity is enforced

ast-grep --pattern 'createTable("resourceVariable", (table) => {
  $$$
  table.foreign("resourceId").$$$
  $$$
})'

Length of output: 116


Script:

#!/bin/bash
# Let's check for any migration files that might define the resourceVariable table structure
fd migration rg "resourceVariable"

# Also check the implementation to see if there's any validation
rg -A 5 "insertResourceVariables" 

Length of output: 1812


Script:

#!/bin/bash
# Let's check for migration files correctly and also look at insert-resources implementation
fd -e sql -e ts migration

# Check insert-resources implementation
rg -A 10 "insertResources = async" 

# Check if there's any resource validation in the upsert flow
rg -B 5 "insertResourceVariables" packages/job-dispatch/src/resource/upsert.ts

Length of output: 1496


Script:

#!/bin/bash
# Let's check the findExistingResources implementation and the resource table schema
rg -A 5 "findExistingResources = async"

# Check if there's a foreign key in the database schema
rg -A 10 "createTable.*resource" 
rg -A 10 "createTable.*resourceVariable"

Length of output: 628

packages/api/src/router/environment.ts (1)

31-31: Verify all usages of the renamed function across the codebase

The function rename from dispatchJobsForNewResources to dispatchJobsForAddedResources improves clarity but requires verification of all usage locations.

✅ Verification successful

Function rename has been properly implemented across the codebase

The verification shows that:

  • No references to the old function name dispatchJobsForNewResources exist in the codebase
  • The new function name dispatchJobsForAddedResources is consistently used in:
    • Definition: packages/job-dispatch/src/resource/dispatch-resource.ts
    • Usage: packages/job-dispatch/src/resource/upsert.ts and packages/api/src/router/environment.ts
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for any remaining references to the old function name
# and verify all occurrences of the new name

echo "Checking for old function name references..."
rg "dispatchJobsForNewResources"

echo "Verifying new function name usage..."
rg "dispatchJobsForAddedResources"

Length of output: 686

packages/job-dispatch/src/resource/upsert.ts (1)

74-74: Verify initialization of resources.deleted

Ensure that resources.deleted is always initialized as an array, even when no resources are deleted. This prevents potential undefined errors when accessing resources.deleted.length or iterating over it.

Run the following script to confirm the initialization:

packages/job-dispatch/src/resource/insert-resources.ts (1)

88-100: Confirm conflict target and updated columns in onConflictDoUpdate.

Verify that the conflict target [resource.identifier, resource.workspaceId] correctly identifies unique resources, and that all necessary fields are included in the set clause for updates.

Ensure that any additional fields that should be updated on conflict are included, and that the conflict target aligns with the database constraints.
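
To illustrate the intended semantics, here is a plain TypeScript model of an upsert keyed on (workspaceId, identifier); it is a toy sketch independent of Drizzle's actual onConflictDoUpdate API:

```typescript
// A toy in-memory model of INSERT ... ON CONFLICT (identifier, workspace_id) DO UPDATE.
// Field names mirror the PR's schema; the Map stands in for the database table.
type Resource = { workspaceId: string; identifier: string; name: string };

function upsert(
  table: Map<string, Resource>,
  row: Resource,
): "inserted" | "updated" {
  // The composite key must match the table's unique constraint; if it does not,
  // a real database would raise a conflict error instead of updating.
  const key = `${row.workspaceId}:${row.identifier}`;
  const existed = table.has(key);
  table.set(key, row); // on conflict, overwrite the updatable columns
  return existed ? "updated" : "inserted";
}
```

Inserting the same (workspaceId, identifier) pair twice updates the existing row rather than creating a second one, which is exactly the behavior the conflict target is meant to guarantee.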

}));
});

return tx.insert(schema.resourceMetadata).values(resourceMetadataValues);

🛠️ Refactor suggestion

Add error handling for the database operation

The function should handle potential database errors gracefully and provide meaningful error messages.

-  return tx.insert(schema.resourceMetadata).values(resourceMetadataValues);
+  try {
+    return await tx.insert(schema.resourceMetadata).values(resourceMetadataValues);
+  } catch (error) {
+    throw new Error(`Failed to insert resource metadata: ${error.message}`);
+  }

@@ -0,0 +1,25 @@
import type { Tx } from "@ctrlplane/db";
import type { Resource } from "@ctrlplane/db/schema";
import _ from "lodash";

🛠️ Refactor suggestion

Remove unused lodash import

The lodash import is not being used in this file and should be removed.

-import _ from "lodash";

Comment on lines +15 to +22
const resourceMetadataValues = resources.flatMap((resource) => {
const { id, metadata = {} } = resource;
return Object.entries(metadata).map(([key, value]) => ({
resourceId: id,
key,
value,
}));
});

🛠️ Refactor suggestion

Add error handling for duplicate metadata keys

The current implementation doesn't handle the case where the same resource has duplicate metadata keys. Consider adding deduplication logic or throwing an error when duplicates are detected.

 const resourceMetadataValues = resources.flatMap((resource) => {
   const { id, metadata = {} } = resource;
+  const seenKeys = new Set<string>();
   return Object.entries(metadata).map(([key, value]) => {
+    if (seenKeys.has(key)) {
+      throw new Error(`Duplicate metadata key "${key}" for resource ${id}`);
+    }
+    seenKeys.add(key);
     return {
       resourceId: id,
       key,
       value,
     };
   });
 });

Comment on lines +21 to +23
value: sensitive
? variablesAES256().encrypt(JSON.stringify(value))
: value,

⚠️ Potential issue

Add error handling for encryption operation

The encryption operation could fail but lacks error handling. Consider wrapping it in a try-catch block to handle potential encryption failures gracefully.

-      value: sensitive
-        ? variablesAES256().encrypt(JSON.stringify(value))
-        : value,
+      value: sensitive
+        ? (() => {
+            try {
+              return variablesAES256().encrypt(JSON.stringify(value));
+            } catch (error) {
+              throw new Error(`Failed to encrypt variable ${key}: ${error.message}`);
+            }
+          })()
+        : value,

Comment on lines +17 to +26
const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
variables.map(({ key, value, sensitive }) => ({
resourceId: id,
key,
value: sensitive
? variablesAES256().encrypt(JSON.stringify(value))
: value,
sensitive,
})),
);

🛠️ Refactor suggestion

Consider validating variable values before processing

The function processes values without validation. Consider adding checks for:

  1. Maximum value size before encryption
  2. Valid JSON structure for sensitive values
  3. Null/undefined handling
 export const insertResourceVariables = async (
   tx: Tx,
   resources: ResourceWithVariables[],
 ) => {
+  const MAX_VALUE_SIZE = 1024 * 1024; // 1MB
   const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
-    variables.map(({ key, value, sensitive }) => ({
+    variables.map(({ key, value, sensitive }) => {
+      if (value === null || value === undefined) {
+        throw new Error(`Variable ${key} has null or undefined value`);
+      }
+      
+      const stringifiedValue = sensitive ? JSON.stringify(value) : value;
+      if (Buffer.from(String(stringifiedValue)).length > MAX_VALUE_SIZE) {
+        throw new Error(`Variable ${key} exceeds maximum size of ${MAX_VALUE_SIZE} bytes`);
+      }
+      
+      return {
       resourceId: id,
       key,
       value: sensitive
-        ? variablesAES256().encrypt(JSON.stringify(value))
-        : value,
+        ? variablesAES256().encrypt(stringifiedValue)
+        : stringifiedValue,
       sensitive,
-    })),
+      };
+    }),
   );
📝 Committable suggestion


Suggested change
  const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
    variables.map(({ key, value, sensitive }) => ({
      resourceId: id,
      key,
      value: sensitive
        ? variablesAES256().encrypt(JSON.stringify(value))
        : value,
      sensitive,
    })),
  );
  const MAX_VALUE_SIZE = 1024 * 1024; // 1MB
  const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
    variables.map(({ key, value, sensitive }) => {
      if (value === null || value === undefined) {
        throw new Error(`Variable ${key} has null or undefined value`);
      }
      const stringifiedValue = sensitive ? JSON.stringify(value) : value;
      if (Buffer.from(String(stringifiedValue)).length > MAX_VALUE_SIZE) {
        throw new Error(`Variable ${key} exceeds maximum size of ${MAX_VALUE_SIZE} bytes`);
      }
      return {
        resourceId: id,
        key,
        value: sensitive
          ? variablesAES256().encrypt(stringifiedValue)
          : stringifiedValue,
        sensitive,
      };
    }),
  );

Comment on lines +108 to +112
const deleted = _.differenceBy(
existingResources,
resources,
(t) => t.identifier,
);

⚠️ Potential issue

Reevaluate the calculation of deleted resources.

The deleted array is calculated as the difference between existingResources and the newly upserted resources. However, since this function does not perform deletions, the deleted array may not reflect actual deletions.

Consider the following options:

  • Option 1: Remove the deleted array from the return value if deletions are not part of this function's responsibility.
  • Option 2: Implement deletion logic to handle resources that exist in the database but are not present in resourcesToInsert.
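Option 2 can be sketched outside the database layer as a plain set-difference over identifiers; the `Resource` shape and `markStaleResourcesDeleted` name below are illustrative stand-ins, not the project's actual schema or API:

```typescript
interface Resource {
  identifier: string;
  deletedAt: Date | null;
}

// Resources that exist but are missing from the upsert batch get soft-deleted.
function markStaleResourcesDeleted(
  existing: Resource[],
  incoming: { identifier: string }[],
  now: Date = new Date(),
): Resource[] {
  const incomingIds = new Set(incoming.map((r) => r.identifier));
  return existing
    .filter((r) => r.deletedAt === null && !incomingIds.has(r.identifier))
    .map((r) => ({ ...r, deletedAt: now }));
}

const existing: Resource[] = [
  { identifier: "a", deletedAt: null },
  { identifier: "b", deletedAt: null },
];
const stale = markStaleResourcesDeleted(existing, [{ identifier: "a" }]);
console.log(stale.map((r) => r.identifier)); // ["b"]
```

Whichever option is chosen, the returned `deleted` array should contain exactly these stale rows rather than the raw `differenceBy` result.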

Comment on lines +8 to +14
/**
* Gets resources for a specific provider
* @param tx - Database transaction
* @param providerId - ID of the provider to get resources for
* @param options - Options object
* @returns Promise resolving to array of resources
*/

⚠️ Potential issue

Remove incorrect @param options from JSDoc comment.

The JSDoc comment for getResourcesByProvider includes a @param options parameter that does not exist in the function signature. This could cause confusion.

Apply the following diff to fix the documentation:

 /**
  * Gets resources for a specific provider
  * @param tx - Database transaction
  * @param providerId - ID of the provider to get resources for
- * @param options - Options object
  * @returns Promise resolving to array of resources
  */
📝 Committable suggestion


Suggested change
/**
 * Gets resources for a specific provider
 * @param tx - Database transaction
 * @param providerId - ID of the provider to get resources for
 * @param options - Options object
 * @returns Promise resolving to array of resources
 */
/**
 * Gets resources for a specific provider
 * @param tx - Database transaction
 * @param providerId - ID of the provider to get resources for
 * @returns Promise resolving to array of resources
 */

Comment on lines 81 to 86
if (releaseJobTriggers.length === 0) return;

await dispatchReleaseJobTriggers(db)
.filter(isPassingAllPolicies)
.releaseTriggers(releaseJobTriggers)
.dispatch();

⚠️ Potential issue

Ensure error handling during job dispatch

When dispatching release job triggers, any errors occurring during the dispatch process may not be caught. Wrap the dispatch call in a try-catch block to handle potential exceptions.

 try {
   await dispatchReleaseJobTriggers(db)
     .filter(isPassingAllPolicies)
     .releaseTriggers(releaseJobTriggers)
     .dispatch();
+} catch (error) {
+  console.error('Error dispatching release job triggers:', error);
+  // Handle the error accordingly
 }

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +69 to +71
const releases = await Promise.all(releasePromises).then((rows) =>
rows.filter(isPresent),
);

⚠️ Potential issue

Handle rejections when awaiting multiple promises

Using Promise.all will reject immediately if any promise rejects, potentially causing unhandled promise rejections. To ensure robustness, consider handling possible rejections from the release queries.

You can use Promise.allSettled to handle each promise individually:

- const releases = await Promise.all(releasePromises).then((rows) =>
-   rows.filter(isPresent),
- );
+ const releaseResults = await Promise.allSettled(releasePromises);
+ const releases = releaseResults.flatMap((result) =>
+   result.status === 'fulfilled' && isPresent(result.value) ? [result.value] : [],
+ );

This way, you can process the fulfilled promises and decide how to handle any rejections.

📝 Committable suggestion


Suggested change
const releases = await Promise.all(releasePromises).then((rows) =>
rows.filter(isPresent),
);
const releaseResults = await Promise.allSettled(releasePromises);
const releases = releaseResults.flatMap((result) =>
  result.status === "fulfilled" && isPresent(result.value) ? [result.value] : [],
);

Comment on lines 120 to 121
const handleEventPromises = events.map(handleEvent);
await Promise.allSettled(handleEventPromises);

⚠️ Potential issue

Process results of Promise.allSettled to handle rejections

After using Promise.allSettled, it's important to handle any rejected promises to prevent silent failures.

 const handleEventPromises = events.map(handleEvent);
-await Promise.allSettled(handleEventPromises);
+const results = await Promise.allSettled(handleEventPromises);
+results.forEach((result, index) => {
+  if (result.status === "rejected") {
+    console.error(`Event handling failed for event ${JSON.stringify(events[index])}:`, result.reason);
+    // Additional error handling if necessary
+  }
+});

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🧹 Outside diff range and nitpick comments (7)
packages/job-dispatch/src/resource/upsert.ts (3)

21-21: Consider replacing any with a more specific type

Using any for variable values reduces type safety. Consider defining a union type of allowed value types or using unknown if the value type truly cannot be determined at compile time.

-  variables?: Array<{ key: string; value: any; sensitive: boolean }>;
+  variables?: Array<{ key: string; value: unknown; sensitive: boolean }>;

70-81: Consider optimizing resource comparison

The current implementation could be memory-intensive for large datasets. Consider using Sets for more efficient lookups.

+  const createResourceIdSet = (resources) => new Set(resources.map(r => r.id));
   const changedEnvs = envsAfterInsert.map((env) => {
     const beforeEnv = envsBeforeInsert.find((e) => e.id === env.id);
     const beforeResources = beforeEnv?.resources ?? [];
     const afterResources = env.resources;
+    const beforeIds = createResourceIdSet(beforeResources);
+    const afterIds = createResourceIdSet(afterResources);
     const removedResources = beforeResources.filter(
-      (br) => !afterResources.some((ar) => ar.id === br.id),
+      (br) => !afterIds.has(br.id),
     );
     const addedResources = afterResources.filter(
-      (ar) => !beforeResources.some((br) => br.id === ar.id),
+      (ar) => !beforeIds.has(ar.id),
     );
     return { ...env, removedResources, addedResources };
   });

92-118: Consider batch processing for job and event dispatching

For better performance with large datasets, consider implementing batch processing instead of processing one environment at a time.

+  const batchSize = 100; // Adjust based on your needs
+  const processBatch = async (items, processor) => {
+    for (let i = 0; i < items.length; i += batchSize) {
+      const batch = items.slice(i, i + batchSize);
+      await processor(batch);
+    }
+  };
+
+  await processBatch(changedEnvs, async (envBatch) => {
+    await Promise.all(envBatch.map(async (env) => {
       if (env.addedResources.length > 0) {
         // ... existing logic
       }
       if (env.removedResources.length > 0) {
         // ... existing logic
       }
+    }));
+  });
packages/job-dispatch/src/resource/dispatch-resource.ts (3)

58-71: Extract deployment filtering logic to a separate function

The deployment filtering logic is complex enough to warrant its own function for better maintainability and reusability.

+const filterDeploymentReleaseChannels = (
+  deployment: typeof deployments[number],
+  releaseChannels: typeof environment.releaseChannels,
+  policyReleaseChannels: typeof environment.policy.environmentPolicyReleaseChannels,
+) => {
+  const envReleaseChannel = releaseChannels.find(
+    (erc) => erc.deploymentId === deployment.id,
+  );
+  const policyReleaseChannel = policyReleaseChannels.find(
+    (prc) => prc.deploymentId === deployment.id,
+  );
+
+  const { releaseFilter } =
+    envReleaseChannel?.releaseChannel ??
+    policyReleaseChannel?.releaseChannel ??
+    {};
+  return { ...deployment, releaseFilter };
+};

 const deploymentsWithReleaseFilter = deployments.map((deployment) => 
-  // ... existing filtering logic ...
+  filterDeploymentReleaseChannels(deployment, releaseChannels, policyReleaseChannels)
 );

172-177: Improve type safety for event actions

Consider defining a union type for event actions to prevent potential typos and improve maintainability.

+type DeploymentEventAction = 'deployment.resource.removed' | 'deployment.resource.added';

 const events = resources.flatMap((resource) =>
   deployments.map((deployment) => ({
-    action: "deployment.resource.removed" as const,
+    action: 'deployment.resource.removed' satisfies DeploymentEventAction,
     payload: { deployment, resource },
   })),
 );

1-193: Consider implementing a Resource Dispatcher class

The file contains related functions that manage resource dispatch logic. Consider refactoring this into a class-based structure for better encapsulation and state management. This would make it easier to:

  • Share common configuration and dependencies
  • Manage error handling consistently
  • Test the functionality in isolation
  • Extend the functionality in the future

Example structure:

class ResourceDispatcher {
  constructor(
    private readonly db: Tx,
    private readonly logger: Logger
  ) {}

  async dispatchJobsForAddedResources(resourceIds: string[], envId: string): Promise<void> {
    // existing implementation
  }

  async dispatchEventsForRemovedResources(resourceIds: string[], envId: string): Promise<void> {
    // existing implementation
  }

  private async getEnvironmentWithReleaseChannels(envId: string) {
    // existing implementation
  }

  private async getEnvironmentDeployments(envId: string) {
    // existing implementation
  }
}
packages/job-dispatch/src/resource/utils.ts (1)

14-29: Refactor duplicated deletion condition into a helper function

The condition for filtering deleted and non-deleted resources based on options.deleted is duplicated in both getResourcesByProvider and getResourcesByWorkspaceIdAndIdentifier. Extracting this logic into a shared helper function will improve code maintainability and reduce redundancy.

Example of the helper function:

function deletedCondition(deleted: boolean) {
  return deleted
    ? isNotNull(schema.resource.deletedAt)
    : isNull(schema.resource.deletedAt);
}

Update the queries to use the helper:

 // In getResourcesByProvider
       and(
         eq(schema.resource.providerId, providerId),
-        options.deleted
-          ? isNotNull(schema.resource.deletedAt)
-          : isNull(schema.resource.deletedAt),
+        deletedCondition(options.deleted),
       ),

 // In getResourcesByWorkspaceIdAndIdentifier
-        options.deleted
-          ? isNotNull(schema.resource.deletedAt)
-          : isNull(schema.resource.deletedAt),
+        deletedCondition(options.deleted),

Also applies to: 41-61

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 5511f21 and ddc3648.

📒 Files selected for processing (3)
  • packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
  • packages/job-dispatch/src/resource/upsert.ts (1 hunks)
  • packages/job-dispatch/src/resource/utils.ts (1 hunks)
🔇 Additional comments (7)
packages/job-dispatch/src/resource/upsert.ts (3)

49-55: Previous review comment about handling undefined is still applicable


83-90: LGTM! Efficient resource deletion handling

The implementation uses Set for efficient lookups and includes proper error handling with logging.


119-129: LGTM! Comprehensive error handling and logging

The implementation includes proper error handling with detailed logging of both success and failure cases.

packages/job-dispatch/src/resource/dispatch-resource.ts (1)

92-94: Handle Promise rejections using Promise.allSettled

The previous review comment about using Promise.allSettled is still valid here. Using Promise.all will fail fast if any promise rejects, potentially leaving some releases unprocessed.

packages/job-dispatch/src/resource/utils.ts (3)

14-29: Code logic appears correct

The getResourcesByProvider function correctly retrieves resources based on the provider ID and handles the deleted option as expected.


63-99: Duplicate comment: Possible logical error in filtering deleted resources

The condition isNotNull(schema.resource.deletedAt) at line 94 selects deleted resources. If the intention is to select active resources, it should be isNull(schema.resource.deletedAt).


63-99: Duplicate comment: Potential N+1 query problem

The function getEnvironmentsByResourceWithIdentifiers performs a database query within a loop over environments, which may lead to performance issues due to multiple database calls (N+1 problem). Consider refactoring to fetch all necessary resources in a single query.

Comment on lines +33 to +36
const workspaceId = resourcesToInsert[0]?.workspaceId;
if (workspaceId == null) throw new Error("Workspace ID is required");
if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId))
throw new Error("All resources must belong to the same workspace");

🛠️ Refactor suggestion

Strengthen input validation

The current validation could be improved to handle edge cases:

  1. Check if resourcesToInsert array is empty before accessing first element
  2. Consider adding validation for required resource fields
+  if (resourcesToInsert.length === 0) {
+    throw new Error("At least one resource is required");
+  }
   const workspaceId = resourcesToInsert[0]?.workspaceId;
   if (workspaceId == null) throw new Error("Workspace ID is required");
   if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId))
     throw new Error("All resources must belong to the same workspace");
📝 Committable suggestion


Suggested change
const workspaceId = resourcesToInsert[0]?.workspaceId;
if (workspaceId == null) throw new Error("Workspace ID is required");
if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId))
throw new Error("All resources must belong to the same workspace");
if (resourcesToInsert.length === 0) {
throw new Error("At least one resource is required");
}
const workspaceId = resourcesToInsert[0]?.workspaceId;
if (workspaceId == null) throw new Error("Workspace ID is required");
if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId))
throw new Error("All resources must belong to the same workspace");

Comment on lines +58 to +61
await Promise.all([
insertResourceMetadata(tx, resourcesWithId),
insertResourceVariables(tx, resourcesWithId),
]);

🛠️ Refactor suggestion

Add error handling for parallel operations

The Promise.all could fail silently if one operation fails. Consider adding specific error handling for metadata and variables insertion.

   log.debug("Inserting resource metadata and variables");
   await Promise.all([
-    insertResourceMetadata(tx, resourcesWithId),
-    insertResourceVariables(tx, resourcesWithId),
+    insertResourceMetadata(tx, resourcesWithId).catch(err => {
+      log.error("Failed to insert resource metadata", { error: err });
+      throw err;
+    }),
+    insertResourceVariables(tx, resourcesWithId).catch(err => {
+      log.error("Failed to insert resource variables", { error: err });
+      throw err;
+    }),
   ]);
📝 Committable suggestion


Suggested change
await Promise.all([
insertResourceMetadata(tx, resourcesWithId),
insertResourceVariables(tx, resourcesWithId),
]);
log.debug("Inserting resource metadata and variables");
await Promise.all([
insertResourceMetadata(tx, resourcesWithId).catch(err => {
log.error("Failed to insert resource metadata", { error: err });
throw err;
}),
insertResourceVariables(tx, resourcesWithId).catch(err => {
log.error("Failed to insert resource variables", { error: err });
throw err;
}),
]);

Comment on lines +133 to +143
const getEnvironmentDeployments = (db: Tx, envId: string) =>
db
.select()
.from(SCHEMA.deployment)
.innerJoin(SCHEMA.system, eq(SCHEMA.deployment.systemId, SCHEMA.system.id))
.innerJoin(
SCHEMA.environment,
eq(SCHEMA.system.id, SCHEMA.environment.systemId),
)
.where(eq(SCHEMA.environment.id, envId))
.then((rows) => rows.map((r) => r.deployment));

🛠️ Refactor suggestion

Add error handling for complex database query

The complex join operation could fail due to various database issues. Consider adding error handling.

 const getEnvironmentDeployments = (db: Tx, envId: string) =>
   db
     .select()
     .from(SCHEMA.deployment)
     .innerJoin(SCHEMA.system, eq(SCHEMA.deployment.systemId, SCHEMA.system.id))
     .innerJoin(
       SCHEMA.environment,
       eq(SCHEMA.system.id, SCHEMA.environment.systemId),
     )
     .where(eq(SCHEMA.environment.id, envId))
-    .then((rows) => rows.map((r) => r.deployment));
+    .then((rows) => rows.map((r) => r.deployment))
+    .catch(error => {
+      log.error('Failed to fetch environment deployments:', { error, envId });
+      throw error;
+    });

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +22 to +34
const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) =>
db.query.environment.findFirst({
where: eq(SCHEMA.environment.id, envId),
with: {
releaseChannels: { with: { releaseChannel: true } },
policy: {
with: {
environmentPolicyReleaseChannels: { with: { releaseChannel: true } },
},
},
system: { with: { deployments: true } },
},
});

🛠️ Refactor suggestion

Add error handling for database query

The database query could fail due to connection issues or other database-related errors. Consider wrapping it in a try-catch block to handle potential errors gracefully.

 const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) =>
-  db.query.environment.findFirst({
-    where: eq(SCHEMA.environment.id, envId),
-    with: {
-      releaseChannels: { with: { releaseChannel: true } },
-      policy: {
-        with: {
-          environmentPolicyReleaseChannels: { with: { releaseChannel: true } },
-        },
-      },
-      system: { with: { deployments: true } },
-    },
-  });
+  db.query.environment.findFirst({
+    where: eq(SCHEMA.environment.id, envId),
+    with: {
+      releaseChannels: { with: { releaseChannel: true } },
+      policy: {
+        with: {
+          environmentPolicyReleaseChannels: { with: { releaseChannel: true } },
+        },
+      },
+      system: { with: { deployments: true } },
+    },
+  }).catch(error => {
+    log.error('Failed to fetch environment with release channels:', { error, envId });
+    throw error;
+  });

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +41 to +61
export const getResourcesByWorkspaceIdAndIdentifier = (
tx: Tx,
resources: { workspaceId: string; identifier: string }[],
options: { deleted: boolean } = { deleted: false },
) =>
tx
.select()
.from(schema.resource)
.where(
or(
...resources.map((r) =>
and(
eq(schema.resource.workspaceId, r.workspaceId),
eq(schema.resource.identifier, r.identifier),
options.deleted
? isNotNull(schema.resource.deletedAt)
: isNull(schema.resource.deletedAt),
),
),
),
);

🛠️ Refactor suggestion

Consider optimizing the query for large resource lists

Using or(...resources.map(...)) generates one OR branch per resource, which can produce inefficient queries when the resources array is large. Consider prefiltering with inArray for batch matching and then re-checking the exact (workspaceId, identifier) pairs, since inArray conditions alone would also match unintended cross-combinations.

Apply this diff to optimize the query:

 export const getResourcesByWorkspaceIdAndIdentifier = (
   tx: Tx,
   resources: { workspaceId: string; identifier: string }[],
   options: { deleted: boolean } = { deleted: false },
 ) =>
-  tx
-    .select()
-    .from(schema.resource)
-    .where(
-      or(
-        ...resources.map((r) =>
-          and(
-            eq(schema.resource.workspaceId, r.workspaceId),
-            eq(schema.resource.identifier, r.identifier),
-            options.deleted
-              ? isNotNull(schema.resource.deletedAt)
-              : isNull(schema.resource.deletedAt),
-          ),
-        ),
-      ),
-    );
+  {
+    const workspaceIds = resources.map((r) => r.workspaceId);
+    const identifiers = resources.map((r) => r.identifier);
+    // inArray matches any combination, so re-check exact (workspaceId, identifier) pairs
+    const pairs = new Set(resources.map((r) => `${r.workspaceId}:${r.identifier}`));
+    return tx
+      .select()
+      .from(schema.resource)
+      .where(
+        and(
+          inArray(schema.resource.workspaceId, workspaceIds),
+          inArray(schema.resource.identifier, identifiers),
+          options.deleted
+            ? isNotNull(schema.resource.deletedAt)
+            : isNull(schema.resource.deletedAt),
+        ),
+      )
+      .then((rows) =>
+        rows.filter((r) => pairs.has(`${r.workspaceId}:${r.identifier}`)),
+      );
+  }

Committable suggestion skipped: line range outside the PR's diff.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
apps/webservice/src/app/api/v1/resources/route.ts (2)

Line range hint 15-33: Consider adding size limits to request validation

The current schema validation could be enhanced with size limits to prevent potential abuse:

  1. Maximum number of resources in the array
  2. Maximum size for metadata object
  3. Length limits for variable keys and values

Consider updating the schema validation:

 const patchBodySchema = z.object({
   workspaceId: z.string().uuid(),
   resources: z.array(
     createResource
       .omit({ lockedAt: true, providerId: true, workspaceId: true })
       .extend({
-        metadata: z.record(z.string()).optional(),
+        metadata: z
+          .record(z.string().max(1000))
+          .refine((m) => Object.keys(m).length <= 50, "Too many metadata keys")
+          .optional(),
         variables: z
           .array(
             z.object({
-              key: z.string(),
+              key: z.string().max(255),
               value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
               sensitive: z.boolean(),
             }),
-          )
+          ).max(100)
           .optional()
           .refine(
             (vars) =>
               vars == null ||
               new Set(vars.map((v) => v.key)).size === vars.length,
             "Duplicate variable keys are not allowed",
           ),
       }),
-  ),
+  ).max(1000),
 });

Line range hint 52-66: Enhance error handling for database operations

The current implementation lacks explicit error handling for database operations through upsertResources. Consider adding try-catch blocks and structured error responses.

Consider this improvement:

   .handle<{ user: schema.User; body: z.infer<typeof patchBodySchema> }>(
     async (ctx) => {
       if (ctx.body.resources.length === 0)
         return NextResponse.json(
-          { error: "No resources provided" },
+          { error: { code: "EMPTY_RESOURCES", message: "No resources provided" } },
           { status: 400 },
         );

+      try {
         const resources = await upsertResources(
           db,
           ctx.body.resources.map((t) => ({
             ...t,
             workspaceId: ctx.body.workspaceId,
           })),
         );
         return NextResponse.json({ count: resources.all.length });
+      } catch (error) {
+        console.error("Failed to upsert resources:", error);
+        return NextResponse.json(
+          { 
+            error: { 
+              code: "DATABASE_ERROR",
+              message: "Failed to update resources"
+            }
+          },
+          { status: 500 }
+        );
+      }
     },
   );
packages/job-dispatch/src/resource/dispatch-resource.ts (1)

49-53: Improve type safety after null check

Consider using type narrowing after the environment check to ensure type safety in subsequent code.

 const environment = await getEnvironmentWithReleaseChannels(db, envId);
 if (environment == null) {
   log.warn("Environment not found", { envId });
   return;
 }
+// Type assertion or type guard to narrow the type
+const env: NonNullable<typeof environment> = environment;
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between d2c4457 and 79f5b38.

📒 Files selected for processing (6)
  • apps/webservice/src/app/api/v1/resources/route.ts (1 hunks)
  • packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
  • packages/job-dispatch/src/resource/insert-resource-metadata.ts (1 hunks)
  • packages/job-dispatch/src/resource/insert-resource-variables.ts (1 hunks)
  • packages/job-dispatch/src/resource/upsert.ts (1 hunks)
  • packages/job-dispatch/src/resource/utils.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/job-dispatch/src/resource/insert-resource-metadata.ts
  • packages/job-dispatch/src/resource/insert-resource-variables.ts
  • packages/job-dispatch/src/resource/upsert.ts
🔇 Additional comments (9)
apps/webservice/src/app/api/v1/resources/route.ts (1)

66-66: Verify type safety and document API response change

The response structure has changed from returning a direct count of resources to accessing the count via resources.all.length. This change requires verification and documentation:

  1. Ensure type safety of the resources object structure
  2. Document the API response format change as it might affect clients

Let's verify the type safety and usage:

✅ Verification successful

Response structure change is valid and type-safe

The change from resources.length to resources.all.length is correct and type-safe. The upsertResources function explicitly returns an object with all and deleted arrays, where all contains the successfully upserted resources. This is a deliberate API design to provide more detailed information about the operation's results.

  • The implementation in packages/job-dispatch/src/resource/upsert.ts confirms the return type
  • The route handler in apps/webservice/src/app/api/v1/resources/route.ts correctly uses this structure
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check the return type of upsertResources
ast-grep --pattern 'function upsertResources($_): $return_type {
  $$$
}'

# Find other usages of upsertResources to ensure consistent handling
rg -A 3 'upsertResources\(' 

# Look for any API documentation that needs updating
fd -e md -e yaml -e json 'api|swagger|openapi'

Length of output: 1904


Script:

#!/bin/bash
# Check the implementation of upsertResources to understand its return type
ast-grep --pattern 'export async function upsertResources($_) {
  $$$
}'

# Check the type definition if it exists separately
rg -g '*.ts' -A 5 'type.*Resource'

# Look at the API documentation
cat openapi.v1.json | jq '.paths."/api/v1/resources".post'

Length of output: 26105


Script:

#!/bin/bash
# Check the implementation of upsertResources function
cat packages/job-dispatch/src/resource/upsert.ts

# Check the implementation of the resources route
cat apps/webservice/src/app/api/v1/resources/route.ts

Length of output: 6757

packages/job-dispatch/src/resource/dispatch-resource.ts (5)

1-14: LGTM! Clean imports and proper logger setup.

The imports are well-organized and the logger is properly initialized with a specific label for this module.


22-34: Add error handling for the complex database query

The past review comment about adding error handling for this complex database query is still valid and should be addressed.


92-94: Handle rejections when awaiting multiple promises

The past review comment about using Promise.allSettled for better error handling is still valid.
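The `Promise.allSettled` pattern the comment refers to can be sketched as follows; the task list and counting logic are illustrative, not the package's actual dispatch code.

```typescript
// Sketch of the recommended pattern: run every dispatch task, then count
// fulfilled vs. rejected results instead of letting one rejection abort
// the rest (as a bare `await Promise.all(...)` would).
async function dispatchAll(
  tasks: Array<() => Promise<void>>,
): Promise<{ fulfilled: number; rejected: number }> {
  const results = await Promise.allSettled(tasks.map((t) => t()));
  const fulfilled = results.filter((r) => r.status === "fulfilled").length;
  return { fulfilled, rejected: results.length - fulfilled };
}

async function main() {
  const outcome = await dispatchAll([
    async () => {}, // succeeds
    async () => {
      throw new Error("handler failed"); // fails without stopping the others
    },
    async () => {}, // still runs
  ]);
  console.log(outcome); // { fulfilled: 2, rejected: 1 }
}

void main();
```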


133-143: Add error handling for the database query

The past review comment about adding error handling for this database query is still valid.


179-191: LGTM! Good error handling practices.

Excellent use of Promise.allSettled for handling multiple events and proper logging of success/failure metrics.

packages/job-dispatch/src/resource/utils.ts (3)

14-29: Function getResourcesByProvider is well-implemented

The function correctly retrieves resources by provider ID, considering the deletion status.


41-61: Consider optimizing query for performance

The previous review comment regarding optimizing the query in getResourcesByWorkspaceIdAndIdentifier is still valid. Using or(...resources.map(...)) can result in inefficient queries for large arrays. Refactoring to use inArray can improve performance.
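As an illustration of why the review prefers a single IN clause, the two query shapes can be compared as plain SQL strings. This does not use drizzle itself — its `or(...)` and `inArray(...)` helpers build these conditions internally — the strings below only show the structural difference.

```typescript
// Illustrative SQL strings only: the OR chain repeats the workspace check
// per identifier, while the IN form states it once and grows by one literal
// per identifier, which is simpler for the database planner to handle.
function orChain(identifiers: string[]): string {
  return identifiers
    .map((id) => `(workspace_id = $ws AND identifier = '${id}')`)
    .join(" OR ");
}

function inClause(identifiers: string[]): string {
  const list = identifiers.map((id) => `'${id}'`).join(", ");
  return `workspace_id = $ws AND identifier IN (${list})`;
}

const ids = ["node-a", "node-b", "node-c"];
console.log(inClause(ids));
// workspace_id = $ws AND identifier IN ('node-a', 'node-b', 'node-c')
```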


71-106: Optimize getEnvironmentsByResourceWithIdentifiers to prevent N+1 queries

The previous review comment about the potential N+1 query problem in this function remains applicable. The current implementation may lead to multiple database calls, affecting performance. Consider refactoring to fetch all necessary resources in a single query.
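A minimal sketch of the single-query shape the comment asks for, with `fetchRows` standing in for one joined database round trip; the row field names are assumptions for illustration.

```typescript
// N+1 fix sketch: one query for all (resource, environment) rows, then an
// in-memory group-by, instead of one query per resource.
interface Row {
  resourceId: string;
  environmentId: string;
}

async function groupEnvironmentsByResource(
  fetchRows: () => Promise<Row[]>, // stands in for a single joined query
): Promise<Map<string, string[]>> {
  const rows = await fetchRows(); // one round trip total
  const grouped = new Map<string, string[]>();
  for (const row of rows) {
    const envs = grouped.get(row.resourceId) ?? [];
    envs.push(row.environmentId);
    grouped.set(row.resourceId, envs);
  }
  return grouped;
}

groupEnvironmentsByResource(async () => [
  { resourceId: "r1", environmentId: "env-a" },
  { resourceId: "r1", environmentId: "env-b" },
  { resourceId: "r2", environmentId: "env-a" },
]).then((g) => console.log(g.get("r1"))); // [ 'env-a', 'env-b' ]
```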

Comment on lines +42 to +46
export async function dispatchJobsForAddedResources(
db: Tx,
resourceIds: string[],
envId: string,
): Promise<void> {


🛠️ Refactor suggestion

Add input validation for resourceIds

The function should validate that the resourceIds array is not empty before proceeding.

 export async function dispatchJobsForAddedResources(
   db: Tx,
   resourceIds: string[],
   envId: string,
 ): Promise<void> {
+  if (resourceIds.length === 0) {
+    log.info("No resources to process");
+    return;
+  }

Comment on lines +151 to +155
export const dispatchEventsForRemovedResources = async (
db: Tx,
resourceIds: string[],
envId: string,
): Promise<void> => {

🛠️ Refactor suggestion

Add input validation for resourceIds

Similar to dispatchJobsForAddedResources, validate that the resourceIds array is not empty.

 export const dispatchEventsForRemovedResources = async (
   db: Tx,
   resourceIds: string[],
   envId: string,
 ): Promise<void> => {
+  if (resourceIds.length === 0) {
+    log.info("No resources to process");
+    return;
+  }


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
apps/pty-proxy/src/controller/agent-socket.ts (2)

Line range hint 128-139: Enhance error handling and logging for resource updates.

The changes look good, but consider these improvements:

  1. Add more descriptive error messages
  2. Add logging for successful resource updates
  3. Add type checking for the response

Consider applying these changes:

    const { updated } = await upsertResources(db, [
      {
        ...resource,
        name: this.name,
        version: "ctrlplane.access/v1",
        kind: "AccessNode",
        identifier: `ctrlplane/access/access-node/${this.name}`,
        workspaceId: this.workspaceId,
        updatedAt: new Date(),
      },
    ]);
    const res = updated.at(0);
-   if (res == null) throw new Error("Failed to create resource");
+   if (res == null) {
+     logger.error("Failed to create/update resource", {
+       name: this.name,
+       workspaceId: this.workspaceId,
+     });
+     throw new Error(`Failed to create/update resource for agent ${this.name}`);
+   }
+   logger.debug("Resource updated successfully", {
+     resourceId: res.id,
+     name: this.name,
+   });
    this.resource = res;
    agents.set(res.id, { lastSync: new Date(), agent: this });

Line range hint 128-139: Consider standardizing resource update patterns across the system.

The changes to updateResource introduce a new pattern for handling resource updates. Consider:

  1. Documenting this pattern for consistency
  2. Ensuring similar error handling across resource-related operations
  3. Standardizing logging patterns for resource operations

Consider creating a shared utility for resource operations that encapsulates:

  • Standard error handling
  • Consistent logging
  • Type-safe response handling
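One way to encapsulate those three concerns, sketched under assumptions about the logger shape (the names `runResourceOp` and `Logger` are hypothetical, not existing package APIs):

```typescript
// Hypothetical shared helper: runs a resource operation, logs consistently,
// and converts a missing result into a single well-defined error, so every
// call site gets the same handling and a narrowed (non-undefined) return type.
interface Logger {
  error: (msg: string, meta?: Record<string, unknown>) => void;
  debug: (msg: string, meta?: Record<string, unknown>) => void;
}

async function runResourceOp<T>(
  log: Logger,
  opName: string,
  op: () => Promise<T | undefined>,
): Promise<T> {
  const result = await op();
  if (result === undefined) {
    log.error(`${opName} returned no resource`);
    throw new Error(`Failed to ${opName}`);
  }
  log.debug(`${opName} succeeded`);
  return result;
}

const consoleLogger: Logger = {
  error: (msg) => console.error(msg),
  debug: () => {}, // keep demo output quiet
};

runResourceOp(consoleLogger, "update resource", async () => ({ id: "r1" })).then(
  (res) => console.log(res.id), // r1
);
```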
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 79f5b38 and 7ae4a38.

📒 Files selected for processing (1)
  • apps/pty-proxy/src/controller/agent-socket.ts (2 hunks)
🔇 Additional comments (1)
apps/pty-proxy/src/controller/agent-socket.ts (1)

Line range hint 128-139: Verify the impact on agent initialization and heartbeat handling.

The updateResource method is called during agent initialization, connection, and heartbeat. Please verify that these scenarios handle the updated response structure correctly.

Let's check the usage patterns:

✅ Verification successful

The changes to updateResource are safely handled across all call sites

Based on the search results, I can verify that the updateResource method is used in three scenarios in agent-socket.ts:

  1. During initialization: await agent.updateResource({});
  2. During agent connect: Called with config and metadata from the connect message
  3. During heartbeat: Called with empty object this.updateResource({});

All these cases handle the response structure correctly as they don't depend on the return value from updateResource. The method is consistently used for upserting the resource state, and the updated response structure from upsertResources doesn't impact the calling code.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage patterns of updateResource method
# and ensure error handling is consistent across all call sites

# Check for error handling patterns in agent initialization
ast-grep --pattern 'static async from($$$) {
  $$$
  await agent.updateResource($$$);
  $$$
}'

# Check for error handling in message handlers
ast-grep --pattern '.on("message",
  ifMessage()
    .is(agentConnect, ($$$) => {
      $$$updateResource($$$)
      $$$
    })
    $$$
)'

Length of output: 237


Script:

#!/bin/bash
# Let's try a different approach to find the usage patterns

# Search for files containing updateResource
rg -l "updateResource" 

# Then look for specific usage patterns
rg "updateResource" -B 3 -A 3

# Also search for message handlers related to agent connections
rg "agentConnect|agentHeartbeat" -B 3 -A 3

Length of output: 13601


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (1)
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1)

96-99: Consider standardizing error response patterns

The error handling pattern here differs from the GET/DELETE handlers. Consider standardizing the error response format across all endpoints for consistency.

Consider adopting the same pattern used in GET/DELETE handlers:

     const { updated } = await upsertResources(db, [_.merge(resource, body)]);
     const res = updated.at(0);
-    if (res == null) throw new Error("Failed to update resource");
+    if (res == null) {
+      return NextResponse.json(
+        { error: "Failed to update resource" },
+        { status: 404 }
+      );
+    }
     return NextResponse.json(res);
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 7ae4a38 and 959d44f.

📒 Files selected for processing (1)
  • apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1 hunks)
🔇 Additional comments (1)
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1)

96-96: Verify the merge behavior and its impact

The use of _.merge could modify the original resource object. Consider verifying if this has any unintended side effects in the codebase.

✅ Verification successful

Let me gather more context about the resource object and error handling patterns.


Based on the gathered evidence, I can now provide the final response:

The use of _.merge is safe in this context

The merge operation is used correctly as part of the upsertResources pattern across the codebase. The code:

  • Properly validates resource existence before merge
  • Has consistent error handling (404 for missing resources, throw for update failures)
  • Uses the merge result only for the database update without keeping references
  • Follows the same pattern as other resource updates in the codebase
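The mutation being checked can be seen with `Object.assign`, which writes into its first argument the same way lodash's `_.merge` does at the top level (note `_.merge` additionally recurses into nested objects):

```typescript
// Object.assign mutates its first argument in place, so — like _.merge —
// the original `resource` object is changed, not copied.
const resource = { id: "r1", name: "old-name" };
const body = { name: "new-name" };

const merged = Object.assign(resource, body);
console.log(resource.name); // new-name — the original object was changed
console.log(merged === resource); // true — same object, not a copy

// Non-mutating alternative: spread into a fresh object.
const resource2 = { id: "r2", name: "old-name" };
const copy = { ...resource2, ...body };
console.log(resource2.name); // old-name — untouched
console.log(copy.name); // new-name
```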
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for other uses of the resource object after merge
rg "_.merge.*resource" --type ts

# Look for any error handling or validation after resource modifications
ast-grep --pattern 'const { updated } = await upsertResources($$$)'

Length of output: 1265


Script:

#!/bin/bash
# Check how resource object is used before merge
ast-grep --pattern 'const resource = $$$'

# Look for error handling patterns in the same file
rg "throw|catch|error" apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts -B2 -A2

# Check other usages of upsertResources to understand patterns
ast-grep --pattern 'upsertResources($$$)'

Length of output: 10783

Comment on lines +96 to +99
const { updated } = await upsertResources(db, [_.merge(resource, body)]);
const res = updated.at(0);
if (res == null) throw new Error("Failed to update resource");
return NextResponse.json(res);

🛠️ Refactor suggestion

Improve error handling for resource update failures

The current implementation throws an error when the update fails, which will result in a 500 Internal Server Error. Consider returning a more appropriate HTTP response instead.

Consider this improvement:

-    const { updated } = await upsertResources(db, [_.merge(resource, body)]);
-    const res = updated.at(0);
-    if (res == null) throw new Error("Failed to update resource");
-    return NextResponse.json(res);
+    const { updated } = await upsertResources(db, [_.merge(resource, body)]);
+    const res = updated.at(0);
+    if (res == null) {
+      return NextResponse.json(
+        { error: "Failed to update resource" },
+        { status: 422 }
+      );
+    }
+    return NextResponse.json(res);

@adityachoudhari26 adityachoudhari26 merged commit c4f2532 into main Nov 22, 2024
8 of 9 checks passed
@adityachoudhari26 adityachoudhari26 deleted the refactor branch November 22, 2024 20:12
@coderabbitai coderabbitai bot mentioned this pull request Jan 6, 2025