init refactor #225
Conversation
Walkthrough

The pull request implements extensive modifications to the resource management system, including renaming functions for clarity, enhancing mutation procedures for resource updates and deletions, and introducing new functionalities for inserting resource metadata and variables. It also adds new methods for managing job dispatching related to resource additions and removals, while several files have been added or deleted to streamline the overall structure and functionality of resource handling.
Actionable comments posted: 14
🧹 Outside diff range and nitpick comments (16)
packages/job-dispatch/src/resource/insert-resource-metadata.ts (1)
24-24: Consider batching large metadata insertions

For resources with large amounts of metadata, consider implementing batch processing to avoid potential performance issues or database constraints.
Consider implementing a batch size limit:
```ts
const BATCH_SIZE = 1000;
const batches = _.chunk(resourceMetadataValues, BATCH_SIZE);
for (const batch of batches) {
  await tx.insert(schema.resourceMetadata).values(batch);
}
```

packages/job-dispatch/src/index.ts (1)
25-25: Consider documenting breaking changes

The restructuring of resource-related exports (new-resource.js → resource/index.js) and function renaming (dispatchJobsForNewResources → dispatchJobsForAddedResources) constitute breaking changes that may affect package consumers. Consider:

- Adding a CHANGELOG.md entry if not already present
- Bumping the major version number following semver
- Providing migration guidelines for users
packages/job-dispatch/src/resource/insert-resource-variables.ts (2)
9-11: Consider strengthening type safety for variable values

The any type for the value field reduces type safety. Consider creating a more specific type or using generics to better type-check variable values.

```diff
+type VariableValue = string | number | boolean | object;

 export type ResourceWithVariables = Resource & {
-  variables?: Array<{ key: string; value: any; sensitive: boolean }>;
+  variables?: Array<{ key: string; value: VariableValue; sensitive: boolean }>;
 };
```
28-38: Consider batch size limits for database operations

The function processes all resources in a single transaction without considering batch size limits. This could lead to performance issues with large datasets.

Consider implementing batch processing:

```ts
const BATCH_SIZE = 1000;
const batches = _.chunk(resourceVariablesValues, BATCH_SIZE);
for (const batch of batches) {
  await tx
    .insert(schema.resourceVariable)
    .values(batch)
    .onConflictDoUpdate({
      target: [
        schema.resourceVariable.key,
        schema.resourceVariable.resourceId,
      ],
      set: buildConflictUpdateColumns(schema.resourceVariable, [
        "value",
        "sensitive",
      ]),
    });
}
```

packages/api/src/router/environment.ts (2)
Line range hint 200-326: Consider extracting complex resource filter logic into separate functions

The resource filter update logic is complex with multiple nested conditions and database operations. Consider breaking it down into smaller, focused functions for better maintainability and testability:

- handleResourceFilterUpdate(oldFilter, newFilter)
- findAffectedResources(oldFilter, newFilter)
- processRemovedResources(resources, otherEnvFilters)

Example refactor for the resource filter handling:

```ts
async function handleResourceFilterUpdate(
  db: Tx,
  oldEnv: Environment,
  newFilter: ResourceCondition,
) {
  if (!hasResourceFiltersChanged(oldEnv.resourceFilter, newFilter)) {
    return;
  }

  const { addedResources, removedResources } = await findAffectedResources(
    db,
    oldEnv,
    newFilter,
  );

  await processRemovedResources(db, removedResources, oldEnv.system);
  await processAddedResources(db, addedResources, oldEnv.id);
}
```
Line range hint 327-333: Enhance error handling and logging

Two improvements needed:

- Replace console.log with proper structured logging
- Add error handling for the dispatch operation

```diff
-  await dispatchJobsForAddedResources(
-    ctx.db,
-    newResources.map((r) => r.id),
-    input.id,
-  );
-  console.log(
-    `Found ${newResources.length} new resources for environment ${input.id}`,
-  );
+  try {
+    await dispatchJobsForAddedResources(
+      ctx.db,
+      newResources.map((r) => r.id),
+      input.id,
+    );
+    ctx.logger.info("Resources added to environment", {
+      count: newResources.length,
+      environmentId: input.id,
+    });
+  } catch (error) {
+    ctx.logger.error("Failed to dispatch jobs for added resources", {
+      error,
+      resourceCount: newResources.length,
+      environmentId: input.id,
+    });
+    throw error;
+  }
```

packages/job-dispatch/src/resource/delete.ts (1)
34-39: Update the @param documentation to match the function signature

The JSDoc comment for deleteResources mentions @param resourceIds, but the function actually accepts resources: Resource[] as a parameter. Please update the documentation to accurately reflect the parameter name and description.

packages/job-dispatch/src/resource/utils.ts (1)
24-29: Extract common resource filtering logic into a helper function

The logic for filtering resources based on the deleted flag is repeated across multiple functions. Extracting this logic into a helper function will improve code maintainability and reduce duplication.

Consider adding a utility function like this:

```ts
const deletedFilter = (deleted: boolean) =>
  deleted
    ? isNotNull(schema.resource.deletedAt)
    : isNull(schema.resource.deletedAt);
```

Then update the functions accordingly:

```diff
 where(
   and(
     eq(schema.resource.providerId, providerId),
-    options.deleted
-      ? isNotNull(schema.resource.deletedAt)
-      : isNull(schema.resource.deletedAt),
+    deletedFilter(options.deleted),
   ),
 );
```

packages/job-dispatch/src/resource/upsert.ts (3)
75-79: Use try-catch instead of .catch() with await for consistency

While inside an async function, it's preferable to use try-catch blocks over chaining .catch() with await. This enhances readability and maintains consistent error-handling patterns throughout the codebase.

Refactor the code as follows:

```diff
 if (resources.deleted.length > 0) {
-  await deleteResources(tx, resources.deleted).catch((err) => {
-    log.error("Error deleting resources", { error: err });
-    throw err;
-  });
+  try {
+    await deleteResources(tx, resources.deleted);
+  } catch (err) {
+    log.error("Error deleting resources", { error: err });
+    throw err;
+  }
 }
```
102-104: Enhance error logging with additional context

When logging errors during the upsert operation, consider adding more contextual information such as workspaceId and resource identifiers. This will facilitate debugging and provide clearer insights into the error's origin.

Update the log statement:

```diff
 log.error("Error upserting resources", {
   error: err,
+  workspaceId,
+  resourceIdentifiers: resourcesToInsert.map((r) => r.identifier),
 });
```
61-72: Optimize resource difference calculation using Sets

The current method of finding added and removed resources uses array filtering with nested some() calls, which is O(n²) for large datasets. Using Set data structures for the id lookups reduces this to O(n).

Refactor the code as follows:

```diff
 const changedEnvs = envsAfterInsert.map((env) => {
   const beforeEnv = envsBeforeInsert.find((e) => e.id === env.id);
-  const beforeResources = beforeEnv?.resources ?? [];
-  const afterResources = env.resources;
-
-  const removedResources = beforeResources.filter(
-    (br) => !afterResources.some((ar) => ar.id === br.id),
-  );
-  const addedResources = afterResources.filter(
-    (ar) => !beforeResources.some((br) => br.id === ar.id),
-  );
+  const beforeResources = beforeEnv?.resources ?? [];
+  const beforeResourceIds = new Set(beforeResources.map((r) => r.id));
+  const afterResourceIds = new Set(env.resources.map((r) => r.id));
+
+  const removedResources = beforeResources.filter(
+    (r) => !afterResourceIds.has(r.id),
+  );
+  const addedResources = env.resources.filter(
+    (r) => !beforeResourceIds.has(r.id),
+  );

   return { ...env, removedResources, addedResources };
 });
```

packages/job-dispatch/src/resource/insert-resources.ts (1)
18-35: Add missing JSDoc comment for getResourcesByWorkspaceIdAndIdentifier

The function getResourcesByWorkspaceIdAndIdentifier lacks a JSDoc comment, which reduces code readability and maintainability.

Consider adding a JSDoc comment like:

```ts
/**
 * Retrieves resources based on workspace ID and identifier pairs.
 * @param tx - Database transaction.
 * @param resources - Array of objects containing `workspaceId` and `identifier`.
 * @returns Promise resolving to an array of resources matching the given pairs.
 */
```

packages/job-dispatch/src/resource/dispatch-resource.ts (4)
32-34: Consider logging when environment is not found

Currently, if the environment is null, the function returns silently. Adding a log statement can help with debugging and tracking issues when an environment with the given envId is not found.

```diff
 if (environment == null) {
+  console.warn(`Environment with id ${envId} not found.`);
   return;
 }
```
72-80: Add handling for empty releases before creating job triggers

If releases is empty, the subsequent calls to create job triggers may not behave as expected. Ensure that you handle this scenario appropriately.

Consider adding logging or throwing an error if this is an unexpected state.

```diff
 if (releases.length === 0) {
+  console.warn("No releases found for deployments. Job triggers will not be created.");
   return;
 }
```
27-87: Ensure input validation for resourceIds

If resourceIds is empty, the function may proceed unnecessarily. Validate inputs early to avoid redundant processing.

```diff
 export async function dispatchJobsForAddedResources(
   db: Tx,
   resourceIds: string[],
   envId: string,
 ): Promise<void> {
+  if (resourceIds.length === 0) {
+    console.warn("No resource IDs provided. Exiting function.");
+    return;
+  }
   const environment = await getEnvironmentWithReleaseChannels(db, envId);
   // Rest of the code...
```
13-26: Ensure consistency in function declarations

Currently, some functions are defined using arrow functions assigned to constants, and others use the function keyword. For consistency and readability, consider standardizing function declarations.

For example, change getEnvironmentWithReleaseChannels to a named function (note the explicit return, since the arrow form used an implicit return):

```diff
-const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) =>
-  db.query.environment.findFirst({
+function getEnvironmentWithReleaseChannels(db: Tx, envId: string) {
+  return db.query.environment.findFirst({
     // Function body...
   });
+}
```

Similarly, adjust getEnvironmentDeployments accordingly.

Also applies to: 89-99
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (13)
- packages/api/src/router/environment.ts (2 hunks)
- packages/job-dispatch/src/events/triggers/resource-deleted.ts (0 hunks)
- packages/job-dispatch/src/index.ts (1 hunks)
- packages/job-dispatch/src/new-resource.ts (0 hunks)
- packages/job-dispatch/src/resource.ts (0 hunks)
- packages/job-dispatch/src/resource/delete.ts (1 hunks)
- packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
- packages/job-dispatch/src/resource/index.ts (1 hunks)
- packages/job-dispatch/src/resource/insert-resource-metadata.ts (1 hunks)
- packages/job-dispatch/src/resource/insert-resource-variables.ts (1 hunks)
- packages/job-dispatch/src/resource/insert-resources.ts (1 hunks)
- packages/job-dispatch/src/resource/upsert.ts (1 hunks)
- packages/job-dispatch/src/resource/utils.ts (1 hunks)
💤 Files with no reviewable changes (3)
- packages/job-dispatch/src/events/triggers/resource-deleted.ts
- packages/job-dispatch/src/new-resource.ts
- packages/job-dispatch/src/resource.ts
✅ Files skipped from review due to trivial changes (1)
- packages/job-dispatch/src/resource/index.ts
🔇 Additional comments (7)
packages/job-dispatch/src/resource/insert-resource-metadata.ts (2)
7-9: LGTM! Well-defined type with proper TypeScript practices
The type definition is clear, properly extends the base Resource type, and uses appropriate TypeScript constructs for the metadata field.
11-14: Consider adding input validation
The function should validate the metadata values before insertion. Consider adding checks for:
- Maximum key/value string lengths
- Invalid characters in keys
- Maximum number of metadata entries per resource
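Those checks could be sketched as a small standalone validator. This is a hedged sketch: the limits, key pattern, and function name are assumptions for illustration, not part of the PR:

```typescript
// Assumed limits for illustration; a real policy would pick these deliberately.
const MAX_KEY_LENGTH = 255;
const MAX_VALUE_LENGTH = 10_000;
const MAX_ENTRIES = 500;
// Assumed key policy: word characters, dots, slashes, and dashes.
const KEY_PATTERN = /^[\w./-]+$/;

// Returns a list of human-readable problems; an empty list means the metadata passes.
function validateResourceMetadata(
  resourceId: string,
  metadata: Record<string, string>,
): string[] {
  const errors: string[] = [];
  const entries = Object.entries(metadata);
  if (entries.length > MAX_ENTRIES) {
    errors.push(`resource ${resourceId}: too many metadata entries (${entries.length})`);
  }
  for (const [key, value] of entries) {
    if (key.length > MAX_KEY_LENGTH) {
      errors.push(`resource ${resourceId}: key "${key}" exceeds ${MAX_KEY_LENGTH} chars`);
    }
    if (!KEY_PATTERN.test(key)) {
      errors.push(`resource ${resourceId}: key "${key}" contains invalid characters`);
    }
    if (value.length > MAX_VALUE_LENGTH) {
      errors.push(`resource ${resourceId}: value for "${key}" exceeds ${MAX_VALUE_LENGTH} chars`);
    }
  }
  return errors;
}
```

Collecting all problems rather than throwing on the first one makes it easier to report every bad entry in a batch at once.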
packages/job-dispatch/src/index.ts (1)
25-25: Verify the migration of resource management functionality
The addition of ./resource/index.js export replaces the removed exports (./new-resource.js and ./resource.js). Let's verify that all previously exported functionality is still available through the new path.
✅ Verification successful
Migration of resource functionality appears complete and consistent
The verification shows that:

- All resource functionality is now properly organized under the resource/ directory with clear modular exports (upsert.js, delete.js, dispatch-resource.js)
- The function rename from dispatchJobsForNewResources to dispatchJobsForAddedResources is consistently applied across the codebase
- References to resource.js in db schema files are part of a different package (packages/db) and unrelated to this refactoring
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify that all resource-related functionality is properly exported
# through the new resource/index.js file

# Check the exports in the new resource/index.js
echo "Checking new exports in resource/index.js:"
ast-grep --pattern 'export $_' packages/job-dispatch/src/resource/index.ts

# Look for any remaining references to old paths
echo -e "\nChecking for references to old paths:"
rg -l "new-resource|/resource\.js" --type ts
```

Length of output: 485
Script:

```shell
#!/bin/bash
# Let's check the content of the old files in git history to compare with new structure

# First, get the content of resource/index.ts to see current exports
echo "Current exports in resource/index.ts:"
cat packages/job-dispatch/src/resource/index.ts

# Check references in db schema files to understand usage
echo -e "\nChecking usage in schema files:"
rg "new-resource|/resource\.js" --type ts -C 2 packages/db/src/schema/

# Look for dispatchJobsForNewResources references to verify rename
echo -e "\nChecking for function rename references:"
rg "dispatchJobsForNewResources|dispatchJobsForAddedResources" --type ts
```

Length of output: 2577
packages/job-dispatch/src/resource/insert-resource-variables.ts (1)
13-16: Verify resource existence before inserting variables
The function assumes that all resource IDs are valid. Consider adding a verification step to ensure resources exist before inserting their variables.
✅ Verification successful
Based on the gathered information, I can now provide a final response:
Resource existence is already verified in the upsert flow
The code is actually safe as the insertResourceVariables function is called within a transaction where resources are first inserted via insertResources. The insertResources function includes validation through findExistingResources, and the variables are only inserted after the resources are confirmed to exist in the same transaction. This transactional approach ensures data integrity.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check if the Resource table has a foreign key constraint on resourceVariable
# This helps verify if database-level integrity is enforced
ast-grep --pattern 'createTable("resourceVariable", (table) => {
  $$$
  table.foreign("resourceId").$$$
  $$$
})'
```

Length of output: 116
Script:

```shell
#!/bin/bash
# Let's check for any migration files that might define the resourceVariable table structure
fd migration rg "resourceVariable"

# Also check the implementation to see if there's any validation
rg -A 5 "insertResourceVariables"
```

Length of output: 1812
Script:

```shell
#!/bin/bash
# Let's check for migration files correctly and also look at insert-resources implementation
fd -e sql -e ts migration

# Check insert-resources implementation
rg -A 10 "insertResources = async"

# Check if there's any resource validation in the upsert flow
rg -B 5 "insertResourceVariables" packages/job-dispatch/src/resource/upsert.ts
```

Length of output: 1496
Script:

```shell
#!/bin/bash
# Let's check the findExistingResources implementation and the resource table schema
rg -A 5 "findExistingResources = async"

# Check if there's a foreign key in the database schema
rg -A 10 "createTable.*resource"
rg -A 10 "createTable.*resourceVariable"
```

Length of output: 628
packages/api/src/router/environment.ts (1)
31-31: Verify all usages of the renamed function across the codebase
The function rename from dispatchJobsForNewResources to dispatchJobsForAddedResources improves clarity but requires verification of all usage locations.
✅ Verification successful
Function rename has been properly implemented across the codebase
The verification shows that:

- No references to the old function name dispatchJobsForNewResources exist in the codebase
- The new function name dispatchJobsForAddedResources is consistently used in:
  - Definition: packages/job-dispatch/src/resource/dispatch-resource.ts
  - Usage: packages/job-dispatch/src/resource/upsert.ts and packages/api/src/router/environment.ts
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for any remaining references to the old function name
# and verify all occurrences of the new name
echo "Checking for old function name references..."
rg "dispatchJobsForNewResources"

echo "Verifying new function name usage..."
rg "dispatchJobsForAddedResources"
```

Length of output: 686
packages/job-dispatch/src/resource/upsert.ts (1)
74-74: Verify initialization of resources.deleted
Ensure that resources.deleted is always initialized as an array, even when no resources are deleted. This prevents potential undefined errors when accessing resources.deleted.length or iterating over it.
Run the following script to confirm the initialization:
packages/job-dispatch/src/resource/insert-resources.ts (1)
88-100: Confirm conflict target and updated columns in onConflictDoUpdate.
Verify that the conflict target [resource.identifier, resource.workspaceId] correctly identifies unique resources, and that all necessary fields are included in the set clause for updates.
Ensure that any additional fields that should be updated on conflict are included, and that the conflict target aligns with the database constraints.
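The composite-key semantics being questioned can be illustrated with an in-memory sketch: two rows conflict only when both workspaceId and identifier match, and on conflict only the listed columns are overwritten. All types and names here are hypothetical illustrations, not the project's schema or Drizzle's API:

```typescript
interface ResourceRow {
  workspaceId: string;
  identifier: string;
  name: string;
  kind: string;
}

// In-memory model of an upsert keyed on (workspaceId, identifier);
// on conflict, only the columns listed in `updatable` are overwritten.
function upsertRows(
  table: ResourceRow[],
  incoming: ResourceRow[],
  updatable: (keyof ResourceRow)[],
): ResourceRow[] {
  const keyOf = (r: ResourceRow) => `${r.workspaceId}\u0000${r.identifier}`;
  const byKey = new Map(table.map((r) => [keyOf(r), { ...r }]));
  for (const row of incoming) {
    const existing = byKey.get(keyOf(row));
    if (existing === undefined) {
      byKey.set(keyOf(row), { ...row }); // no conflict: plain insert
    } else {
      for (const col of updatable) existing[col] = row[col]; // conflict: partial update
    }
  }
  return [...byKey.values()];
}
```

If a column (say, kind) is omitted from the set clause, its old value survives the upsert, which is exactly the kind of silent staleness the review comment warns about.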
```ts
  }));
});

return tx.insert(schema.resourceMetadata).values(resourceMetadataValues);
```
🛠️ Refactor suggestion
Add error handling for the database operation
The function should handle potential database errors gracefully and provide meaningful error messages.
```diff
-return tx.insert(schema.resourceMetadata).values(resourceMetadataValues);
+try {
+  return await tx.insert(schema.resourceMetadata).values(resourceMetadataValues);
+} catch (error) {
+  throw new Error(`Failed to insert resource metadata: ${error.message}`);
+}
```
```ts
import type { Tx } from "@ctrlplane/db";
import type { Resource } from "@ctrlplane/db/schema";
import _ from "lodash";
```
🛠️ Refactor suggestion
Remove unused lodash import
The lodash import is not being used in this file and should be removed.
```diff
-import _ from "lodash";
```
```ts
const resourceMetadataValues = resources.flatMap((resource) => {
  const { id, metadata = {} } = resource;
  return Object.entries(metadata).map(([key, value]) => ({
    resourceId: id,
    key,
    value,
  }));
});
```
🛠️ Refactor suggestion
Add error handling for duplicate metadata keys
The current implementation doesn't handle the case where the same resource has duplicate metadata keys. Consider adding deduplication logic or throwing an error when duplicates are detected.
```diff
 const resourceMetadataValues = resources.flatMap((resource) => {
   const { id, metadata = {} } = resource;
+  const seenKeys = new Set<string>();
-  return Object.entries(metadata).map(([key, value]) => ({
+  return Object.entries(metadata).map(([key, value]) => {
+    if (seenKeys.has(key)) {
+      throw new Error(`Duplicate metadata key "${key}" for resource ${id}`);
+    }
+    seenKeys.add(key);
+    return {
     resourceId: id,
     key,
     value,
-  }));
+    };
+  });
 });
```
```ts
value: sensitive
  ? variablesAES256().encrypt(JSON.stringify(value))
  : value,
```
Add error handling for encryption operation
The encryption operation could fail but lacks error handling. Consider wrapping it in a try-catch block to handle potential encryption failures gracefully.
```diff
-value: sensitive
-  ? variablesAES256().encrypt(JSON.stringify(value))
-  : value,
+value: sensitive
+  ? (() => {
+      try {
+        return variablesAES256().encrypt(JSON.stringify(value));
+      } catch (error) {
+        throw new Error(`Failed to encrypt variable ${key}: ${error.message}`);
+      }
+    })()
+  : value,
```
```ts
const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
  variables.map(({ key, value, sensitive }) => ({
    resourceId: id,
    key,
    value: sensitive
      ? variablesAES256().encrypt(JSON.stringify(value))
      : value,
    sensitive,
  })),
);
```
🛠️ Refactor suggestion
Consider validating variable values before processing
The function processes values without validation. Consider adding checks for:
- Maximum value size before encryption
- Valid JSON structure for sensitive values
- Null/undefined handling
```diff
 export const insertResourceVariables = async (
   tx: Tx,
   resources: ResourceWithVariables[],
 ) => {
+  const MAX_VALUE_SIZE = 1024 * 1024; // 1MB
   const resourceVariablesValues = resources.flatMap(({ id, variables = [] }) =>
-    variables.map(({ key, value, sensitive }) => ({
+    variables.map(({ key, value, sensitive }) => {
+      if (value === null || value === undefined) {
+        throw new Error(`Variable ${key} has null or undefined value`);
+      }
+
+      const stringifiedValue = sensitive ? JSON.stringify(value) : value;
+      if (Buffer.from(String(stringifiedValue)).length > MAX_VALUE_SIZE) {
+        throw new Error(
+          `Variable ${key} exceeds maximum size of ${MAX_VALUE_SIZE} bytes`,
+        );
+      }
+
+      return {
       resourceId: id,
       key,
       value: sensitive
-        ? variablesAES256().encrypt(JSON.stringify(value))
-        : value,
+        ? variablesAES256().encrypt(stringifiedValue)
+        : stringifiedValue,
       sensitive,
-    })),
+      };
+    }),
   );
```
```ts
const deleted = _.differenceBy(
  existingResources,
  resources,
  (t) => t.identifier,
);
```
Reevaluate the calculation of deleted resources.
The deleted array is calculated as the difference between existingResources and the newly upserted resources. However, since this function does not perform deletions, the deleted array may not reflect actual deletions.
Consider the following options:

- Option 1: Remove the deleted array from the return value if deletions are not part of this function's responsibility.
- Option 2: Implement deletion logic to handle resources that exist in the database but are not present in resourcesToInsert.
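For illustration, the set difference behind Option 2 can be computed without lodash (hypothetical helper; a real fix would also have to issue the corresponding deletes in the database):

```typescript
interface Identified {
  identifier: string;
}

// Rows present in the database but absent from the incoming payload.
// Uses a Set for O(existing + incoming) instead of nested scans.
function findDeleted<T extends Identified>(
  existing: T[],
  incoming: Identified[],
): T[] {
  const incomingIds = new Set(incoming.map((r) => r.identifier));
  return existing.filter((r) => !incomingIds.has(r.identifier));
}
```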
```ts
/**
 * Gets resources for a specific provider
 * @param tx - Database transaction
 * @param providerId - ID of the provider to get resources for
 * @param options - Options object
 * @returns Promise resolving to array of resources
 */
```
Remove incorrect @param options from JSDoc comment.
The JSDoc comment for getResourcesByProvider includes a @param options parameter that does not exist in the function signature. This could cause confusion.
Apply the following diff to fix the documentation:
```diff
 /**
  * Gets resources for a specific provider
  * @param tx - Database transaction
  * @param providerId - ID of the provider to get resources for
- * @param options - Options object
  * @returns Promise resolving to array of resources
  */
```
```ts
if (releaseJobTriggers.length === 0) return;

await dispatchReleaseJobTriggers(db)
  .filter(isPassingAllPolicies)
  .releaseTriggers(releaseJobTriggers)
  .dispatch();
```
Ensure error handling during job dispatch
When dispatching release job triggers, any errors occurring during the dispatch process may not be caught. Wrap the dispatch call in a try-catch block to handle potential exceptions.
```diff
+try {
   await dispatchReleaseJobTriggers(db)
     .filter(isPassingAllPolicies)
     .releaseTriggers(releaseJobTriggers)
     .dispatch();
+} catch (error) {
+  console.error("Error dispatching release job triggers:", error);
+  // Handle the error accordingly
+}
```

Committable suggestion skipped: line range outside the PR's diff.
| const releases = await Promise.all(releasePromises).then((rows) => | ||
| rows.filter(isPresent), | ||
| ); |
Handle rejections when awaiting multiple promises
Using Promise.all will reject immediately if any promise rejects, potentially causing unhandled promise rejections. To ensure robustness, consider handling possible rejections from the release queries.
You can use Promise.allSettled to handle each promise individually:
- const releases = await Promise.all(releasePromises).then((rows) =>
- rows.filter(isPresent),
- );
+ const releaseResults = await Promise.allSettled(releasePromises);
+ const releases = releaseResults
+ .filter(result => result.status === 'fulfilled' && isPresent(result.value))
+  .map(result => result.value);

This way, you can process the fulfilled promises and decide how to handle any rejections.
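The difference between the two combinators can be shown with a standalone sketch (plain TypeScript, not project code — the task names are invented): `Promise.all` rejects as soon as any input rejects and discards the successful results, while `Promise.allSettled` waits for every promise and tags each outcome so callers can keep the fulfilled values.

```typescript
// Sketch: contrasting Promise.all's fail-fast behavior with
// Promise.allSettled, which reports every outcome.
const tasks: Promise<string>[] = [
  Promise.resolve("release-a"),
  Promise.reject(new Error("db timeout")),
  Promise.resolve("release-b"),
];

// allSettled never rejects; each entry is tagged "fulfilled" or "rejected".
const settled = await Promise.allSettled(tasks);

const releases = settled
  .filter(
    (r): r is PromiseFulfilledResult<string> => r.status === "fulfilled",
  )
  .map((r) => r.value);

const failures = settled.filter((r) => r.status === "rejected");

console.log(releases); // ["release-a", "release-b"]
console.log(failures.length); // 1
```

The type-predicate filter is what lets TypeScript narrow the settled results down to `PromiseFulfilledResult<string>` before reading `.value`.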
| const handleEventPromises = events.map(handleEvent); | ||
| await Promise.allSettled(handleEventPromises); |
Process results of Promise.allSettled to handle rejections
After using Promise.allSettled, it's important to handle any rejected promises to prevent silent failures.
 const handleEventPromises = events.map(handleEvent);
-await Promise.allSettled(handleEventPromises);
+const results = await Promise.allSettled(handleEventPromises);
+results.forEach((result, index) => {
+  if (result.status === 'rejected') {
+    console.error(`Event handling failed for event ${JSON.stringify(events[index])}:`, result.reason);
+    // Additional error handling if necessary
+  }
+});

Committable suggestion skipped: line range outside the PR's diff.
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (7)
packages/job-dispatch/src/resource/upsert.ts (3)
21-21: Consider replacing `any` with a more specific type

Using `any` for variable values reduces type safety. Consider defining a union type of allowed value types, or using `unknown` if the value type truly cannot be determined at compile time.

- variables?: Array<{ key: string; value: any; sensitive: boolean }>;
+ variables?: Array<{ key: string; value: unknown; sensitive: boolean }>;
70-81: Consider optimizing resource comparison

The current implementation could be memory-intensive for large datasets. Consider using Sets for more efficient lookups.

+ const createResourceIdSet = (resources) => new Set(resources.map((r) => r.id));
  const changedEnvs = envsAfterInsert.map((env) => {
    const beforeEnv = envsBeforeInsert.find((e) => e.id === env.id);
    const beforeResources = beforeEnv?.resources ?? [];
    const afterResources = env.resources;
+   const beforeIds = createResourceIdSet(beforeResources);
+   const afterIds = createResourceIdSet(afterResources);
    const removedResources = beforeResources.filter(
-     (br) => !afterResources.some((ar) => ar.id === br.id),
+     (br) => !afterIds.has(br.id),
    );
    const addedResources = afterResources.filter(
-     (ar) => !beforeResources.some((br) => br.id === ar.id),
+     (ar) => !beforeIds.has(ar.id),
    );
    return { ...env, removedResources, addedResources };
  });
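The Set-based comparison above can be isolated as a small pure helper (hypothetical name `diffResources`, not from the PR): membership checks via `Set.has` make the diff O(n + m) instead of the O(n × m) nested `.some()` scans.

```typescript
interface ResourceRef {
  id: string;
}

// Hypothetical helper illustrating the Set-based diff of two resource lists.
function diffResources<T extends ResourceRef>(
  before: T[],
  after: T[],
): { added: T[]; removed: T[] } {
  const beforeIds = new Set(before.map((r) => r.id));
  const afterIds = new Set(after.map((r) => r.id));
  return {
    added: after.filter((r) => !beforeIds.has(r.id)),
    removed: before.filter((r) => !afterIds.has(r.id)),
  };
}

const before = [{ id: "a" }, { id: "b" }];
const after = [{ id: "b" }, { id: "c" }];
const { added, removed } = diffResources(before, after);
console.log(added.map((r) => r.id)); // ["c"]
console.log(removed.map((r) => r.id)); // ["a"]
```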
92-118: Consider batch processing for job and event dispatching

For better performance with large datasets, consider implementing batch processing instead of processing one environment at a time.

+ const batchSize = 100; // Adjust based on your needs
+ const processBatch = async (items, processor) => {
+   for (let i = 0; i < items.length; i += batchSize) {
+     const batch = items.slice(i, i + batchSize);
+     await processor(batch);
+   }
+ };
+
+ await processBatch(changedEnvs, async (envBatch) => {
+   await Promise.all(envBatch.map(async (env) => {
      if (env.addedResources.length > 0) {
        // ... existing logic
      }
      if (env.removedResources.length > 0) {
        // ... existing logic
      }
+   }));
+ });

packages/job-dispatch/src/resource/dispatch-resource.ts (3)
58-71: Extract deployment filtering logic to a separate function

The deployment filtering logic is complex enough to warrant its own function for better maintainability and reusability.

+const filterDeploymentReleaseChannels = (
+  deployment: typeof deployments[number],
+  releaseChannels: typeof environment.releaseChannels,
+  policyReleaseChannels: typeof environment.policy.environmentPolicyReleaseChannels,
+) => {
+  const envReleaseChannel = releaseChannels.find(
+    (erc) => erc.deploymentId === deployment.id,
+  );
+  const policyReleaseChannel = policyReleaseChannels.find(
+    (prc) => prc.deploymentId === deployment.id,
+  );
+
+  const { releaseFilter } =
+    envReleaseChannel?.releaseChannel ??
+    policyReleaseChannel?.releaseChannel ??
+    {};
+  return { ...deployment, releaseFilter };
+};
 const deploymentsWithReleaseFilter = deployments.map((deployment) =>
-  // ... existing filtering logic ...
+  filterDeploymentReleaseChannels(deployment, releaseChannels, policyReleaseChannels),
 );
172-177: Improve type safety for event actions

Consider defining a union type for event actions to prevent potential typos and improve maintainability.

+type DeploymentEventAction = 'deployment.resource.removed' | 'deployment.resource.added';
 const events = resources.flatMap((resource) =>
   deployments.map((deployment) => ({
-    action: "deployment.resource.removed" as const,
+    action: 'deployment.resource.removed' satisfies DeploymentEventAction,
     payload: { deployment, resource },
   })),
 );
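A minimal, runnable sketch of the `satisfies` pattern (names hypothetical; requires TypeScript 4.9+): the literal keeps its narrow type while a typo in the action string becomes a compile-time error instead of silently producing an unknown event.

```typescript
// Union of allowed event actions; a misspelled literal fails to compile.
type DeploymentEventAction =
  | "deployment.resource.removed"
  | "deployment.resource.added";

const event = {
  // `satisfies` checks the literal against the union without widening it.
  action: "deployment.resource.removed" satisfies DeploymentEventAction,
  payload: { resourceId: "r1" },
};

console.log(event.action); // "deployment.resource.removed"
```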
1-193: Consider implementing a Resource Dispatcher class

The file contains related functions that manage resource dispatch logic. Consider refactoring this into a class-based structure for better encapsulation and state management. This would make it easier to:
- Share common configuration and dependencies
- Manage error handling consistently
- Test the functionality in isolation
- Extend the functionality in the future
Example structure:
class ResourceDispatcher {
  constructor(
    private readonly db: Tx,
    private readonly logger: Logger,
  ) {}

  async dispatchJobsForAddedResources(resourceIds: string[], envId: string): Promise<void> {
    // existing implementation
  }

  async dispatchEventsForRemovedResources(resourceIds: string[], envId: string): Promise<void> {
    // existing implementation
  }

  private async getEnvironmentWithReleaseChannels(envId: string) {
    // existing implementation
  }

  private async getEnvironmentDeployments(envId: string) {
    // existing implementation
  }
}
14-29: Refactor duplicated deletion condition into a helper function

The condition for filtering deleted and non-deleted resources based on `options.deleted` is duplicated in both `getResourcesByProvider` and `getResourcesByWorkspaceIdAndIdentifier`. Extracting this logic into a shared helper function will improve code maintainability and reduce redundancy.

Example of the helper function:

function deletedCondition(deleted: boolean) {
  return deleted
    ? isNotNull(schema.resource.deletedAt)
    : isNull(schema.resource.deletedAt);
}

Update the queries to use the helper:

// In getResourcesByProvider
 and(
   eq(schema.resource.providerId, providerId),
-  options.deleted
-    ? isNotNull(schema.resource.deletedAt)
-    : isNull(schema.resource.deletedAt),
+  deletedCondition(options.deleted),
 ),

// In getResourcesByWorkspaceIdAndIdentifier
-  options.deleted
-    ? isNotNull(schema.resource.deletedAt)
-    : isNull(schema.resource.deletedAt),
+  deletedCondition(options.deleted),

Also applies to: 41-61
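The same idea can be shown without the ORM (an in-memory analogue, not the Drizzle code from the PR): a single predicate factory replaces the repeated ternary on `options.deleted`.

```typescript
interface SoftDeletable {
  deletedAt: Date | null;
}

// Predicate factory: one place encodes the "deleted vs active" rule.
const deletedCondition =
  (deleted: boolean) =>
  (r: SoftDeletable): boolean =>
    deleted ? r.deletedAt !== null : r.deletedAt === null;

const resources: SoftDeletable[] = [
  { deletedAt: null },
  { deletedAt: new Date("2024-01-01") },
];

const active = resources.filter(deletedCondition(false));
const deleted = resources.filter(deletedCondition(true));
console.log(active.length); // 1
console.log(deleted.length); // 1
```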
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (3)
- packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
- packages/job-dispatch/src/resource/upsert.ts (1 hunks)
- packages/job-dispatch/src/resource/utils.ts (1 hunks)
🔇 Additional comments (7)
packages/job-dispatch/src/resource/upsert.ts (3)
49-55: Previous review comment about handling undefined is still applicable
83-90: LGTM! Efficient resource deletion handling
The implementation uses Set for efficient lookups and includes proper error handling with logging.
119-129: LGTM! Comprehensive error handling and logging
The implementation includes proper error handling with detailed logging of both success and failure cases.
packages/job-dispatch/src/resource/dispatch-resource.ts (1)
92-94: Handle Promise rejections using Promise.allSettled
The previous review comment about using Promise.allSettled is still valid here. Using Promise.all will fail fast if any promise rejects, potentially leaving some releases unprocessed.
packages/job-dispatch/src/resource/utils.ts (3)
14-29: Code logic appears correct
The getResourcesByProvider function correctly retrieves resources based on the provider ID and handles the deleted option as expected.
63-99: Duplicate comment: Possible logical error in filtering deleted resources
The condition isNotNull(schema.resource.deletedAt) at line 94 selects deleted resources. If the intention is to select active resources, it should be isNull(schema.resource.deletedAt).
63-99: Duplicate comment: Potential N+1 query problem
The function getEnvironmentsByResourceWithIdentifiers performs a database query within a loop over environments, which may lead to performance issues due to multiple database calls (N+1 problem). Consider refactoring to fetch all necessary resources in a single query.
| const workspaceId = resourcesToInsert[0]?.workspaceId; | ||
| if (workspaceId == null) throw new Error("Workspace ID is required"); | ||
| if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId)) | ||
| throw new Error("All resources must belong to the same workspace"); |
🛠️ Refactor suggestion
Strengthen input validation
The current validation could be improved to handle edge cases:
- Check if resourcesToInsert array is empty before accessing first element
- Consider adding validation for required resource fields
+ if (resourcesToInsert.length === 0) {
+ throw new Error("At least one resource is required");
+ }
const workspaceId = resourcesToInsert[0]?.workspaceId;
if (workspaceId == null) throw new Error("Workspace ID is required");
if (!resourcesToInsert.every((r) => r.workspaceId === workspaceId))
  throw new Error("All resources must belong to the same workspace");
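The strengthened validation can be sketched as a standalone helper (hypothetical name `assertSameWorkspace`, not from the PR): empty input and mixed workspaces both fail fast with a clear message.

```typescript
interface ResourceInput {
  workspaceId: string;
}

// Validates the batch and returns the shared workspace ID.
function assertSameWorkspace(resources: ResourceInput[]): string {
  if (resources.length === 0)
    throw new Error("At least one resource is required");
  const workspaceId = resources[0].workspaceId;
  if (!resources.every((r) => r.workspaceId === workspaceId))
    throw new Error("All resources must belong to the same workspace");
  return workspaceId;
}

const ws = assertSameWorkspace([{ workspaceId: "w1" }, { workspaceId: "w1" }]);

let emptyError = "";
try {
  assertSameWorkspace([]);
} catch (e) {
  emptyError = (e as Error).message;
}

console.log(ws); // "w1"
console.log(emptyError); // "At least one resource is required"
```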
| await Promise.all([ | ||
| insertResourceMetadata(tx, resourcesWithId), | ||
| insertResourceVariables(tx, resourcesWithId), | ||
| ]); |
🛠️ Refactor suggestion
Add error handling for parallel operations
The Promise.all could fail silently if one operation fails. Consider adding specific error handling for metadata and variables insertion.
log.debug("Inserting resource metadata and variables");
await Promise.all([
- insertResourceMetadata(tx, resourcesWithId),
- insertResourceVariables(tx, resourcesWithId),
+ insertResourceMetadata(tx, resourcesWithId).catch(err => {
+ log.error("Failed to insert resource metadata", { error: err });
+ throw err;
+ }),
+ insertResourceVariables(tx, resourcesWithId).catch(err => {
+ log.error("Failed to insert resource variables", { error: err });
+ throw err;
+ }),
 ]);
| const getEnvironmentDeployments = (db: Tx, envId: string) => | ||
| db | ||
| .select() | ||
| .from(SCHEMA.deployment) | ||
| .innerJoin(SCHEMA.system, eq(SCHEMA.deployment.systemId, SCHEMA.system.id)) | ||
| .innerJoin( | ||
| SCHEMA.environment, | ||
| eq(SCHEMA.system.id, SCHEMA.environment.systemId), | ||
| ) | ||
| .where(eq(SCHEMA.environment.id, envId)) | ||
| .then((rows) => rows.map((r) => r.deployment)); |
🛠️ Refactor suggestion
Add error handling for complex database query
The complex join operation could fail due to various database issues. Consider adding error handling.
const getEnvironmentDeployments = (db: Tx, envId: string) =>
db
.select()
.from(SCHEMA.deployment)
.innerJoin(SCHEMA.system, eq(SCHEMA.deployment.systemId, SCHEMA.system.id))
.innerJoin(
SCHEMA.environment,
eq(SCHEMA.system.id, SCHEMA.environment.systemId),
)
.where(eq(SCHEMA.environment.id, envId))
- .then((rows) => rows.map((r) => r.deployment));
+ .then((rows) => rows.map((r) => r.deployment))
+ .catch(error => {
+ log.error('Failed to fetch environment deployments:', { error, envId });
+ throw error;
+  });

Committable suggestion skipped: line range outside the PR's diff.
| const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) => | ||
| db.query.environment.findFirst({ | ||
| where: eq(SCHEMA.environment.id, envId), | ||
| with: { | ||
| releaseChannels: { with: { releaseChannel: true } }, | ||
| policy: { | ||
| with: { | ||
| environmentPolicyReleaseChannels: { with: { releaseChannel: true } }, | ||
| }, | ||
| }, | ||
| system: { with: { deployments: true } }, | ||
| }, | ||
| }); |
🛠️ Refactor suggestion
Add error handling for database query
The database query could fail due to connection issues or other database-related errors. Consider wrapping it in a try-catch block to handle potential errors gracefully.
const getEnvironmentWithReleaseChannels = (db: Tx, envId: string) =>
- db.query.environment.findFirst({
- where: eq(SCHEMA.environment.id, envId),
- with: {
- releaseChannels: { with: { releaseChannel: true } },
- policy: {
- with: {
- environmentPolicyReleaseChannels: { with: { releaseChannel: true } },
- },
- },
- system: { with: { deployments: true } },
- },
- });
+ db.query.environment.findFirst({
+ where: eq(SCHEMA.environment.id, envId),
+ with: {
+ releaseChannels: { with: { releaseChannel: true } },
+ policy: {
+ with: {
+ environmentPolicyReleaseChannels: { with: { releaseChannel: true } },
+ },
+ },
+ system: { with: { deployments: true } },
+ },
+ }).catch(error => {
+ log.error('Failed to fetch environment with release channels:', { error, envId });
+ throw error;
+ });

Committable suggestion skipped: line range outside the PR's diff.
| export const getResourcesByWorkspaceIdAndIdentifier = ( | ||
| tx: Tx, | ||
| resources: { workspaceId: string; identifier: string }[], | ||
| options: { deleted: boolean } = { deleted: false }, | ||
| ) => | ||
| tx | ||
| .select() | ||
| .from(schema.resource) | ||
| .where( | ||
| or( | ||
| ...resources.map((r) => | ||
| and( | ||
| eq(schema.resource.workspaceId, r.workspaceId), | ||
| eq(schema.resource.identifier, r.identifier), | ||
| options.deleted | ||
| ? isNotNull(schema.resource.deletedAt) | ||
| : isNull(schema.resource.deletedAt), | ||
| ), | ||
| ), | ||
| ), | ||
| ); |
🛠️ Refactor suggestion
Consider optimizing the query for large resource lists

Using or(...resources.map(...)) can lead to inefficient queries when the resources array is large, due to the generation of multiple OR conditions. Consider refactoring the query to use inArray for batch matching, which can improve performance. Note, however, that two independent inArray conditions match the cross product of workspace IDs and identifiers, so the results must still be filtered down to the exact (workspaceId, identifier) pairs that were requested.
Apply this diff to optimize the query:
export const getResourcesByWorkspaceIdAndIdentifier = (
tx: Tx,
resources: { workspaceId: string; identifier: string }[],
options: { deleted: boolean } = { deleted: false },
) =>
- tx
- .select()
- .from(schema.resource)
- .where(
- or(
- ...resources.map((r) =>
- and(
- eq(schema.resource.workspaceId, r.workspaceId),
- eq(schema.resource.identifier, r.identifier),
- options.deleted
- ? isNotNull(schema.resource.deletedAt)
- : isNull(schema.resource.deletedAt),
- ),
- ),
- ),
- );
+ const workspaceIds = resources.map((r) => r.workspaceId);
+ const identifiers = resources.map((r) => r.identifier);
+ tx
+ .select()
+ .from(schema.resource)
+ .where(
+ and(
+ inArray(schema.resource.workspaceId, workspaceIds),
+ inArray(schema.resource.identifier, identifiers),
+ options.deleted
+ ? isNotNull(schema.resource.deletedAt)
+ : isNull(schema.resource.deletedAt),
+ ),
+    );

Committable suggestion skipped: line range outside the PR's diff.
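Why the two independent IN-lists over-match can be shown in plain TypeScript (no database; the data is invented): `(workspaceId IN ws) AND (identifier IN ids)` also matches cross pairs that were never requested, so a post-filter on exact (workspaceId, identifier) pairs is still needed after the batched query.

```typescript
interface Key {
  workspaceId: string;
  identifier: string;
}

const requested: Key[] = [
  { workspaceId: "w1", identifier: "a" },
  { workspaceId: "w2", identifier: "b" },
];

const rows: Key[] = [
  { workspaceId: "w1", identifier: "a" },
  { workspaceId: "w1", identifier: "b" }, // matches both IN-lists, but was never requested
  { workspaceId: "w2", identifier: "b" },
];

// Simulates the two independent IN conditions.
const wsSet = new Set(requested.map((k) => k.workspaceId));
const idSet = new Set(requested.map((k) => k.identifier));
const batched = rows.filter(
  (r) => wsSet.has(r.workspaceId) && idSet.has(r.identifier),
);
console.log(batched.length); // 3 — includes the unrequested (w1, b) pair

// Post-filter down to the exact pairs that were asked for.
const wanted = new Set(requested.map((k) => `${k.workspaceId}/${k.identifier}`));
const exact = batched.filter((r) => wanted.has(`${r.workspaceId}/${r.identifier}`));
console.log(exact.length); // 2
```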
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
apps/webservice/src/app/api/v1/resources/route.ts (2)
Line range hint 15-33: Consider adding size limits to request validation

The current schema validation could be enhanced with size limits to prevent potential abuse:
- Maximum number of resources in the array
- Maximum size for metadata object
- Length limits for variable keys and values
Consider updating the schema validation:
const patchBodySchema = z.object({
  workspaceId: z.string().uuid(),
  resources: z.array(
    createResource
      .omit({ lockedAt: true, providerId: true, workspaceId: true })
      .extend({
-       metadata: z.record(z.string()).optional(),
+       metadata: z.record(z.string().max(1000)).max(50).optional(),
        variables: z
          .array(
            z.object({
-             key: z.string(),
+             key: z.string().max(255),
              value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
              sensitive: z.boolean(),
            }),
-         )
+         ).max(100)
          .optional()
          .refine(
            (vars) =>
              vars == null ||
              new Set(vars.map((v) => v.key)).size === vars.length,
            "Duplicate variable keys are not allowed",
          ),
      }),
-   ),
+   ).max(1000),
});
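A library-free sketch of the same limits (the zod schema above would express these as `.max()` calls and refinements; the specific numbers here are illustrative, not from the PR):

```typescript
interface VariableInput {
  key: string;
  value: string | number | boolean | null;
  sensitive: boolean;
}

// Returns a list of validation errors; empty means the body passed.
function validateResourceBody(body: {
  metadata?: Record<string, string>;
  variables?: VariableInput[];
}): string[] {
  const errors: string[] = [];
  const meta = body.metadata ?? {};
  if (Object.keys(meta).length > 50) errors.push("too many metadata keys");
  if (Object.values(meta).some((v) => v.length > 1000))
    errors.push("metadata value too long");
  const vars = body.variables ?? [];
  if (vars.length > 100) errors.push("too many variables");
  if (vars.some((v) => v.key.length > 255)) errors.push("variable key too long");
  const keys = new Set(vars.map((v) => v.key));
  if (keys.size !== vars.length) errors.push("duplicate variable keys");
  return errors;
}

console.log(validateResourceBody({ metadata: { env: "prod" } })); // []
console.log(
  validateResourceBody({
    variables: [
      { key: "A", value: 1, sensitive: false },
      { key: "A", value: 2, sensitive: false },
    ],
  }),
); // ["duplicate variable keys"]
```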
Line range hint 52-66: Enhance error handling for database operations

The current implementation lacks explicit error handling for database operations through `upsertResources`. Consider adding try-catch blocks and structured error responses.

Consider this improvement:
 .handle<{ user: schema.User; body: z.infer<typeof patchBodySchema> }>(
   async (ctx) => {
     if (ctx.body.resources.length === 0)
       return NextResponse.json(
-        { error: "No resources provided" },
+        { error: { code: "EMPTY_RESOURCES", message: "No resources provided" } },
         { status: 400 },
       );

+    try {
       const resources = await upsertResources(
         db,
         ctx.body.resources.map((t) => ({
           ...t,
           workspaceId: ctx.body.workspaceId,
         })),
       );

       return NextResponse.json({ count: resources.all.length });
+    } catch (error) {
+      console.error("Failed to upsert resources:", error);
+      return NextResponse.json(
+        {
+          error: {
+            code: "DATABASE_ERROR",
+            message: "Failed to update resources",
+          },
+        },
+        { status: 500 },
+      );
+    }
   },
 );

packages/job-dispatch/src/resource/dispatch-resource.ts (1)
49-53: Improve type safety after null check

Consider using type narrowing after the environment check to ensure type safety in subsequent code.

 const environment = await getEnvironmentWithReleaseChannels(db, envId);
 if (environment == null) {
   log.warn("Environment not found", { envId });
   return;
 }
+// Type assertion or type guard to narrow the type
+const env: NonNullable<typeof environment> = environment;
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (6)
- apps/webservice/src/app/api/v1/resources/route.ts (1 hunks)
- packages/job-dispatch/src/resource/dispatch-resource.ts (1 hunks)
- packages/job-dispatch/src/resource/insert-resource-metadata.ts (1 hunks)
- packages/job-dispatch/src/resource/insert-resource-variables.ts (1 hunks)
- packages/job-dispatch/src/resource/upsert.ts (1 hunks)
- packages/job-dispatch/src/resource/utils.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/job-dispatch/src/resource/insert-resource-metadata.ts
- packages/job-dispatch/src/resource/insert-resource-variables.ts
- packages/job-dispatch/src/resource/upsert.ts
🔇 Additional comments (9)
apps/webservice/src/app/api/v1/resources/route.ts (1)
66-66: Verify type safety and document API response change
The response structure has changed from returning a direct count of resources to accessing the count via resources.all.length. This change requires verification and documentation:
- Ensure type safety of the `resources` object structure
- Document the API response format change as it might affect clients
Let's verify the type safety and usage:
✅ Verification successful
Response structure change is valid and type-safe
The change from resources.length to resources.all.length is correct and type-safe. The upsertResources function explicitly returns an object with all and deleted arrays, where all contains the successfully upserted resources. This is a deliberate API design to provide more detailed information about the operation's results.
- The implementation in `packages/job-dispatch/src/resource/upsert.ts` confirms the return type
- The route handler in `apps/webservice/src/app/api/v1/resources/route.ts` correctly uses this structure
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the return type of upsertResources
ast-grep --pattern 'function upsertResources($_): $return_type {
$$$
}'
# Find other usages of upsertResources to ensure consistent handling
rg -A 3 'upsertResources\('
# Look for any API documentation that needs updating
fd -e md -e yaml -e json 'api|swagger|openapi'
Length of output: 1904
Script:
#!/bin/bash
# Check the implementation of upsertResources to understand its return type
ast-grep --pattern 'export async function upsertResources($_) {
$$$
}'
# Check the type definition if it exists separately
rg -g '*.ts' -A 5 'type.*Resource'
# Look at the API documentation
cat openapi.v1.json | jq '.paths."/api/v1/resources".post'
Length of output: 26105
Script:
#!/bin/bash
# Check the implementation of upsertResources function
cat packages/job-dispatch/src/resource/upsert.ts
# Check the implementation of the resources route
cat apps/webservice/src/app/api/v1/resources/route.ts
Length of output: 6757
packages/job-dispatch/src/resource/dispatch-resource.ts (5)
1-14: LGTM! Clean imports and proper logger setup.
The imports are well-organized and the logger is properly initialized with a specific label for this module.
22-34: Add error handling for the complex database query
The past review comment about adding error handling for this complex database query is still valid and should be addressed.
92-94: Handle rejections when awaiting multiple promises
The past review comment about using Promise.allSettled for better error handling is still valid.
133-143: Add error handling for the database query
The past review comment about adding error handling for this database query is still valid.
179-191: LGTM! Good error handling practices.
Excellent use of Promise.allSettled for handling multiple events and proper logging of success/failure metrics.
packages/job-dispatch/src/resource/utils.ts (3)
14-29: Function getResourcesByProvider is well-implemented
The function correctly retrieves resources by provider ID, considering the deletion status.
41-61: Consider optimizing query for performance
The previous review comment regarding optimizing the query in getResourcesByWorkspaceIdAndIdentifier is still valid. Using or(...resources.map(...)) can result in inefficient queries for large arrays. Refactoring to use inArray can improve performance.
71-106: Optimize getEnvironmentsByResourceWithIdentifiers to prevent N+1 queries
The previous review comment about the potential N+1 query problem in this function remains applicable. The current implementation may lead to multiple database calls, affecting performance. Consider refactoring to fetch all necessary resources in a single query.
| export async function dispatchJobsForAddedResources( | ||
| db: Tx, | ||
| resourceIds: string[], | ||
| envId: string, | ||
| ): Promise<void> { |
🛠️ Refactor suggestion
Add input validation for resourceIds
The function should validate that the resourceIds array is not empty before proceeding.
export async function dispatchJobsForAddedResources(
db: Tx,
resourceIds: string[],
envId: string,
): Promise<void> {
+ if (resourceIds.length === 0) {
+ log.info("No resources to process");
+ return;
+  }
| export const dispatchEventsForRemovedResources = async ( | ||
| db: Tx, | ||
| resourceIds: string[], | ||
| envId: string, | ||
| ): Promise<void> => { |
🛠️ Refactor suggestion
Add input validation for resourceIds
Similar to dispatchJobsForAddedResources, validate that the resourceIds array is not empty.
export const dispatchEventsForRemovedResources = async (
db: Tx,
resourceIds: string[],
envId: string,
): Promise<void> => {
+ if (resourceIds.length === 0) {
+ log.info("No resources to process");
+ return;
+  }
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (2)
apps/pty-proxy/src/controller/agent-socket.ts (2)
Line range hint 128-139: Enhance error handling and logging for resource updates.

The changes look good, but consider these improvements:
- Add more descriptive error messages
- Add logging for successful resource updates
- Add type checking for the response
Consider applying these changes:
 const { updated } = await upsertResources(db, [
   {
     ...resource,
     name: this.name,
     version: "ctrlplane.access/v1",
     kind: "AccessNode",
     identifier: `ctrlplane/access/access-node/${this.name}`,
     workspaceId: this.workspaceId,
     updatedAt: new Date(),
   },
 ]);
 const res = updated.at(0);
-if (res == null) throw new Error("Failed to create resource");
+if (res == null) {
+  logger.error("Failed to create/update resource", {
+    name: this.name,
+    workspaceId: this.workspaceId,
+  });
+  throw new Error(`Failed to create/update resource for agent ${this.name}`);
+}
+logger.debug("Resource updated successfully", {
+  resourceId: res.id,
+  name: this.name,
+});
 this.resource = res;
 agents.set(res.id, { lastSync: new Date(), agent: this });
Line range hint 128-139: Consider standardizing resource update patterns across the system.

The changes to `updateResource` introduce a new pattern for handling resource updates. Consider:
- Documenting this pattern for consistency
- Ensuring similar error handling across resource-related operations
- Standardizing logging patterns for resource operations
Consider creating a shared utility for resource operations that encapsulates:
- Standard error handling
- Consistent logging
- Type-safe response handling
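The "shared utility" idea above can be sketched as a generic wrapper (all names here — `withResourceOp`, the `Logger` shape — are hypothetical, not from the codebase): one place owns the error handling, logging, and the null-result check that resource operations keep repeating.

```typescript
type Logger = { error: (msg: string, ctx?: unknown) => void };

// Runs a resource operation, logging and rethrowing on failure, and
// converting a null/undefined result into a descriptive error.
async function withResourceOp<T>(
  log: Logger,
  opName: string,
  op: () => Promise<T | null | undefined>,
): Promise<T> {
  let result: T | null | undefined;
  try {
    result = await op();
  } catch (err) {
    log.error(`${opName} threw`, { err });
    throw err;
  }
  if (result == null) {
    log.error(`${opName} returned no result`);
    throw new Error(`${opName} failed`);
  }
  return result;
}

const messages: string[] = [];
const log: Logger = { error: (msg) => messages.push(msg) };

const value = await withResourceOp(log, "updateResource", async () => 42);
console.log(value); // 42

let failed = false;
try {
  await withResourceOp(log, "updateResource", async () => null);
} catch {
  failed = true;
}
console.log(failed); // true
console.log(messages); // ["updateResource returned no result"]
```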
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
- apps/pty-proxy/src/controller/agent-socket.ts (2 hunks)
🔇 Additional comments (1)
apps/pty-proxy/src/controller/agent-socket.ts (1)
Line range hint 128-139: Verify the impact on agent initialization and heartbeat handling.
The updateResource method is called during agent initialization, connection, and heartbeat. Please verify that these scenarios handle the updated response structure correctly.
Let's check the usage patterns:
✅ Verification successful
The changes to updateResource are safely handled across all call sites
Based on the search results, I can verify that the updateResource method is used in three scenarios in agent-socket.ts:
- During initialization: `await agent.updateResource({});`
- During agent connect: called with config and metadata from the connect message
- During heartbeat: called with an empty object: `this.updateResource({});`
All these cases handle the response structure correctly as they don't depend on the return value from updateResource. The method is consistently used for upserting the resource state, and the updated response structure from upsertResources doesn't impact the calling code.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Verify the usage patterns of updateResource method
# and ensure error handling is consistent across all call sites

# Check for error handling patterns in agent initialization
ast-grep --pattern 'static async from($$$) {
  $$$
  await agent.updateResource($$$);
  $$$
}'

# Check for error handling in message handlers
ast-grep --pattern '.on("message",
  ifMessage()
    .is(agentConnect, ($$$) => {
      $$$updateResource($$$)
      $$$
    })
  $$$
)'
```
Length of output: 237
Script:

```shell
#!/bin/bash
# Let's try a different approach to find the usage patterns

# Search for files containing updateResource
rg -l "updateResource"

# Then look for specific usage patterns
rg "updateResource" -B 3 -A 3

# Also search for message handlers related to agent connections
rg "agentConnect|agentHeartbeat" -B 3 -A 3
```
Length of output: 13601
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (1)
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1)
96-99: Consider standardizing error response patterns

The error handling pattern here differs from the GET/DELETE handlers. Consider standardizing the error response format across all endpoints for consistency.
Consider adopting the same pattern used in GET/DELETE handlers:
```diff
  const { updated } = await upsertResources(db, [_.merge(resource, body)]);
  const res = updated.at(0);
- if (res == null) throw new Error("Failed to update resource");
+ if (res == null) {
+   return NextResponse.json(
+     { error: "Failed to update resource" },
+     { status: 404 }
+   );
+ }
  return NextResponse.json(res);
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1 hunks)
🔇 Additional comments (1)
apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts (1)
96-96: Verify the merge behavior and its impact
The use of _.merge could modify the original resource object. Consider verifying if this has any unintended side effects in the codebase.
✅ Verification successful
Let me gather more context about the resource object and error handling patterns.
Based on the gathered evidence, I can now provide the final response:
The use of _.merge is safe in this context
The merge operation is used correctly as part of the upsertResources pattern across the codebase. The code:
- Properly validates resource existence before merge
- Has consistent error handling (404 for missing resources, throw for update failures)
- Uses the merge result only for the database update without keeping references
- Follows the same pattern as other resource updates in the codebase
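For reference, lodash's `_.merge` mutates its first argument in place and returns it. The sketch below imitates that semantics with a hand-rolled deep merge — `mergeDeep` is an illustrative stand-in, not the real lodash implementation:

```typescript
// Illustrative deep merge imitating lodash `_.merge` semantics: the FIRST
// argument is mutated in place and also returned. Hypothetical helper,
// not the actual lodash code.
function mergeDeep(
  target: Record<string, any>,
  source: Record<string, any>,
): Record<string, any> {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === "object" && !Array.isArray(value)) {
      if (!target[key] || typeof target[key] !== "object") target[key] = {};
      mergeDeep(target[key], value);
    } else if (value !== undefined) {
      target[key] = value;
    }
  }
  return target;
}

const resource = { id: "r1", config: { replicas: 1, image: "v1" } };
const body = { config: { replicas: 3 } };
const merged = mergeDeep(resource, body);

console.log(merged === resource); // true — `resource` itself was mutated
console.log(resource.config.replicas); // 3
console.log(resource.config.image); // "v1" — untouched keys survive
```

Because the merged object is passed straight into `upsertResources` and the original reference is not reused afterwards, the mutation is harmless in this route handler — which matches the verification above.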
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Search for other uses of the resource object after merge
rg "_.merge.*resource" --type ts

# Look for any error handling or validation after resource modifications
ast-grep --pattern 'const { updated } = await upsertResources($$$)'
```
Length of output: 1265
Script:
```shell
#!/bin/bash
# Check how resource object is used before merge
ast-grep --pattern 'const resource = $$$'

# Look for error handling patterns in the same file
rg "throw|catch|error" apps/webservice/src/app/api/v1/resources/[resourceId]/route.ts -B2 -A2

# Check other usages of upsertResources to understand patterns
ast-grep --pattern 'upsertResources($$$)'
```
Length of output: 10783
```typescript
const { updated } = await upsertResources(db, [_.merge(resource, body)]);
const res = updated.at(0);
if (res == null) throw new Error("Failed to update resource");
return NextResponse.json(res);
```
🛠️ Refactor suggestion
Improve error handling for resource update failures
The current implementation throws an error when the update fails, which will result in a 500 Internal Server Error. Consider returning a more appropriate HTTP response instead.
Consider this improvement:
```diff
- const { updated } = await upsertResources(db, [_.merge(resource, body)]);
- const res = updated.at(0);
- if (res == null) throw new Error("Failed to update resource");
- return NextResponse.json(res);
+ const { updated } = await upsertResources(db, [_.merge(resource, body)]);
+ const res = updated.at(0);
+ if (res == null) {
+   return NextResponse.json(
+     { error: "Failed to update resource" },
+     { status: 422 }
+   );
+ }
+ return NextResponse.json(res);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
const { updated } = await upsertResources(db, [_.merge(resource, body)]);
const res = updated.at(0);
if (res == null) {
  return NextResponse.json(
    { error: "Failed to update resource" },
    { status: 422 }
  );
}
return NextResponse.json(res);
```
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Refactor