init target variables #166
Walkthrough

The pull request introduces several changes across multiple Dockerfiles and additional files within the project. The primary modification involves adding the new `@ctrlplane/secrets` package (and copying its `package.json` into each Dockerfile), alongside a new `target_variable` table and encrypted target-variable handling in the job-dispatch package.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (5)
packages/secrets/package.json (1)
1-12: LGTM! Consider starting at version 0.0.1

The package metadata and exports are well-structured. Since this is a new package that's being integrated into the system, consider starting at version 0.0.1 to better reflect its initial development status.
apps/jobs/Dockerfile (1)
29-29: Consider adding documentation for the secrets package.

Since this is initializing a new secrets package, it would be helpful to document its purpose and usage in a README.md file within the package directory.
Consider adding a README.md in the packages/secrets directory that explains:
- The purpose of this package
- How it integrates with the jobs application
- Any security considerations or best practices
apps/event-worker/Dockerfile (1)
30-30: Consider documenting secrets management requirements.

Since this package likely handles sensitive information, consider adding documentation about:
- Required environment variables
- Security considerations
- Runtime configuration requirements
Consider implementing these security best practices if not already in place:
- Use environment variables for sensitive values
- Implement secret rotation mechanisms
- Add proper error handling for missing secrets (see the sketch after this list)
- Consider using a secrets management service in production
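As a sketch of the error-handling point above, a startup guard could fail fast when the key is absent (the variable name matches the one introduced in this PR; the exact failure behavior is an assumption):

```typescript
// Sketch: fail fast at startup if the encryption key is missing or malformed,
// rather than surfacing a cryptic cipher error on the first encrypt/decrypt call.
const key = process.env.VARIABLES_AES_256_KEY;
if (!key || key.length !== 64 || !/^[0-9a-fA-F]+$/.test(key)) {
  throw new Error(
    "VARIABLES_AES_256_KEY must be set to a 64-character hexadecimal string",
  );
}
```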
packages/secrets/src/index.ts (1)
5-42: Consider enhancing the encryption implementation security.

While the basic implementation is functional, consider these security improvements:
- Use authenticated encryption (AES-GCM) instead of CBC mode to prevent tampering
- Add HMAC verification
- Implement padding oracle attack protection
- Add key rotation support (a sketch follows the AES-GCM example below)
Here's an example of using AES-GCM:
```typescript
class AES256EncryptionService {
  private algorithm = "aes-256-gcm";

  constructor(private key: string) {
    if (key.length !== 64 || !/^[0-9a-fA-F]+$/.test(key))
      throw new Error(
        "Invalid key: must be a 64-character hexadecimal string (32 bytes / 256 bits)",
      );
  }

  encrypt(data: string): string {
    const iv = crypto.randomBytes(12); // GCM recommended IV length
    const cipher = crypto.createCipheriv(
      this.algorithm,
      Buffer.from(this.key, "hex"),
      iv,
    );
    let encrypted = cipher.update(data, "utf8", "hex");
    encrypted += cipher.final("hex");
    const authTag = cipher.getAuthTag();
    return `${iv.toString("hex")}:${authTag.toString("hex")}:${encrypted}`;
  }

  decrypt(encryptedData: string): string {
    const [ivHex, authTagHex, encryptedText] = encryptedData.split(":");
    if (!ivHex || !authTagHex || !encryptedText)
      throw new Error("Invalid encrypted data format");
    const iv = Buffer.from(ivHex, "hex");
    const authTag = Buffer.from(authTagHex, "hex");
    const decipher = crypto.createDecipheriv(
      this.algorithm,
      Buffer.from(this.key, "hex"),
      iv,
    );
    decipher.setAuthTag(authTag);
    let decrypted = decipher.update(encryptedText, "hex", "utf8");
    decrypted += decipher.final("utf8");
    return decrypted;
  }
}
```
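For the key rotation point, one possible direction is a thin wrapper that prefixes each ciphertext with a key-version identifier. This is a sketch only; the `RotatingEncryptionService` name and version scheme are assumptions, not part of this PR:

```typescript
// Hypothetical key-rotation wrapper: new writes use the current key version,
// while ciphertexts produced under older keys remain decryptable.
class RotatingEncryptionService {
  constructor(
    private keys: Map<string, AES256EncryptionService>,
    private currentVersion: string,
  ) {}

  encrypt(data: string): string {
    const service = this.keys.get(this.currentVersion);
    if (!service) throw new Error("Current key version not configured");
    // Prefix the ciphertext with the key version that produced it.
    return `${this.currentVersion}:${service.encrypt(data)}`;
  }

  decrypt(versionedCiphertext: string): string {
    const separator = versionedCiphertext.indexOf(":");
    if (separator === -1) throw new Error("Missing key version prefix");
    const version = versionedCiphertext.slice(0, separator);
    const service = this.keys.get(version);
    if (!service) throw new Error(`Unknown key version: ${version}`);
    return service.decrypt(versionedCiphertext.slice(separator + 1));
  }
}
```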
packages/db/src/schema/target.ts (1)

220-233: Consider adding audit fields for better traceability.

The table definition could benefit from including audit fields like `created_at` and `updated_at`, which are present in other tables (e.g., the `target` table). This would help track when variables are created and modified. Apply this diff to add audit fields:
```diff
 export const targetVariable = pgTable(
   "target_variable",
   {
     id: uuid("id").primaryKey().defaultRandom(),
     targetId: uuid("target_id")
       .references(() => target.id, { onDelete: "cascade" })
       .notNull(),
     key: text("key").notNull(),
     value: jsonb("value").notNull(),
     sensitive: boolean("sensitive").notNull().default(false),
+    createdAt: timestamp("created_at", { withTimezone: true })
+      .notNull()
+      .defaultNow(),
+    updatedAt: timestamp("updated_at", { withTimezone: true })
+      .$onUpdate(() => new Date()),
   },
   (t) => ({ uniq: uniqueIndex().on(t.targetId, t.key) }),
 );
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
⛔ Files ignored due to path filters (1)
- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- apps/event-worker/Dockerfile (1 hunks)
- apps/jobs/Dockerfile (1 hunks)
- apps/webservice/Dockerfile (1 hunks)
- packages/db/drizzle/0021_first_ikaris.sql (1 hunks)
- packages/db/drizzle/meta/_journal.json (1 hunks)
- packages/db/src/schema/target.ts (2 hunks)
- packages/job-dispatch/package.json (1 hunks)
- packages/job-dispatch/src/target.ts (6 hunks)
- packages/secrets/eslint.config.js (1 hunks)
- packages/secrets/package.json (1 hunks)
- packages/secrets/src/index.ts (1 hunks)
- packages/secrets/tsconfig.json (1 hunks)
- packages/storage/src/index.ts (1 hunks)
✅ Files skipped from review due to trivial changes (3)
- packages/secrets/eslint.config.js
- packages/secrets/tsconfig.json
- packages/storage/src/index.ts
🧰 Additional context used
🪛 Biome
packages/job-dispatch/src/target.ts
[error] 258-258: Comparing to itself is potentially pointless.
(lint/suspicious/noSelfCompare)
[error] 258-258: Comparing to itself is potentially pointless.
(lint/suspicious/noSelfCompare)
🔇 Additional comments (14)
packages/db/drizzle/0021_first_ikaris.sql (2)
9-13: LGTM! Foreign key constraint is well implemented.

The constraint implementation follows best practices:
- Proper exception handling for idempotency
- Appropriate CASCADE on DELETE behavior
- Safe NO ACTION on UPDATE for UUID references
15-15: Verify index coverage for common query patterns.

The unique composite index is well-implemented and prevents duplicate keys per target. However, let's verify if additional indexes are needed based on your query patterns.
✅ Verification successful
Let me gather more information about the query patterns by searching for SQL queries or ORM operations involving the target_variable table.
This will help us understand how the table is being queried and what access patterns are common in the codebase.
Let me try one more search with different patterns to find query usage:
Let me try one final search focusing on the Drizzle schema and relations:
Let me check the content of these specific files to understand the table usage and access patterns:
Based on the analysis of the schema and migrations, I can now provide a final response:
The composite index implementation is appropriate and sufficient
The current unique btree index on (target_id, key) is well-designed and sufficient because:
- It enforces the uniqueness constraint for variables per target
- It supports efficient lookups when querying variables by target_id (prefix of the index); a sketch of such a lookup follows this list
- The table structure shows this is the primary access pattern, as variables are always queried in the context of their target
- The schema shows a direct foreign key relationship to the target table with cascade delete, which this index supports efficiently
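For reference, the target_id-prefixed lookup mentioned above would look roughly like this with Drizzle's query builder (a sketch; `db` and `targetId` are assumed to be in scope, and the table name follows the schema in this PR):

```typescript
import { eq } from "drizzle-orm";

// Fetch all variables for one target. The unique (target_id, key) btree
// index serves this query through its target_id prefix.
const variables = await db
  .select()
  .from(targetVariable)
  .where(eq(targetVariable.targetId, targetId));
```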
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for queries accessing target_variable table to verify index coverage
rg -A 5 "target_variable" --type ts

# Look for potential JOIN operations with the target table
ast-grep --pattern 'SELECT $_$ FROM target_variable JOIN $_$'
```

Length of output: 481
Script:
```bash
#!/bin/bash
# Search for drizzle query operations on target_variable table
ast-grep --pattern 'db.select().from(target_variable)'

# Search for any other query patterns involving target_variable
ast-grep --pattern 'db.query().from(target_variable)'

# Look for any eq/where clauses to understand access patterns
ast-grep --pattern 'eq(target_variable.$_$, $_$)'

# Check if there are any direct imports of target_variable
rg "import.*target_variable" --type ts
```

Length of output: 205
Script:
```bash
#!/bin/bash
# Search for any SQL operations involving target_variable using broader patterns
rg -B 2 -A 4 "from.*target_variable|join.*target_variable|where.*target_variable" --type ts

# Look for drizzle query builder usage with different syntax
ast-grep --pattern '$_$.from(target_variable)'

# Check the schema definition and relations more thoroughly
rg -A 10 "createTable.*target_variable" --type ts

# Look for any file containing both target and target_variable references
rg "target_variable.*target|target.*target_variable" --type ts
```

Length of output: 259
Script:
```bash
#!/bin/bash
# Search for the table definition and any relations
rg -A 15 "export const target_variable" --type ts

# Look for any references to the table in migrations
fd -e sql . | xargs rg -l "target_variable"

# Check for any query-related files that might use this table
fd -e ts -e js . | xargs rg -l "target_variable"
```

Length of output: 262
Script:
```bash
#!/bin/bash
# Check the schema definition in target.ts
cat packages/db/src/schema/target.ts

# Look at both migration files
cat packages/db/drizzle/0000_busy_firebird.sql
cat packages/db/drizzle/0021_first_ikaris.sql
```

Length of output: 46965
packages/secrets/package.json (2)
34-34: LGTM! Good use of shared Prettier config

Using a shared Prettier configuration helps maintain consistent code formatting across the monorepo.
21-33: ⚠️ Potential issue: Review dependency versioning and add test framework

- The `catalog:` version specifier is non-standard. Consider using explicit versions or ranges:

```diff
 "dependencies": {
-   "@t3-oss/env-core": "catalog:",
-   "zod": "catalog:"
+   "@t3-oss/env-core": "^0.7.1",
+   "zod": "^3.22.4"
 },
```
- Consider adding test framework dependencies (e.g., Jest or Vitest) to ensure code quality.
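If Vitest were adopted, a minimal round-trip test could look like the sketch below. The import path and test key are illustrative, and since the module validates `VARIABLES_AES_256_KEY` at import time, the variable would need to be set in the test environment:

```typescript
import { describe, expect, it } from "vitest";

import { variablesAES256 } from "../src/index";

// Hypothetical 64-character hex key, for tests only.
const TEST_KEY = "ab".repeat(32);

describe("AES256EncryptionService", () => {
  it("round-trips a value through encrypt and decrypt", () => {
    const service = variablesAES256(TEST_KEY);
    const plaintext = JSON.stringify({ hello: "world" });

    const ciphertext = service.encrypt(plaintext);
    expect(ciphertext).not.toBe(plaintext);
    expect(service.decrypt(ciphertext)).toBe(plaintext);
  });

  it("rejects keys that are not 64 hex characters", () => {
    expect(() => variablesAES256("not-a-valid-key")).toThrow();
  });
});
```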
apps/jobs/Dockerfile (1)
29-29: LGTM, but let's verify the secrets package implementation.

The addition of the secrets package follows the correct pattern and ordering in the Dockerfile. However, given that this package handles sensitive information, we should verify its implementation.
✅ Verification successful
Let me analyze the implementation further by checking for any security concerns in the usage patterns.
Based on the gathered information, I can now provide the final response:
The secrets package implementation follows secure practices
The implementation shows a well-structured secrets management approach with several security-positive aspects:
- Uses AES-256 encryption with proper key validation (64-char hex requirement)
- Environment variable-based key management with zod validation
- No hardcoded secrets or keys in the codebase
- Limited and controlled usage (only imported in job-dispatch/target.ts)
- Proper encapsulation through a dedicated encryption service
The addition of this package to the Dockerfile is safe and follows security best practices.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the secrets package implementation and its usage
# Look for potential security concerns in the secrets package

# Check the package.json contents
echo "Checking secrets package.json contents:"
cat packages/secrets/package.json

# Look for any hardcoded secrets or sensitive patterns
echo -e "\nChecking for potential sensitive patterns in the secrets package:"
rg -g '!package-lock.json' -g '!pnpm-lock.yaml' -g 'packages/secrets/**' -i '(password|secret|key|token|credential)'
```

Length of output: 2006
Script:
```bash
#!/bin/bash
# Check how the secrets package is being used across the codebase
echo "Checking imports and usage of the secrets package:"
rg -g '!*.json' -g '!*.yaml' -g '!*.md' '@ctrlplane/secrets' -B 2 -A 2

echo -e "\nChecking for any direct key assignments or hardcoded values:"
ast-grep --pattern 'VARIABLES_AES_256_KEY = $_'

echo -e "\nChecking environment variable usage:"
rg 'VARIABLES_AES_256_KEY' -B 2 -A 2
```

Length of output: 1338
packages/job-dispatch/package.json (1)
30-30: LGTM! Verify the secrets package exists.

The dependency addition follows the correct workspace pattern and is properly placed.
Let's verify the secrets package exists and has the expected structure:
✅ Verification successful
The results show that the `@ctrlplane/secrets` package exists and contains an `AES256EncryptionService` class. Let me verify the package name in package.json to ensure it matches the dependency being added.
Dependency verified and correctly configured
The `@ctrlplane/secrets` package exists in the monorepo with the correct package name and exports an `AES256EncryptionService` class, confirming that this is a valid workspace dependency addition.

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the secrets package exists and exports the expected encryption service

# Test 1: Check if the package exists
if [ ! -d "packages/secrets" ]; then
  echo "Error: packages/secrets directory not found"
  exit 1
fi

# Test 2: Verify package.json exists
if [ ! -f "packages/secrets/package.json" ]; then
  echo "Error: packages/secrets/package.json not found"
  exit 1
fi

# Test 3: Check for the encryption service in the source
rg -t ts "class.*EncryptionService" "packages/secrets/src/"
```

Length of output: 202
Script:
```bash
#!/bin/bash
# Verify the package name in secrets/package.json matches the dependency
cat packages/secrets/package.json | jq -r .name
```

Length of output: 68
packages/secrets/src/index.ts (2)
1-4: LGTM! Dependencies are well-chosen.

The selected dependencies provide a robust foundation for cryptographic operations and type-safe environment management.
1-50: Verify the encryption service integration with target variables.

Let's ensure the encryption service is properly integrated with the target variables functionality.
✅ Verification successful
Let me try a different approach to verify the integration by using more generic file search and pattern matching.
This script removes the `--type typescript` flag which was causing issues and uses more generic approaches to find the relevant files and patterns. It will help us verify:
- The available TypeScript/JavaScript files in the codebase
- Usage of the encryption service
- Any encryption/decryption related patterns in the code
Let me check the specific integration in the target.ts file to understand how the encryption service is being used.
This will help us verify the exact usage of the encryption service in the target variables functionality.
Based on the code analysis, I can now provide a final response about the encryption service integration.
Encryption service is properly integrated with target variables
The encryption service is correctly integrated in `packages/job-dispatch/src/target.ts` for handling sensitive target variables:

- It's imported from `@ctrlplane/secrets` as `variablesAES256`
- Used in the `upsertTargetVariables` function to encrypt sensitive values before storing them in the database
- Sensitive values are encrypted using `variablesAES256().encrypt(value)` while non-sensitive values are stored as-is
- The encryption is applied selectively based on the `sensitive` flag in the variable definition

🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the encryption service integration
# Expected: Find usage in target variables handling code

# Check for encryption service usage
echo "Checking encryption service usage:"
rg -l "AES256EncryptionService|variablesAES256" --type typescript

# Check for sensitive data handling
echo -e "\nChecking sensitive data handling patterns:"
rg -A 5 "encrypt|decrypt" --type typescript
```

Length of output: 363
Script:
```bash
#!/bin/bash
# Search for TypeScript/JavaScript files
echo "Finding relevant source files:"
fd "\.(ts|js)$"

echo -e "\nChecking for encryption service usage:"
rg -l "AES256EncryptionService|variablesAES256"

echo -e "\nChecking for encrypt/decrypt patterns:"
rg "encrypt|decrypt" -A 5
```

Length of output: 15488
Script:
```bash
#!/bin/bash
# Check the target.ts implementation
echo "Checking target.ts implementation:"
cat packages/job-dispatch/src/target.ts
```

Length of output: 8355
apps/webservice/Dockerfile (2)
37-37: LGTM! The secrets package.json is properly integrated.

The addition maintains the Dockerfile's existing structure and follows best practices:
- Preserves layer caching strategy
- Consistent with other package.json copying
- Maintains alphabetical ordering
37-37: Verify the secrets package implementation.

Given that this package likely handles sensitive information, let's verify its implementation:
✅ Verification successful
Let me gather more information about the implementation details of the encryption service to ensure it follows security best practices.
The secrets package implementation has been verified and follows security best practices
The implementation has been reviewed and shows proper security measures:
- Uses AES-256-CBC encryption with secure key validation (64-char hex requirement)
- Implements proper IV (Initialization Vector) handling with crypto.randomBytes(16)
- Uses Node's native 'crypto' module with appropriate cipher methods
- Includes proper error handling for invalid keys and encrypted data
- Environment variables are properly validated using zod schema
- The encryption service is well-encapsulated with private fields
The package appears to be focused solely on encryption/decryption functionality with no obvious security concerns in its implementation.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Verify the secrets package implementation and usage

# Check if the secrets package has proper access controls
echo "Checking secrets package access controls..."
rg -l "export|public" packages/secrets/

# Look for potential security-related patterns
echo "Checking for security patterns..."
rg -i "secret|password|key|token|credential" packages/secrets/

# Check for proper environment variable usage
echo "Checking environment variable handling..."
rg -i "process\.env" packages/secrets/
```

Length of output: 1291
Script:
```bash
#!/bin/bash
# Get the full implementation of AES256EncryptionService
echo "Checking AES256EncryptionService implementation..."
ast-grep --pattern 'class AES256EncryptionService { $$$ }'

# Check for crypto imports and usage
echo "Checking crypto usage..."
rg -A 5 "crypto" packages/secrets/

# Check for test coverage
echo "Checking test files..."
fd "test|spec" packages/secrets/
```

Length of output: 3881
packages/db/drizzle/meta/_journal.json (1)
151-157: LGTM! Migration entry properly documented.

The new migration entry follows the established format and maintains consistency with previous entries. This entry corresponds to the creation of the `target_variable` table and its foreign key relationship with the `public.target` table.

packages/db/src/schema/target.ts (1)
228-230: Consider adding validation constraints for key and value fields.

The table definition could benefit from additional constraints:

- The `key` field might need a pattern constraint or size limit to ensure valid variable names.
- Consider adding a check constraint or application-level validation for the `value` field structure.

Let's check if there are any validation patterns used in similar tables:
Consider implementing these validations at both the database and application levels:
- Database: Add check constraints for the key format
- Application: Use zod schema to validate the value structure
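For the application-level half, a zod schema along these lines could express both constraints (a sketch; the schema names and the exact key pattern are assumptions):

```typescript
import { z } from "zod";

// Keys: non-empty, bounded, identifier-like. Mirrors the suggested
// database-level CHECK constraint on key length.
const targetVariableKeySchema = z
  .string()
  .min(1)
  .max(100)
  .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/, "Key must be a valid identifier");

// Values: restrict to JSON primitives rather than arbitrary JSONB.
const targetVariableValueSchema = z.union([
  z.string(),
  z.number(),
  z.boolean(),
  z.null(),
]);
```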
packages/job-dispatch/src/target.ts (2)
81-148: `upsertTargetVariables` function is well-implemented

The `upsertTargetVariables` function effectively handles the upsertion of target variables, including:

- Encrypting sensitive variable values using `variablesAES256().encrypt(value)`.
- Inserting or updating variables with proper conflict resolution.
- Deleting obsolete variables that no longer exist in the input.
- Deleting obsolete variables that no longer exist in the input.
The use of transactions and comprehensive error handling enhances the reliability of the operation.
150-213: `upsertTargetMetadata` function correctly manages metadata

The `upsertTargetMetadata` function properly manages the insertion and updating of target metadata. It includes:
- Efficiently handling existing metadata retrieval.
- Inserting or updating metadata entries with conflict resolution.
- Cleaning up obsolete metadata entries that are no longer needed.
- Robust error handling throughout the process.
This implementation ensures metadata consistency across targets.
```sql
CREATE TABLE IF NOT EXISTS "target_variable" (
	"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
	"target_id" uuid NOT NULL,
	"key" text NOT NULL,
	"value" jsonb NOT NULL,
	"sensitive" boolean DEFAULT false NOT NULL
);
```
🛠️ Refactor suggestion
Consider adding constraints for data integrity.
While the basic structure is sound, consider adding these constraints for better data integrity:
- Add a length limit to the `key` field (e.g., `text(100)`) to prevent excessive storage
- Add a CHECK constraint to validate the JSONB structure
CREATE TABLE IF NOT EXISTS "target_variable" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"target_id" uuid NOT NULL,
- "key" text NOT NULL,
+ "key" text NOT NULL CHECK (length(key) <= 100),
"value" jsonb NOT NULL,
"sensitive" boolean DEFAULT false NOT NULL
+ CONSTRAINT "value_is_valid" CHECK (jsonb_typeof(value) IN ('object', 'array', 'string', 'number', 'boolean'))
);📝 Committable suggestion
| "scripts": { | ||
| "build": "tsc", | ||
| "dev": "tsc --watch", | ||
| "clean": "rm -rf .turbo node_modules", | ||
| "format": "prettier --check . --ignore-path ../../.gitignore", | ||
| "lint": "eslint", | ||
| "typecheck": "tsc --noEmit --emitDeclarationOnly false" | ||
| }, |
Enhance script configurations
A few improvements could be made to the scripts:
- The `lint` script needs a target path: `"lint": "eslint ."`
- Consider adding a `format:fix` script: `"format:fix": "prettier --write . --ignore-path ../../.gitignore"`
"scripts": {
"build": "tsc",
"dev": "tsc --watch",
"clean": "rm -rf .turbo node_modules",
"format": "prettier --check . --ignore-path ../../.gitignore",
- "lint": "eslint",
+ "lint": "eslint .",
+ "format:fix": "prettier --write . --ignore-path ../../.gitignore",
"typecheck": "tsc --noEmit --emitDeclarationOnly false"
},
```dockerfile
COPY packages/validators/package.json ./packages/validators/package.json
COPY packages/logger/package.json ./packages/logger/package.json
COPY packages/job-dispatch/package.json ./packages/job-dispatch/package.json
COPY packages/secrets/package.json ./packages/secrets/package.json
```
💡 Codebase verification
Environment variable VARIABLES_AES_256_KEY required for secrets package
The secrets package implements AES-256 encryption and requires a specific environment variable to be set:
- `VARIABLES_AES_256_KEY`: A 64-character hexadecimal string (32 bytes / 256 bits) required for encryption/decryption operations
- The package uses `@t3-oss/env-core` for environment variable validation
- The encryption service is properly implemented with secure defaults (AES-256-CBC with random IV)
Make sure to set the required environment variable in your event-worker service configuration.
🔗 Analysis chain
LGTM! Verify secrets package configuration.
The addition of the secrets package follows the established pattern for shared packages. The package is correctly copied before dependency installation.
Let's verify the secrets package setup:
Let me try to gather more information about the secrets package to better understand its configuration and requirements.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check if the secrets package requires specific environment variables
rg -g 'packages/secrets/**' 'process\.env\.' -A 2

# Check for any security-related configuration files
fd -g 'packages/secrets/**' -e config.ts -e config.js
```
Length of output: 107
Script:
```bash
#!/bin/bash
# Check the contents of the secrets package
fd . packages/secrets -t f -x cat {}

# Look for any environment variable patterns in a more generic way
rg -g 'packages/secrets/**' -i '(env|secret|config|environment)' -A 2

# Check package.json for dependencies and scripts
cat packages/secrets/package.json
```
Length of output: 5895
```typescript
export const env = createEnv({
  server: { VARIABLES_AES_256_KEY: z.string().min(64).max(64) },
  runtimeEnv: process.env,
});
```
🛠️ Refactor suggestion
Consider enhancing environment variable validation.
The current validation is basic. Consider creating a custom zod validator for the AES key:
const aesKeySchema = z
.string()
.length(64)
.regex(/^[0-9a-fA-F]+$/, "Key must be hexadecimal")
.transform((val) => val.toLowerCase());
export const env = createEnv({
server: {
VARIABLES_AES_256_KEY: aesKeySchema
},
runtimeEnv: process.env as Record<string, string>,
});

```typescript
export const variablesAES256 = (key = env.VARIABLES_AES_256_KEY) =>
  new AES256EncryptionService(key);
```
🛠️ Refactor suggestion
Consider implementing a singleton pattern with proper initialization checks.
The current initialization doesn't prevent multiple instances or handle missing environment variables gracefully.
export class EncryptionService {
private static instance: AES256EncryptionService;
static initialize(key = env.VARIABLES_AES_256_KEY): void {
if (this.instance) {
throw new Error('EncryptionService already initialized');
}
this.instance = new AES256EncryptionService(key);
}
static getInstance(): AES256EncryptionService {
if (!this.instance) {
throw new Error('EncryptionService not initialized');
}
return this.instance;
}
}

```typescript
export const createTargetVariable = createInsertSchema(targetVariable).omit({
  id: true,
});

export const updateTargetVariable = createTargetVariable.partial();
```
🛠️ Refactor suggestion
Enhance type safety for the value field.
Consider adding type definitions and validation rules for the value field to ensure type safety and consistent data structure.
Apply this diff to add type safety:
+type TargetVariableValue = {
+ type: string;
+ data: unknown;
+};
+
export const createTargetVariable = createInsertSchema(targetVariable, {
+ value: z.object({
+ type: z.string(),
+ data: z.unknown(),
+ }),
}).omit({
id: true,
});
export const updateTargetVariable = createTargetVariable.partial();

Committable suggestion was skipped due to low confidence.
```typescript
.then((targets) =>
  targets.map((t) => ({
    ...t,
    ...targetsToInsert.find(
      (t) =>
        t.identifier === t.identifier && t.workspaceId === t.workspaceId,
    ),
  })),
```
Resolve variable shadowing causing self-comparison in upsertTargets
In the upsertTargets function, there's a variable shadowing issue leading to a self-comparison:
...targetsToInsert.find(
(t) =>
t.identifier === t.identifier && t.workspaceId === t.workspaceId,
),

Here, the same variable `t` is used in both the outer and inner scopes, causing `t.identifier === t.identifier`, which is always true. This likely prevents the correct merging of target data.
To fix this, rename the inner function's parameter to avoid shadowing:
.then((targets) =>
targets.map((t) => ({
...t,
...targetsToInsert.find(
- (t) =>
- t.identifier === t.identifier && t.workspaceId === t.workspaceId,
+ (ti) =>
+ ti.identifier === t.identifier && ti.workspaceId === t.workspaceId,
),
})),
)

This change ensures that `ti` from `targetsToInsert` is correctly compared with `t` from `targets`, allowing proper data merging based on `identifier` and `workspaceId`.
🧰 Tools
🪛 Biome
[error] 258-258: Comparing to itself is potentially pointless.
(lint/suspicious/noSelfCompare)
[error] 258-258: Comparing to itself is potentially pointless.
(lint/suspicious/noSelfCompare)
```typescript
if (variablesToDelete.length > 0)
  await tx
    .delete(targetMetadata)
    .where(
      inArray(
        targetMetadata.id,
        variablesToDelete.map((m) => m.id),
      ),
    )
    .catch((err) => {
      log.error("Error deleting target variables", { error: err });
      throw err;
    });
```
Fix incorrect deletion from targetMetadata instead of targetVariable in upsertTargetVariables
In the upsertTargetVariables function, when deleting variables that no longer exist, the deletion is incorrectly targeting the targetMetadata table instead of the targetVariable table. This seems to be an oversight and should be corrected to avoid unintended data modifications.
Apply this diff to fix the deletion target:
if (variablesToDelete.length > 0)
await tx
- .delete(targetMetadata)
+ .delete(targetVariable)
.where(
inArray(
- targetMetadata.id,
+ targetVariable.id,
variablesToDelete.map((m) => m.id),
),
)
.catch((err) => {
log.error("Error deleting target variables", { error: err });
throw err;
});
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (1)
.env.example (1)
1-3: Add documentation for required environment variables.

The removal of comments has made it less clear which variables are required and how they should be configured. Consider adding comments to explain:
- The purpose of each variable
- Required format/constraints
- Whether the variable is optional or required
Example improvement:
```diff
+# Base URL for the application (required)
 BASE_URL="http://localhost:3000"
+
+# PostgreSQL connection string (required)
+# Format: postgresql://<user>:<password>@<host>:<port>/<database>
 POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
+
+# Redis connection string (required)
 REDIS_URL="redis://127.0.0.1:6379"
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (5)
- .env.example (1 hunks)
- apps/webservice/src/app/api/v1/target-providers/[providerId]/set/route.ts (2 hunks)
- apps/webservice/src/app/api/v1/workspaces/[workspaceId]/targets/route.ts (2 hunks)
- packages/job-dispatch/src/target.ts (6 hunks)
- packages/secrets/src/index.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/secrets/src/index.ts
🔇 Additional comments (5)
apps/webservice/src/app/api/v1/target-providers/[providerId]/set/route.ts (1)
70-79: 🛠️ Refactor suggestion: Verify sensitive data handling and improve value transformation.

The current transformation only handles undefined values, but there might be other edge cases to consider. Additionally, there's no explicit handling of sensitive data.
The current transformation only handles undefined values, but there might be other edge cases to consider. Additionally, there's no explicit handling of sensitive data.
Let's verify the sensitive data handling in related files:
Consider enhancing the transformation:
```diff
 const targets = await upsertTargets(
   db,
   targetsToInsert.map((t) => ({
     ...t,
-    variables: t.variables?.map((v) => ({
-      ...v,
-      value: v.value ?? null,
-    })),
+    variables: t.variables?.map((v) => {
+      // Validate and sanitize sensitive values
+      if (v.sensitive && typeof v.value === 'string') {
+        // Add validation for sensitive string values
+        if (v.value.length > 1024) {
+          throw new Error(`Sensitive value for key ${v.key} exceeds maximum length`);
+        }
+      }
+      return {
+        key: v.key,
+        value: v.value ?? null,
+        sensitive: v.sensitive
+      };
+    }),
   })),
 );
```

apps/webservice/src/app/api/v1/workspaces/[workspaceId]/targets/route.ts (1)
182-189: 🛠️ Refactor suggestion: Verify workspace existence and improve type safety in target mapping.

The current implementation has potential issues:
The current implementation has potential issues:
- No validation that workspaceId exists before using it
- The null coalescing for variable values might mask undefined values
- Type safety could be improved for variable values
Let's verify the workspace validation in the codebase:
Consider these improvements:
```diff
 const targets = await upsertTargets(
   db,
   parsedTargets.map((t) => ({
     ...t,
     workspaceId,
     variables: t.variables?.map((v) => ({
       ...v,
-      value: v.value ?? null,
+      value: v.value === undefined ? null : v.value,
     })),
   })),
 );
```

packages/job-dispatch/src/target.ts (3)
19-22: LGTM: Import changes are appropriate.

The new imports for `targetVariable` and `variablesAES256` are correctly added to support the target variables functionality.
270-273: Verify transaction isolation for parallel upserts.

The parallel execution of metadata and variables upsert operations could potentially lead to race conditions if the transaction isolation level is not properly set.
```bash
#!/bin/bash
# Search for transaction isolation level configuration
rg -A 5 "isolation|BEGIN|START TRANSACTION"
```
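If stricter guarantees turn out to be necessary, drizzle-orm's Postgres transactions accept a config object that can pin the isolation level. A sketch, since whether serializable is warranted depends on the actual contention pattern:

```typescript
// Sketch: run both upserts inside one transaction with an explicit
// isolation level. The function names are the ones used in this PR.
await db.transaction(
  async (tx) => {
    await Promise.all([
      upsertTargetMetadata(tx, targets),
      upsertTargetVariables(tx, targets),
    ]);
  },
  { isolationLevel: "serializable" },
);
```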
81-150: ⚠️ Potential issue: Fix incorrect error message in target variables fetch.
The error message mentions "metadata" instead of "variables" which could be confusing for debugging.
```diff
- log.error("Error fetching existing target metadata", { error: err });
+ log.error("Error fetching existing target variables", { error: err });
```

Consider additional security measures for sensitive variables.
While encryption is correctly implemented for sensitive values, consider adding these security enhancements:
- Ensure sensitive values are not logged
- Add input validation for the `sensitive` flag

✅ Verification successful
Let me try another search to find potential sensitive data logging:
Based on the code review and verification results, I can now provide the final response:
Fix incorrect error message and enhance error logging security
The error messages need to be consistent and secure:
- Fix "metadata" to "variables" in the fetch error message
- Current error logging is secure as it only includes the error object without exposing sensitive variable values
```diff
- log.error("Error fetching existing target metadata", { error: err });
+ log.error("Error fetching existing target variables", { error: err });
```

The encryption implementation for sensitive values is correctly handled, and the error logging pattern is secure as it doesn't log any variable values or sensitive data, only error objects. No additional security measures are required.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Check for potential sensitive data logging
rg -A 2 "log\." | rg "variables|value"
```

Length of output: 53120
Script:
```bash
#!/bin/bash
# Check for potential sensitive data logging specifically around variables and values
rg -A 2 "log\." --glob "*.{ts,js}" | rg -i "variable|value|sensitive"
```

Length of output: 284
```
# Since .env is gitignored, you can use .env.example to build a new `.env` file when you clone the repo.
# Keep this file up-to-date when you add new variables to `.env`.
BASE_URL="http://localhost:3000"
POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
```
Security: Remove credentials from example PostgreSQL URL.
The PostgreSQL URL contains default credentials which could be accidentally used in production. Consider using placeholder values instead.
-POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
+POSTGRES_URL="postgresql://<username>:<password>@127.0.0.1:5432/<database>"
```
AUTH_TRUST_HOST=false

VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000
```
Security: Replace example encryption key with placeholder.
The AES key is currently set to all zeros, which is insecure. Replace with a placeholder and add guidance on generating a secure key.
-VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000
+# AES-256 encryption key for variable encryption (required)
+# Generate a secure 64-character hex key for production use
+# Example command: openssl rand -hex 32
+VARIABLES_AES_256_KEY=<generated-hex-key>
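Equivalently, a suitable key can be generated with Node's crypto module (a sketch):

```typescript
import crypto from "node:crypto";

// 32 random bytes, hex-encoded, yield the 64-character key AES-256 expects.
console.log(crypto.randomBytes(32).toString("hex"));
```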
```
# This file will be committed to version control, so make sure not to have any secrets in it.
# If you are cloning this repo, create a copy of this file named `.env` and populate it with your secrets.

# The database URL is used to connect to your Supabase database.
POSTGRES_URL="postgres://postgres.[USERNAME]:[PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?workaround=supabase-pooler.vercel"

# You can generate the secret via 'openssl rand -base64 32' on Unix
# @see https://next-auth.js.org/configuration/options#secret
AUTH_SECRET='supersecret'
AUTH_TRUST_HOST=false
```
Security: Add warning about AUTH_SECRET configuration.
The example includes a weak default secret that could be accidentally used in production. Add a warning and use a placeholder instead.
-AUTH_SECRET='supersecret'
+# WARNING: Generate a strong random secret in production!
+# Minimum length: 32 characters
+AUTH_SECRET='<your-secure-random-secret>'
 AUTH_TRUST_HOST=false
```
POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
REDIS_URL="redis://127.0.0.1:6379"

# This file will be committed to version control, so make sure not to have any secrets in it.
# If you are cloning this repo, create a copy of this file named `.env` and populate it with your secrets.

# The database URL is used to connect to your Supabase database.
POSTGRES_URL="postgres://postgres.[USERNAME]:[PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?workaround=supabase-pooler.vercel"

# You can generate the secret via 'openssl rand -base64 32' on Unix
# @see https://next-auth.js.org/configuration/options#secret
AUTH_SECRET='supersecret'
AUTH_TRUST_HOST=false

VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000

JOB_AGENT_WORKSPACE="ctrlplane"
```
💡 Codebase verification
Based on the search results, I can now provide a comprehensive response about the environment variables.
The .env.example file is missing some required environment variables.
The following required variables should be added to the .env.example file:
- `GITHUB_BOT_NAME` - GitHub bot name (optional)
- `GITHUB_BOT_CLIENT_ID` - GitHub bot client ID (optional)
- `GITHUB_BOT_CLIENT_SECRET` - GitHub bot client secret (optional)
- `GITHUB_BOT_APP_ID` - GitHub bot app ID (optional)
- `GITHUB_BOT_PRIVATE_KEY` - GitHub bot private key (optional)
- `GITHUB_WEBHOOK_SECRET` - GitHub webhook secret (optional)
- `GITHUB_URL` - GitHub URL (optional)
- `OTEL_SAMPLER_RATIO` - OpenTelemetry sampling ratio (optional, defaults to 1)
These variables are defined in the environment schema (apps/webservice/src/env.ts) but missing from the example configuration file.
🔗 Analysis chain
Add missing configuration variables.
Based on the AI summary, this PR introduces new functionality for target variables and secrets management. However, some potentially required variables might be missing from the example configuration.
Let's check for any missing required environment variables:
Let me gather more information about environment variables by checking the schema definitions and configuration files.
Let me search for all environment variable schema definitions to identify required variables.
Let me try a different search pattern to find environment variable definitions.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Search for environment variable usage in the codebase
# Look for process.env references in TypeScript/JavaScript files
rg -t ts -t js 'process\.env\.' -A 1

# Look for @env decorator usage in TypeScript files
rg -t ts '@env' -A 1
```
Length of output: 3847
Script:
```bash
#!/bin/bash
# Search for environment schema definitions
rg -t ts "z.object\({" -A 20

# Look for environment variable validation
fd "env" --type f --exec cat {}
```
Length of output: 67900
Script:
```bash
#!/bin/bash
# Search for environment variable schema definitions in zod objects
rg -t ts "createEnv\({" -B 2 -A 20
```
Length of output: 124
Script:
```bash
#!/bin/bash
# Search for environment variable schemas
rg -t ts "server: \{" -B 2 -A 20

# Also search for any environment variable validation
rg -t ts "z\.string\(\)\.min\(1\)" -B 2 -A 2
```
Length of output: 34031
```typescript
.extend({
  metadata: z.record(z.string()).optional(),
  variables: z
    .array(
      z.object({
        key: z.string(),
        value: z.any(),
        sensitive: z.boolean(),
      }),
    )
    .optional(),
}),
```
🛠️ Refactor suggestion
Enhance schema validation for variables array.
The current schema validation could be strengthened in several ways:
- Replace `z.any()` with more specific type validation
- Add constraints for key length and format
- Prevent duplicate keys
- Limit the total number of variables
- Prevent duplicate keys
- Limit the total number of variables
Consider applying these improvements:
variables: z
.array(
z.object({
- key: z.string(),
+ key: z.string()
+ .min(1)
+ .max(255)
+ .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
- value: z.any(),
+ value: z.union([
+ z.string(),
+ z.number(),
+ z.boolean(),
+ z.null()
+ ]),
sensitive: z.boolean(),
}),
)
+ .max(50) // Limit total variables
+ .refine(
+ (vars) => new Set(vars.map(v => v.key)).size === vars.length,
+ "Duplicate keys are not allowed"
+ )
.optional(),📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| .extend({ | |
| metadata: z.record(z.string()).optional(), | |
| variables: z | |
| .array( | |
| z.object({ | |
| key: z.string(), | |
| value: z.any(), | |
| sensitive: z.boolean(), | |
| }), | |
| ) | |
| .optional(), | |
| }), | |
| .extend({ | |
| metadata: z.record(z.string()).optional(), | |
| variables: z | |
| .array( | |
| z.object({ | |
| key: z.string() | |
| .min(1) | |
| .max(255) | |
| .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/), | |
| value: z.union([ | |
| z.string(), | |
| z.number(), | |
| z.boolean(), | |
| z.null() | |
| ]), | |
| sensitive: z.boolean(), | |
| }), | |
| ) | |
| .max(50) // Limit total variables | |
| .refine( | |
| (vars) => new Set(vars.map(v => v.key)).size === vars.length, | |
| "Duplicate keys are not allowed" | |
| ) | |
| .optional(), | |
| }), |
```typescript
.extend({
  metadata: z.record(z.string()).optional(),
  variables: z
    .array(
      z.object({
        key: z.string(),
        value: z.any(),
        sensitive: z.boolean(),
      }),
    )
    .optional(),
}),
```
Strengthen the schema validation for target variables.
The schema could benefit from additional validation constraints:
- The `value` field uses `z.any()` which is too permissive and could allow complex objects or functions.
- No validation prevents duplicate keys in the variables array.
- No specific validation for sensitive values.
- No specific validation for sensitive values.
Consider applying these improvements:
.extend({
metadata: z.record(z.string()).optional(),
variables: z
.array(
z.object({
- key: z.string(),
+ key: z.string().min(1).max(255).regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
- value: z.any(),
+ value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
sensitive: z.boolean(),
}),
)
+ .refine(
+ (vars) => new Set(vars?.map((v) => v.key)).size === vars?.length,
+ "Duplicate variable keys are not allowed"
+ )
.optional(),
```typescript
const upsertTargetMetadata = async (
  tx: Tx,
  targets: Array<Target & { metadata?: Record<string, string> }>,
) => {
  const existingTargetMetadata = await tx
    .select()
    .from(targetMetadata)
    .where(
      inArray(
        targetMetadata.targetId,
        targets.map((t) => t.id),
      ),
    )
    .catch((err) => {
      log.error("Error fetching existing target metadata", { error: err });
      throw err;
    });

  const targetMetadataValues = targets.flatMap((target) => {
    const { id, metadata = [] } = target;

    return Object.entries(metadata).map(([key, value]) => ({
      targetId: id,
      key,
      value,
    }));
  });

  if (targetMetadataValues.length > 0)
    await tx
      .insert(targetMetadata)
      .values(targetMetadataValues)
      .onConflictDoUpdate({
        target: [targetMetadata.targetId, targetMetadata.key],
        set: buildConflictUpdateColumns(targetMetadata, ["value"]),
      })
      .catch((err) => {
        log.error("Error inserting target metadata", { error: err });
        throw err;
      });

  const metadataToDelete = existingTargetMetadata.filter(
    (metadata) =>
      !targetMetadataValues.some(
        (newMetadata) =>
          newMetadata.targetId === metadata.targetId &&
          newMetadata.key === metadata.key,
      ),
  );

  if (metadataToDelete.length > 0)
    await tx
      .delete(targetMetadata)
      .where(
        inArray(
          targetMetadata.id,
          metadataToDelete.map((m) => m.id),
        ),
      )
      .catch((err) => {
        log.error("Error deleting target metadata", { error: err });
        throw err;
      });
};
```
Fix incorrect metadata default value initialization.
The metadata default value is initialized as an empty array but used with Object.entries which expects an object.
```diff
- const { id, metadata = [] } = target;
+ const { id, metadata = {} } = target;
```
+ const { id, metadata = {} } = target;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const upsertTargetMetadata = async ( | |
| tx: Tx, | |
| targets: Array<Target & { metadata?: Record<string, string> }>, | |
| ) => { | |
| const existingTargetMetadata = await tx | |
| .select() | |
| .from(targetMetadata) | |
| .where( | |
| inArray( | |
| targetMetadata.targetId, | |
| targets.map((t) => t.id), | |
| ), | |
| ) | |
| .catch((err) => { | |
| log.error("Error fetching existing target metadata", { error: err }); | |
| throw err; | |
| }); | |
| const targetMetadataValues = targets.flatMap((target) => { | |
| const { id, metadata = [] } = target; | |
| return Object.entries(metadata).map(([key, value]) => ({ | |
| targetId: id, | |
| key, | |
| value, | |
| })); | |
| }); | |
| if (targetMetadataValues.length > 0) | |
| await tx | |
| .insert(targetMetadata) | |
| .values(targetMetadataValues) | |
| .onConflictDoUpdate({ | |
| target: [targetMetadata.targetId, targetMetadata.key], | |
| set: buildConflictUpdateColumns(targetMetadata, ["value"]), | |
| }) | |
| .catch((err) => { | |
| log.error("Error inserting target metadata", { error: err }); | |
| throw err; | |
| }); | |
| const metadataToDelete = existingTargetMetadata.filter( | |
| (metadata) => | |
| !targetMetadataValues.some( | |
| (newMetadata) => | |
| newMetadata.targetId === metadata.targetId && | |
| newMetadata.key === metadata.key, | |
| ), | |
| ); | |
| if (metadataToDelete.length > 0) | |
| await tx | |
| .delete(targetMetadata) | |
| .where( | |
| inArray( | |
| targetMetadata.id, | |
| metadataToDelete.map((m) => m.id), | |
| ), | |
| ) | |
| .catch((err) => { | |
| log.error("Error deleting target metadata", { error: err }); | |
| throw err; | |
| }); | |
| }; | |
| const upsertTargetMetadata = async ( | |
| tx: Tx, | |
| targets: Array<Target & { metadata?: Record<string, string> }>, | |
| ) => { | |
| const existingTargetMetadata = await tx | |
| .select() | |
| .from(targetMetadata) | |
| .where( | |
| inArray( | |
| targetMetadata.targetId, | |
| targets.map((t) => t.id), | |
| ), | |
| ) | |
| .catch((err) => { | |
| log.error("Error fetching existing target metadata", { error: err }); | |
| throw err; | |
| }); | |
| const targetMetadataValues = targets.flatMap((target) => { | |
| const { id, metadata = {} } = target; | |
| return Object.entries(metadata).map(([key, value]) => ({ | |
| targetId: id, | |
| key, | |
| value, | |
| })); | |
| }); | |
| if (targetMetadataValues.length > 0) | |
| await tx | |
| .insert(targetMetadata) | |
| .values(targetMetadataValues) | |
| .onConflictDoUpdate({ | |
| target: [targetMetadata.targetId, targetMetadata.key], | |
| set: buildConflictUpdateColumns(targetMetadata, ["value"]), | |
| }) | |
| .catch((err) => { | |
| log.error("Error inserting target metadata", { error: err }); | |
| throw err; | |
| }); | |
| const metadataToDelete = existingTargetMetadata.filter( | |
| (metadata) => | |
| !targetMetadataValues.some( | |
| (newMetadata) => | |
| newMetadata.targetId === metadata.targetId && | |
| newMetadata.key === metadata.key, | |
| ), | |
| ); | |
| if (metadataToDelete.length > 0) | |
| await tx | |
| .delete(targetMetadata) | |
| .where( | |
| inArray( | |
| targetMetadata.id, | |
| metadataToDelete.map((m) => m.id), | |
| ), | |
| ) | |
| .catch((err) => { | |
| log.error("Error deleting target metadata", { error: err }); | |
| throw err; | |
| }); | |
| }; |
Summary by CodeRabbit
Release Notes
New Features
- Introduced a `target_variable` table for enhanced data management.
- Added an `AES256EncryptionService` for secure data encryption and decryption.
- Target creation now supports a `variables` field.

Improvements

- Integrated the `@ctrlplane/secrets` package for better dependency management.

Chores

- Added the new `@ctrlplane/secrets` package.
- Updated the `.env.example` file for clarity and relevance.