
Conversation

@jsbroks (Member) commented Oct 24, 2024

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a new target_variable table for enhanced data management.
    • Added AES256EncryptionService for secure data encryption and decryption.
    • New functions for managing target variables and metadata.
    • Enhanced API to support additional metadata with targets through a new variables field.
  • Improvements

    • Updated Dockerfiles to include the @ctrlplane/secrets package for better dependency management.
    • Enhanced error handling and logging in target management functions.
    • Improved type safety for filter values in API procedures.
  • Chores

    • Added ESLint configuration and TypeScript setup for the @ctrlplane/secrets package.
    • Removed unnecessary logging functionality from the storage package.
    • Updated environment variables in the .env.example file for clarity and relevance.

@coderabbitai bot (Contributor) commented Oct 24, 2024

Warning

Rate limit exceeded

@jsbroks has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 27 minutes and 30 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Files that changed from the base of the PR, between d5adfe2 and 21895c6.

Walkthrough

The pull request introduces changes across multiple Dockerfiles and other files in the project. The primary modification adds the packages/secrets/package.json file to the Docker image builds for apps/event-worker, apps/jobs, and apps/webservice. In addition, a new SQL script creates a target_variable table in the database, several new functions in the packages/job-dispatch module manage target variables and metadata, and new configuration and classes are introduced in the packages/secrets directory.

Changes

  • apps/event-worker/Dockerfile: Added COPY packages/secrets/package.json ./packages/secrets/package.json
  • apps/jobs/Dockerfile: Added COPY packages/secrets/package.json ./packages/secrets/package.json
  • apps/webservice/Dockerfile: Added COPY packages/secrets/package.json ./packages/secrets/package.json
  • packages/db/drizzle/0021_first_ikaris.sql: Created target_variable table with columns id, target_id, key, value, sensitive; added foreign key constraint and unique index.
  • packages/db/drizzle/meta/_journal.json: Added new journal entry for migration with index 21.
  • packages/db/src/schema/target.ts: Added table definition for target_variable and new methods createTargetVariable, updateTargetVariable.
  • packages/job-dispatch/package.json: Added dependency "@ctrlplane/secrets": "workspace:*" in dependencies.
  • packages/job-dispatch/src/target.ts: Introduced upsertTargetVariables and upsertTargetMetadata functions; updated upsertTargets method signature.
  • packages/secrets/eslint.config.js: Introduced new ESLint configuration file.
  • packages/secrets/package.json: Created package.json for @ctrlplane/secrets with metadata, scripts, and dependencies.
  • packages/secrets/src/index.ts: Added AES256EncryptionService class for data encryption/decryption; exported environment configuration.
  • packages/secrets/tsconfig.json: Created new TypeScript configuration file.
  • packages/storage/src/index.ts: Removed logger creation function and related imports; exported entities from Storage.js and StorageManager.js.
  • .env.example: Updated environment variables; removed old variables and comments.
  • apps/webservice/src/app/api/v1/target-providers/[providerId]/set/route.ts: Updated bodySchema to include new optional variables field in targets (see the example payload below).
  • apps/webservice/src/app/api/v1/workspaces/[workspaceId]/targets/route.ts: Updated bodySchema to include new optional variables field in targets.
  • packages/api/src/router/system.ts: Updated value property type in filters array from z.any() to z.string() in list procedure.
  • packages/db/src/schema/variable-sets.ts: Updated value field type in createVariableSet and updateVariableSet schemas from z.any() to z.union([z.string(), z.number(), z.boolean(), z.null()]).
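
To illustrate the new optional variables field on the two target routes above, here is a hedged example payload in TypeScript; the surrounding field names (identifier, metadata) are taken from elsewhere in this review, and the overall request shape is an assumption, not the exact API contract.

// Hypothetical request body for the target routes listed above.
// Only key/value/sensitive on each variable are confirmed by the schema change.
const body = {
  targets: [
    {
      identifier: "my-target",
      metadata: { env: "staging" },
      variables: [
        { key: "DB_HOST", value: "db.internal", sensitive: false },
        { key: "DB_PASSWORD", value: "s3cret", sensitive: true },
      ],
    },
  ],
};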

Possibly related PRs

  • fix: Variable sets #150: The changes in this PR involve adding a package.json file from the packages/secrets directory to the Docker build process, which is similar to the main PR's addition of the package.json file from the same directory in the apps/event-worker/Dockerfile.
  • fix: Check existence from identifiers #155: This PR also modifies the upsertTargets function, which is relevant as the main PR's Dockerfile changes may affect how targets are managed in the application, particularly with the inclusion of the secrets package.
  • init target variables #166: This PR also adds a new package from the packages/secrets directory to the Dockerfile, similar to the changes made in the main PR.

Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

@coderabbitai bot left a comment

Actionable comments posted: 8

🧹 Outside diff range and nitpick comments (5)
packages/secrets/package.json (1)

1-12: LGTM! Consider starting at version 0.0.1

The package metadata and exports are well-structured. Since this is a new package that's being integrated into the system, consider starting at version 0.0.1 to better reflect its initial development status.

apps/jobs/Dockerfile (1)

29-29: Consider adding documentation for the secrets package.

Since this is initializing a new secrets package, it would be helpful to document its purpose and usage in a README.md file within the package directory.

Consider adding a README.md in the packages/secrets directory that explains:

  • The purpose of this package
  • How it integrates with the jobs application
  • Any security considerations or best practices
apps/event-worker/Dockerfile (1)

30-30: Consider documenting secrets management requirements.

Since this package likely handles sensitive information, consider adding documentation about:

  • Required environment variables
  • Security considerations
  • Runtime configuration requirements

Consider implementing these security best practices if not already in place:

  • Use environment variables for sensitive values
  • Implement secret rotation mechanisms
  • Add proper error handling for missing secrets
  • Consider using a secrets management service in production
packages/secrets/src/index.ts (1)

5-42: Consider enhancing the encryption implementation security.

While the basic implementation is functional, consider these security improvements:

  1. Use authenticated encryption (AES-GCM) instead of CBC mode to prevent tampering
  2. Add HMAC verification
  3. Implement padding oracle attack protection
  4. Add key rotation support

Here's an example of using AES-GCM:

import crypto from "node:crypto";

class AES256EncryptionService {
  // readonly keeps the literal type so the GCM cipher overloads apply
  private readonly algorithm = "aes-256-gcm";

  constructor(private key: string) {
    if (key.length !== 64 || !/^[0-9a-fA-F]+$/.test(key))
      throw new Error(
        "Invalid key: must be a 64-character hexadecimal string (32 bytes / 256 bits)",
      );
  }

  encrypt(data: string): string {
    const iv = crypto.randomBytes(12); // GCM recommended IV length
    const cipher = crypto.createCipheriv(
      this.algorithm,
      Buffer.from(this.key, "hex"),
      iv,
    );
    let encrypted = cipher.update(data, "utf8", "hex");
    encrypted += cipher.final("hex");
    const authTag = cipher.getAuthTag();
    return `${iv.toString("hex")}:${authTag.toString("hex")}:${encrypted}`;
  }

  decrypt(encryptedData: string): string {
    const [ivHex, authTagHex, encryptedText] = encryptedData.split(":");
    if (!ivHex || !authTagHex || !encryptedText)
      throw new Error("Invalid encrypted data format");

    const iv = Buffer.from(ivHex, "hex");
    const authTag = Buffer.from(authTagHex, "hex");
    const decipher = crypto.createDecipheriv(
      this.algorithm,
      Buffer.from(this.key, "hex"),
      iv,
    );
    decipher.setAuthTag(authTag);
    let decrypted = decipher.update(encryptedText, "hex", "utf8");
    decrypted += decipher.final("utf8");
    return decrypted;
  }
}
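
Hypothetical usage of the sketch above; in practice the key would come from a secret store or the VARIABLES_AES_256_KEY environment variable rather than a literal.

// "0f" repeated 32 times is a syntactically valid 64-character hex key.
const svc = new AES256EncryptionService("0f".repeat(32));
const token = svc.encrypt("db-password");
console.log(svc.decrypt(token) === "db-password"); // true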
packages/db/src/schema/target.ts (1)

220-233: Consider adding audit fields for better traceability.

The table definition could benefit from including audit fields like created_at and updated_at, which are present in other tables (e.g., the target table). This would help track when variables are created and modified.

Apply this diff to add audit fields:

 export const targetVariable = pgTable(
   "target_variable",
   {
     id: uuid("id").primaryKey().defaultRandom(),
     targetId: uuid("target_id")
       .references(() => target.id, { onDelete: "cascade" })
       .notNull(),
     key: text("key").notNull(),
     value: jsonb("value").notNull(),
     sensitive: boolean("sensitive").notNull().default(false),
+    createdAt: timestamp("created_at", { withTimezone: true })
+      .notNull()
+      .defaultNow(),
+    updatedAt: timestamp("updated_at", { withTimezone: true })
+      .$onUpdate(() => new Date()),
   },
   (t) => ({ uniq: uniqueIndex().on(t.targetId, t.key) }),
 );
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between 2899756 and cab9e82.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
  • apps/event-worker/Dockerfile (1 hunks)
  • apps/jobs/Dockerfile (1 hunks)
  • apps/webservice/Dockerfile (1 hunks)
  • packages/db/drizzle/0021_first_ikaris.sql (1 hunks)
  • packages/db/drizzle/meta/_journal.json (1 hunks)
  • packages/db/src/schema/target.ts (2 hunks)
  • packages/job-dispatch/package.json (1 hunks)
  • packages/job-dispatch/src/target.ts (6 hunks)
  • packages/secrets/eslint.config.js (1 hunks)
  • packages/secrets/package.json (1 hunks)
  • packages/secrets/src/index.ts (1 hunks)
  • packages/secrets/tsconfig.json (1 hunks)
  • packages/storage/src/index.ts (1 hunks)
✅ Files skipped from review due to trivial changes (3)
  • packages/secrets/eslint.config.js
  • packages/secrets/tsconfig.json
  • packages/storage/src/index.ts
🧰 Additional context used
🪛 Biome
packages/job-dispatch/src/target.ts

[error] 258-258: Comparing to itself is potentially pointless.

(lint/suspicious/noSelfCompare)


[error] 258-258: Comparing to itself is potentially pointless.

(lint/suspicious/noSelfCompare)

🔇 Additional comments (14)
packages/db/drizzle/0021_first_ikaris.sql (2)

9-13: LGTM! Foreign key constraint is well implemented.

The constraint implementation follows best practices:

  • Proper exception handling for idempotency
  • Appropriate CASCADE on DELETE behavior
  • Safe NO ACTION on UPDATE for UUID references

15-15: Verify index coverage for common query patterns.

The unique composite index is well-implemented and prevents duplicate keys per target. However, let's verify if additional indexes are needed based on your query patterns.

✅ Verification successful

Let me gather more information about the query patterns by searching for SQL queries or ORM operations involving the target_variable table.

This will help us understand how the table is being queried and what access patterns are common in the codebase.


Let me try one more search with different patterns to find query usage:


Let me try one final search focusing on the Drizzle schema and relations:


Let me check the content of these specific files to understand the table usage and access patterns:


Based on the analysis of the schema and migrations, I can now provide a final response:

The composite index implementation is appropriate and sufficient

The current unique btree index on (target_id, key) is well-designed and sufficient because:

  • It enforces the uniqueness constraint for variables per target
  • It supports efficient lookups when querying variables by target_id (prefix of the index)
  • The table structure shows this is the primary access pattern, as variables are always queried in the context of their target
  • The schema shows a direct foreign key relationship to the target table with cascade delete, which this index supports efficiently
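
As a hedged illustration of that primary access pattern (assuming db and the targetVariable table object from the schema shown earlier are in scope):

import { eq } from "drizzle-orm";

// Fetch all variables for one target; the unique btree index on
// (target_id, key) serves this lookup through its target_id prefix.
const vars = await db
  .select()
  .from(targetVariable)
  .where(eq(targetVariable.targetId, targetId));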
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for queries accessing target_variable table to verify index coverage
rg -A 5 "target_variable" --type ts

# Look for potential JOIN operations with the target table
ast-grep --pattern 'SELECT $_$ FROM target_variable JOIN $_$'

Length of output: 481


Script:

#!/bin/bash
# Search for drizzle query operations on target_variable table
ast-grep --pattern 'db.select().from(target_variable)'

# Search for any other query patterns involving target_variable
ast-grep --pattern 'db.query().from(target_variable)'

# Look for any eq/where clauses to understand access patterns
ast-grep --pattern 'eq(target_variable.$_$, $_$)'

# Check if there are any direct imports of target_variable
rg "import.*target_variable" --type ts

Length of output: 205


Script:

#!/bin/bash
# Search for any SQL operations involving target_variable using broader patterns
rg -B 2 -A 4 "from.*target_variable|join.*target_variable|where.*target_variable" --type ts

# Look for drizzle query builder usage with different syntax
ast-grep --pattern '$_$.from(target_variable)'

# Check the schema definition and relations more thoroughly
rg -A 10 "createTable.*target_variable" --type ts

# Look for any file containing both target and target_variable references
rg "target_variable.*target|target.*target_variable" --type ts

Length of output: 259


Script:

#!/bin/bash
# Search for the table definition and any relations
rg -A 15 "export const target_variable" --type ts

# Look for any references to the table in migrations
fd -e sql . | xargs rg -l "target_variable"

# Check for any query-related files that might use this table
fd -e ts -e js . | xargs rg -l "target_variable"

Length of output: 262


Script:

#!/bin/bash
# Check the schema definition in target.ts
cat packages/db/src/schema/target.ts

# Look at both migration files
cat packages/db/drizzle/0000_busy_firebird.sql
cat packages/db/drizzle/0021_first_ikaris.sql

Length of output: 46965

packages/secrets/package.json (2)

34-34: LGTM! Good use of shared Prettier config

Using a shared Prettier configuration helps maintain consistent code formatting across the monorepo.


21-33: ⚠️ Potential issue

Review dependency versioning and add test framework

  1. The catalog: version specifier is non-standard. Consider using explicit versions or ranges:
   "dependencies": {
-    "@t3-oss/env-core": "catalog:",
-    "zod": "catalog:"
+    "@t3-oss/env-core": "^0.7.1",
+    "zod": "^3.22.4"
   },
  2. Consider adding test framework dependencies (e.g., Jest or Vitest) to ensure code quality.
apps/jobs/Dockerfile (1)

29-29: LGTM, but let's verify the secrets package implementation.

The addition of the secrets package follows the correct pattern and ordering in the Dockerfile. However, given that this package handles sensitive information, we should verify its implementation.

✅ Verification successful

Let me analyze the implementation further by checking for any security concerns in the usage patterns.


Based on the gathered information, I can now provide the final response:

The secrets package implementation follows secure practices

The implementation shows a well-structured secrets management approach with several security-positive aspects:

  • Uses AES-256 encryption with proper key validation (64-char hex requirement)
  • Environment variable-based key management with zod validation
  • No hardcoded secrets or keys in the codebase
  • Limited and controlled usage (only imported in job-dispatch/target.ts)
  • Proper encapsulation through a dedicated encryption service

The addition of this package to the Dockerfile is safe and follows security best practices.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the secrets package implementation and its usage
# Look for potential security concerns in the secrets package

# Check the package.json contents
echo "Checking secrets package.json contents:"
cat packages/secrets/package.json

# Look for any hardcoded secrets or sensitive patterns
echo -e "\nChecking for potential sensitive patterns in the secrets package:"
rg -g '!package-lock.json' -g '!pnpm-lock.yaml' -g 'packages/secrets/**' -i '(password|secret|key|token|credential)'

Length of output: 2006


Script:

#!/bin/bash
# Check how the secrets package is being used across the codebase
echo "Checking imports and usage of the secrets package:"
rg -g '!*.json' -g '!*.yaml' -g '!*.md' '@ctrlplane/secrets' -B 2 -A 2

echo -e "\nChecking for any direct key assignments or hardcoded values:"
ast-grep --pattern 'VARIABLES_AES_256_KEY = $_'

echo -e "\nChecking environment variable usage:"
rg 'VARIABLES_AES_256_KEY' -B 2 -A 2

Length of output: 1338

packages/job-dispatch/package.json (1)

30-30: LGTM! Verify the secrets package exists.

The dependency addition follows the correct workspace pattern and is properly placed.

Let's verify the secrets package exists and has the expected structure:

✅ Verification successful

The results show that the @ctrlplane/secrets package exists and contains an AES256EncryptionService class. Let me verify the package name in package.json to ensure it matches the dependency being added.


Dependency verified and correctly configured

The @ctrlplane/secrets package exists in the monorepo with the correct package name and exports an AES256EncryptionService class, confirming that this is a valid workspace dependency addition.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the secrets package exists and exports the expected encryption service

# Test 1: Check if the package exists
if [ ! -d "packages/secrets" ]; then
    echo "Error: packages/secrets directory not found"
    exit 1
fi

# Test 2: Verify package.json exists
if [ ! -f "packages/secrets/package.json" ]; then
    echo "Error: packages/secrets/package.json not found"
    exit 1
fi

# Test 3: Check for the encryption service in the source
rg -t ts "class.*EncryptionService" "packages/secrets/src/"

Length of output: 202


Script:

#!/bin/bash
# Verify the package name in secrets/package.json matches the dependency
cat packages/secrets/package.json | jq -r .name

Length of output: 68

packages/secrets/src/index.ts (2)

1-4: LGTM! Dependencies are well-chosen.

The selected dependencies provide a robust foundation for cryptographic operations and type-safe environment management.


1-50: Verify the encryption service integration with target variables.

Let's ensure the encryption service is properly integrated with the target variables functionality.

✅ Verification successful

Let me try a different approach to verify the integration by using more generic file search and pattern matching.

This script removes the --type typescript flag which was causing issues and uses more generic approaches to find the relevant files and patterns. It will help us verify:

  1. The available TypeScript/JavaScript files in the codebase
  2. Usage of the encryption service
  3. Any encryption/decryption related patterns in the code

Let me check the specific integration in the target.ts file to understand how the encryption service is being used.

This will help us verify the exact usage of the encryption service in the target variables functionality.


Based on the code analysis, I can now provide a final response about the encryption service integration.

Encryption service is properly integrated with target variables

The encryption service is correctly integrated in packages/job-dispatch/src/target.ts for handling sensitive target variables:

  • It's imported from @ctrlplane/secrets as variablesAES256
  • Used in upsertTargetVariables function to encrypt sensitive values before storing them in the database
  • Sensitive values are encrypted using variablesAES256().encrypt(value) while non-sensitive values are stored as-is
  • The encryption is applied selectively based on the sensitive flag in the variable definition
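
A minimal sketch of that selective step; variablesAES256 is the export named in this review, while the variables array, target, and the String() serialization are illustrative assumptions:

import { variablesAES256 } from "@ctrlplane/secrets";

// Encrypt only the rows flagged sensitive; store the rest as-is.
const rows = variables.map((v) => ({
  targetId: target.id,
  key: v.key,
  value: v.sensitive ? variablesAES256().encrypt(String(v.value)) : v.value,
  sensitive: v.sensitive,
}));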
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the encryption service integration
# Expected: Find usage in target variables handling code

# Check for encryption service usage
echo "Checking encryption service usage:"
rg -l "AES256EncryptionService|variablesAES256" --type typescript

# Check for sensitive data handling
echo -e "\nChecking sensitive data handling patterns:"
rg -A 5 "encrypt|decrypt" --type typescript

Length of output: 363


Script:

#!/bin/bash
# Search for TypeScript/JavaScript files
echo "Finding relevant source files:"
fd "\.(ts|js)$"

echo -e "\nChecking for encryption service usage:"
rg -l "AES256EncryptionService|variablesAES256"

echo -e "\nChecking for encrypt/decrypt patterns:"
rg "encrypt|decrypt" -A 5

Length of output: 15488


Script:

#!/bin/bash
# Check the target.ts implementation
echo "Checking target.ts implementation:"
cat packages/job-dispatch/src/target.ts

Length of output: 8355

apps/webservice/Dockerfile (2)

37-37: LGTM! The secrets package.json is properly integrated.

The addition maintains the Dockerfile's existing structure and follows best practices:

  • Preserves layer caching strategy
  • Consistent with other package.json copying
  • Maintains alphabetical ordering

37-37: Verify the secrets package implementation.

Given that this package likely handles sensitive information, let's verify its implementation:

✅ Verification successful

Let me gather more information about the implementation details of the encryption service to ensure it follows security best practices.


The secrets package implementation has been verified and follows security best practices

The implementation has been reviewed and shows proper security measures:

  • Uses AES-256-CBC encryption with secure key validation (64-char hex requirement)
  • Implements proper IV (Initialization Vector) handling with crypto.randomBytes(16)
  • Uses Node's native 'crypto' module with appropriate cipher methods
  • Includes proper error handling for invalid keys and encrypted data
  • Environment variables are properly validated using zod schema
  • The encryption service is well-encapsulated with private fields

The package appears to be focused solely on encryption/decryption functionality with no obvious security concerns in its implementation.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the secrets package implementation and usage

# Check if the secrets package has proper access controls
echo "Checking secrets package access controls..."
rg -l "export|public" packages/secrets/

# Look for potential security-related patterns
echo "Checking for security patterns..."
rg -i "secret|password|key|token|credential" packages/secrets/

# Check for proper environment variable usage
echo "Checking environment variable handling..."
rg -i "process\.env" packages/secrets/

Length of output: 1291


Script:

#!/bin/bash
# Get the full implementation of AES256EncryptionService
echo "Checking AES256EncryptionService implementation..."
ast-grep --pattern 'class AES256EncryptionService {
  $$$
}'

# Check for crypto imports and usage
echo "Checking crypto usage..."
rg -A 5 "crypto" packages/secrets/

# Check for test coverage
echo "Checking test files..."
fd "test|spec" packages/secrets/

Length of output: 3881

packages/db/drizzle/meta/_journal.json (1)

151-157: LGTM! Migration entry properly documented.

The new migration entry follows the established format and maintains consistency with previous entries. This entry corresponds to the creation of the target_variable table and its foreign key relationship with the public.target table.

packages/db/src/schema/target.ts (1)

228-230: Consider adding validation constraints for key and value fields.

The table definition could benefit from additional constraints:

  1. The key field might need a pattern constraint or size limit to ensure valid variable names.
  2. Consider adding a check constraint or application-level validation for the value field structure.

Let's check if there are any validation patterns used in similar tables:

Consider implementing these validations at both the database and application levels:

  1. Database: Add check constraints for the key format
  2. Application: Use zod schema to validate the value structure (see the sketch below)
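
For the application-level half, a hedged zod sketch; the exact key rules are assumptions, mirroring the unions suggested elsewhere in this review:

import { z } from "zod";

// Restrict variable keys to an identifier-like format and bounded length,
// and values to JSON primitives rather than z.any().
const targetVariableKey = z
  .string()
  .min(1)
  .max(100)
  .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/);
const targetVariableValue = z.union([
  z.string(),
  z.number(),
  z.boolean(),
  z.null(),
]);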
packages/job-dispatch/src/target.ts (2)

81-148: upsertTargetVariables function is well-implemented

The upsertTargetVariables function effectively handles upserting target variables, including:

  • Encrypting sensitive variable values using variablesAES256().encrypt(value).
  • Inserting or updating variables with proper conflict resolution.
  • Deleting obsolete variables that no longer exist in the input.

The use of transactions and comprehensive error handling enhances the reliability of the operation.
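
A hedged sketch of the insert-or-update step, assuming drizzle's onConflictDoUpdate API and the (target_id, key) unique index defined earlier; this illustrates the pattern rather than the PR's actual implementation (tx, targetVariable, and rows are assumed to be in scope):

import { sql } from "drizzle-orm";

// Upsert a batch of variable rows keyed by (target_id, key);
// on conflict, take the incoming value and sensitive flag.
await tx
  .insert(targetVariable)
  .values(rows)
  .onConflictDoUpdate({
    target: [targetVariable.targetId, targetVariable.key],
    set: {
      value: sql`excluded.value`,
      sensitive: sql`excluded.sensitive`,
    },
  });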


150-213: upsertTargetMetadata function correctly manages metadata

The upsertTargetMetadata function properly manages the insertion and updating of target metadata. It includes:

  • Efficiently handling existing metadata retrieval.
  • Inserting or updating metadata entries with conflict resolution.
  • Cleaning up obsolete metadata entries that are no longer needed.
  • Robust error handling throughout the process.

This implementation ensures metadata consistency across targets.

Comment on lines +1 to +7
CREATE TABLE IF NOT EXISTS "target_variable" (
"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
"target_id" uuid NOT NULL,
"key" text NOT NULL,
"value" jsonb NOT NULL,
"sensitive" boolean DEFAULT false NOT NULL
);

🛠️ Refactor suggestion

Consider adding constraints for data integrity.

While the basic structure is sound, consider adding these constraints for better data integrity:

  1. Add a length limit to the key field (e.g., CHECK (length(key) <= 100)) to prevent excessive storage
  2. Add a CHECK constraint to validate the JSONB structure
 CREATE TABLE IF NOT EXISTS "target_variable" (
 	"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
 	"target_id" uuid NOT NULL,
-	"key" text NOT NULL,
+	"key" text NOT NULL CHECK (length(key) <= 100),
 	"value" jsonb NOT NULL,
-	"sensitive" boolean DEFAULT false NOT NULL
+	"sensitive" boolean DEFAULT false NOT NULL,
+	CONSTRAINT "value_is_valid" CHECK (jsonb_typeof(value) IN ('object', 'array', 'string', 'number', 'boolean'))
 );

Comment on lines +13 to +20
"scripts": {
"build": "tsc",
"dev": "tsc --watch",
"clean": "rm -rf .turbo node_modules",
"format": "prettier --check . --ignore-path ../../.gitignore",
"lint": "eslint",
"typecheck": "tsc --noEmit --emitDeclarationOnly false"
},

⚠️ Potential issue

Enhance script configurations

A few improvements could be made to the scripts:

  1. The lint script needs a target path: "lint": "eslint ."
  2. Consider adding a format:fix script: "format:fix": "prettier --write . --ignore-path ../../.gitignore"
   "scripts": {
     "build": "tsc",
     "dev": "tsc --watch",
     "clean": "rm -rf .turbo node_modules",
     "format": "prettier --check . --ignore-path ../../.gitignore",
-    "lint": "eslint",
+    "lint": "eslint .",
+    "format:fix": "prettier --write . --ignore-path ../../.gitignore",
     "typecheck": "tsc --noEmit --emitDeclarationOnly false"
   },

COPY packages/validators/package.json ./packages/validators/package.json
COPY packages/logger/package.json ./packages/logger/package.json
COPY packages/job-dispatch/package.json ./packages/job-dispatch/package.json
COPY packages/secrets/package.json ./packages/secrets/package.json

💡 Codebase verification

Environment variable VARIABLES_AES_256_KEY required for secrets package

The secrets package implements AES-256 encryption and requires a specific environment variable to be set:

  • VARIABLES_AES_256_KEY: A 64-character hexadecimal string (32 bytes / 256 bits) required for encryption/decryption operations
  • The package uses @t3-oss/env-core for environment variable validation
  • The encryption service is properly implemented with secure defaults (AES-256-CBC with random IV)

Make sure to set the required environment variable in your event-worker service configuration.
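
If you need to generate a compliant key, a hedged Node one-liner equivalent to openssl rand -hex 32:

import crypto from "node:crypto";

// 32 random bytes rendered as 64 hex characters.
console.log(crypto.randomBytes(32).toString("hex"));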

🔗 Analysis chain

LGTM! Verify secrets package configuration.

The addition of the secrets package follows the established pattern for shared packages. The package is correctly copied before dependency installation.

Let's verify the secrets package setup:


Let me try to gather more information about the secrets package to better understand its configuration and requirements.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the secrets package requires specific environment variables
rg -g 'packages/secrets/**' 'process\.env\.' -A 2

# Check for any security-related configuration files
fd -g 'packages/secrets/**' -e config.ts -e config.js

Length of output: 107


Script:

#!/bin/bash
# Check the contents of the secrets package
fd . packages/secrets -t f -x cat {}

# Look for any environment variable patterns in a more generic way
rg -g 'packages/secrets/**' -i '(env|secret|config|environment)' -A 2

# Check package.json for dependencies and scripts
cat packages/secrets/package.json

Length of output: 5895

Comment on lines 44 to 47
export const env = createEnv({
server: { VARIABLES_AES_256_KEY: z.string().min(64).max(64) },
runtimeEnv: process.env,
});

🛠️ Refactor suggestion

Consider enhancing environment variable validation.

The current validation is basic. Consider creating a custom zod validator for the AES key:

const aesKeySchema = z
  .string()
  .length(64)
  .regex(/^[0-9a-fA-F]+$/, "Key must be hexadecimal")
  .transform((val) => val.toLowerCase());

export const env = createEnv({
  server: { 
    VARIABLES_AES_256_KEY: aesKeySchema
  },
  runtimeEnv: process.env as Record<string, string>,
});

Comment on lines +49 to +50
export const variablesAES256 = (key = env.VARIABLES_AES_256_KEY) =>
new AES256EncryptionService(key);

🛠️ Refactor suggestion

Consider implementing a singleton pattern with proper initialization checks.

The current initialization doesn't prevent multiple instances or handle missing environment variables gracefully.

export class EncryptionService {
  private static instance: AES256EncryptionService;

  static initialize(key = env.VARIABLES_AES_256_KEY): void {
    if (this.instance) {
      throw new Error('EncryptionService already initialized');
    }
    this.instance = new AES256EncryptionService(key);
  }

  static getInstance(): AES256EncryptionService {
    if (!this.instance) {
      throw new Error('EncryptionService not initialized');
    }
    return this.instance;
  }
}
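
Hypothetical usage of that sketch:

// Initialize once at startup, then share the single instance.
EncryptionService.initialize();
const ciphertext = EncryptionService.getInstance().encrypt("hello");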

Comment on lines +235 to +239
export const createTargetVariable = createInsertSchema(targetVariable).omit({
id: true,
});

export const updateTargetVariable = createTargetVariable.partial();

🛠️ Refactor suggestion

Enhance type safety for the value field.

Consider adding type definitions and validation rules for the value field to ensure type safety and consistent data structure.

Apply this diff to add type safety:

+type TargetVariableValue = {
+  type: string;
+  data: unknown;
+};
+
-export const createTargetVariable = createInsertSchema(targetVariable).omit({
+export const createTargetVariable = createInsertSchema(targetVariable, {
+  value: z.object({
+    type: z.string(),
+    data: z.unknown(),
+  }),
+}).omit({
   id: true,
 });

 export const updateTargetVariable = createTargetVariable.partial();

Committable suggestion was skipped due to low confidence.

Comment on lines 253 to 260
.then((targets) =>
targets.map((t) => ({
...t,
...targetsToInsert.find(
(t) =>
t.identifier === t.identifier && t.workspaceId === t.workspaceId,
),
})),

⚠️ Potential issue

Resolve variable shadowing causing self-comparison in upsertTargets

In the upsertTargets function, there's a variable shadowing issue leading to a self-comparison:

...targetsToInsert.find(
  (t) =>
    t.identifier === t.identifier && t.workspaceId === t.workspaceId,
),

Here, the same variable t is used in both the outer and inner scopes, causing t.identifier === t.identifier, which is always true. This likely prevents the correct merging of target data.

To fix this, rename the inner function's parameter to avoid shadowing:

     .then((targets) =>
       targets.map((t) => ({
         ...t,
         ...targetsToInsert.find(
-          (t) =>
-            t.identifier === t.identifier && t.workspaceId === t.workspaceId,
+          (ti) =>
+            ti.identifier === t.identifier && ti.workspaceId === t.workspaceId,
         ),
       })),
     )

This change ensures that ti from targetsToInsert is correctly compared with t from targets, allowing proper data merging based on identifier and workspaceId.

🧰 Tools
🪛 Biome

[error] 258-258: Comparing to itself is potentially pointless.

(lint/suspicious/noSelfCompare)


[error] 258-258: Comparing to itself is potentially pointless.

(lint/suspicious/noSelfCompare)

Comment on lines 135 to 147
if (variablesToDelete.length > 0)
await tx
.delete(targetMetadata)
.where(
inArray(
targetMetadata.id,
variablesToDelete.map((m) => m.id),
),
)
.catch((err) => {
log.error("Error deleting target variables", { error: err });
throw err;
});

⚠️ Potential issue

Fix incorrect deletion from targetMetadata instead of targetVariable in upsertTargetVariables

In the upsertTargetVariables function, when deleting variables that no longer exist, the deletion is incorrectly targeting the targetMetadata table instead of the targetVariable table. This seems to be an oversight and should be corrected to avoid unintended data modifications.

Apply this diff to fix the deletion target:

     if (variablesToDelete.length > 0)
       await tx
-        .delete(targetMetadata)
+        .delete(targetVariable)
         .where(
           inArray(
-            targetMetadata.id,
+            targetVariable.id,
             variablesToDelete.map((m) => m.id),
           ),
         )
         .catch((err) => {
           log.error("Error deleting target variables", { error: err });
           throw err;
         });

@coderabbitai bot left a comment

Actionable comments posted: 7

🧹 Outside diff range and nitpick comments (1)
.env.example (1)

1-3: Add documentation for required environment variables.

The removal of comments has made it less clear which variables are required and how they should be configured. Consider adding comments to explain:

  • The purpose of each variable
  • Required format/constraints
  • Whether the variable is optional or required

Example improvement:

+# Base URL for the application (required)
 BASE_URL="http://localhost:3000"
+
+# PostgreSQL connection string (required)
+# Format: postgresql://<user>:<password>@<host>:<port>/<database>
 POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
+
+# Redis connection string (required)
 REDIS_URL="redis://127.0.0.1:6379"
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR, between cab9e82 and d5adfe2.

📒 Files selected for processing (5)
  • .env.example (1 hunks)
  • apps/webservice/src/app/api/v1/target-providers/[providerId]/set/route.ts (2 hunks)
  • apps/webservice/src/app/api/v1/workspaces/[workspaceId]/targets/route.ts (2 hunks)
  • packages/job-dispatch/src/target.ts (6 hunks)
  • packages/secrets/src/index.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/secrets/src/index.ts
🔇 Additional comments (5)
apps/webservice/src/app/api/v1/target-providers/[providerId]/set/route.ts (1)

70-79: 🛠️ Refactor suggestion

Verify sensitive data handling and improve value transformation.

The current transformation only handles undefined values, but there might be other edge cases to consider. Additionally, there's no explicit handling of sensitive data.

Let's verify the sensitive data handling in related files:

Consider enhancing the transformation:

   const targets = await upsertTargets(
     db,
     targetsToInsert.map((t) => ({
       ...t,
-      variables: t.variables?.map((v) => ({
-        ...v,
-        value: v.value ?? null,
-      })),
+      variables: t.variables?.map((v) => {
+        // Validate and sanitize sensitive values
+        if (v.sensitive && typeof v.value === 'string') {
+          // Add validation for sensitive string values
+          if (v.value.length > 1024) {
+            throw new Error(`Sensitive value for key ${v.key} exceeds maximum length`);
+          }
+        }
+        return {
+          key: v.key,
+          value: v.value ?? null,
+          sensitive: v.sensitive
+        };
+      }),
     })),
   );
apps/webservice/src/app/api/v1/workspaces/[workspaceId]/targets/route.ts (1)

182-189: 🛠️ Refactor suggestion

Verify workspace existence and improve type safety in target mapping.

The current implementation has potential issues:

  1. No validation that workspaceId exists before using it
  2. The null coalescing for variable values might mask undefined values
  3. Type safety could be improved for variable values

Let's verify the workspace validation in the codebase:

Consider these improvements:

 const targets = await upsertTargets(
   db,
   parsedTargets.map((t) => ({
     ...t,
     workspaceId,
     variables: t.variables?.map((v) => ({
       ...v,
-      value: v.value ?? null,
+      value: v.value === undefined ? null : v.value,
     })),
   })),
 );
packages/job-dispatch/src/target.ts (3)

19-22: LGTM: Import changes are appropriate.

The new imports for targetVariable and variablesAES256 are correctly added to support the target variables functionality.


270-273: Verify transaction isolation for parallel upserts.

The parallel execution of metadata and variables upsert operations could potentially lead to race conditions if the transaction isolation level is not properly set.

#!/bin/bash
# Search for transaction isolation level configuration
rg -A 5 "isolation|BEGIN|START TRANSACTION" 
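
If stricter isolation turns out to be necessary, drizzle's transaction API accepts an optional config object; a hedged sketch assuming the node-postgres driver (function names taken from this review):

// Run both upserts inside one serializable transaction.
await db.transaction(
  async (tx) => {
    await Promise.all([
      upsertTargetMetadata(tx, targets),
      upsertTargetVariables(tx, targets),
    ]);
  },
  { isolationLevel: "serializable" },
);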

81-150: ⚠️ Potential issue

Fix incorrect error message in target variables fetch.

The error message mentions "metadata" instead of "variables" which could be confusing for debugging.

-      log.error("Error fetching existing target metadata", { error: err });
+      log.error("Error fetching existing target variables", { error: err });

Consider additional security measures for sensitive variables.

While encryption is correctly implemented for sensitive values, consider adding these security enhancements:

  1. Ensure sensitive values are not logged
  2. Add input validation for the sensitive flag
✅ Verification successful

Let me try another search to find potential sensitive data logging:


Based on the code review and verification results, I can now provide the final response:

Fix incorrect error message and enhance error logging security

The error messages need to be consistent and secure:

  • Fix "metadata" to "variables" in the fetch error message
  • Current error logging is secure as it only includes the error object without exposing sensitive variable values
-      log.error("Error fetching existing target metadata", { error: err });
+      log.error("Error fetching existing target variables", { error: err });

The encryption implementation for sensitive values is correctly handled, and the error logging pattern is secure as it doesn't log any variable values or sensitive data, only error objects. No additional security measures are required.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for potential sensitive data logging
rg -A 2 "log\." | rg "variables|value"

Length of output: 53120


Script:

#!/bin/bash
# Check for potential sensitive data logging specifically around variables and values
rg -A 2 "log\." --glob "*.{ts,js}" | rg -i "variable|value|sensitive"

Length of output: 284

# Since .env is gitignored, you can use .env.example to build a new `.env` file when you clone the repo.
# Keep this file up-to-date when you add new variables to \`.env\`.
BASE_URL="http://localhost:3000"
POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"

⚠️ Potential issue

Security: Remove credentials from example PostgreSQL URL.

The PostgreSQL URL contains default credentials which could be accidentally used in production. Consider using placeholder values instead.

-POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
+POSTGRES_URL="postgresql://<username>:<password>@127.0.0.1:5432/<database>"

AUTH_TRUST_HOST=false

VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000


⚠️ Potential issue

Security: Replace example encryption key with placeholder.

The AES key is currently set to all zeros, which is insecure. Replace with a placeholder and add guidance on generating a secure key.

-VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000
+# AES-256 encryption key for variable encryption (required)
+# Generate a secure 64-character hex key for production use
+# Example command: openssl rand -hex 32
+VARIABLES_AES_256_KEY=<generated-hex-key>

Comment on lines 4 to +6
# This file will be committed to version control, so make sure not to have any secrets in it.
# If you are cloning this repo, create a copy of this file named `.env` and populate it with your secrets.

# The database URL is used to connect to your Supabase database.
POSTGRES_URL="postgres://postgres.[USERNAME]:[PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?workaround=supabase-pooler.vercel"

# You can generate the secret via 'openssl rand -base64 32' on Unix
# @see https://next-auth.js.org/configuration/options#secret
AUTH_SECRET='supersecret'
AUTH_TRUST_HOST=false


⚠️ Potential issue

Security: Add warning about AUTH_SECRET configuration.

The example includes a weak default secret that could be accidentally used in production. Add a warning and use a placeholder instead.

-AUTH_SECRET='supersecret'
+# WARNING: Generate a strong random secret in production!
+# Minimum length: 32 characters
+AUTH_SECRET='<your-secure-random-secret>'
 AUTH_TRUST_HOST=false

Comment on lines +1 to 9
POSTGRES_URL="postgresql://ctrlplane:ctrlplane@127.0.0.1:5432/ctrlplane"
REDIS_URL="redis://127.0.0.1:6379"

# This file will be committed to version control, so make sure not to have any secrets in it.
# If you are cloning this repo, create a copy of this file named `.env` and populate it with your secrets.

# The database URL is used to connect to your Supabase database.
POSTGRES_URL="postgres://postgres.[USERNAME]:[PASSWORD]@aws-0-eu-central-1.pooler.supabase.com:6543/postgres?workaround=supabase-pooler.vercel"

# You can generate the secret via 'openssl rand -base64 32' on Unix
# @see https://next-auth.js.org/configuration/options#secret
AUTH_SECRET='supersecret'
AUTH_TRUST_HOST=false

VARIABLES_AES_256_KEY=0000000000000000000000000000000000000000000000000000000000000000

JOB_AGENT_WORKSPACE="ctrlplane"

💡 Codebase verification

Based on the search results, I can now provide a comprehensive response about the environment variables.

The .env.example file is missing some environment variables.

The following variables, defined in the environment schema, should be added to the .env.example file:

  • GITHUB_BOT_NAME - GitHub bot name (optional)
  • GITHUB_BOT_CLIENT_ID - GitHub bot client ID (optional)
  • GITHUB_BOT_CLIENT_SECRET - GitHub bot client secret (optional)
  • GITHUB_BOT_APP_ID - GitHub bot app ID (optional)
  • GITHUB_BOT_PRIVATE_KEY - GitHub bot private key (optional)
  • GITHUB_WEBHOOK_SECRET - GitHub webhook secret (optional)
  • GITHUB_URL - GitHub URL (optional)
  • OTEL_SAMPLER_RATIO - OpenTelemetry sampling ratio (optional, defaults to 1)

These variables are defined in the environment schema (apps/webservice/src/env.ts) but missing from the example configuration file.

🔗 Analysis chain

Add missing configuration variables.

Based on the AI summary, this PR introduces new functionality for target variables and secrets management. However, some potentially required variables might be missing from the example configuration.

Let's check for any missing required environment variables:


Let me gather more information about environment variables by checking the schema definitions and configuration files.


Let me search for all environment variable schema definitions to identify required variables.


Let me try a different search pattern to find environment variable definitions.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for environment variable usage in the codebase
# Look for process.env references in TypeScript/JavaScript files
rg -t ts -t js 'process\.env\.' -A 1

# Look for @env decorator usage in TypeScript files
rg -t ts '@env' -A 1

Length of output: 3847


Script:

#!/bin/bash
# Search for environment schema definitions
rg -t ts "z.object\({" -A 20

# Look for environment variable validation
fd "env" --type f --exec cat {}

Length of output: 67900


Script:

#!/bin/bash
# Search for environment variable schema definitions in zod objects
rg -t ts "createEnv\({" -B 2 -A 20

Length of output: 124


Script:

#!/bin/bash
# Search for environment variable schemas
rg -t ts "server: \{" -B 2 -A 20

# Also search for any environment variable validation
rg -t ts "z\.string\(\)\.min\(1\)" -B 2 -A 2

Length of output: 34031

Comment on lines 19 to 30
.extend({
metadata: z.record(z.string()).optional(),
variables: z
.array(
z.object({
key: z.string(),
value: z.any(),
sensitive: z.boolean(),
}),
)
.optional(),
}),

🛠️ Refactor suggestion

Enhance schema validation for variables array.

The current schema validation could be strengthened in several ways:

  1. Replace z.any() with more specific type validation
  2. Add constraints for key length and format
  3. Prevent duplicate keys
  4. Limit the total number of variables

Consider applying these improvements:

       variables: z
         .array(
           z.object({
-            key: z.string(),
+            key: z.string()
+              .min(1)
+              .max(255)
+              .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
-            value: z.any(),
+            value: z.union([
+              z.string(),
+              z.number(),
+              z.boolean(),
+              z.null()
+            ]),
             sensitive: z.boolean(),
           }),
         )
+        .max(50)  // Limit total variables
+        .refine(
+          (vars) => new Set(vars.map(v => v.key)).size === vars.length,
+          "Duplicate keys are not allowed"
+        )
         .optional(),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  .extend({
    metadata: z.record(z.string()).optional(),
    variables: z
      .array(
        z.object({
          key: z.string(),
          value: z.any(),
          sensitive: z.boolean(),
        }),
      )
      .optional(),
  }),
  .extend({
    metadata: z.record(z.string()).optional(),
    variables: z
      .array(
        z.object({
          key: z.string()
            .min(1)
            .max(255)
            .regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
          value: z.union([
            z.string(),
            z.number(),
            z.boolean(),
            z.null()
          ]),
          sensitive: z.boolean(),
        }),
      )
      .max(50) // Limit total variables
      .refine(
        (vars) => new Set(vars.map(v => v.key)).size === vars.length,
        "Duplicate keys are not allowed"
      )
      .optional(),
  }),
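
As a usage sketch of the refined schema (written standalone here; variablesSchema is a hypothetical name, and zod v3 is assumed), duplicate keys now fail at parse time:

import { z } from "zod";

// Hypothetical standalone version of the refined variables schema above.
const variablesSchema = z
  .array(
    z.object({
      key: z.string().min(1).max(255).regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
      value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
      sensitive: z.boolean(),
    }),
  )
  .max(50)
  .refine(
    (vars) => new Set(vars.map((v) => v.key)).size === vars.length,
    "Duplicate keys are not allowed",
  );

// Unique keys with primitive values pass validation.
console.log(
  variablesSchema.safeParse([
    { key: "region", value: "us-east-1", sensitive: false },
  ]).success,
); // true

// A repeated key is rejected by the refinement.
console.log(
  variablesSchema.safeParse([
    { key: "region", value: "us-east-1", sensitive: false },
    { key: "region", value: "eu-west-1", sensitive: true },
  ]).success,
); // false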

Comment on lines 119 to 130
  .extend({
    metadata: z.record(z.string()).optional(),
    variables: z
      .array(
        z.object({
          key: z.string(),
          value: z.any(),
          sensitive: z.boolean(),
        }),
      )
      .optional(),
  }),

⚠️ Potential issue

Strengthen the schema validation for target variables.

The schema could benefit from additional validation constraints:

  1. The value field uses z.any() which is too permissive and could allow complex objects or functions.
  2. No validation prevents duplicate keys in the variables array.
  3. No specific validation for sensitive values.

Consider applying these improvements:

 .extend({
   metadata: z.record(z.string()).optional(),
   variables: z
     .array(
       z.object({
-        key: z.string(),
+        key: z.string().min(1).max(255).regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
-        value: z.any(),
+        value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
         sensitive: z.boolean(),
       }),
     )
+    .refine(
+      (vars) => new Set(vars?.map((v) => v.key)).size === vars?.length,
+      "Duplicate variable keys are not allowed"
+    )
     .optional(),
 }),
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
  .extend({
    metadata: z.record(z.string()).optional(),
    variables: z
      .array(
        z.object({
          key: z.string(),
          value: z.any(),
          sensitive: z.boolean(),
        }),
      )
      .optional(),
  }),
  .extend({
    metadata: z.record(z.string()).optional(),
    variables: z
      .array(
        z.object({
          key: z.string().min(1).max(255).regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
          value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
          sensitive: z.boolean(),
        }),
      )
      .refine(
        (vars) => new Set(vars?.map((v) => v.key)).size === vars?.length,
        "Duplicate variable keys are not allowed"
      )
      .optional(),
  }),
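
The third point — validation specific to sensitive values — is not covered by the diff above. One possible policy, sketched purely as an assumption about the desired behavior (zod v3 API), is to require that any variable flagged sensitive carries a string value so it can be treated as an opaque secret:

import { z } from "zod";

// Hypothetical extension of the variables schema: every entry flagged
// sensitive must hold a string value.
const variablesSchema = z
  .array(
    z.object({
      key: z.string().min(1).max(255).regex(/^[a-zA-Z][a-zA-Z0-9_]*$/),
      value: z.union([z.string(), z.number(), z.boolean(), z.null()]),
      sensitive: z.boolean(),
    }),
  )
  .superRefine((vars, ctx) => {
    vars.forEach((variable, index) => {
      if (variable.sensitive && typeof variable.value !== "string")
        ctx.addIssue({
          code: z.ZodIssueCode.custom,
          path: [index, "value"],
          message: "Sensitive values must be strings",
        });
    });
  });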

Comment on lines 152 to 215
const upsertTargetMetadata = async (
  tx: Tx,
  targets: Array<Target & { metadata?: Record<string, string> }>,
) => {
  const existingTargetMetadata = await tx
    .select()
    .from(targetMetadata)
    .where(
      inArray(
        targetMetadata.targetId,
        targets.map((t) => t.id),
      ),
    )
    .catch((err) => {
      log.error("Error fetching existing target metadata", { error: err });
      throw err;
    });

  const targetMetadataValues = targets.flatMap((target) => {
    const { id, metadata = [] } = target;

    return Object.entries(metadata).map(([key, value]) => ({
      targetId: id,
      key,
      value,
    }));
  });

  if (targetMetadataValues.length > 0)
    await tx
      .insert(targetMetadata)
      .values(targetMetadataValues)
      .onConflictDoUpdate({
        target: [targetMetadata.targetId, targetMetadata.key],
        set: buildConflictUpdateColumns(targetMetadata, ["value"]),
      })
      .catch((err) => {
        log.error("Error inserting target metadata", { error: err });
        throw err;
      });

  const metadataToDelete = existingTargetMetadata.filter(
    (metadata) =>
      !targetMetadataValues.some(
        (newMetadata) =>
          newMetadata.targetId === metadata.targetId &&
          newMetadata.key === metadata.key,
      ),
  );

  if (metadataToDelete.length > 0)
    await tx
      .delete(targetMetadata)
      .where(
        inArray(
          targetMetadata.id,
          metadataToDelete.map((m) => m.id),
        ),
      )
      .catch((err) => {
        log.error("Error deleting target metadata", { error: err });
        throw err;
      });
};

⚠️ Potential issue

Fix incorrect metadata default value initialization.

The metadata default value is initialized as an empty array even though the parameter is typed as Record<string, string>; Object.entries is then called on it expecting the plain-object form.

-    const { id, metadata = [] } = target;
+    const { id, metadata = {} } = target;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const upsertTargetMetadata = async (
  tx: Tx,
  targets: Array<Target & { metadata?: Record<string, string> }>,
) => {
  const existingTargetMetadata = await tx
    .select()
    .from(targetMetadata)
    .where(
      inArray(
        targetMetadata.targetId,
        targets.map((t) => t.id),
      ),
    )
    .catch((err) => {
      log.error("Error fetching existing target metadata", { error: err });
      throw err;
    });

  const targetMetadataValues = targets.flatMap((target) => {
    const { id, metadata = [] } = target;

    return Object.entries(metadata).map(([key, value]) => ({
      targetId: id,
      key,
      value,
    }));
  });

  if (targetMetadataValues.length > 0)
    await tx
      .insert(targetMetadata)
      .values(targetMetadataValues)
      .onConflictDoUpdate({
        target: [targetMetadata.targetId, targetMetadata.key],
        set: buildConflictUpdateColumns(targetMetadata, ["value"]),
      })
      .catch((err) => {
        log.error("Error inserting target metadata", { error: err });
        throw err;
      });

  const metadataToDelete = existingTargetMetadata.filter(
    (metadata) =>
      !targetMetadataValues.some(
        (newMetadata) =>
          newMetadata.targetId === metadata.targetId &&
          newMetadata.key === metadata.key,
      ),
  );

  if (metadataToDelete.length > 0)
    await tx
      .delete(targetMetadata)
      .where(
        inArray(
          targetMetadata.id,
          metadataToDelete.map((m) => m.id),
        ),
      )
      .catch((err) => {
        log.error("Error deleting target metadata", { error: err });
        throw err;
      });
};

const upsertTargetMetadata = async (
  tx: Tx,
  targets: Array<Target & { metadata?: Record<string, string> }>,
) => {
  const existingTargetMetadata = await tx
    .select()
    .from(targetMetadata)
    .where(
      inArray(
        targetMetadata.targetId,
        targets.map((t) => t.id),
      ),
    )
    .catch((err) => {
      log.error("Error fetching existing target metadata", { error: err });
      throw err;
    });

  const targetMetadataValues = targets.flatMap((target) => {
    const { id, metadata = {} } = target;

    return Object.entries(metadata).map(([key, value]) => ({
      targetId: id,
      key,
      value,
    }));
  });

  if (targetMetadataValues.length > 0)
    await tx
      .insert(targetMetadata)
      .values(targetMetadataValues)
      .onConflictDoUpdate({
        target: [targetMetadata.targetId, targetMetadata.key],
        set: buildConflictUpdateColumns(targetMetadata, ["value"]),
      })
      .catch((err) => {
        log.error("Error inserting target metadata", { error: err });
        throw err;
      });

  const metadataToDelete = existingTargetMetadata.filter(
    (metadata) =>
      !targetMetadataValues.some(
        (newMetadata) =>
          newMetadata.targetId === metadata.targetId &&
          newMetadata.key === metadata.key,
      ),
  );

  if (metadataToDelete.length > 0)
    await tx
      .delete(targetMetadata)
      .where(
        inArray(
          targetMetadata.id,
          metadataToDelete.map((m) => m.id),
        ),
      )
      .catch((err) => {
        log.error("Error deleting target metadata", { error: err });
        throw err;
      });
};
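
Separately from the default-value fix, the stale-row computation above scans targetMetadataValues once per existing row. A Set-keyed variant — an optional rewrite, not something this PR requires — keeps the diff linear; the helper below uses hypothetical row shapes standing in for the drizzle results:

// Minimal stand-ins for the rows used in upsertTargetMetadata.
type MetadataRow = { id: string; targetId: string; key: string };

// Rows present in `existing` but absent from `next`, keyed by
// (targetId, key); O(existing + next) instead of O(existing * next).
const staleMetadata = (
  existing: MetadataRow[],
  next: { targetId: string; key: string }[],
): MetadataRow[] => {
  const keep = new Set(next.map((m) => `${m.targetId}\u0000${m.key}`));
  return existing.filter((m) => !keep.has(`${m.targetId}\u0000${m.key}`));
};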

@jsbroks jsbroks merged commit 402cc1e into main Oct 24, 2024
12 checks passed
@jsbroks jsbroks deleted the target-variables branch October 24, 2024 06:02
This was referenced Nov 2, 2024
@coderabbitai coderabbitai bot mentioned this pull request Nov 10, 2024