Conversation

@d4mr d4mr commented Oct 1, 2025

Summary by CodeRabbit

  • New Features

    • AWS KMS client caching for signing to reduce repeated client creation and improve latency.
    • Local development targets to spin up a local chain and fund addresses for testing.
    • Redis-cleanup utility to scan and optionally delete error keys with dry-run support.
  • Refactor

    • KMS cache initialized at startup and propagated through server and worker flows.
  • Chores

    • Added async caching dependency and removed environment-specific TypeScript declarations.
  • Config

    • Increased webhook worker parallelism (100 → 300).

coderabbitai bot commented Oct 1, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds a moka future cache and wires a shared AWS KMS client cache through core credentials, server state/extractors, queue/worker wiring, and signing flows. AwsKmsCredential gains cache-aware constructors and cached client retrieval. Also adds tooling scripts and justfile targets.
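
For orientation, here is a minimal, self-contained sketch of the cache shape described above, assuming moka's 0.12-style future API. KmsClient is a stand-in for aws_sdk_kms::Client, and the capacity/TTL/TTI values mirror those reported later in this review:

use std::time::Duration;
use moka::future::Cache;

// Stand-in for aws_sdk_kms::Client so the sketch compiles on its own.
#[derive(Clone)]
struct KmsClient;

pub type KmsClientCache = Cache<u64, KmsClient>;

#[tokio::main]
async fn main() {
    let cache: KmsClientCache = Cache::builder()
        .max_capacity(100)
        .time_to_live(Duration::from_secs(60 * 60))
        .time_to_idle(Duration::from_secs(60 * 30))
        .build();

    // In the PR this key would come from cache_key() over the credential fields.
    let key = 42u64;
    let client = match cache.get(&key).await {
        Some(existing) => existing,
        None => {
            let fresh = KmsClient; // would be create_kms_client().await
            cache.insert(key, fresh.clone()).await;
            fresh
        }
    };
    let _ = client;
}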

Changes

• Dependency: moka cache (core/Cargo.toml)
  Add the moka dependency with the future feature.

• Core credentials: AWS KMS cache (core/src/credentials.rs)
  Add pub type KmsClientCache = moka::future::Cache<u64, aws_sdk_kms::Client>; add an optional kms_client_cache field to AwsKmsCredential (serde-skipped); implement Hash excluding the cache; add constructors (new, with_cache), cache_key, and async create_kms_client / get_cached_kms_client; integrate cache usage in get_signer.

• Server: state, extractors, main, queue wiring (server/src/http/server.rs, server/src/http/extractors.rs, server/src/main.rs, server/src/queue/manager.rs)
  Add pub kms_client_cache: KmsClientCache to EngineServerState; change the FromRequestParts impl to FromRequestParts<EngineServerState> and use cached AWS KMS extraction (try_extract_aws_kms_with_cache); initialize the KmsClientCache in main.rs and pass it into QueueManager::new(...); extend the QueueManager::new signature to accept and propagate the cache.

• Queue manager wiring (server/src/queue/manager.rs)
  Extend the QueueManager::new signature to accept kms_client_cache: KmsClientCache and pass it into EOA job handler initialization.

• Executors: EOA worker & transaction signing (executors/src/eoa/worker/mod.rs, executors/src/eoa/worker/transaction.rs)
  Add kms_client_cache: KmsClientCache fields to EoaExecutorJobHandler and EoaExecutorWorker; propagate the cache into worker creation; inject the cache into the signing credential before signing (via with_aws_kms_cache / with_cache) so signing uses the cached KMS client; demote an EIP-1559 fee log from debug to trace.

• Tooling: justfile targets (justfile)
  Add local-chain and fund targets for a local anvil + speedbump environment and for funding test addresses.

• Scripts: Redis cleanup + TS changes (scripts/simple-redis-cleanup.ts, scripts/benchmarks/eoa.ts)
  Add scripts/simple-redis-cleanup.ts (batch Redis error-key cleanup with dry-run support); remove Bun/DOM environment directives from scripts/benchmarks/eoa.ts.

• Configuration: production queue size (server/configuration/server_production.yaml)
  Increase webhook_workers from 100 to 300.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor Client
  participant Axum as Axum Router
  participant Extract as SigningCredentialsExtractor
  participant State as EngineServerState
  participant Cred as AwsKmsCredential
  participant Cache as KmsClientCache

  Client->>Axum: HTTP request requiring signing
  Axum->>Extract: from_request_parts(parts, state)
  Extract->>State: read kms_client_cache
  Extract->>Cred: AwsKmsCredential::new(..., kms_client_cache)
  note right of Cred #D3E4CD: credential carries cache handle (no client yet)
  Extract-->>Axum: return SigningCredential (cache-capable)

sequenceDiagram
  autonumber
  participant Job as EoaExecutorJobHandler
  participant Worker as EoaExecutorWorker
  participant Cred as AwsKmsCredential
  participant CacheImpl as moka::future::Cache
  participant AWS as AWS KMS

  Job->>Worker: construct (includes kms_client_cache)
  Worker->>Cred: credential.with_cache(kms_client_cache.clone())
  Worker->>Cred: get_signer(chain_id)
  alt cache configured
    Cred->>CacheImpl: get_or_insert_with(cache_key, create_kms_client)
    alt cache miss
      CacheImpl->>AWS: create KMS client (async)
      AWS-->>CacheImpl: client
    end
    CacheImpl-->>Cred: client
  else no cache
    Cred->>AWS: create KMS client
    AWS-->>Cred: client
  end
  Cred-->>Worker: AwsSigner

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)

• Out of Scope Changes Check ⚠️ Warning
  This pull request also modifies the justfile and adds scripts under scripts/benchmarks and scripts/simple-redis-cleanup.ts, which do not pertain to the AWS KMS client caching objective in the linked issue. This tooling conflates separate concerns within a single review. Resolution: split the justfile targets and script additions into separate pull requests, leaving only the AWS KMS client caching changes in this merge.

• Docstring Coverage ⚠️ Warning
  Docstring coverage is 33.33%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (3 passed)

• Description Check ✅ Passed
  Check skipped: CodeRabbit's high-level summary is enabled.

• Title Check ✅ Passed
  The title "add aws kms client caching" succinctly and accurately captures the primary change, introducing AWS KMS client caching throughout the codebase, without extraneous detail or generic phrasing.

• Linked Issues Check ✅ Passed
  The pull request implements the AWS KMS client caching mechanism from linked issue BLD-364: dependency updates, a public type alias, cache propagation across credentials, executors, server state, and the queue manager, plus the corresponding helper methods and injection points. No requirements from the issue remain outstanding.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ea37ab8 and ddfd0ed.

📒 Files selected for processing (2)
  • core/src/credentials.rs (3 hunks)
  • server/configuration/server_production.yaml (1 hunks)

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
core/src/signer.rs (1)

277-280: Clone-and-inject pattern is safe but could be optimized.

The pattern of cloning credentials and injecting the cache is used consistently across all four AWS KMS signing methods. While functionally correct, this approach creates a temporary credential object for each signing operation.

Consider whether the cache could be injected once during credential extraction (in server/src/http/extractors.rs) rather than at each signing call site. This would eliminate the need to clone and mutate credentials in four separate locations, though the current approach is safe and the clone cost is minimal (strings + Arc).

The current implementation is correct and the performance impact is negligible, so this is optional.

Also applies to: 360-363, 432-434, 514-517
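
For reference, a sketch of the extraction-time alternative, using simplified stand-in types rather than the real engine-core definitions:

// All types below are simplified stand-ins for illustration only.
#[derive(Clone, Default)]
struct KmsClientCache; // stand-in for moka::future::Cache<u64, Client>

#[derive(Clone)]
struct AwsKmsCredential {
    key_id: String,
    kms_client_cache: Option<KmsClientCache>,
}

#[derive(Clone)]
enum SigningCredential {
    AwsKms(AwsKmsCredential),
    Other,
}

impl SigningCredential {
    // Injecting once at extraction time would remove the per-call clone at
    // the four signing sites mentioned above.
    fn with_aws_kms_cache(mut self, cache: &KmsClientCache) -> Self {
        if let SigningCredential::AwsKms(ref mut c) = self {
            c.kms_client_cache = Some(cache.clone());
        }
        self
    }
}

fn main() {
    let cred = SigningCredential::AwsKms(AwsKmsCredential {
        key_id: "alias/test".into(),
        kms_client_cache: None,
    });
    let _cached = cred.with_aws_kms_cache(&KmsClientCache::default());
}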

core/src/credentials.rs (2)

110-115: DefaultHasher is not stable across Rust versions or processes.

Using DefaultHasher for cache keys means the hash values may differ across Rust compiler versions or between process restarts (DefaultHasher is explicitly not guaranteed to be stable). This is acceptable for in-memory caching within a single process but could cause issues if cache behavior is ever tested or if you add cache persistence.

For cross-process stability, use a deterministic hasher with a fixed seed, such as XxHash64 from the twox-hash crate (std's RandomState is randomly seeded per instance, so it cannot provide stable keys). However, for in-process, in-memory caching, DefaultHasher is acceptable.

Example with a deterministic hasher (assumes the twox-hash crate):

use std::hash::{Hash, Hasher};
use twox_hash::XxHash64;

fn cache_key(&self) -> u64 {
    // Fixed seed keeps hashes stable across processes and Rust versions
    let mut hasher = XxHash64::with_seed(0);
    self.hash(&mut hasher);
    hasher.finish()
}

This is a minor concern since the cache is ephemeral and process-local.


77-108: Consider documenting the relationship between constructors.

Two constructors are provided: new() with cache and without_cache() for backward compatibility. The without_cache constructor sets kms_client_cache: None, which triggers the fallback path in get_cached_kms_client (lines 146-150).

Add doc comments to clarify:

  • When to use each constructor
  • That without_cache() bypasses caching and creates a new client on every call
  • The performance implications of not using the cache

Example:

/// Create a new AwsKmsCredential with cache enabled.
/// Recommended for production use to reuse KMS client connections.
pub fn new(...) -> Self { ... }

/// Create a new AwsKmsCredential without cache (for backward compatibility).
/// Creates a new KMS client on every signing operation - use only for testing.
pub fn without_cache(...) -> Self { ... }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between ac2ecf3 and 621d721.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
  • core/Cargo.toml (1 hunks)
  • core/src/credentials.rs (3 hunks)
  • core/src/signer.rs (6 hunks)
  • executors/src/eoa/worker/mod.rs (6 hunks)
  • executors/src/eoa/worker/transaction.rs (1 hunks)
  • server/src/http/extractors.rs (3 hunks)
  • server/src/http/server.rs (2 hunks)
  • server/src/main.rs (4 hunks)
  • server/src/queue/manager.rs (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
server/src/http/extractors.rs (4)
core/src/credentials.rs (1)
  • new (78-92)
core/src/signer.rs (1)
  • new (212-218)
server/src/http/server.rs (1)
  • new (47-109)
server/src/queue/manager.rs (1)
  • new (47-273)
core/src/credentials.rs (2)
core/src/signer.rs (1)
  • new (212-218)
server/src/queue/manager.rs (1)
  • new (47-273)
core/src/signer.rs (2)
core/src/credentials.rs (1)
  • new (78-92)
server/src/queue/manager.rs (1)
  • new (47-273)
server/src/main.rs (3)
core/src/credentials.rs (1)
  • new (78-92)
core/src/signer.rs (1)
  • new (212-218)
server/src/queue/manager.rs (1)
  • new (47-273)
🔇 Additional comments (20)
server/src/http/server.rs (2)

4-4: LGTM! Correct import for KMS client cache.


35-35: LGTM! Cache is initialized with max_capacity(100), time_to_live(1h) and time_to_idle(30m), then correctly cloned into EngineServerState.

executors/src/eoa/worker/mod.rs (7)

6-6: LGTM! Correct import.


131-132: LGTM! Cache field added to job handler.


188-189: LGTM! Noop credential augmented with cache.

Injecting the cache into the noop signing credential ensures consistent caching behavior for noop transactions.


196-209: LGTM! Cache properly propagated to worker.

The kms_client_cache is correctly passed from the job handler to the worker instance on line 208.


288-294: LGTM! Credential injection pattern is correct.

The implementation takes ownership of the credential, mutates only the AwsKms variant by injecting the cache, and returns the modified credential. This pattern avoids mutating shared state.


329-329: LGTM! Cache field added to worker.


333-339: LGTM! Consistent injection pattern in worker.

The worker's inject_kms_cache_into_credential implementation matches the job handler's pattern, ensuring consistency across the codebase.

server/src/queue/manager.rs (3)

6-6: LGTM! Correct import.


242-253: LGTM! Cache correctly wired to EOA executor handler.

The cache is passed to EoaExecutorJobHandler on line 252, completing the propagation chain from server initialization through to the executor workers.


47-55: LGTM! The kms_client_cache is initialized in main.rs with max_capacity(100), time_to_live(Duration::from_secs(60*60)), and time_to_idle(Duration::from_secs(60*30)) before being passed to QueueManager::new.

core/Cargo.toml (1)

26-26: LGTM – moka dependency and future feature selection are correct, and the KMS client cache in server/src/main.rs (lines 50–54) is properly configured with .max_capacity(100), .time_to_live(Duration::from_secs(3600)), and .time_to_idle(Duration::from_secs(1800)).

executors/src/eoa/worker/transaction.rs (1)

400-406: LGTM! The helper consumes and returns a cloned SigningCredential with the cache applied, ensuring the original remains unmodified.

core/src/credentials.rs (2)

50-58: Hash implementation correctly excludes cache field.

The Hash implementation properly excludes the kms_client_cache field from hashing, ensuring that credential identity is based solely on the AWS credentials (access_key_id, secret_access_key, key_id, region).
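
A simplified sketch of such a manual Hash impl, with field names taken from this review (the real definition may differ):

use std::hash::{Hash, Hasher};

struct AwsKmsCredential {
    access_key_id: String,
    secret_access_key: String,
    key_id: String,
    region: String,
    kms_client_cache: Option<()>, // runtime state, deliberately not hashed
}

impl Hash for AwsKmsCredential {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Identity is the credential fields only; the cache handle is excluded.
        self.access_key_id.hash(state);
        self.secret_access_key.hash(state);
        self.key_id.hash(state);
        self.region.hash(state);
    }
}

fn main() {}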


127-152: Cache logic is correct and handles missing cache gracefully.

The get_cached_kms_client method properly:

  • Checks for cache presence
  • Uses cache.get() (non-blocking read)
  • Falls back to create_kms_client() and inserts on cache miss
  • Handles the no-cache case by creating a new client

The implementation follows moka best practices for async caches.
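
As an aside, moka's future cache also offers get_with, which coalesces concurrent misses so the init future runs at most once per key; a hedged sketch (not necessarily what the PR uses):

use moka::future::Cache;

#[tokio::main]
async fn main() {
    let cache: Cache<u64, String> = Cache::new(100);

    // Concurrent callers for the same key would share a single init future.
    let value = cache
        .get_with(7, async { "expensive-to-build client".to_string() })
        .await;
    println!("{value}");
}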

server/src/http/extractors.rs (2)

104-110: State type constraint is necessary for cache access.

Changing FromRequestParts<S> to FromRequestParts<EngineServerState> tightly couples the extractor to the server state but is necessary to access the kms_client_cache. This is an acceptable trade-off since signing credentials are only extracted in the context of the Engine server.

The type constraint is appropriate given that the extractor needs access to shared infrastructure (the KMS client cache).
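
For illustration, a minimal state-pinned extractor along these lines, assuming axum 0.8's async-fn-in-trait form of FromRequestParts (on axum 0.7 and earlier the impl needs #[async_trait]); all types are simplified stand-ins:

use axum::extract::FromRequestParts;
use axum::http::{StatusCode, request::Parts};

#[derive(Clone, Default)]
struct KmsClientCache; // stand-in for the moka cache

#[derive(Clone, Default)]
struct EngineServerState {
    kms_client_cache: KmsClientCache,
}

struct SigningCredentials(KmsClientCache);

// Pinning the state type (instead of a generic S) is what grants the
// extractor access to the shared cache.
impl FromRequestParts<EngineServerState> for SigningCredentials {
    type Rejection = (StatusCode, &'static str);

    async fn from_request_parts(
        _parts: &mut Parts,
        state: &EngineServerState,
    ) -> Result<Self, Self::Rejection> {
        Ok(SigningCredentials(state.kms_client_cache.clone()))
    }
}

fn main() {}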


148-170: AWS KMS extraction correctly uses cached client.

The try_extract_aws_kms_with_cache method:

  • Accepts the cache as a parameter
  • Parses KMS ARN to extract key_id and region
  • Constructs AwsKmsCredential using new() constructor with cache
  • Returns None if any required header is missing

The implementation correctly wires the cache into credential extraction, enabling cache reuse for all subsequent signing operations with these credentials.
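
The review does not show the ARN parsing itself; a hypothetical sketch of extracting the region and key_id from a KMS key ARN (format arn:aws:kms:<region>:<account-id>:key/<key-id>):

fn parse_kms_arn(arn: &str) -> Option<(String, String)> {
    let mut parts = arn.splitn(6, ':');
    let (_arn, _partition, service, region, _account, resource) = (
        parts.next()?,
        parts.next()?,
        parts.next()?,
        parts.next()?,
        parts.next()?,
        parts.next()?,
    );
    if service != "kms" {
        return None;
    }
    let key_id = resource.strip_prefix("key/")?;
    Some((region.to_string(), key_id.to_string()))
}

fn main() {
    let arn = "arn:aws:kms:us-east-1:123456789012:key/abcd-1234";
    assert_eq!(
        parse_kms_arn(arn),
        Some(("us-east-1".to_string(), "abcd-1234".to_string()))
    );
}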

server/src/main.rs (2)

50-54: Ensure cache capacity covers your credential diversity

max_capacity = 100 may be insufficient if your system uses over 100 distinct AWS KMS client configurations (each combination of access_key_id, secret_access_key, key_id, and region creates a separate cache entry). No documentation or config was found defining an expected number of credential sets—please verify that this limit meets your workload requirements.


50-54: Confirm credential refresh and connection reuse in AWS KMS client
The AWS SDK for Rust’s KmsClient (via the shared SdkConfig) lazy-caches and automatically refreshes temporary credentials before expiration, and reuses its underlying HTTP connection pools when cloned or reused. Your 1 hour TTL and 30 minute TTI settings align with recommended best practices.

}
}

/// Create a new AwsKmsCredential without cache (for backward compatibility)
Member


why do we need this?

let signer = creds.get_signer(options.chain_id).await?;
// Clone and inject the cache into the credentials
let mut creds_with_cache = creds.clone();
creds_with_cache.kms_client_cache = Some(self.kms_client_cache.clone());
Member


Kinda weird pattern, fine for now, but maybe encapsulate these into a get_signer_with_cache() in this trait to avoid repetition?

Member Author


You're right: credentials arrive with the cache already injected (via the extractor), so this step isn't really needed.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (2)
justfile (2)

28-29: Fixed sleep is fragile; add Anvil readiness check.

The 2-second sleep assumes Anvil starts within that time, but this may fail on slower systems or when Anvil takes longer to initialize.

     # Wait a moment for anvil to start
-    sleep 2
+    echo "⏳ Waiting for anvil to be ready..."
+    for i in {1..30}; do
+        if cast block-number --rpc-url http://localhost:8546 &>/dev/null; then
+            echo "✅ Anvil is ready"
+            break
+        fi
+        sleep 1
+    done

35-35: Speedbump blocks cleanup trap until natural exit.

Speedbump runs in the foreground, which means the trap cleanup won't execute until Speedbump exits naturally (Ctrl+C). This is likely intentional for a development environment, but documenting this behavior would be helpful.

Consider adding a comment to clarify the blocking behavior:

+    # Speedbump runs in foreground; press Ctrl+C to trigger cleanup
     speedbump --port=8545 --latency=300ms --sine-amplitude=150ms --sine-period=1m localhost:8546
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 3b9d581 and 246c9a0.

📒 Files selected for processing (6)
  • core/src/credentials.rs (3 hunks)
  • executors/src/eoa/worker/mod.rs (5 hunks)
  • executors/src/eoa/worker/transaction.rs (1 hunks)
  • justfile (1 hunks)
  • scripts/benchmarks/eoa.ts (0 hunks)
  • scripts/simple-redis-cleanup.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • scripts/benchmarks/eoa.ts
🧰 Additional context used
📓 Path-based instructions (2)
scripts/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc)

scripts/**/*.{ts,tsx,js,jsx}: Use bun <file> instead of node <file> or ts-node <file> to run JS/TS files
Do not use dotenv; Bun automatically loads .env files
Use Bun.serve() for HTTP/WebSocket servers; do not use express
Use bun:sqlite for SQLite; do not use better-sqlite3
Use Bun.redis for Redis; do not use ioredis
Use Bun.sql for Postgres; do not use pg or postgres.js
Use built-in WebSocket; do not use ws
Prefer Bun.file over node:fs readFile/writeFile for file IO
Use Bun.$ (shell) instead of execa

Files:

  • scripts/simple-redis-cleanup.ts
scripts/**/*.{html,ts,tsx,css}

📄 CodeRabbit inference engine (scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc)

Use bun build <file.html|file.ts|file.css> instead of webpack or esbuild

Files:

  • scripts/simple-redis-cleanup.ts
🧠 Learnings (1)
📚 Learning: 2025-09-20T07:40:58.898Z
Learnt from: CR
PR: thirdweb-dev/engine-core#0
File: scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc:0-0
Timestamp: 2025-09-20T07:40:58.898Z
Learning: Applies to scripts/**/*.{ts,tsx,js,jsx} : Use `Bun.redis` for Redis; do not use `ioredis`

Applied to files:

  • scripts/simple-redis-cleanup.ts
🧬 Code graph analysis (1)
core/src/credentials.rs (1)
server/src/queue/manager.rs (1)
  • new (47-273)
🔇 Additional comments (14)
core/src/credentials.rs (9)

16-17: LGTM: Cache type alias is well-defined.

Using moka::future::Cache with a u64 hash key derived from credential fields provides a clean abstraction for caching KMS clients. The async-friendly moka cache is a solid choice for this use case.


25-33: LGTM: Helper method for cache injection.

This helper cleanly injects the KMS cache into AWS KMS credentials after deserialization. The pattern of transforming only the relevant variant and passing through others is correct and ergonomic.


56-58: LGTM: Cache field design is correct.

The optional, serde-skipped cache field is the right approach. It prevents cache serialization while allowing runtime injection, and maintains backward compatibility.


60-68: LGTM: Hash implementation correctly excludes cache.

The manual Hash implementation correctly includes all credential identity fields while excluding the cache. This ensures cache keys are based on credential identity, not runtime state.


88-124: LGTM: Constructor design provides flexibility.

The three constructors (new, without_cache, with_cache) cover all use cases cleanly: direct construction with cache, backward compatibility without cache, and post-deserialization injection.


127-131: LGTM: Cache key generation is straightforward.

Using DefaultHasher to compute a u64 cache key from the credential fields is appropriate for in-memory cache indexing. The implementation leverages the manual Hash impl correctly.
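
The standard pattern referenced here, as a free-function sketch:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn cache_key<T: Hash>(value: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    value.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Any Hash type works; in the PR the credential's manual Hash impl feeds this.
    println!("{}", cache_key(&("AKIA...", "us-east-1")));
}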


134-168: LGTM: Caching logic is correct.

The caching implementation follows a sound pattern:

  • Cache lookup with fallback to creation
  • Clone-on-insert and clone-on-return (acceptable since AWS SDK clients are typically Arc-wrapped internally)
  • Graceful fallback when no cache is available
  • Debug logging for observability
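
A hedged reconstruction of that fallback flow, with simplified types rather than the actual engine-core code:

use moka::future::Cache;

#[derive(Clone)]
struct Client;

async fn create_client() -> Client {
    Client
}

async fn get_cached_client(cache: &Option<Cache<u64, Client>>, key: u64) -> Client {
    match cache {
        Some(cache) => {
            if let Some(hit) = cache.get(&key).await {
                return hit; // clone-on-return
            }
            let fresh = create_client().await;
            cache.insert(key, fresh.clone()).await; // clone-on-insert
            fresh
        }
        // Graceful fallback: no cache configured, build a client every call.
        None => create_client().await,
    }
}

#[tokio::main]
async fn main() {
    let cache = Some(Cache::new(10));
    let _client = get_cached_client(&cache, 1).await;
}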

171-175: LGTM: get_signer correctly uses cached client.

The update to use get_cached_kms_client integrates the caching pathway without changing the method's signature or error handling. This is the correct integration point for the cache.


105-118: Addresses past review comment: backward compatibility constructor.

In response to the past review comment asking "why do we need this?": The without_cache constructor provides backward compatibility for code paths that don't use the cache (e.g., testing, legacy flows, or scenarios where cache setup is deferred). This allows incremental adoption of caching without breaking existing usage.

executors/src/eoa/worker/transaction.rs (1)

400-405: LGTM: Cache injection is correctly positioned.

The cache injection happens at the right point: after building the transaction and before signing. The clone-and-inject pattern preserves the original credential while augmenting the signing pathway with the cache. The comment clearly explains the rationale (post-deserialization from Redis).

executors/src/eoa/worker/mod.rs (3)

131-132: LGTM: KMS client cache field added to handler.

The kms_client_cache field is correctly added to the EoaExecutorJobHandler struct to enable cache propagation through the execution pipeline. The comment clearly documents its purpose.


188-189: LGTM: Cache injection and propagation are correct.

The cache is correctly injected into the noop signing credential (lines 188-189) and propagated to the worker (line 208). This matches the pattern used in transaction.rs and ensures the cache is available throughout the execution pipeline.

Also applies to: 196-196, 208-208


321-321: LGTM: KMS client cache field added to worker.

The kms_client_cache field is correctly added to the EoaExecutorWorker struct, completing the cache propagation chain from handler to worker to transaction signing.

justfile (1)

26-26: Trap on EXIT may cause double cleanup with set -e.

Trapping INT TERM EXIT together can cause the cleanup function to run twice: once for the signal trap (INT/TERM) and again for the EXIT trap that fires when the shell subsequently exits.

-    trap cleanup INT TERM EXIT
+    trap cleanup INT TERM

Likely an incorrect or invalid review comment.

Comment on lines +8 to +11
pkill -f "anvil.*8546" || true
pkill -f "speedbump.*8545" || true
lsof -ti:8545 | xargs kill -9 2>/dev/null || true
lsof -ti:8546 | xargs kill -9 2>/dev/null || true

⚠️ Potential issue | 🟠 Major

Cleanup logic may kill unrelated processes.

The pkill -f patterns and lsof | xargs kill -9 commands may be too broad and could terminate unrelated processes that happen to match the patterns or use the ports.

Consider using more precise process identification or maintaining PID files for safer cleanup.

-    pkill -f "anvil.*8546" || true
-    pkill -f "speedbump.*8545" || true
-    lsof -ti:8545 | xargs kill -9 2>/dev/null || true
-    lsof -ti:8546 | xargs kill -9 2>/dev/null || true
+    # Kill by port more safely
+    fuser -k 8545/tcp 2>/dev/null || true
+    fuser -k 8546/tcp 2>/dev/null || true

Comment on lines +38 to +41
fund address:
    @echo "💰 Funding {{address}} with 1 ETH..."
    @cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546
    @echo "✅ Done!"

🛠️ Refactor suggestion | 🟠 Major

Add validation and error handling for address parameter.

The target doesn't validate that the address parameter is provided and doesn't handle errors from the cast command, which could result in a misleading success message.

 fund address:
+    @test -n "{{address}}" || (echo "❌ Error: address parameter is required" && exit 1)
     @echo "💰 Funding {{address}} with 1 ETH..."
-    @cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546
-    @echo "✅ Done!"
+    @if cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546; then \
+        echo "✅ Done!"; \
+    else \
+        echo "❌ Failed to fund address"; \
+        exit 1; \
+    fi
🤖 Prompt for AI Agents
In justfile around lines 38 to 41, the target doesn't validate the address
parameter or check the result of the cast command; update the recipe to first
verify {{address}} is provided and matches an expected address pattern (or at
minimum non-empty), printing a clear error and exiting non-zero if
missing/invalid, then execute the cast rpc anvil_setBalance command and check
its exit status (do not print the success "✅ Done!" unless the cast command
succeeded); on failure forward the cast error output and exit with a non-zero
code so CI/consumers detect the failure.

@@ -0,0 +1,145 @@
#!/usr/bin/env bun

import Redis from "ioredis";

⚠️ Potential issue | 🟠 Major

Replace ioredis with Bun.redis.

The script uses ioredis, which violates the project's coding guidelines. For scripts in scripts/**/*.{ts,tsx,js,jsx}, use Bun.redis instead to align with Bun-first practices and avoid external dependencies.

Refactor the import and client instantiation:

-import Redis from "ioredis";
+import { RedisClient } from "bun";
  constructor() {
-   this.redis = new Redis(CONFIG.redisUrl);
+   this.redis = new RedisClient(CONFIG.redisUrl);
  }

Note: Verify that Bun's RedisClient supports the same API surface (scan, pipeline, del, quit) used in this script. If there are API differences, adjust the method calls accordingly.

As per coding guidelines.

Also applies to: 26-26

🤖 Prompt for AI Agents
In scripts/simple-redis-cleanup.ts around lines 3 and 26, replace the external
ioredis dependency with Bun's built-in Redis client: remove the import of
"ioredis", create the client via new RedisClient(...) imported from "bun", and
update any client usage to the Bun API equivalents (scan/scanIterator,
pipeline/multi, del, and the proper disconnect/quit method). Ensure method names
and return shapes match and adjust calls/awaits as needed so scan, pipeline,
del, and client shutdown use Bun's API.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 246c9a0 and fc9a254.

📒 Files selected for processing (1)
  • core/src/credentials.rs (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
core/src/credentials.rs (1)
server/src/queue/manager.rs (1)
  • new (47-273)
🔇 Additional comments (8)
core/src/credentials.rs (8)

25-33: LGTM!

The cache injection pattern is clean and correctly handles only the AwsKms variant. The cache clone is cheap since moka::future::Cache is internally Arc-based.


56-58: LGTM!

The #[serde(skip)] attribute correctly prevents the cache from being serialized, treating it as runtime state rather than credential data. The Option wrapper provides backwards compatibility.


60-68: LGTM!

The Hash implementation correctly excludes the cache field and only hashes the credential identity (access_key_id, secret_access_key, key_id, region). This ensures credentials with the same identity produce the same hash regardless of cache presence.


88-102: LGTM!

The constructor provides a clean way to create credentials with cache at construction time. The mandatory cache parameter ensures new code paths have caching enabled.


104-108: LGTM!

The builder-style with_cache method provides a clean API for injecting cache into deserialized credentials.


117-125: LGTM!

The KMS client creation logic is correct and follows AWS SDK best practices with BehaviorVersion::latest().
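
A typical construction along those lines, assuming current aws-config / aws-sdk-kms APIs (the PR may additionally wire explicit static credentials):

use aws_config::{BehaviorVersion, Region};

async fn create_kms_client(region: String) -> aws_sdk_kms::Client {
    // BehaviorVersion::latest() opts into the SDK's current default behaviors.
    let config = aws_config::defaults(BehaviorVersion::latest())
        .region(Region::new(region))
        .load()
        .await;
    aws_sdk_kms::Client::new(&config)
}

#[tokio::main]
async fn main() {
    let client = create_kms_client("us-east-1".to_string()).await;
    let _ = client;
}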


154-159: LGTM!

The method correctly uses the caching pathway via get_cached_kms_client. Error propagation is correct.


16-17: Cache configuration confirmed: KmsClientCache is initialized with .max_capacity(100), a 1 hour TTL, and a 30 minute idle timeout in server/src/main.rs:50-54.

Comment on lines 8 to 17
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use thirdweb_core::auth::ThirdwebAuth;
use thirdweb_core::iaw::AuthToken;
use vault_types::enclave::auth::Auth as VaultAuth;

use crate::error::EngineError;

/// Cache for AWS KMS clients to avoid recreating connections
pub type KmsClientCache = moka::future::Cache<u64, aws_sdk_kms::Client>;

⚠️ Potential issue | 🟠 Major

Consider collision risk with u64 cache keys from DefaultHasher.

Using DefaultHasher to produce u64 keys introduces:

  1. Hash collision risk: Different AWS credentials (access_key_id, secret_access_key, key_id, region combinations) could hash to the same u64, causing one credential to incorrectly reuse another's KMS client.
  2. Non-deterministic hashing: DefaultHasher uses a random seed per process, so cache keys won't be stable across restarts (though this may be acceptable for in-memory caching).

For a production KMS client cache, consider using a collision-resistant approach:

  • Use a tuple (String, String, String, String) as the cache key directly (moka supports any Hash + Eq key).
  • Or use a cryptographic hash (e.g., SHA-256) to produce a [u8; 32] or String key with negligible collision probability.

Example with tuple key:

-/// Cache for AWS KMS clients to avoid recreating connections
-pub type KmsClientCache = moka::future::Cache<u64, aws_sdk_kms::Client>;
+/// Cache for AWS KMS clients to avoid recreating connections
+pub type KmsClientCache = moka::future::Cache<(String, String, String, String), aws_sdk_kms::Client>;

Then update cache_key() to return the tuple:

-    fn cache_key(&self) -> u64 {
-        let mut hasher = DefaultHasher::new();
-        self.hash(&mut hasher);
-        hasher.finish()
+    fn cache_key(&self) -> (String, String, String, String) {
+        (
+            self.access_key_id.clone(),
+            self.secret_access_key.clone(),
+            self.key_id.clone(),
+            self.region.clone(),
+        )
     }
🤖 Prompt for AI Agents
In core/src/credentials.rs around lines 8 to 17, replace the u64-based cache key
to use a collision-resistant key: change KmsClientCache to
moka::future::Cache<(String, String, String, String), aws_sdk_kms::Client> (or
another tuple ordering representing access_key_id, secret_access_key, key_id,
region), remove DefaultHasher/Hasher usage and update cache_key() to return that
tuple instead of a u64; update any call sites that construct or lookup the cache
to pass the tuple key (ensure ownership/clone as needed) so the cache uses the
tuple's Hash+Eq semantics and avoids DefaultHasher collisions and
nondeterminism.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
server/src/main.rs (1)

51-57: Externalize KMS client cache parameters to configuration.

  • Replace the hardcoded .max_capacity(100), .time_to_live(1 h), and .time_to_idle(30 m) with values loaded from your application config (see thirdweb-core/src/abi/mod.rs for the existing pattern).
  • Verify that your chosen defaults prevent excessive evictions under peak workloads.
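
A hypothetical config-driven construction, if the parameters were externalized as suggested (the config shape here is invented for illustration):

use std::time::Duration;
use moka::future::Cache;

struct KmsCacheConfig {
    max_capacity: u64,
    ttl_secs: u64,
    tti_secs: u64,
}

fn build_kms_cache(cfg: &KmsCacheConfig) -> Cache<u64, String> {
    Cache::builder()
        .max_capacity(cfg.max_capacity)
        .time_to_live(Duration::from_secs(cfg.ttl_secs))
        .time_to_idle(Duration::from_secs(cfg.tti_secs))
        .build()
}

fn main() {
    // Defaults matching the currently hardcoded values.
    let cfg = KmsCacheConfig { max_capacity: 100, ttl_secs: 3600, tti_secs: 1800 };
    let _cache = build_kms_cache(&cfg);
}
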
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between fc9a254 and ea37ab8.

📒 Files selected for processing (2)
  • executors/src/eoa/worker/transaction.rs (2 hunks)
  • server/src/main.rs (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • executors/src/eoa/worker/transaction.rs
🔇 Additional comments (4)
server/src/main.rs (4)

3-3: LGTM!

The import of KmsClientCache is correctly placed alongside related credential and signer types.


81-81: LGTM!

The KMS client cache is correctly cloned and passed to QueueManager::new, enabling shared access across queue workers.


132-132: LGTM!

The KMS client cache is correctly cloned and stored in EngineServerState, making it accessible to HTTP request handlers.


22-23: LGTM!

The log filter update correctly overrides webhook logs to warn level while keeping other executor logs at debug, with a helpful comment explaining the precedence.

@d4mr d4mr merged commit 7810d96 into main Oct 3, 2025
2 checks passed
@d4mr d4mr deleted the BLD-364 branch October 3, 2025 08:46