add aws kms client caching #58
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Adds a moka future cache and wires a shared AWS KMS client cache through core credentials, server state/extractors, queue/worker wiring, and signing flows. AwsKmsCredential gains cache-aware constructors and cached client retrieval. Also adds tooling scripts and justfile targets.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant Axum as Axum Router
    participant Extract as SigningCredentialsExtractor
    participant State as EngineServerState
    participant Cred as AwsKmsCredential
    participant Cache as KmsClientCache
    Client->>Axum: HTTP request requiring signing
    Axum->>Extract: from_request_parts(parts, state)
    Extract->>State: read kms_client_cache
    Extract->>Cred: AwsKmsCredential::new(..., kms_client_cache)
    note right of Cred #D3E4CD: credential carries cache handle (no client yet)
    Extract-->>Axum: return SigningCredential (cache-capable)
```
```mermaid
sequenceDiagram
    autonumber
    participant Job as EoaExecutorJobHandler
    participant Worker as EoaExecutorWorker
    participant Cred as AwsKmsCredential
    participant CacheImpl as moka::future::Cache
    participant AWS as AWS KMS
    Job->>Worker: construct (includes kms_client_cache)
    Worker->>Cred: credential.with_cache(kms_client_cache.clone())
    Worker->>Cred: get_signer(chain_id)
    alt cache configured
        Cred->>CacheImpl: get_or_insert_with(cache_key, create_kms_client)
        alt cache miss
            CacheImpl->>AWS: create KMS client (async)
            AWS-->>CacheImpl: client
        end
        CacheImpl-->>Cred: client
    else no cache
        Cred->>AWS: create KMS client
        AWS-->>Cred: client
    end
    Cred-->>Worker: AwsSigner
```
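To make the second diagram concrete, here is a self-contained Rust sketch of the cache-aware lookup it depicts, using moka's `get_with` (which builds the value at most once per key, even under concurrency). Type and method names follow the diagram; the `String` client and the field layout are illustrative stand-ins, not the PR's actual code:

```rust
use moka::future::Cache;

/// Stand-in for the PR's cache of aws_sdk_kms::Client instances.
pub type KmsClientCache = Cache<u64, String>;

/// Hypothetical credential shape; field names follow the review discussion.
pub struct AwsKmsCredential {
    pub key_id: String,
    pub region: String,
    /// Runtime-only handle, injected after deserialization.
    pub kms_client_cache: Option<KmsClientCache>,
}

impl AwsKmsCredential {
    async fn create_kms_client(&self) -> String {
        // The real code builds an aws_sdk_kms::Client from an SdkConfig here.
        format!("kms client for {} in {}", self.key_id, self.region)
    }

    /// Mirrors the "alt cache configured" branch of the diagram.
    pub async fn get_cached_kms_client(&self, cache_key: u64) -> String {
        match &self.kms_client_cache {
            // get_with resolves the init future only on a cache miss.
            Some(cache) => cache.get_with(cache_key, self.create_kms_client()).await,
            // "else no cache": create a fresh client on every call.
            None => self.create_kms_client().await,
        }
    }
}

#[tokio::main]
async fn main() {
    let credential = AwsKmsCredential {
        key_id: "key-123".into(),
        region: "us-east-1".into(),
        kms_client_cache: Some(Cache::new(100)),
    };
    // The second call hits the cache instead of recreating the client.
    println!("{}", credential.get_cached_kms_client(42).await);
    println!("{}", credential.get_cached_kms_client(42).await);
}
```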
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Pre-merge checks and finishing touches
❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
📒 Files selected for processing (2)
Actionable comments posted: 0
🧹 Nitpick comments (3)
core/src/signer.rs (1)
277-280: Clone-and-inject pattern is safe but could be optimized.

The pattern of cloning credentials and injecting the cache is used consistently across all four AWS KMS signing methods. While functionally correct, this approach creates a temporary credential object for each signing operation.

Consider whether the cache could be injected once during credential extraction (in `server/src/http/extractors.rs`) rather than at each signing call site. This would eliminate the need to clone and mutate credentials in four separate locations, though the current approach is safe and the clone cost is minimal (strings + Arc). The current implementation is correct and the performance impact is negligible, so this is optional.
Also applies to: 360-363, 432-434, 514-517
core/src/credentials.rs (2)
110-115: DefaultHasher is not stable across Rust versions.

Using `DefaultHasher` for cache keys means the hash values may differ across Rust compiler versions (the algorithm is explicitly not guaranteed to be stable). This is acceptable for in-memory caching within a single process but could cause issues if cache behavior is ever tested against fixed values or if you add cache persistence.

For production stability, consider a deterministic hasher with a fixed seed from a crate like `twox-hash` or `ahash` (note that `std`'s `RandomState` is randomly seeded per instance and cannot be pinned, so it is not a stable alternative). However, for in-process, in-memory caching, `DefaultHasher` is acceptable.

Example with a stable, explicitly seeded hasher. The original suggestion built a fresh `RandomState` on every call, which would yield a different key per invocation and defeat the cache; a fixed-seed `XxHash64` from the `twox-hash` crate avoids that:

```rust
use std::hash::{Hash, Hasher};
use twox_hash::XxHash64;

fn cache_key(&self) -> u64 {
    // Fixed seed => identical keys across calls, processes, and Rust versions.
    let mut hasher = XxHash64::with_seed(0);
    self.hash(&mut hasher);
    hasher.finish()
}
```

This is a minor concern since the cache is ephemeral and process-local.
77-108: Consider documenting the relationship between constructors.

Two constructors are provided: `new()` with cache and `without_cache()` for backward compatibility. The `without_cache` constructor sets `kms_client_cache: None`, which triggers the fallback path in `get_cached_kms_client` (lines 146-150).

Add doc comments to clarify:
- When to use each constructor
- That `without_cache()` bypasses caching and creates a new client on every call
- The performance implications of not using the cache

Example:

```rust
/// Create a new AwsKmsCredential with cache enabled.
/// Recommended for production use to reuse KMS client connections.
pub fn new(...) -> Self { ... }

/// Create a new AwsKmsCredential without cache (for backward compatibility).
/// Creates a new KMS client on every signing operation - use only for testing.
pub fn without_cache(...) -> Self { ... }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- `Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (9)
- `core/Cargo.toml` (1 hunks)
- `core/src/credentials.rs` (3 hunks)
- `core/src/signer.rs` (6 hunks)
- `executors/src/eoa/worker/mod.rs` (6 hunks)
- `executors/src/eoa/worker/transaction.rs` (1 hunks)
- `server/src/http/extractors.rs` (3 hunks)
- `server/src/http/server.rs` (2 hunks)
- `server/src/main.rs` (4 hunks)
- `server/src/queue/manager.rs` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
server/src/http/extractors.rs (4)
- core/src/credentials.rs (1): `new` (78-92)
- core/src/signer.rs (1): `new` (212-218)
- server/src/http/server.rs (1): `new` (47-109)
- server/src/queue/manager.rs (1): `new` (47-273)

core/src/credentials.rs (2)
- core/src/signer.rs (1): `new` (212-218)
- server/src/queue/manager.rs (1): `new` (47-273)

core/src/signer.rs (2)
- core/src/credentials.rs (1): `new` (78-92)
- server/src/queue/manager.rs (1): `new` (47-273)

server/src/main.rs (3)
- core/src/credentials.rs (1): `new` (78-92)
- core/src/signer.rs (1): `new` (212-218)
- server/src/queue/manager.rs (1): `new` (47-273)
🔇 Additional comments (20)
server/src/http/server.rs (2)
4-4: LGTM! Correct import for KMS client cache.

35-35: LGTM! Cache is initialized with max_capacity(100), time_to_live(1h) and time_to_idle(30m), then correctly cloned into EngineServerState.

executors/src/eoa/worker/mod.rs (7)
6-6: LGTM! Correct import.

131-132: LGTM! Cache field added to job handler.

188-189: LGTM! Noop credential augmented with cache. Injecting the cache into the noop signing credential ensures consistent caching behavior for noop transactions.

196-209: LGTM! Cache properly propagated to worker. The `kms_client_cache` is correctly passed from the job handler to the worker instance on line 208.

288-294: LGTM! Credential injection pattern is correct. The implementation takes ownership of the credential, mutates only the `AwsKms` variant by injecting the cache, and returns the modified credential. This pattern avoids mutating shared state.

329-329: LGTM! Cache field added to worker.

333-339: LGTM! Consistent injection pattern in worker. The worker's `inject_kms_cache_into_credential` implementation matches the job handler's pattern, ensuring consistency across the codebase; a sketch of the pattern follows.
server/src/queue/manager.rs (3)

6-6: LGTM! Correct import.

242-253: LGTM! Cache correctly wired to EOA executor handler. The cache is passed to `EoaExecutorJobHandler` on line 252, completing the propagation chain from server initialization through to the executor workers.

47-55: LGTM! The `kms_client_cache` is initialized in main.rs with `max_capacity(100)`, `time_to_live(Duration::from_secs(60 * 60))`, and `time_to_idle(Duration::from_secs(60 * 30))` before being passed to `QueueManager::new`.
core/Cargo.toml (1)

26-26: LGTM – the moka dependency and `future` feature selection are correct, and the KMS client cache in `server/src/main.rs` (lines 50-54) is properly configured with `.max_capacity(100)`, `.time_to_live(Duration::from_secs(3600))`, and `.time_to_idle(Duration::from_secs(1800))`.
executors/src/eoa/worker/transaction.rs (1)

400-406: LGTM! The helper consumes and returns a cloned `SigningCredential` with the cache applied, ensuring the original remains unmodified.

core/src/credentials.rs (2)

50-58: Hash implementation correctly excludes cache field. The `Hash` implementation properly excludes the `kms_client_cache` field from hashing, ensuring that credential identity is based solely on the AWS credentials (access_key_id, secret_access_key, key_id, region); a minimal sketch of such an impl follows.
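A minimal sketch of a manual `Hash` impl of that shape, with the field names taken from this review comment (the struct layout here is illustrative, not the PR's literal code):

```rust
use std::hash::{Hash, Hasher};

struct AwsKmsCredential {
    access_key_id: String,
    secret_access_key: String,
    key_id: String,
    region: String,
    kms_client_cache: Option<()>, // stand-in for the runtime-only cache handle
}

impl Hash for AwsKmsCredential {
    fn hash<H: Hasher>(&self, state: &mut H) {
        // Identity fields only; the cache handle never affects the cache key.
        self.access_key_id.hash(state);
        self.secret_access_key.hash(state);
        self.key_id.hash(state);
        self.region.hash(state);
    }
}
```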
127-152: Cache logic is correct and handles missing cache gracefully.

The `get_cached_kms_client` method properly:
- Checks for cache presence
- Uses `cache.get()` (non-blocking read)
- Falls back to `create_kms_client()` and inserts on cache miss
- Handles the no-cache case by creating a new client

The implementation follows moka best practices for async caches; a minimal sketch of the pattern follows.
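For reference, a compilable sketch of the lookup-then-insert pattern described above, with stand-in types (`u64` key, `String` in place of `aws_sdk_kms::Client`); the PR's actual method signature and error handling differ:

```rust
use moka::future::Cache;

/// Sketch of get_cached_kms_client: hit the cache when present,
/// otherwise create a client (and insert it on a miss).
async fn get_cached(
    cache: Option<&Cache<u64, String>>,
    key: u64,
    create: impl Fn() -> String,
) -> String {
    match cache {
        Some(cache) => {
            if let Some(client) = cache.get(&key).await {
                return client; // cache hit
            }
            let client = create(); // cache miss: build a fresh client
            cache.insert(key, client.clone()).await;
            client
        }
        // No cache configured: fall back to creating a new client every call.
        None => create(),
    }
}
```

Note that moka also offers `get_with`, which coalesces concurrent initializations of the same key; the simple get-then-insert shown here can race and build the client twice under contention, which is harmless here since both clients are equivalent.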
server/src/http/extractors.rs (2)

104-110: State type constraint is necessary for cache access.

Changing `FromRequestParts<S>` to `FromRequestParts<EngineServerState>` tightly couples the extractor to the server state but is necessary to access the `kms_client_cache`. This is an acceptable trade-off since signing credentials are only extracted in the context of the Engine server. The type constraint is appropriate given that the extractor needs access to shared infrastructure (the KMS client cache); a minimal sketch of this binding follows.
148-170: AWS KMS extraction correctly uses cached client.

The `try_extract_aws_kms_with_cache` method:
- Accepts the cache as a parameter
- Parses the KMS ARN to extract key_id and region
- Constructs `AwsKmsCredential` using the `new()` constructor with cache
- Returns `None` if any required header is missing

The implementation correctly wires the cache into credential extraction, enabling cache reuse for all subsequent signing operations with these credentials.
server/src/main.rs (2)

50-54: Ensure cache capacity covers your credential diversity.

`max_capacity = 100` may be insufficient if your system uses over 100 distinct AWS KMS client configurations (each combination of access_key_id, secret_access_key, key_id, and region creates a separate cache entry). No documentation or config was found defining an expected number of credential sets; please verify that this limit meets your workload requirements.

50-54: Confirm credential refresh and connection reuse in the AWS KMS client.

The AWS SDK for Rust's KMS client (via the shared SdkConfig) lazily caches and automatically refreshes temporary credentials before expiration, and reuses its underlying HTTP connection pools when cloned or reused. Your 1 hour TTL and 30 minute TTI settings align with recommended best practices; a sketch of the cache construction follows.
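For context, a moka cache with the settings quoted above (capacity 100, 1 hour TTL, 30 minute TTI) is built roughly like this; the `String` value type is a stand-in for `aws_sdk_kms::Client`:

```rust
use std::time::Duration;

use moka::future::Cache;

/// Sketch of the main.rs cache construction described in the review.
fn build_kms_client_cache() -> Cache<u64, String> {
    Cache::builder()
        .max_capacity(100) // at most 100 distinct credential sets
        .time_to_live(Duration::from_secs(60 * 60)) // evict 1 hour after insert
        .time_to_idle(Duration::from_secs(60 * 30)) // evict after 30 minutes idle
        .build()
}
```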
core/src/credentials.rs (Outdated)

```rust
    }
}

/// Create a new AwsKmsCredential without cache (for backward compatibility)
```
why do we need this?
core/src/signer.rs (Outdated)

```rust
let signer = creds.get_signer(options.chain_id).await?;
// Clone and inject the cache into the credentials
let mut creds_with_cache = creds.clone();
creds_with_cache.kms_client_cache = Some(self.kms_client_cache.clone());
```
kinda weird pattern, fine for now but maybe encapsulate these into a `get_signer_with_cache()` in this trait to avoid repetition?
you're right, credentials arrive with cache injected already (due to extractor). Don't really need this step
Actionable comments posted: 3
🧹 Nitpick comments (2)
justfile (2)
28-29: Fixed sleep is fragile; add an Anvil readiness check.

The 2-second sleep assumes Anvil starts within that time, but this may fail on slower systems or when Anvil takes longer to initialize.

```diff
 # Wait a moment for anvil to start
-sleep 2
+echo "⏳ Waiting for anvil to be ready..."
+for i in {1..30}; do
+  if cast block-number --rpc-url http://localhost:8546 &>/dev/null; then
+    echo "✅ Anvil is ready"
+    break
+  fi
+  sleep 1
+done
```
35-35: Speedbump blocks the cleanup trap until natural exit.

Speedbump runs in the foreground, which means the trap cleanup won't execute until Speedbump exits naturally (Ctrl+C). This is likely intentional for a development environment, but documenting this behavior would be helpful.

Consider adding a comment to clarify the blocking behavior:

```diff
+# Speedbump runs in foreground; press Ctrl+C to trigger cleanup
 speedbump --port=8545 --latency=300ms --sine-amplitude=150ms --sine-period=1m localhost:8546
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- `core/src/credentials.rs` (3 hunks)
- `executors/src/eoa/worker/mod.rs` (5 hunks)
- `executors/src/eoa/worker/transaction.rs` (1 hunks)
- `justfile` (1 hunks)
- `scripts/benchmarks/eoa.ts` (0 hunks)
- `scripts/simple-redis-cleanup.ts` (1 hunks)
💤 Files with no reviewable changes (1)
- scripts/benchmarks/eoa.ts
🧰 Additional context used
📓 Path-based instructions (2)
scripts/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc)

scripts/**/*.{ts,tsx,js,jsx}:
- Use `bun <file>` instead of `node <file>` or `ts-node <file>` to run JS/TS files
- Do not use `dotenv`; Bun automatically loads .env files
- Use `Bun.serve()` for HTTP/WebSocket servers; do not use `express`
- Use `bun:sqlite` for SQLite; do not use `better-sqlite3`
- Use `Bun.redis` for Redis; do not use `ioredis`
- Use `Bun.sql` for Postgres; do not use `pg` or `postgres.js`
- Use built-in `WebSocket`; do not use `ws`
- Prefer `Bun.file` over `node:fs` readFile/writeFile for file IO
- Use `Bun.$` (shell) instead of `execa`

Files:
- scripts/simple-redis-cleanup.ts

scripts/**/*.{html,ts,tsx,css}

📄 CodeRabbit inference engine (scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc)
- Use `bun build <file.html|file.ts|file.css>` instead of `webpack` or `esbuild`

Files:
- scripts/simple-redis-cleanup.ts
🧠 Learnings (1)
📚 Learning: 2025-09-20T07:40:58.898Z
Learnt from: CR
PR: thirdweb-dev/engine-core#0
File: scripts/.cursor/rules/use-bun-instead-of-node-vite-npm-pnpm.mdc:0-0
Timestamp: 2025-09-20T07:40:58.898Z
Learning: Applies to scripts/**/*.{ts,tsx,js,jsx} : Use `Bun.redis` for Redis; do not use `ioredis`
Applied to files:
scripts/simple-redis-cleanup.ts
🧬 Code graph analysis (1)
core/src/credentials.rs (1)
- server/src/queue/manager.rs (1): `new` (47-273)
🔇 Additional comments (14)
core/src/credentials.rs (9)

16-17: LGTM: Cache type alias is well-defined. Using `moka::future::Cache` with a `u64` hash key derived from credential fields provides a clean abstraction for caching KMS clients. The async-friendly moka cache is a solid choice for this use case.
25-33: LGTM: Helper method for cache injection. This helper cleanly injects the KMS cache into AWS KMS credentials after deserialization. The pattern of transforming only the relevant variant and passing through others is correct and ergonomic.

56-58: LGTM: Cache field design is correct. The optional, serde-skipped cache field is the right approach. It prevents cache serialization while allowing runtime injection, and maintains backward compatibility.

60-68: LGTM: Hash implementation correctly excludes cache. The manual `Hash` implementation correctly includes all credential identity fields while excluding the cache. This ensures cache keys are based on credential identity, not runtime state.

88-124: LGTM: Constructor design provides flexibility. The three constructors (`new`, `without_cache`, `with_cache`) cover all use cases cleanly: direct construction with cache, backward compatibility without cache, and post-deserialization injection.
127-131: LGTM: Cache key generation is straightforward. Using `DefaultHasher` to compute a `u64` cache key from the credential fields is appropriate for in-memory cache indexing. The implementation leverages the manual `Hash` impl correctly.

134-168: LGTM: Caching logic is correct. The caching implementation follows a sound pattern:
- Cache lookup with fallback to creation
- Clone-on-insert and clone-on-return (acceptable since AWS SDK clients are typically Arc-wrapped internally)
- Graceful fallback when no cache is available
- Debug logging for observability
171-175: LGTM: get_signer correctly uses cached client. The update to use `get_cached_kms_client` integrates the caching pathway without changing the method's signature or error handling. This is the correct integration point for the cache.

105-118: Addresses past review comment: backward compatibility constructor. In response to the past review comment asking "why do we need this?": the `without_cache` constructor provides backward compatibility for code paths that don't use the cache (e.g., testing, legacy flows, or scenarios where cache setup is deferred). This allows incremental adoption of caching without breaking existing usage.

executors/src/eoa/worker/transaction.rs (1)
400-405: LGTM: Cache injection is correctly positioned. The cache injection happens at the right point: after building the transaction and before signing. The clone-and-inject pattern preserves the original credential while augmenting the signing pathway with the cache. The comment clearly explains the rationale (post-deserialization from Redis).
executors/src/eoa/worker/mod.rs (3)

131-132: LGTM: KMS client cache field added to handler. The `kms_client_cache` field is correctly added to the `EoaExecutorJobHandler` struct to enable cache propagation through the execution pipeline. The comment clearly documents its purpose.

188-189: LGTM: Cache injection and propagation are correct. The cache is correctly injected into the noop signing credential (lines 188-189) and propagated to the worker (line 208). This matches the pattern used in `transaction.rs` and ensures the cache is available throughout the execution pipeline.

Also applies to: 196-196, 208-208
321-321: LGTM: KMS client cache field added to worker. The `kms_client_cache` field is correctly added to the `EoaExecutorWorker` struct, completing the cache propagation chain from handler to worker to transaction signing.

justfile (1)
26-26: Trap on EXIT may cause double cleanup with `set -e`.

Combining `trap cleanup INT TERM EXIT` with `set -e` can cause the cleanup function to run twice: once when an error occurs (due to `set -e` triggering exit) and once on the EXIT trap.

```diff
-trap cleanup INT TERM EXIT
+trap cleanup INT TERM
```

Likely an incorrect or invalid review comment.
```
pkill -f "anvil.*8546" || true
pkill -f "speedbump.*8545" || true
lsof -ti:8545 | xargs kill -9 2>/dev/null || true
lsof -ti:8546 | xargs kill -9 2>/dev/null || true
```
Cleanup logic may kill unrelated processes.

The `pkill -f` patterns and `lsof | xargs kill -9` commands may be too broad and could terminate unrelated processes that happen to match the patterns or use the ports. Consider using more precise process identification or maintaining PID files for safer cleanup.

```diff
-    pkill -f "anvil.*8546" || true
-    pkill -f "speedbump.*8545" || true
-    lsof -ti:8545 | xargs kill -9 2>/dev/null || true
-    lsof -ti:8546 | xargs kill -9 2>/dev/null || true
+    # Kill by port more safely
+    fuser -k 8545/tcp 2>/dev/null || true
+    fuser -k 8546/tcp 2>/dev/null || true
```
```
fund address:
    @echo "💰 Funding {{address}} with 1 ETH..."
    @cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546
    @echo "✅ Done!"
```
🛠️ Refactor suggestion | 🟠 Major
Add validation and error handling for the address parameter.

The target doesn't validate that the `address` parameter is provided and doesn't handle errors from the `cast` command, which could result in a misleading success message.
```diff
 fund address:
+    @test -n "{{address}}" || (echo "❌ Error: address parameter is required" && exit 1)
     @echo "💰 Funding {{address}} with 1 ETH..."
-    @cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546
-    @echo "✅ Done!"
+    @if cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546; then \
+        echo "✅ Done!"; \
+    else \
+        echo "❌ Failed to fund address"; \
+        exit 1; \
+    fi
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```
fund address:
    @test -n "{{address}}" || (echo "❌ Error: address parameter is required" && exit 1)
    @echo "💰 Funding {{address}} with 1 ETH..."
    @if cast rpc anvil_setBalance {{address}} $(cast to-wei 1) --rpc-url http://localhost:8546; then \
        echo "✅ Done!"; \
    else \
        echo "❌ Failed to fund address"; \
        exit 1; \
    fi
```
🤖 Prompt for AI Agents
In justfile around lines 38 to 41, the target doesn't validate the address
parameter or check the result of the cast command; update the recipe to first
verify {{address}} is provided and matches an expected address pattern (or at
minimum non-empty), printing a clear error and exiting non-zero if
missing/invalid, then execute the cast rpc anvil_setBalance command and check
its exit status (do not print the success "✅ Done!" unless the cast command
succeeded); on failure forward the cast error output and exit with a non-zero
code so CI/consumers detect the failure.
```diff
@@ -0,0 +1,145 @@
+#!/usr/bin/env bun
+
+import Redis from "ioredis";
```
Replace `ioredis` with `Bun.redis`.

The script uses `ioredis`, which violates the project's coding guidelines. For scripts in `scripts/**/*.{ts,tsx,js,jsx}`, use `Bun.redis` instead to align with Bun-first practices and avoid external dependencies.

Refactor the import and client instantiation:

```diff
-import Redis from "ioredis";
+// No import needed; Bun.redis is built-in
```

```diff
 constructor() {
-  this.redis = new Redis(CONFIG.redisUrl);
+  this.redis = Bun.redis(CONFIG.redisUrl);
 }
```
Note: Verify that `Bun.redis` supports the same API surface (scan, pipeline, del, quit) used in this script. If there are API differences, adjust the method calls accordingly. As per coding guidelines.

Also applies to: 26-26
🤖 Prompt for AI Agents
In scripts/simple-redis-cleanup.ts around lines 3 and 26, replace the external
ioredis dependency with Bun.redis: remove the import of "ioredis", create the
client via Bun.redis() instead, and update any client usage to the Bun.redis API
equivalents (scan/scanIterator, pipeline/multi or the Bun pipeline API, del, and
the proper disconnect/quit method). Ensure method names and return shapes match
and adjust calls/awaits as needed so scan, pipeline, del, and client shutdown
use Bun's API.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
- `core/src/credentials.rs` (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
core/src/credentials.rs (1)
- server/src/queue/manager.rs (1): `new` (47-273)
🔇 Additional comments (8)
core/src/credentials.rs (8)
25-33: LGTM! The cache injection pattern is clean and correctly handles only the `AwsKms` variant. The cache clone is cheap since `moka::future::Cache` is internally Arc-based.

56-58: LGTM! The `#[serde(skip)]` attribute correctly prevents the cache from being serialized, treating it as runtime state rather than credential data. The `Option` wrapper provides backwards compatibility.
60-68: LGTM! The `Hash` implementation correctly excludes the cache field and only hashes the credential identity (access_key_id, secret_access_key, key_id, region). This ensures credentials with the same identity produce the same hash regardless of cache presence.

88-102: LGTM! The constructor provides a clean way to create credentials with cache at construction time. The mandatory cache parameter ensures new code paths have caching enabled.

104-108: LGTM! The builder-style `with_cache` method provides a clean API for injecting cache into deserialized credentials.
117-125: LGTM! The KMS client creation logic is correct and follows AWS SDK best practices with `BehaviorVersion::latest()`.

154-159: LGTM! The method correctly uses the caching pathway via `get_cached_kms_client`. Error propagation is correct.

16-17: Cache configuration confirmed: `KmsClientCache` is initialized with `.max_capacity(100)`, a 1 hour TTL, and a 30 minute idle timeout in `server/src/main.rs:50-54`.
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use thirdweb_core::auth::ThirdwebAuth;
use thirdweb_core::iaw::AuthToken;
use vault_types::enclave::auth::Auth as VaultAuth;

use crate::error::EngineError;

/// Cache for AWS KMS clients to avoid recreating connections
pub type KmsClientCache = moka::future::Cache<u64, aws_sdk_kms::Client>;
```
Consider collision risk with u64 cache keys from DefaultHasher.

Using `DefaultHasher` to produce `u64` keys introduces:
- Hash collision risk: different AWS credentials (access_key_id, secret_access_key, key_id, region combinations) could hash to the same u64, causing one credential to incorrectly reuse another's KMS client.
- Unstable hashing: `DefaultHasher`'s algorithm is not guaranteed to be stable across Rust releases, so cache keys may change between builds (though this may be acceptable for in-memory caching). Note that `DefaultHasher::new()` itself is deterministic within a given binary; it is `RandomState` that is randomly seeded.

For a production KMS client cache, consider using a collision-resistant approach:
- Use a tuple `(String, String, String, String)` as the cache key directly (moka supports any `Hash + Eq` key).
- Or use a cryptographic hash (e.g., SHA-256) to produce a `[u8; 32]` or `String` key with negligible collision probability.

Example with tuple key:
```diff
-/// Cache for AWS KMS clients to avoid recreating connections
-pub type KmsClientCache = moka::future::Cache<u64, aws_sdk_kms::Client>;
+/// Cache for AWS KMS clients to avoid recreating connections
+pub type KmsClientCache = moka::future::Cache<(String, String, String, String), aws_sdk_kms::Client>;
```
Then update `cache_key()` to return the tuple:
```diff
-    fn cache_key(&self) -> u64 {
-        let mut hasher = DefaultHasher::new();
-        self.hash(&mut hasher);
-        hasher.finish()
+    fn cache_key(&self) -> (String, String, String, String) {
+        (
+            self.access_key_id.clone(),
+            self.secret_access_key.clone(),
+            self.key_id.clone(),
+            self.region.clone(),
+        )
     }
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use thirdweb_core::auth::ThirdwebAuth;
use thirdweb_core::iaw::AuthToken;
use vault_types::enclave::auth::Auth as VaultAuth;

use crate::error::EngineError;

/// Cache for AWS KMS clients to avoid recreating connections
pub type KmsClientCache = moka::future::Cache<(String, String, String, String), aws_sdk_kms::Client>;
```
🤖 Prompt for AI Agents
In core/src/credentials.rs around lines 8 to 17, replace the u64-based cache key
to use a collision-resistant key: change KmsClientCache to
moka::future::Cache<(String, String, String, String), aws_sdk_kms::Client> (or
another tuple ordering representing access_key_id, secret_access_key, key_id,
region), remove DefaultHasher/Hasher usage and update cache_key() to return that
tuple instead of a u64; update any call sites that construct or lookup the cache
to pass the tuple key (ensure ownership/clone as needed) so the cache uses the
tuple's Hash+Eq semantics and avoids DefaultHasher collisions and
nondeterminism.
Actionable comments posted: 0
🧹 Nitpick comments (1)
server/src/main.rs (1)
51-57: Externalize KMS client cache parameters to configuration.
- Replace the hardcoded `.max_capacity(100)`, `.time_to_live(1 h)`, and `.time_to_idle(30 m)` with values loaded from your application config (see thirdweb-core/src/abi/mod.rs for the existing pattern).
- Verify that your chosen defaults prevent excessive evictions under peak workloads.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- `executors/src/eoa/worker/transaction.rs` (2 hunks)
- `server/src/main.rs` (5 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- executors/src/eoa/worker/transaction.rs
🔇 Additional comments (4)
server/src/main.rs (4)
3-3: LGTM! The import of `KmsClientCache` is correctly placed alongside related credential and signer types.

81-81: LGTM! The KMS client cache is correctly cloned and passed to `QueueManager::new`, enabling shared access across queue workers.

132-132: LGTM! The KMS client cache is correctly cloned and stored in `EngineServerState`, making it accessible to HTTP request handlers.

22-23: LGTM! The log filter update correctly overrides webhook logs to `warn` level while keeping other executor logs at `debug`, with a helpful comment explaining the precedence.
Summary by CodeRabbit
New Features
Refactor
Chores
Config