Add retry module for shared retry mechanism for RPC calls #2258
Conversation
Pre-merge checks and finishing touches: ❌ 1 failed check (warning), ✅ 2 passed.
@coderabbitai review
✅ Actions performed: review triggered.
#[derive(Debug)]
pub enum RetryError<E> {
    Config { message: String },
why are we using opaque strings for errors?
return Err(LocalDbError::Config {
    message: "max_attempts must be > 0".to_string(),
});
fn map_retry_error(error: RetryError<LocalDbError>) -> LocalDbError {
hm this also looks like a novel way to nest errors. we don't do this elsewhere in the codebase
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/common/src/local_db/sync.rs (1)
268-283: All-or-nothing on token metadata fetch — confirm intended behavior
If any token fetch fails, the whole sync aborts (partial successes are discarded). Is this what we want for mainnet syncs with transient RPC blips? Consider committing successes and logging failures, then retrying the remainder next run.
I can propose a partial‑commit strategy that returns both successes and errors from fetch_erc20_metadata_concurrent; say the word.
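As a rough illustration of that partial-commit idea, the fetch could return successes and failures side by side instead of aborting on the first error. This is a sketch with stand-in types (`TokenMetadata`, `FetchError`, and the tuple-based input are assumptions, not the crate's actual API):

```rust
use std::collections::HashMap;

// Stand-in types for illustration; not the crate's actual definitions.
#[derive(Debug, Clone, PartialEq)]
pub struct TokenMetadata {
    pub symbol: String,
    pub decimals: u8,
}

#[derive(Debug)]
pub struct FetchError {
    pub address: String,
    pub message: String,
}

/// Split per-address fetch results so callers can commit successes and
/// retry only the failed addresses on the next sync run.
pub fn partition_fetch_results(
    results: Vec<(String, Result<TokenMetadata, String>)>,
) -> (HashMap<String, TokenMetadata>, Vec<FetchError>) {
    let mut ok = HashMap::new();
    let mut failed = Vec::new();
    for (address, result) in results {
        match result {
            Ok(meta) => {
                ok.insert(address, meta);
            }
            Err(message) => failed.push(FetchError { address, message }),
        }
    }
    (ok, failed)
}
```

The caller would then write the `ok` map to the database and log/schedule the `failed` list, rather than discarding everything.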
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (7)
crates/cli/src/commands/local_db/sync/data_source.rs (1 hunks)
crates/common/src/erc20.rs (1 hunks)
crates/common/src/local_db/fetch.rs (5 hunks)
crates/common/src/local_db/mod.rs (2 hunks)
crates/common/src/local_db/sync.rs (5 hunks)
crates/common/src/local_db/token_fetch.rs (4 hunks)
crates/common/src/retry.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
crates/*/{src,tests,benches,examples}/**/*.rs
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Rust code must pass clippy with all warnings denied (cargo clippy --workspace --all-targets --all-features -D warnings)
Files:
crates/common/src/erc20.rs
crates/cli/src/commands/local_db/sync/data_source.rs
crates/common/src/local_db/fetch.rs
crates/common/src/local_db/mod.rs
crates/common/src/local_db/token_fetch.rs
crates/common/src/retry.rs
crates/common/src/local_db/sync.rs
crates/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)
Files:
crates/common/src/erc20.rs
crates/cli/src/commands/local_db/sync/data_source.rs
crates/common/src/local_db/fetch.rs
crates/common/src/local_db/mod.rs
crates/common/src/local_db/token_fetch.rs
crates/common/src/retry.rs
crates/common/src/local_db/sync.rs
**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/*.rs: Format Rust with nix develop -c cargo fmt --all
Lint Rust with nix develop -c rainix-rs-static (preconfigured flags)
Use snake_case for Rust crates/modules and PascalCase for types
Files:
crates/common/src/erc20.rs
crates/cli/src/commands/local_db/sync/data_source.rs
crates/common/src/local_db/fetch.rs
crates/common/src/local_db/mod.rs
crates/common/src/local_db/token_fetch.rs
crates/common/src/retry.rs
crates/common/src/local_db/sync.rs
🧠 Learnings (3)
📚 Learning: 2025-10-06T11:13:29.956Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2123
File: crates/common/src/raindex_client/local_db/mod.rs:23-29
Timestamp: 2025-10-06T11:13:29.956Z
Learning: In `crates/common/src/raindex_client/local_db/mod.rs`, the `Default` implementation for `LocalDb` that creates an RPC client pointing to `http://localhost:4444` is acceptable because the RPC client must be explicitly configured before actual usage in production scenarios.
Applied to files:
crates/common/src/local_db/fetch.rs
📚 Learning: 2025-04-30T09:28:36.960Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1715
File: crates/js_api/src/common/mod.rs:111-118
Timestamp: 2025-04-30T09:28:36.960Z
Learning: In the rain.orderbook repository, the WASM tests are already properly configured with conditional compilation using `#[cfg(target_family = "wasm")]` and `#[cfg(not(target_family = "wasm"))]`, and don't require additional `wasm_bindgen_test_configure!(run_in_browser)` directives.
Applied to files:
crates/common/src/local_db/token_fetch.rs
📚 Learning: 2025-05-14T05:13:59.713Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1744
File: crates/subgraph/src/orderbook_client.rs:44-46
Timestamp: 2025-05-14T05:13:59.713Z
Learning: In the rain.orderbook project, WASM dependencies are intentionally made available in non-WASM targets to facilitate testing of WASM-related functionality, so conditional compilation guards like `#[cfg(target_family = "wasm")]` should not be added to imports or implementations that may be needed for tests.
Applied to files:
crates/common/src/local_db/token_fetch.rs
🧬 Code graph analysis (5)
crates/cli/src/commands/local_db/sync/data_source.rs (2)
crates/common/src/local_db/token_fetch.rs (1)
fetch_erc20_metadata_concurrent (16-57)
crates/common/src/local_db/fetch.rs (1)
default (21-28)
crates/common/src/local_db/fetch.rs (1)
crates/common/src/retry.rs (1)
retry_with_backoff(31-55)
crates/common/src/local_db/mod.rs (2)
crates/common/src/raindex_client/local_db/mod.rs (1)
from (11-16)
crates/common/src/raindex_client/vaults.rs (1)
erc20 (1564-1568)
crates/common/src/local_db/token_fetch.rs (3)
crates/common/src/retry.rs (1)
retry_with_backoff (31-55)
crates/common/src/erc20.rs (1)
new (31-33)
crates/common/src/local_db/fetch.rs (1)
default (21-28)
crates/common/src/local_db/sync.rs (1)
crates/common/src/local_db/token_fetch.rs (1)
fetch_erc20_metadata_concurrent(16-57)
🔇 Additional comments (6)
crates/common/src/local_db/token_fetch.rs (1)
24-46: Note on RPC URL cloning
ERC20::new takes Vec<Url>, forcing a clone per address. If this becomes hot, consider changing ERC20 to hold Arc<[Url]> or accept &[Url] and clone internally once per instance/provider. Not blocking.
⛔ Skipped due to learnings
Learnt from: findolor, PR rainlanguage/rain.orderbook#1858, File: crates/subgraph/src/cynic_client.rs:27-29, Timestamp: 2025-05-19T12:25:02.658Z
Learning: In the Rain Orderbook codebase, when passing a URL from `self.get_base_url()` to `reqwest::Client::post()`, the `.clone()` is necessary because `get_base_url()` returns a reference (`&Url`), but an owned `Url` is needed for the reqwest client in the async context to avoid lifetime issues.
Learnt from: findolor, PR rainlanguage/rain.orderbook#1903, File: crates/js_api/src/gui/order_operations.rs:124-134, Timestamp: 2025-06-17T16:32:04.554Z
Learning: In the rain.orderbook codebase, RPC lists are typically small (2 items maximum), so performance optimizations around cloning and converting small Vec<Url> collections are generally unnecessary.
crates/common/src/local_db/fetch.rs (1)
104-116: Verified: RetryError → LocalDbError conversion exists and will compile correctly
The From<RetryError<LocalDbError>> implementation is present at crates/common/src/retry.rs:13-18, mapping InvalidMaxAttempts → InvalidRetryMaxAttemps and Operation(inner) → inner. The ? operator on lines 104-116 will work as intended.
crates/common/src/local_db/sync.rs (4)
236-236: Signature change looks good
Passing &FetchConfig here is the right abstraction and keeps this path allocation-free.
274-276: Good: propagates config to concurrent fetch
Correctly forwards the config into fetch_erc20_metadata_concurrent.
789-797: Tests updated to pass config — LGTM
Call sites updated cleanly; keeps tests explicit.
820-823: Integration test path updated — LGTM
Config plumbed into the prepare call; works with LocalEvm fixture.
let fetched =
    fetch_erc20_metadata_concurrent(rpcs.to_vec(), missing, &FetchConfig::default())
        .await
        .map_err(|e| anyhow!(e))?;
🧹 Nitpick | 🔵 Trivial
LGTM; consider threading FetchConfig instead of hardcoding default
Passing the default config is correct. To tune retries/concurrency from CLI, extend TokenMetadataFetcher::fetch to accept &FetchConfig and thread it from call sites.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/data_source.rs around lines 66 to 69,
the call to fetch_erc20_metadata_concurrent uses FetchConfig::default(), which
hardcodes retries/concurrency; change TokenMetadataFetcher::fetch (and/or
fetch_erc20_metadata_concurrent) to accept a &FetchConfig parameter, update its
internal calls to use that config, and then thread a &FetchConfig from the call
site instead of using FetchConfig::default(); also update all callers (including
CLI wiring) to construct/receive a FetchConfig (e.g., from CLI flags or
higher-level config) and pass it through to this function so retries and
concurrency can be tuned from the CLI.
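A minimal sketch of how that threading could look; the `FetchConfig` field names, defaults, and CLI wiring here are assumptions for illustration, not the crate's actual API:

```rust
// Illustrative sketch; field names, defaults, and flag wiring are assumptions.
#[derive(Debug, Clone)]
pub struct FetchConfig {
    pub max_retry_attempts: usize,
    pub max_concurrent: usize,
}

impl Default for FetchConfig {
    fn default() -> Self {
        Self {
            max_retry_attempts: 3,
            max_concurrent: 8,
        }
    }
}

/// Build a FetchConfig from optional CLI flags, falling back to defaults,
/// so retries and concurrency can be tuned without touching call sites.
pub fn fetch_config_from_cli(retries: Option<usize>, concurrency: Option<usize>) -> FetchConfig {
    let defaults = FetchConfig::default();
    FetchConfig {
        max_retry_attempts: retries.unwrap_or(defaults.max_retry_attempts),
        max_concurrent: concurrency.unwrap_or(defaults.max_concurrent),
    }
}
```

The data source would then accept a `&FetchConfig` parameter and pass this value down instead of calling `FetchConfig::default()` inline.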
#[error("Invalid retry max attempts")]
InvalidRetryMaxAttemps,
Fix public API typo: InvalidRetryMaxAttempts
The new public variant is misspelled (Attemps). This leaks into downstream crates/JS bindings and user-facing messages.
Apply this cross-file rename (adjust all references):
- #[error("Invalid retry max attempts")]
- InvalidRetryMaxAttemps,
+ #[error("Invalid retry max attempts")]
+ InvalidRetryMaxAttempts,

Also update all uses across the workspace (see related comments and verification script).
🤖 Prompt for AI Agents
In crates/common/src/erc20.rs around lines 164 to 165, the public enum variant
and its error message are misspelled as InvalidRetryMaxAttemps; rename the
variant to InvalidRetryMaxAttempts (and update the #[error("Invalid retry max
attempts")] attribute if needed to match casing/spacing conventions), then
update every reference across the workspace (all Rust code, pattern matches,
constructors, uses in other crates) and regenerate/update any JS bindings or
exported API clients that reference the old name so the public API and
downstream consumers use InvalidRetryMaxAttempts consistently.
let response = retry_with_backoff(
    || {
        let client = client.clone();
        let filter = filter.clone();
        async move { client.get_logs(&filter).await.map_err(map_rpc_error) }
    },
    max_attempts,
    should_retry_local_db_error,
)
.await?;
🧹 Nitpick | 🔵 Trivial
eth_getLogs backoff is fine; consider capturing transport-only failures
Current retry predicate (see below) only triggers on LocalDbError::Rpc. If RpcClientError can surface timeouts/HTTP status differently, those may not be retried. Consider aligning map_rpc_error + should_retry to cover transient transport (e.g., timeouts, 5xx) consistently.
🤖 Prompt for AI Agents
In crates/common/src/local_db/fetch.rs around lines 241 to 251, the retry
currently only triggers for LocalDbError::Rpc but transport-level failures (e.g.
timeouts, HTTP 5xx) from the RPC client may be mapped differently and thus not
retried; update map_rpc_error to normalize transport/timeouts/5xx into the Rpc
(or a new transient RPC) LocalDbError variant and/or extend
should_retry_local_db_error to return true for those transport-related
RpcClientError variants (inspect the RpcClientError enum for timeout,
connection, and status variants and mark them transient) so transient transport
failures are consistently retried by retry_with_backoff.
fn map_rpc_error(error: RpcClientError) -> LocalDbError {
    match error {
        RpcClientError::JsonSerialization(err) => LocalDbError::JsonParse(err),
        other => LocalDbError::Rpc(other),
    }
}
🧹 Nitpick | 🔵 Trivial
Preserve HTTP status if available, or document why it’s folded into Rpc
map_rpc_error collapses all non-JSON errors into LocalDbError::Rpc, making LocalDbError::HttpStatus effectively unused here. If RpcClientError carries status codes, consider mapping 5xx/429 to HttpStatus to aid observability and retry heuristics, or add a comment clarifying the choice.
🤖 Prompt for AI Agents
In crates/common/src/local_db/fetch.rs around lines 273 to 279, map_rpc_error
currently folds all non-JSON RpcClientError variants into LocalDbError::Rpc;
either preserve HTTP status codes when present by matching RpcClientError
variants that carry an HTTP status and converting 5xx and 429 into
LocalDbError::HttpStatus (including the status and any body/headers useful for
observability), keep JsonSerialization mapped to LocalDbError::JsonParse, and
fall back to LocalDbError::Rpc for other cases, or if RpcClientError does not
expose status, add a concise comment explaining why HttpStatus cannot be derived
here and that all non-JSON errors are intentionally folded into Rpc.
fn should_retry_local_db_error(error: &LocalDbError) -> bool {
    matches!(error, LocalDbError::Rpc(_))
}
🧹 Nitpick | 🔵 Trivial
Broaden retry predicate or justify current scope
Currently retries only on LocalDbError::Rpc(_). If timeouts/connection resets are represented differently by RpcClientError and map_rpc_error, you may miss legitimate retries. Consider:
- Retrying on HttpStatus for 429/5xx
- Retrying on Timeout, if it can occur here
🤖 Prompt for AI Agents
In crates/common/src/local_db/fetch.rs around lines 280 to 282, the retry
predicate only matches LocalDbError::Rpc(_) which is too narrow; update
should_retry_local_db_error to also return true for Rpc-related errors that
represent transient conditions (e.g., RpcClientError variants or mapped Rpc
errors representing HTTP 429 or 5xx and timeout/connect reset cases). Inspect
RpcClientError and map_rpc_error to determine the exact enum variants or wrapped
status codes for 429/5xx and Timeout/ConnectionReset and add pattern matches for
those variants (or check status code ranges / timeout flag) so transient HTTP
429/5xx and timeout/connection reset errors are retried while leaving
non-transient errors unchanged.
let out = fetch_erc20_metadata_concurrent(rpcs, addrs, &FetchConfig::default())
    .await
    .unwrap();
assert_eq!(out.len(), 1);
🧹 Nitpick | 🔵 Trivial
Tests: add deterministic‑revert case and invalid‑attempts case
Good happy-path and transport-failure coverage. Add:
- A revert case to ensure should_retry_token_error doesn’t loop on deterministic reverts.
- A config with max_retry_attempts = 0 to assert mapping to LocalDbError::InvalidRetryMaxAttempts.
I can write these tests if you want.
Also applies to: 90-95
🤖 Prompt for AI Agents
In crates/common/src/local_db/token_fetch.rs around lines 76-79 (and similarly
for lines 90-95), add two unit tests: one that simulates an on-chain
deterministic revert for an ERC20 metadata call and asserts that
should_retry_token_error does not retry (i.e., the fetch path returns the
expected deterministic error rather than looping), and another that constructs a
FetchConfig with max_retry_attempts = 0 and asserts the fetch routine maps this
to LocalDbError::InvalidRetryMaxAttempts; implement the tests by mocking the
RPC/transport responses to produce the revert and by invoking the same
fetch_erc20_metadata_concurrent path used in the happy-path test so behavior is
validated end-to-end.
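The two suggested cases can be sketched against a minimal synchronous stand-in for the retry helper (the real crate uses the async `retry_with_backoff`; all names here are illustrative):

```rust
// Minimal synchronous stand-in for retry_with_backoff (no backoff, no async),
// used only to sketch the two suggested test cases.
#[derive(Debug, PartialEq)]
pub enum RetryError<E> {
    InvalidMaxAttempts,
    Operation(E),
}

pub fn retry<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: usize,
    should_retry: impl Fn(&E) -> bool,
) -> Result<T, RetryError<E>> {
    if max_attempts == 0 {
        return Err(RetryError::InvalidMaxAttempts);
    }
    let mut attempt = 0;
    loop {
        attempt += 1;
        match op() {
            Ok(value) => return Ok(value),
            Err(e) if attempt < max_attempts && should_retry(&e) => continue,
            Err(e) => return Err(RetryError::Operation(e)),
        }
    }
}

// Suggested case 1: a deterministic revert must not loop.
pub fn deterministic_revert_is_not_retried() {
    let mut calls = 0;
    let result = retry(
        || {
            calls += 1;
            Err::<(), _>("execution reverted")
        },
        3,
        |e: &&str| !e.contains("reverted"), // revert is non-retryable
    );
    assert!(matches!(result, Err(RetryError::Operation("execution reverted"))));
    assert_eq!(calls, 1); // exactly one attempt, no retries
}

// Suggested case 2: max_attempts = 0 maps to the config error.
pub fn zero_attempts_is_config_error() {
    let result = retry(|| Ok::<u32, &str>(1), 0, |_| true);
    assert!(matches!(result, Err(RetryError::InvalidMaxAttempts)));
}
```

The real tests would instead mock the RPC/transport layer and drive `fetch_erc20_metadata_concurrent` end-to-end, as the prompt above describes.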
use backon::{ExponentialBuilder, Retryable};
use std::{future::Future, time::Duration};

pub const DEFAULT_BASE_DELAY_MILLIS: u64 = 500;
🧹 Nitpick | 🔵 Trivial
Don’t leak policy via a public const unless it’s part of the public API contract
Either move DEFAULT_BASE_DELAY_MILLIS behind a config or mark it crate‑private to avoid external dependence on a tuning value.
-pub const DEFAULT_BASE_DELAY_MILLIS: u64 = 500;
+pub(crate) const DEFAULT_BASE_DELAY_MILLIS: u64 = 500;
🤖 Prompt for AI Agents
In crates/common/src/retry.rs around line 5, the public const
DEFAULT_BASE_DELAY_MILLIS exposes an internal tuning policy; change it to not be
part of the public API by either removing the pub to make it crate‑private
(preferred) or replace it with a configurable value passed into the retry policy
(e.g., add a Config/Builder or a parameter to the retry constructor) and update
all internal call sites to use the new config parameter instead of the global
const.
impl From<RetryError<LocalDbError>> for LocalDbError {
    fn from(err: RetryError<LocalDbError>) -> Self {
        match err {
            RetryError::InvalidMaxAttempts => LocalDbError::InvalidRetryMaxAttemps,
            RetryError::Operation(inner) => inner,
        }
    }
}

impl From<RetryError<ERC20Error>> for ERC20Error {
    fn from(err: RetryError<ERC20Error>) -> Self {
        match err {
            RetryError::InvalidMaxAttempts => ERC20Error::InvalidRetryMaxAttemps,
            RetryError::Operation(inner) => inner,
        }
    }
}
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Decouple retry from crate‑specific error types
Having retry.rs depend on LocalDbError and ERC20Error creates an unnecessary dependency edge. Make retry fully generic; implement the From<RetryError<...>> conversions in the respective modules instead.
Apply:
-impl From<RetryError<LocalDbError>> for LocalDbError {
- fn from(err: RetryError<LocalDbError>) -> Self {
- match err {
- RetryError::InvalidMaxAttempts => LocalDbError::InvalidRetryMaxAttemps,
- RetryError::Operation(inner) => inner,
- }
- }
-}
-
-impl From<RetryError<ERC20Error>> for ERC20Error {
- fn from(err: RetryError<ERC20Error>) -> Self {
- match err {
- RetryError::InvalidMaxAttempts => ERC20Error::InvalidRetryMaxAttemps,
- RetryError::Operation(inner) => inner,
- }
- }
-}
+// Remove crate-specific conversions from this module.
+// Implement `impl From<RetryError<LocalDbError>> for LocalDbError` in local_db,
+// and `impl From<RetryError<ERC20Error>> for ERC20Error` in erc20.

Also, please fix the "Attemps" typo in those target enums to "Attempts" before stabilizing the API.
🏁 Script executed:
#!/bin/bash
# 1. Examine retry.rs to confirm the From implementations
echo "=== retry.rs (lines 1-50) ==="
cat crates/common/src/retry.rs | head -50
echo -e "\n=== Searching for LocalDbError definition ==="
rg -n "enum LocalDbError\|struct LocalDbError" --type=rs crates/
echo -e "\n=== Searching for ERC20Error definition ==="
rg -n "enum ERC20Error\|struct ERC20Error" --type=rs crates/
echo -e "\n=== Looking for InvalidRetryMaxAttemps typo in enums ==="
rg -n "InvalidRetryMaxAttemps" --type=rs crates/
echo -e "\n=== Searching for RetryError usage ==="
rg -n "RetryError" --type=rs crates/ -C 2Length of output: 1788
🏁 Script executed:
#!/bin/bash
# 1. Examine retry.rs content
echo "=== Full retry.rs content ==="
cat crates/common/src/retry.rs
echo -e "\n=== Searching for LocalDbError definition ==="
fd -t f "*.rs" crates/ -x rg -l "enum LocalDbError" {}
echo -e "\n=== Searching for ERC20Error definition ==="
fd -t f "*.rs" crates/ -x rg -l "enum ERC20Error" {}
echo -e "\n=== All InvalidRetryMaxAttemps occurrences ==="
rg -n "InvalidRetryMaxAttemps" --type=rs crates/ -C 3
echo -e "\n=== Directory structure of crates/common ==="
find crates/common/src -type f -name "*.rs" | head -20Length of output: 6002
🏁 Script executed:
#!/bin/bash
# Find LocalDbError definition
echo "=== Searching for LocalDbError enum definition ==="
find crates -name "*.rs" -exec grep -n "enum LocalDbError" {} +
# Find ERC20Error definition
echo -e "\n=== Searching for ERC20Error/Error enum definition ==="
find crates -name "*.rs" -exec grep -n "enum.*ERC20Error\|enum Error.*erc20" {} +
# Look for mod.rs in local_db to find main definition
echo -e "\n=== Contents of local_db/mod.rs (first 100 lines) ==="
head -100 crates/common/src/local_db/mod.rs
# Look for erc20 module structure
echo -e "\n=== Contents of erc20/mod.rs or erc20.rs (first 100 lines) ==="
cat crates/common/src/erc20.rs 2>/dev/null || cat crates/common/src/erc20/mod.rs 2>/dev/null || echo "File not found in expected locations"
# Search for uses of the From conversions
echo -e "\n=== Uses of From<RetryError conversions ==="
find crates -name "*.rs" -exec grep -n "RetryError.*from\|into.*RetryError" {} +Length of output: 17794
🏁 Script executed:
#!/bin/bash
# Search for InvalidRetryMaxAttemps in LocalDbError
echo "=== LocalDbError InvalidRetryMaxAttemps variant ==="
grep -n "InvalidRetryMaxAttemps" crates/common/src/local_db/mod.rs
# Check if LocalDbError's complete definition includes this variant
echo -e "\n=== Full LocalDbError enum definition ==="
sed -n '42,150p' crates/common/src/local_db/mod.rs
# Search for actual usages of From conversion or error conversions with RetryError
echo -e "\n=== Searching for .into() or .from() with RetryError patterns ==="
find crates -name "*.rs" -exec grep -l "\.into()" {} \; | head -10 | xargs grep -n "RetryError\|retry_with_backoff"
# Check if retry module is used elsewhere
echo -e "\n=== Module usage - files importing retry ==="
find crates -name "*.rs" -exec grep -l "use.*retry\|use.*RetryError" {} \;
# Check common/src/lib.rs or mod.rs to see module structure
echo -e "\n=== crates/common/src/lib.rs or mod.rs ==="
head -50 crates/common/src/lib.rs 2>/dev/null || head -50 crates/common/src/mod.rs 2>/dev/null || echo "Not found"Length of output: 6827
🏁 Script executed:
#!/bin/bash
# Check token_fetch.rs usage
echo "=== token_fetch.rs - retry/RetryError usage ==="
cat crates/common/src/local_db/token_fetch.rs | head -200
# Search for RetryError references and conversions across codebase
echo -e "\n=== All files referencing RetryError ==="
find crates -name "*.rs" -exec grep -l "RetryError" {} \;
# Search for actual From/into conversions with retry errors
echo -e "\n=== Searching for .into() calls in token_fetch.rs ==="
grep -n "\.into()" crates/common/src/local_db/token_fetch.rs
# Check if there are any other usages of the From implementations
echo -e "\n=== Searching for uses of retry_with_backoff ==="
find crates -name "*.rs" -exec grep -n "retry_with_backoff" {} +Length of output: 4679
🏁 Script executed:
#!/bin/bash
# Check fetch.rs error handling with RetryError
echo "=== fetch.rs retry_with_backoff error handling ==="
grep -n -A 5 "retry_with_backoff" crates/common/src/local_db/fetch.rs | head -60
# Verify no other uses of From conversions with RetryError
echo -e "\n=== Searching for From::from or .into() with retry errors ==="
find crates -name "*.rs" -exec grep -n "From::from.*RetryError\|RetryError.*\.into()" {} +
# Final confirmation: check if the From impls are ever called
echo -e "\n=== Checking if From impls are in scope and referenced ==="
grep -rn "impl From<RetryError" crates/common/src/Length of output: 1556
Remove unnecessary retry.rs dependency on error types; fix "Attemps" typo
The From implementations for RetryError conversions create an unnecessary coupling in retry.rs. These implementations are currently dead code (manual error matching is used everywhere instead), making this an ideal refactoring opportunity.
Actions required:

1. Fix typo before stabilizing API: change "Attemps" → "Attempts" in:
   - LocalDbError::InvalidRetryMaxAttemps (crates/common/src/local_db/mod.rs, line 120)
   - ERC20Error::InvalidRetryMaxAttemps (crates/common/src/erc20.rs)

2. Decouple retry.rs: move the From implementations to their respective modules:
   - Move impl From<RetryError<LocalDbError>> for LocalDbError to crates/common/src/local_db/mod.rs
   - Move impl From<RetryError<ERC20Error>> for ERC20Error to crates/common/src/erc20.rs
   - Remove these two impl blocks and the associated imports from crates/common/src/retry.rs
This makes retry.rs fully generic and eliminates the unnecessary dependency edge on crate-specific error types.
🤖 Prompt for AI Agents
In crates/common/src/retry.rs around lines 13–29, the two From<RetryError<...>>
impls create an unnecessary dependency on crate-specific error types and should
be removed; move the impl for LocalDbError into
crates/common/src/local_db/mod.rs and the impl for ERC20Error into
crates/common/src/erc20.rs, then delete those impl blocks and any now-unused
imports from retry.rs so retry.rs stays generic. Before doing that, fix the typo
"InvalidRetryMaxAttemps" → "InvalidRetryMaxAttempts" in
crates/common/src/local_db/mod.rs (around line 120) and in
crates/common/src/erc20.rs, update the moved impls to reference the corrected
variant name, and run cargo build to adjust imports/usages as needed.
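A sketch of the decoupled layout with simplified stand-in enums: the conversion lives next to the error type it targets, while `retry.rs` keeps only the generic `RetryError<E>` (the real enums have more variants, and currently carry the misspelled `InvalidRetryMaxAttemps`):

```rust
// In crates/common/src/retry.rs: fully generic, no crate-specific imports.
#[derive(Debug)]
pub enum RetryError<E> {
    InvalidMaxAttempts,
    Operation(E),
}

// In crates/common/src/local_db/mod.rs (simplified stand-in enum):
#[derive(Debug, PartialEq)]
pub enum LocalDbError {
    InvalidRetryMaxAttempts,
    Rpc(String),
}

// The conversion is owned by the error module, not by retry.rs.
impl From<RetryError<LocalDbError>> for LocalDbError {
    fn from(err: RetryError<LocalDbError>) -> Self {
        match err {
            RetryError::InvalidMaxAttempts => LocalDbError::InvalidRetryMaxAttempts,
            RetryError::Operation(inner) => inner,
        }
    }
}
```

The `ERC20Error` conversion would move into `crates/common/src/erc20.rs` the same way, leaving `retry.rs` with no dependency edges on either error type.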
pub async fn retry_with_backoff<T, F, Fut, E, ShouldRetry>(
    operation: F,
    max_attempts: usize,
    should_retry: ShouldRetry,
) -> Result<T, RetryError<E>>
where
    F: Fn() -> Fut,
    Fut: Future<Output = Result<T, E>>,
    ShouldRetry: Fn(&E) -> bool,
{
    if max_attempts == 0 {
        return Err(RetryError::InvalidMaxAttempts);
    }

    let backoff = ExponentialBuilder::default()
        .with_min_delay(Duration::from_millis(DEFAULT_BASE_DELAY_MILLIS))
        .with_max_times(max_attempts.saturating_sub(1));

    let retryable = || async { operation().await.map_err(RetryError::Operation) };

    retryable
        .retry(&backoff)
        .when(|error: &RetryError<E>| matches!(error, RetryError::Operation(err) if should_retry(err)))
        .await
}
🧹 Nitpick | 🔵 Trivial
Retry logic is correct; consider making backoff fully configurable
Implementation correctly bounds attempts and retries only on predicate‑approved errors. Expose base delay (and optional jitter/max delay) via config to avoid hard‑coding policy here.
Example:
-pub const DEFAULT_BASE_DELAY_MILLIS: u64 = 500;
+pub const DEFAULT_BASE_DELAY_MILLIS: u64 = 500; // keep as fallback
@@
-pub async fn retry_with_backoff<T, F, Fut, E, ShouldRetry>(
+pub async fn retry_with_backoff<T, F, Fut, E, ShouldRetry>(
operation: F,
max_attempts: usize,
should_retry: ShouldRetry,
) -> Result<T, RetryError<E>>
@@
- let backoff = ExponentialBuilder::default()
- .with_min_delay(Duration::from_millis(DEFAULT_BASE_DELAY_MILLIS))
+ let backoff = ExponentialBuilder::default()
+ // TODO: accept base delay (and maybe jitter/max delay) from a config parameter
+ .with_min_delay(Duration::from_millis(DEFAULT_BASE_DELAY_MILLIS))
        .with_max_times(max_attempts.saturating_sub(1));

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/common/src/retry.rs around lines 31 to 55, the backoff policy is
hard-coded (uses DEFAULT_BASE_DELAY_MILLIS and static ExponentialBuilder
settings); change the function signature to accept a configurable backoff
parameter or backoff config (e.g., base_delay, optional jitter/max_delay or an
ExponentialBuilder/Backoff instance) and use that instead of constructing
ExponentialBuilder with hard-coded values, ensuring max_attempts still bounds
retries and retaining the existing should_retry predicate behavior; if changing
the public API, provide an overloaded/helper function for backward compatibility
or clearly document the new parameter.
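One possible shape for a configurable backoff policy. The struct and field names below are assumptions for illustration, and jitter is left out of the delay computation to keep the sketch deterministic:

```rust
use std::time::Duration;

// Illustrative config replacing the hard-coded constant; not the crate's API.
#[derive(Debug, Clone)]
pub struct BackoffConfig {
    pub base_delay: Duration,
    pub max_delay: Option<Duration>,
    pub jitter: bool, // honored by the real backoff builder, not this sketch
}

impl Default for BackoffConfig {
    fn default() -> Self {
        Self {
            base_delay: Duration::from_millis(500),
            max_delay: None,
            jitter: false,
        }
    }
}

/// Delay before the given 1-based retry attempt: exponential doubling from
/// base_delay, clamped at max_delay when one is set.
pub fn delay_for_attempt(config: &BackoffConfig, attempt: u32) -> Duration {
    let exp = config
        .base_delay
        .saturating_mul(1u32 << attempt.saturating_sub(1).min(20));
    match config.max_delay {
        Some(max) => exp.min(max),
        None => exp,
    }
}
```

`retry_with_backoff` could then take a `&BackoffConfig` (or keep the current signature and add a `*_with_config` variant for backward compatibility) and feed these values into `ExponentialBuilder`.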
#[cfg(test)]
mod tests {
    use super::*;
    use std::sync::atomic::{AtomicUsize, Ordering};

    #[derive(Debug)]
    enum TestError {
        Rpc,
        Json,
    }

    #[tokio::test]
    async fn retries_and_succeeds_after_transient_error() {
        let attempts = AtomicUsize::new(0);
        let result = retry_with_backoff(
            || async {
                let current = attempts.fetch_add(1, Ordering::SeqCst);
                if current == 0 {
                    Err(TestError::Rpc)
                } else {
                    Ok(42u32)
                }
            },
            3,
            |err| matches!(err, TestError::Rpc),
        )
        .await;

        assert!(result.is_ok());
        assert_eq!(result.unwrap(), 42);
        assert_eq!(attempts.load(Ordering::SeqCst), 2);
    }

    #[tokio::test]
    async fn stops_after_max_attempts() {
        let attempts = AtomicUsize::new(0);
        let err = retry_with_backoff(
            || async {
                attempts.fetch_add(1, Ordering::SeqCst);
                Err::<(), _>(TestError::Rpc)
            },
            2,
            |err| matches!(err, TestError::Rpc),
        )
        .await
        .unwrap_err();

        match err {
            RetryError::Operation(TestError::Rpc) => {}
            other => panic!("expected Rpc error, got {other:?}"),
        }
        assert_eq!(attempts.load(Ordering::SeqCst), 2);
    }

    #[tokio::test]
    async fn does_not_retry_non_retryable_error() {
        let attempts = AtomicUsize::new(0);
        let err = retry_with_backoff(
            || async {
                attempts.fetch_add(1, Ordering::SeqCst);
                Err::<(), _>(TestError::Json)
            },
            3,
            |err| matches!(err, TestError::Rpc),
        )
        .await
        .unwrap_err();

        match err {
            RetryError::Operation(TestError::Json) => {}
            other => panic!("expected Json error, got {other:?}"),
        }
        assert_eq!(attempts.load(Ordering::SeqCst), 1);
    }

    #[tokio::test]
    async fn zero_attempts_is_config_error() {
        let attempts = AtomicUsize::new(0);
        let err = retry_with_backoff(
            || async {
                attempts.fetch_add(1, Ordering::SeqCst);
                Ok::<u32, TestError>(1)
            },
            0,
            |_err| true,
        )
        .await
        .unwrap_err();

        match err {
            RetryError::InvalidMaxAttempts => {}
            other => panic!("expected config error, got {other:?}"),
        }
        assert_eq!(attempts.load(Ordering::SeqCst), 0);
    }
}
🧹 Nitpick | 🔵 Trivial
Tests cover key paths — add a 1‑attempt case
Nice coverage for transient, max attempts, non‑retryable, and config error. Add a test for max_attempts = 1 to lock in “no retry” behavior.
#[tokio::test]
async fn one_attempt_no_retry() {
let attempts = std::sync::atomic::AtomicUsize::new(0);
let err = retry_with_backoff(
|| async {
attempts.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
Err::<(), _>(TestError::Rpc)
},
1,
|e| matches!(e, TestError::Rpc),
).await.unwrap_err();
    assert!(matches!(err, RetryError::Operation(TestError::Rpc)));
assert_eq!(attempts.load(std::sync::atomic::Ordering::SeqCst), 1);
}🤖 Prompt for AI Agents
In crates/common/src/retry.rs around lines 57-152, add a new #[tokio::test]
one_attempt_no_retry that uses the existing TestError and AtomicUsize setup to
verify max_attempts = 1 results in no retry: call retry_with_backoff with an
async closure that increments the counter and returns Err::<(),
_>(TestError::Rpc), set max attempts to 1 and the retry predicate to match Rpc,
assert the returned error is RetryError::Operation(TestError::Rpc) and that the
attempts counter equals 1; place the test inside the existing #[cfg(test)] mod
tests block alongside the other tests.
@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment.

S/M/L PR Classification Guidelines
This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)
Characteristics: …
Review Effort: Would have taken 5-10 minutes
Examples: …

Medium (M)
Characteristics: …
Review Effort: Would have taken 15-30 minutes
Examples: …

Large (L)
Characteristics: …
Review Effort: Would have taken 45+ minutes
Examples: …

Additional Factors to Consider
When deciding between sizes, also consider: …

Notes: …
SIZE=M
Motivation
See issues:
Solution
Updated the fetch.rs and token_fetch.rs files to use the new module
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
Refactor
Chores