
Refactor fetch logic for local db implementation#2257

Merged
findolor merged 14 commits into main from local-db-fetch-refactor
Oct 28, 2025

Conversation

@findolor
Collaborator

@findolor findolor commented Oct 21, 2025

Motivation

See issues:

Solution

  • Create a shared collect_logs function that fetches events based on addresses, topics, and a fetch config
  • Move topic definitions to the bindings crate
  • Create fetch_orderbook_events and fetch_store_events functions that call the shared function
  • Refactor the RPC client struct to use the new typed values and refactor the event-fetching logic
  • Refactor call sites to use the new functions
  • Update tests
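The shape of the refactor above can be sketched in isolation. This is a hypothetical, std-only illustration: the type definitions, event names, and the in-memory stand-in for the RPC round trip are placeholders, not the crate's actual code.

```rust
// Hypothetical sketch: collect_logs is the single shared entry point, and the
// two public wrappers differ only in the addresses and topics they pass in.
#[derive(Debug, Clone)]
pub struct FetchConfig {
    pub max_retry_attempts: u32,
}

impl Default for FetchConfig {
    fn default() -> Self {
        Self { max_retry_attempts: 3 }
    }
}

// Stand-in for the RPC round trip: emit one "log" per (address, topic) pair.
fn collect_logs(addresses: &[&str], topics: &[&str], _config: &FetchConfig) -> Vec<String> {
    addresses
        .iter()
        .flat_map(|addr| topics.iter().map(move |topic| format!("{addr}:{topic}")))
        .collect()
}

// Thin wrapper: one orderbook address, orderbook event topics (names illustrative).
fn fetch_orderbook_events(orderbook: &str, config: &FetchConfig) -> Vec<String> {
    collect_logs(&[orderbook], &["AddOrderV2", "TakeOrderV2"], config)
}

// Thin wrapper: many store addresses, a single Set topic.
fn fetch_store_events(stores: &[&str], config: &FetchConfig) -> Vec<String> {
    collect_logs(stores, &["Set"], config)
}

fn main() {
    let cfg = FetchConfig::default();
    println!("{:?}", fetch_orderbook_events("0xob", &cfg));
    println!("{:?}", fetch_store_events(&["0xs1", "0xs2"], &cfg));
}
```

The point of the design is visible even in the toy version: call sites never duplicate the fetch/retry plumbing, they only choose addresses and topics.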

Checks

By submitting this for review, I'm confirming I've done the following:

  • made this PR as small as possible
  • unit-tested any new functionality
  • linked any relevant issues or PRs
  • included screenshots (if this involves a front-end change)

Summary by CodeRabbit

Release Notes

  • Refactor

    • Reworked event log fetching to use typed, filter-based queries for more reliable and flexible retrieval.
    • Simplified RPC log API to accept structured filters.
  • New Features

    • CLI and sync flows now validate/parse addresses before fetching and apply a sensible default fetch configuration.
  • Bug Fixes

    • Improved error messages and validation to prevent invalid block-range and filter errors.

@findolor findolor requested review from 0xgleb and hardyjosh October 21, 2025 10:03
@findolor findolor self-assigned this Oct 21, 2025
@findolor
Collaborator Author

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Oct 21, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai
Contributor

coderabbitai Bot commented Oct 21, 2025

Walkthrough

Adds a public topics module exposing precomputed signature hashes for orderbook and store-set events, and refactors log fetching onto typed Address/Filter/FetchConfig-based APIs: it introduces fetch_orderbook_events and fetch_store_events, updates the RPC client to accept an alloy Filter, and updates CLI/sync call sites to parse addresses and pass a default FetchConfig.

Changes

Cohort / File(s) Summary
Bindings: topics
crates/bindings/src/lib.rs
Added pub mod topics that re-exports event types and exposes ORDERBOOK_EVENT_TOPICS: [B256; 8] and STORE_SET_TOPICS: [B256; 1] initialized from signature hashes.
Common: fetch core
crates/common/src/local_db/fetch.rs
Replaced per-block ad-hoc fetching with Filter-based flow; added FetchConfig (with Default), build_log_filters, fetch_logs_for_filters, fetch_orderbook_events(Address, ...) and fetch_store_events(&[Address], ...); removed old fetch_events variants and store-job machinery; switched public APIs to use typed Address.
Common: sync & errors
crates/common/src/local_db/sync.rs, crates/common/src/local_db/mod.rs
Updated sync paths to call fetch_orderbook_events / fetch_store_events with parsed Address values and default FetchConfig; added new LocalDbError variants for missing/invalid filter fields and block errors; added conversions from filter errors.
Common: rpc client
crates/common/src/rpc_client.rs
Changed get_logs to accept a single Filter reference and serialize it as the RPC param; added InvalidBlockRange { start, end } error variant; adapted tests to typed Filter.
CLI: fetch command
crates/cli/src/commands/local_db/fetch_events.rs
Parse orderbook address via Address::from_str, pass numeric block bounds and &FetchConfig::default() to new fetch_orderbook_events.
CLI: sync data source
crates/cli/src/commands/local_db/sync/data_source.rs
Parse orderbook and store addresses via Address::from_str, collect parsed store Vec<Address> (with context on parse failure), and call fetch_orderbook_events / fetch_store_events with FetchConfig::default().
Tests / manifests
crates/common/tests/*, Cargo.toml, manifest_file
Updated tests to use Address/Filter/typed topics and adjusted manifests as needed.

Sequence Diagram(s)

sequenceDiagram
  participant CLI as CLI (fetch/sync)
  participant LocalDb as LocalDb.fetch_*
  participant RPC as RpcClient
  participant Node as Ethereum Node

  CLI->>LocalDb: parse addresses (Address::from_str)\ncall fetch_orderbook_events(address, from, to, &FetchConfig::default())
  LocalDb->>LocalDb: build_log_filters(addresses, from, to, topics)
  LocalDb->>RPC: get_logs(&Filter)
  RPC->>Node: eth_getLogs with serialized Filter
  Node-->>RPC: logs []
  RPC-->>LocalDb: Vec<LogEntryResponse>
  LocalDb->>LocalDb: backfill timestamps / sort / map events
  LocalDb-->>CLI: Vec<LogEntryResponse>

Note: Filter and FetchConfig are new/central to this flow; topics used in Filter are provided by bindings::topics.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Areas to focus on:

  • Correctness of new public APIs (fetch_orderbook_events, fetch_store_events) and their error paths.
  • RPC serialization of Filter and handling of block-range validation (InvalidBlockRange).
  • All call-site updates (CLI, sync) to ensure address parsing errors are surfaced with context.
  • Tests that were updated to typed Address/Filter semantics.


Suggested reviewers

  • hardyjosh
  • 0xgleb

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 54.05%, below the required 80.00% threshold. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped: CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed — The title "Refactor fetch logic for local db implementation" accurately describes the primary change: a comprehensive refactor of the local db fetch logic that replaces per-block, per-contract log collection with a Filter-based approach, introduces the shared collect_logs function, and updates the public fetch_orderbook_events and fetch_store_events APIs. It is concise, specific, and clear to a teammate scanning commit history.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch local-db-fetch-refactor

Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
crates/common/src/local_db/sync.rs (1)

71-79: Guard already-synced case and avoid ? type-mismatch on BlockRange

If start_block > latest_block (e.g., DB drift), BlockRange::inclusive errors; better short‑circuit and log. Also, using ? here requires From<RpcClientError> for LocalDbError. Map explicitly to avoid a compile error.

Apply:

     let start_block = if last_synced_block == 0 {
         orderbook_cfg.deployment_block
     } else {
         last_synced_block.saturating_add(1)
     };
 
-    let range = BlockRange::inclusive(start_block, latest_block)?;
+    if start_block > latest_block {
+        status.send(format!(
+            "Already up-to-date: start_block {} > latest_block {}",
+            start_block, latest_block
+        ))?;
+        return Ok(());
+    }
+    let range = BlockRange::inclusive(start_block, latest_block)
+        .map_err(|e| LocalDbError::CustomError(e.to_string()))?;
crates/cli/src/commands/local_db/sync/data_source.rs (1)

18-32: Naming drift: consider renaming fetch_store_set_events -> fetch_store_events

The impl now calls fetch_store_events. Aligning the trait method name will reduce confusion.

crates/common/src/rpc_client.rs (1)

236-261: Optional: extract RPC error message string if present

map_transport_error currently stringifies the whole error object with to_string(). If the JSON payload contains error.message, prefer that for cleaner logs.

crates/common/src/local_db/fetch.rs (3)

226-259: Avoid cloning Topics per job (micro‑opt)

Cloning Topics for each job is minor but avoidable. Wrap once in Arc and clone the Arc inside the task.

-    async fn fetch_logs_for_jobs(
+    async fn fetch_logs_for_jobs(
         &self,
-        jobs: Vec<LogFetchJob>,
-        topics: &Topics,
+        jobs: Vec<LogFetchJob>,
+        topics: &Topics,
         config: &FetchConfig,
     ) -> Result<Vec<LogEntryResponse>, LocalDbError> {
@@
-        let results: Vec<Vec<LogEntryResponse>> = futures::stream::iter(jobs)
+        let topics = std::sync::Arc::new(topics.clone());
+        let results: Vec<Vec<LogEntryResponse>> = futures::stream::iter(jobs)
             .map(|job| {
-                let topics = topics.clone();
+                let topics = topics.clone(); // Arc<Topics>
                 let client = client.clone();
                 let max_attempts = config.max_retry_attempts;
                 async move {
                     let response = retry_with_attempts(
-                        || client.get_logs(job.address, &topics, job.range),
+                        || client.get_logs(job.address, &*topics, job.range),
                         max_attempts,
                     )

261-269: Use sort_unstable_by for slightly cheaper sorting

No need to preserve relative order of equal keys; unstable sort is marginally faster and uses fewer moves.

-        events.sort_by(|a, b| {
+        events.sort_unstable_by(|a, b| {

280-310: Backoff: add jitter to reduce coordinated retries

Constant delay can synchronize callers across pods. Prefer jittered exponential (or at least jittered constant) and keep JSON parse as non‑retryable.

-    let backoff = ConstantBuilder::default()
-        .with_delay(Duration::from_millis(RETRY_DELAY_MILLIS))
-        .with_max_times(max_attempts.saturating_sub(1));
+    let backoff = backon::ExponentialBuilder::default()
+        .with_initial(Duration::from_millis(RETRY_DELAY_MILLIS))
+        .with_jitter() // randomize to avoid thundering herd
+        .with_max_times(max_attempts.saturating_sub(1));
📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 88a8894 and 977f8d9.

📒 Files selected for processing (6)
  • crates/bindings/src/lib.rs (1 hunks)
  • crates/cli/src/commands/local_db/fetch_events.rs (2 hunks)
  • crates/cli/src/commands/local_db/sync/data_source.rs (3 hunks)
  • crates/common/src/local_db/fetch.rs (13 hunks)
  • crates/common/src/local_db/sync.rs (3 hunks)
  • crates/common/src/rpc_client.rs (8 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-10-18T10:38:41.273Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2237
File: crates/common/src/raindex_client/local_db/sync.rs:79-89
Timestamp: 2025-10-18T10:38:41.273Z
Learning: In `crates/common/src/raindex_client/local_db/sync.rs`, the sync_database method currently only supports indexing a single orderbook per chain ID, which is why `.first()` is used to select the orderbook configuration. Multi-orderbook support per chain ID is planned for future PRs.

Applied to files:

  • crates/common/src/local_db/sync.rs
📚 Learning: 2025-10-06T11:13:29.956Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2123
File: crates/common/src/raindex_client/local_db/mod.rs:23-29
Timestamp: 2025-10-06T11:13:29.956Z
Learning: In `crates/common/src/raindex_client/local_db/mod.rs`, the `Default` implementation for `LocalDb` that creates an RPC client pointing to `http://localhost:4444` is acceptable because the RPC client must be explicitly configured before actual usage in production scenarios.

Applied to files:

  • crates/common/src/local_db/fetch.rs
📚 Learning: 2025-05-20T10:20:08.206Z
Learnt from: 0xgleb
PR: rainlanguage/rain.orderbook#1859
File: crates/quote/src/quote_debug.rs:472-492
Timestamp: 2025-05-20T10:20:08.206Z
Learning: In the Rain Orderbook codebase, the `#[tokio::test(flavor = "multi_thread")]` annotation is specifically needed for tests that use `LocalEvm`, not just for consistency across all async tests.

Applied to files:

  • crates/common/src/local_db/fetch.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: test
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-static)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-legal)
  • GitHub Check: standard-tests (ubuntu-latest, test-js-bindings)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-artifacts)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-wasm-test)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-sol-test)
  • GitHub Check: standard-tests (ubuntu-latest, ob-rs-test, true)
  • GitHub Check: standard-tests (ubuntu-latest, rainix-rs-artifacts, true)
  • GitHub Check: git-clean
  • GitHub Check: build-tauri (ubuntu-22.04, true)
  • GitHub Check: Deploy-Docs-Preview
  • GitHub Check: Deploy-Preview-Push
🔇 Additional comments (7)
crates/common/src/rpc_client.rs (2)

90-111: Topics wrapper looks good

Shape matches eth_getLogs expectations; hex encoding via hex::encode_prefixed is correct. No issues.


113-133: BlockRange API is clear; inclusive constructor enforces order

as_json lowercase 0x hex aligns with JSON‑RPC. Good.
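The constructor-enforced ordering plus lowercase hex encoding praised here can be sketched in isolation. Field names, the error type, and as_json's return shape are illustrative placeholders, not the crate's real definitions.

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct BlockRange {
    pub start: u64,
    pub end: u64,
}

impl BlockRange {
    // The constructor enforces start <= end, so an invalid range can never
    // reach the RPC layer.
    pub fn inclusive(start: u64, end: u64) -> Result<Self, String> {
        if start > end {
            return Err(format!("invalid block range: {start} > {end}"));
        }
        Ok(Self { start, end })
    }

    // Lowercase 0x-prefixed hex, matching what eth_getLogs expects for
    // fromBlock/toBlock quantities.
    pub fn as_json(&self) -> (String, String) {
        (format!("{:#x}", self.start), format!("{:#x}", self.end))
    }
}

fn main() {
    let range = BlockRange::inclusive(16, 255).unwrap();
    println!("{:?}", range.as_json()); // ("0x10", "0xff")
    assert!(BlockRange::inclusive(10, 5).is_err());
}
```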

crates/common/src/local_db/sync.rs (1)

81-84: No issues found—code is correct.

OrderbookCfg.address is already of type Address (line 28 in crates/settings/src/orderbook.rs), which matches the parameter type expected by fetch_orderbook_events. No parsing or error mapping is needed.

crates/common/src/local_db/fetch.rs (4)

38-78: Shared fetch path and wrappers look solid

Good guards (inverted range, empty addresses), stable-order dedupe, unified job building, and consistent sort+timestamp backfill. The orderbook/store wrappers are thin and clear. LGTM.

Also applies to: 80-106


187-194: Dedupe preserves first occurrence order

Using a HashSet inside filter preserves input order for first occurrences while removing non-consecutive dupes. This is the right trade-off for predictable RPC filtering. LGTM.
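The first-occurrence dedupe pattern described here looks roughly like the following in isolation. This is a generic, std-only sketch over string addresses, not the crate's exact code.

```rust
use std::collections::HashSet;

// HashSet::insert returns false for values already seen, so using it inside
// `filter` drops duplicates (even non-consecutive ones) while preserving the
// order of first occurrences.
fn dedupe_addresses<'a>(addresses: &[&'a str]) -> Vec<&'a str> {
    let mut seen = HashSet::new();
    addresses
        .iter()
        .copied()
        .filter(|addr| seen.insert(*addr))
        .collect()
}

fn main() {
    let input = ["0xaa", "0xbb", "0xaa", "0xcc", "0xbb"];
    println!("{:?}", dedupe_addresses(&input)); // ["0xaa", "0xbb", "0xcc"]
}
```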


316-341: Block/log number parsing is robust

Hex/decimal variants handled, invalid inputs mapped to InvalidBlockNumber, and U256→u64 fits Ethereum ranges. LGTM.
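The hex/decimal tolerance described here can be sketched as follows. The function name and error type are placeholders, and the real code goes through U256 rather than parsing directly to u64.

```rust
// Accept either "0x"-prefixed hex or plain decimal block numbers, mapping any
// parse failure to one descriptive error (stand-in for InvalidBlockNumber).
fn parse_block_number(raw: &str) -> Result<u64, String> {
    let parsed = match raw.strip_prefix("0x") {
        Some(hex) => u64::from_str_radix(hex, 16),
        None => raw.parse::<u64>(),
    };
    parsed.map_err(|e| format!("invalid block number {raw:?}: {e}"))
}

fn main() {
    assert_eq!(parse_block_number("0x10"), Ok(16));
    assert_eq!(parse_block_number("16"), Ok(16));
    assert!(parse_block_number("abc").is_err());
}
```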


663-758: Tests: great coverage on sorting, chunking, dedupe, and retry semantics

  • Numeric logIndex sorting within a block is verified.
  • Per‑block chunking merges/sorts results as expected.
  • Dedupe reduces RPC calls (asserting single hit).
    As per prior learnings, using LocalDb::default in tests is acceptable. LGTM. [Based on learnings]

Also applies to: 1002-1093

Comment on lines +34 to 40
self.fetch_orderbook_events(
Address::from_str(address)?,
BlockRange::inclusive(start_block, end_block)?,
&FetchConfig::default(),
)
.await
}
Contributor

@coderabbitai coderabbitai Bot Oct 21, 2025


⚠️ Potential issue | 🔴 Critical

Avoid implicit ? conversions; map to LocalDbError with user-friendly messages

Address::from_str and BlockRange::inclusive return non‑LocalDbError types. Using ? likely won’t compile unless From exists. Map them to LocalDbError::Config (used elsewhere in CLI) and keep a helpful message.

-        self.fetch_orderbook_events(
-            Address::from_str(address)?,
-            BlockRange::inclusive(start_block, end_block)?,
-            &FetchConfig::default(),
-        )
-        .await
+        let addr = Address::from_str(address).map_err(|e| LocalDbError::Config {
+            message: format!("invalid orderbook address: {e}"),
+        })?;
+        let range = BlockRange::inclusive(start_block, end_block).map_err(|e| {
+            LocalDbError::Config {
+                message: e.to_string(),
+            }
+        })?;
+        self.fetch_orderbook_events(addr, range, &FetchConfig::default())
+            .await
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/fetch_events.rs around lines 34 to 40, the
code uses `?` on `Address::from_str` and `BlockRange::inclusive` which yield
error types that are not `LocalDbError`, so map their errors to
`LocalDbError::Config` with clear, user-friendly messages. Convert the address
and block range before calling `self.fetch_orderbook_events`, using
`.map_err(|e| LocalDbError::Config(format!("invalid address '{}': {}", address,
e)))` for the address and `.map_err(|e| LocalDbError::Config(format!("invalid
block range {}-{}: {}", start_block, end_block, e)))` for the block range so the
function returns `LocalDbError::Config` on failure and then pass the resulting
`Address` and `BlockRange` into `self.fetch_orderbook_events(...).await`.

Collaborator Author


it does compile so no need to do this

Contributor


@findolor, understood! If the From implementations are in place and the code compiles cleanly, then the current approach is fine. I'll note that for future reference.

Comment on lines +88 to 96
<LocalDb>::fetch_orderbook_events(
self,
Address::from_str(orderbook_address)?,
BlockRange::inclusive(start_block, end_block)?,
&FetchConfig::default(),
)
.await
.map_err(|e| anyhow!(e))
}
Contributor

@coderabbitai coderabbitai Bot Oct 21, 2025


🧹 Nitpick | 🔵 Trivial

Add context to address/range parsing errors

Use anyhow::Context to make errors actionable (which address/range failed).

-        <LocalDb>::fetch_orderbook_events(
-            self,
-            Address::from_str(orderbook_address)?,
-            BlockRange::inclusive(start_block, end_block)?,
+        use anyhow::Context as _;
+        let addr = Address::from_str(orderbook_address)
+            .with_context(|| format!("invalid orderbook address: {orderbook_address}"))?;
+        let range = BlockRange::inclusive(start_block, end_block)
+            .with_context(|| format!("invalid block range: {}..={}", start_block, end_block))?;
+        <LocalDb>::fetch_orderbook_events(
+            self,
+            addr,
+            range,
             &FetchConfig::default(),
         )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
<LocalDb>::fetch_orderbook_events(
self,
Address::from_str(orderbook_address)?,
BlockRange::inclusive(start_block, end_block)?,
&FetchConfig::default(),
)
.await
.map_err(|e| anyhow!(e))
}
use anyhow::Context as _;
let addr = Address::from_str(orderbook_address)
.with_context(|| format!("invalid orderbook address: {orderbook_address}"))?;
let range = BlockRange::inclusive(start_block, end_block)
.with_context(|| format!("invalid block range: {}..={}", start_block, end_block))?;
<LocalDb>::fetch_orderbook_events(
self,
addr,
range,
&FetchConfig::default(),
)
.await
.map_err(|e| anyhow!(e))
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/data_source.rs around lines 88 to 96,
the Address::from_str and BlockRange::inclusive calls lack contextual error
messages; use anyhow::Context to attach which address or range failed by
replacing the fallible calls with .with_context or .context closures (e.g.,
Address::from_str(orderbook_address).with_context(|| format!("parsing orderbook
address: {}", orderbook_address)) and BlockRange::inclusive(start_block,
end_block).with_context(|| format!("parsing block range {}..{}", start_block,
end_block))), ensure anyhow::Context is imported, and keep the existing
fetch_orderbook_events await flow (you may optionally also add context to the
fetch error with .context("fetching orderbook events for ...")).

Collaborator Author


not needed

Contributor


@findolor, understood! I'll skip this suggestion.


🧠 Learnings used
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2159
File: crates/cli/src/commands/local_db/sync/cli.rs:37-78
Timestamp: 2025-10-06T14:39:36.425Z
Learning: In the SyncLocalDb CLI command (crates/cli/src/commands/local_db/sync/cli.rs), block range validation (e.g., start_block <= end_block) is handled internally by the sync runner, not at the CLI entry point.

Learnt from: findolor
PR: rainlanguage/rain.orderbook#2237
File: crates/common/src/raindex_client/local_db/sync.rs:79-89
Timestamp: 2025-10-18T10:38:41.273Z
Learning: In `crates/common/src/raindex_client/local_db/sync.rs`, the sync_database method currently only supports indexing a single orderbook per chain ID, which is why `.first()` is used to select the orderbook configuration. Multi-orderbook support per chain ID is planned for future PRs.

Comment thread crates/cli/src/commands/local_db/sync/data_source.rs
Comment thread crates/common/src/local_db/fetch.rs Outdated
Comment on lines +95 to 99
let store_addresses: Vec<Address> = store_addresses_vec
.iter()
.map(|s| Address::from_str(s))
.collect::<Result<_, _>>()?;

Contributor

@coderabbitai coderabbitai Bot Oct 21, 2025


⚠️ Potential issue | 🟠 Major

Map store address parse errors into LocalDbError

The ? here may not convert into LocalDbError. Provide a clear error and avoid relying on a blanket From.

-    let store_addresses: Vec<Address> = store_addresses_vec
-        .iter()
-        .map(|s| Address::from_str(s))
-        .collect::<Result<_, _>>()?;
+    let store_addresses: Vec<Address> = store_addresses_vec
+        .iter()
+        .map(|s| Address::from_str(s).map_err(|e| {
+            LocalDbError::CustomError(format!("invalid store address {}: {}", s, e))
+        }))
+        .collect::<Result<_, _>>()?;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let store_addresses: Vec<Address> = store_addresses_vec
.iter()
.map(|s| Address::from_str(s))
.collect::<Result<_, _>>()?;
let store_addresses: Vec<Address> = store_addresses_vec
.iter()
.map(|s| Address::from_str(s).map_err(|e| {
LocalDbError::CustomError(format!("invalid store address {}: {}", s, e))
}))
.collect::<Result<_, _>>()?;
🤖 Prompt for AI Agents
In crates/common/src/local_db/sync.rs around lines 95–99, the current use of
Address::from_str(s) with a trailing ? may not convert parsing errors into
LocalDbError; change the mapping so that each Address::from_str call maps parse
failures into a LocalDbError with contextual info (e.g., include the offending
string) before collecting: use .map(|s| Address::from_str(s).map_err(|e|
LocalDbError::InvalidStoreAddress { source: e, addr: s.clone() })) and then
collect::<Result<Vec<Address>, _>>()? so the final error is a LocalDbError
instead of relying on a blanket From conversion.

Collaborator Author


no need to do this

Contributor


@findolor, understood! I'll leave the current implementation as-is.

Comment thread crates/common/src/rpc_client.rs Outdated
Comment thread crates/common/src/local_db/fetch.rs Outdated
};

#[derive(Debug, Clone)]
struct LogFilter {
Contributor


Comment thread crates/common/src/local_db/fetch.rs Outdated
break;
}
current_block = to_block.saturating_add(1);
if filter.range.start > filter.range.end {
Contributor


If we were using the alloy type, it comes with ensure_valid_block_range.

Comment thread crates/common/src/local_db/fetch.rs Outdated
}

let unique_addresses = Self::dedupe_addresses(store_addresses);
let unique_addresses = Self::dedupe_addresses(&filter.addresses);
Contributor


The alloy struct uses a HashSet, so it wouldn't have dupes.

Comment thread crates/common/src/rpc_client.rs Outdated
}

#[derive(Debug, Clone)]
pub struct Topics(pub Option<Vec<Option<Vec<B256>>>>);
Contributor


If we use the alloy Filter struct, does it handle serialisation for us?

@findolor findolor requested a review from hardyjosh October 24, 2025 07:50
Base automatically changed from local-db-sql-statement to main October 28, 2025 05:26
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 977f8d9 and 182196c.

📒 Files selected for processing (7)
  • crates/bindings/src/lib.rs (1 hunks)
  • crates/cli/src/commands/local_db/fetch_events.rs (2 hunks)
  • crates/cli/src/commands/local_db/sync/data_source.rs (4 hunks)
  • crates/common/src/local_db/fetch.rs (11 hunks)
  • crates/common/src/local_db/mod.rs (3 hunks)
  • crates/common/src/local_db/sync.rs (2 hunks)
  • crates/common/src/rpc_client.rs (5 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
crates/*/{src,tests,benches,examples}/**/*.rs

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Rust code must pass clippy with all warnings denied (cargo clippy --workspace --all-targets --all-features -D warnings)

Files:

  • crates/cli/src/commands/local_db/fetch_events.rs
  • crates/common/src/local_db/sync.rs
  • crates/common/src/local_db/mod.rs
  • crates/bindings/src/lib.rs
  • crates/common/src/local_db/fetch.rs
  • crates/cli/src/commands/local_db/sync/data_source.rs
  • crates/common/src/rpc_client.rs
crates/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)

Files:

  • crates/cli/src/commands/local_db/fetch_events.rs
  • crates/common/src/local_db/sync.rs
  • crates/common/src/local_db/mod.rs
  • crates/bindings/src/lib.rs
  • crates/common/src/local_db/fetch.rs
  • crates/cli/src/commands/local_db/sync/data_source.rs
  • crates/common/src/rpc_client.rs
**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/*.rs: Format Rust with nix develop -c cargo fmt --all
Lint Rust with nix develop -c rainix-rs-static (preconfigured flags)
Use snake_case for Rust crates/modules and PascalCase for types

Files:

  • crates/cli/src/commands/local_db/fetch_events.rs
  • crates/common/src/local_db/sync.rs
  • crates/common/src/local_db/mod.rs
  • crates/bindings/src/lib.rs
  • crates/common/src/local_db/fetch.rs
  • crates/cli/src/commands/local_db/sync/data_source.rs
  • crates/common/src/rpc_client.rs
🧠 Learnings (6)
📚 Learning: 2025-10-14T07:51:55.148Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2202
File: crates/common/src/raindex_client/local_db/sync.rs:33-34
Timestamp: 2025-10-14T07:51:55.148Z
Learning: In `crates/common/src/raindex_client/local_db/sync.rs`, the hard-coded `DEFAULT_SYNC_CHAIN_ID` constant (set to `SUPPORTED_LOCAL_DB_CHAINS[0]`) will be replaced with proper chain ID handling in downstream PRs as part of the multi-network/orderbook implementation.

Applied to files:

  • crates/common/src/local_db/sync.rs
📚 Learning: 2025-10-18T10:38:41.273Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2237
File: crates/common/src/raindex_client/local_db/sync.rs:79-89
Timestamp: 2025-10-18T10:38:41.273Z
Learning: In `crates/common/src/raindex_client/local_db/sync.rs`, the sync_database method currently only supports indexing a single orderbook per chain ID, which is why `.first()` is used to select the orderbook configuration. Multi-orderbook support per chain ID is planned for future PRs.

Applied to files:

  • crates/common/src/local_db/sync.rs
  • crates/common/src/local_db/fetch.rs
  • crates/cli/src/commands/local_db/sync/data_source.rs
📚 Learning: 2025-07-09T07:42:48.458Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1975
File: crates/js_api/src/config.rs:44-45
Timestamp: 2025-07-09T07:42:48.458Z
Learning: User findolor prefers to defer code improvements and refactoring suggestions to future PRs when they are not directly related to the current PR's scope, even when the suggestions are valid improvements.

Applied to files:

  • crates/common/src/local_db/sync.rs
  • crates/common/src/rpc_client.rs
📚 Learning: 2025-05-14T05:13:59.713Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1744
File: crates/subgraph/src/orderbook_client.rs:44-46
Timestamp: 2025-05-14T05:13:59.713Z
Learning: In the rain.orderbook project, WASM dependencies are intentionally made available in non-WASM targets to facilitate testing of WASM-related functionality, so conditional compilation guards like `#[cfg(target_family = "wasm")]` should not be added to imports or implementations that may be needed for tests.

Applied to files:

  • crates/bindings/src/lib.rs
📚 Learning: 2025-10-06T11:13:29.956Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2123
File: crates/common/src/raindex_client/local_db/mod.rs:23-29
Timestamp: 2025-10-06T11:13:29.956Z
Learning: In `crates/common/src/raindex_client/local_db/mod.rs`, the `Default` implementation for `LocalDb` that creates an RPC client pointing to `http://localhost:4444` is acceptable because the RPC client must be explicitly configured before actual usage in production scenarios.

Applied to files:

  • crates/common/src/local_db/fetch.rs
📚 Learning: 2025-10-06T14:39:36.425Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2159
File: crates/cli/src/commands/local_db/sync/cli.rs:37-78
Timestamp: 2025-10-06T14:39:36.425Z
Learning: In the SyncLocalDb CLI command (crates/cli/src/commands/local_db/sync/cli.rs), block range validation (e.g., start_block <= end_block) is handled internally by the sync runner, not at the CLI entry point.

Applied to files:

  • crates/cli/src/commands/local_db/sync/data_source.rs
🧬 Code graph analysis (5)
crates/cli/src/commands/local_db/fetch_events.rs (1)
crates/common/src/local_db/mod.rs (2)
  • rpc_client (195-197)
  • default (34-38)
crates/common/src/local_db/sync.rs (1)
crates/cli/src/commands/local_db/sync/data_source.rs (1)
  • store_addresses (105-113)
crates/common/src/local_db/mod.rs (1)
crates/common/src/raindex_client/local_db/mod.rs (1)
  • from (11-16)
crates/common/src/local_db/fetch.rs (2)
crates/common/src/local_db/mod.rs (3)
  • new (191-193)
  • default (34-38)
  • new_with_regular_rpc (177-179)
crates/common/src/rpc_client.rs (1)
  • sample_block_response (253-281)
crates/cli/src/commands/local_db/sync/data_source.rs (1)
crates/common/src/local_db/fetch.rs (3)
  • fetch_orderbook_events (58-71)
  • default (22-29)
  • fetch_store_events (73-86)
🔇 Additional comments (18)
crates/bindings/src/lib.rs (1)

48-59: LGTM! Const arrays avoid allocation overhead.

The compile-time constant arrays efficiently expose event signature hashes without per-call allocations, addressing the concern raised in the previous review.

crates/common/src/rpc_client.rs (3)

3-3: Typed Filter import is appropriate

Good move to typed filtering; keeps RPC params consistent with alloy’s model.


142-150: get_logs(&Filter) refactor looks solid

Passing the typed Filter and serializing as a single param matches eth_getLogs expectations; error mapping is consistent with the rest of the client.


400-447: Tests assert wire shape effectively

Building a Filter via JSON and asserting body_contains for block range, address, and topic is fine and verifies serialization behavior.

crates/common/src/local_db/mod.rs (2)

12-12: Importing FilterBlockError is correct

Enables transparent propagation from alloy when validating filter blocks.


164-171: Readable messages align with variants

Human-friendly strings are clear and consistent with callers’ needs.

crates/cli/src/commands/local_db/fetch_events.rs (1)

34-41: CLI now calls typed API with Address + FetchConfig::default()

Refactor aligns with the new fetch_orderbook_events signature; straightforward and correct.

crates/common/src/local_db/sync.rs (2)

78-84: Orderbook fetch path updated to typed API

Good use of FetchConfig::default(); keeps sync behavior intact.


97-100: Store addresses parsed to Address before fetch

Approach is correct and keeps downstream filters typed.

crates/cli/src/commands/local_db/sync/data_source.rs (2)

88-97: Typed orderbook fetch with default config

Clean delegation to LocalDb::fetch_orderbook_events; error mapping via anyhow is fine.


105-117: Great: contextual errors for store address parsing

Including index and value via Context makes failures actionable; then calling fetch_store_events with typed addresses is correct.

crates/common/src/local_db/fetch.rs (7)

1-30: LGTM! Clean imports and well-structured configuration.

The FetchConfig struct provides good control over fetch behavior with sensible defaults.


33-56: Well-structured orchestration of the fetch pipeline.

The collect_logs method correctly validates the filter, handles edge cases (empty addresses), and properly sequences building filters, fetching logs, sorting, and backfilling timestamps.


88-165: LGTM! Efficient timestamp backfilling with proper concurrency.

The implementation correctly identifies missing timestamps, fetches them concurrently with appropriate retry logic, and backfills the events.


216-255: LGTM! Properly concurrent log fetching with correct sorting.

The concurrent fetching uses appropriate buffer_unordered patterns, and the sorting correctly handles numeric block/log indices.


257-320: LGTM! Robust retry logic with comprehensive block number parsing.

The retry mechanism correctly distinguishes transient RPC errors from permanent parse errors, and the block number parser handles both hex and decimal formats properly.


391-643: Excellent test coverage for public APIs.

The tests comprehensively cover happy paths (fetching, sorting, timestamp backfill), edge cases (empty addresses, duplicate addresses), error cases (inverted ranges), and correctness of numeric sorting within blocks.


645-1085: Comprehensive test coverage for internal helpers and edge cases.

The tests thoroughly verify retry behavior, parsing edge cases, chunking logic (including critical u64::MAX boundary), timestamp fetching error handling, and filter merging. The u64::MAX boundary test at lines 888-911 is particularly valuable for preventing potential infinite loops.

Comment on lines +38 to +60
pub mod topics {
use crate::{
IInterpreterStoreV3::Set,
IOrderBookV5::{
AddOrderV3, AfterClearV2, ClearV3, DepositV2, RemoveOrderV3, TakeOrderV3, WithdrawV2,
},
OrderBook::MetaV1_2,
};
use alloy::{primitives::B256, sol_types::SolEvent};

pub const ORDERBOOK_EVENT_TOPICS: [B256; 8] = [
AddOrderV3::SIGNATURE_HASH,
TakeOrderV3::SIGNATURE_HASH,
WithdrawV2::SIGNATURE_HASH,
DepositV2::SIGNATURE_HASH,
RemoveOrderV3::SIGNATURE_HASH,
ClearV3::SIGNATURE_HASH,
AfterClearV2::SIGNATURE_HASH,
MetaV1_2::SIGNATURE_HASH,
];

pub const STORE_SET_TOPICS: [B256; 1] = [Set::SIGNATURE_HASH];
}

🧹 Nitpick | 🔵 Trivial

Consider adding documentation for the public API.

The public constants would benefit from doc comments explaining their purpose and typical usage (e.g., "Event topic hashes for filtering orderbook events from RPC logs").

Apply this diff to add documentation:

+/// Event signature hashes for filtering orderbook contract events.
+///
+/// Use these topic hashes when constructing log filters for RPC queries
+/// to retrieve orderbook-related events.
 pub mod topics {
     use crate::{
         IInterpreterStoreV3::Set,
@@ -46,9 +50,13 @@
     use alloy::{primitives::B256, sol_types::SolEvent};
 
+    /// Signature hashes for all orderbook events: AddOrder, TakeOrder, Withdraw,
+    /// Deposit, RemoveOrder, Clear, AfterClear, and Meta.
     pub const ORDERBOOK_EVENT_TOPICS: [B256; 8] = [
         // ... existing array elements
     ];
 
+    /// Signature hash for the interpreter store Set event.
     pub const STORE_SET_TOPICS: [B256; 1] = [Set::SIGNATURE_HASH];
 }
🤖 Prompt for AI Agents
In crates/bindings/src/lib.rs around lines 38 to 60, the public constants
ORDERBOOK_EVENT_TOPICS and STORE_SET_TOPICS lack documentation; add Rust doc
comments (///) above each public constant explaining their purpose and typical
usage (e.g., "Event topic hashes for filtering orderbook events from RPC logs"
for ORDERBOOK_EVENT_TOPICS and "Topic hash for Store::Set events" for
STORE_SET_TOPICS), include brief notes on the type (B256 array) and how
consumers should use them (filtering RPC logs or matching event signatures), and
keep comments concise and idiomatic for lib-level public API docs.

Comment on lines +58 to +86
pub async fn fetch_orderbook_events(
&self,
address: Address,
from_block: u64,
to_block: u64,
config: &FetchConfig,
) -> Result<Vec<LogEntryResponse>, LocalDbError> {
let filter = Filter::new()
.address(address)
.from_block(from_block)
.to_block(to_block)
.event_signature(ORDERBOOK_EVENT_TOPICS.to_vec());
self.collect_logs(&filter, config).await
}

pub async fn fetch_store_events(
&self,
addresses: &[Address],
from_block: u64,
to_block: u64,
config: &FetchConfig,
) -> Result<Vec<LogEntryResponse>, LocalDbError> {
let filter = Filter::new()
.address(addresses.to_vec())
.from_block(from_block)
.to_block(to_block)
.event_signature(STORE_SET_TOPICS.to_vec());
self.collect_logs(&filter, config).await
}

🧹 Nitpick | 🔵 Trivial

Consider adding doc comments to public APIs.

The public methods fetch_orderbook_events and fetch_store_events would benefit from documentation describing their purpose, parameters, return values, and potential errors.

Example:

/// Fetches orderbook events for a given address within a block range.
///
/// # Arguments
/// * `address` - The orderbook contract address
/// * `from_block` - Starting block number (inclusive)
/// * `to_block` - Ending block number (inclusive)
/// * `config` - Fetch configuration controlling chunking and concurrency
///
/// # Returns
/// A sorted vector of log entries, with timestamps backfilled where missing.
///
/// # Errors
/// Returns an error if the block range is invalid or if RPC requests fail.
pub async fn fetch_orderbook_events(
    &self,
    address: Address,
    from_block: u64,
    to_block: u64,
    config: &FetchConfig,
) -> Result<Vec<LogEntryResponse>, LocalDbError> {
🤖 Prompt for AI Agents
crates/common/src/local_db/fetch.rs lines 58-86: The two public async functions
fetch_orderbook_events and fetch_store_events lack Rust doc comments; add ///
doc comments immediately above each function describing purpose, each parameter
(address/addresses, from_block, to_block, config), the return value
(Result<Vec<LogEntryResponse>, LocalDbError>), and possible errors (invalid
block ranges, RPC failures), keeping them brief and following Rustdoc
conventions (short summary line, optional # Arguments and # Errors sections).

Comment on lines +167 to 214
fn build_log_filters(
    filter: &Filter,
    config: &FetchConfig,
) -> Result<Vec<Filter>, LocalDbError> {
    let chunk_size = config.chunk_size.max(1);
    let chunk_span = chunk_size.saturating_sub(1);

    let mut filters = Vec::new();

    let from_block = filter
        .block_option
        .get_from_block()
        .ok_or(LocalDbError::MissingBlockFilter("from".to_string()))?
        .as_number()
        .ok_or(LocalDbError::NonNumberBlockNumber("from".to_string()))?;
    let to_block = filter
        .block_option
        .get_to_block()
        .ok_or(LocalDbError::MissingBlockFilter("to".to_string()))?
        .as_number()
        .ok_or(LocalDbError::NonNumberBlockNumber("to".to_string()))?;

    for &address in filter.address.iter() {
        let mut current_block = from_block;
        while current_block <= to_block {
            let end = current_block.saturating_add(chunk_span).min(to_block);

            let all_topics = filter.topics.clone();
            let topics = all_topics
                .first()
                .ok_or(LocalDbError::MissingTopicsFilter)?;

            let mut filter = Filter::new()
                .address(address)
                .from_block(current_block)
                .to_block(end);
            filter = filter.event_signature(topics.clone());
            filters.push(filter);

            if end == to_block || end == u64::MAX {
                break;
            }

            current_block = end.saturating_add(1);
        }
    }

    Ok(filters)
}

🧹 Nitpick | 🔵 Trivial

Optimize topics handling to avoid unnecessary cloning.

Lines 193-196 clone the entire filter.topics array on each address iteration, but only the first element is used. This is inefficient when processing many addresses.

Apply this refactor to clone only the needed element:

         for &address in filter.address.iter() {
             let mut current_block = from_block;
             while current_block <= to_block {
                 let end = current_block.saturating_add(chunk_span).min(to_block);
 
-                let all_topics = filter.topics.clone();
-                let topics = all_topics
+                let topics = filter.topics
                     .first()
-                    .ok_or(LocalDbError::MissingTopicsFilter)?;
+                    .ok_or(LocalDbError::MissingTopicsFilter)?
+                    .clone();
 
                 let mut filter = Filter::new()
                     .address(address)
                     .from_block(current_block)
                     .to_block(end);
-                filter = filter.event_signature(topics.clone());
+                filter = filter.event_signature(topics);
                 filters.push(filter);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn build_log_filters(
    filter: &Filter,
    config: &FetchConfig,
) -> Result<Vec<Filter>, LocalDbError> {
    let chunk_size = config.chunk_size.max(1);
    let chunk_span = chunk_size.saturating_sub(1);

    let mut filters = Vec::new();

    let from_block = filter
        .block_option
        .get_from_block()
        .ok_or(LocalDbError::MissingBlockFilter("from".to_string()))?
        .as_number()
        .ok_or(LocalDbError::NonNumberBlockNumber("from".to_string()))?;
    let to_block = filter
        .block_option
        .get_to_block()
        .ok_or(LocalDbError::MissingBlockFilter("to".to_string()))?
        .as_number()
        .ok_or(LocalDbError::NonNumberBlockNumber("to".to_string()))?;

    for &address in filter.address.iter() {
        let mut current_block = from_block;
        while current_block <= to_block {
            let end = current_block.saturating_add(chunk_span).min(to_block);

            let topics = filter
                .topics
                .first()
                .ok_or(LocalDbError::MissingTopicsFilter)?
                .clone();

            let mut filter = Filter::new()
                .address(address)
                .from_block(current_block)
                .to_block(end);
            filter = filter.event_signature(topics);
            filters.push(filter);

            if end == to_block || end == u64::MAX {
                break;
            }

            current_block = end.saturating_add(1);
        }
    }

    Ok(filters)
}
🤖 Prompt for AI Agents
In crates/common/src/local_db/fetch.rs around lines 167-214, the code currently
clones the entire filter.topics inside the per-address loop then takes the first
element, which is wasteful; move the topics extraction outside the address loop,
call filter.topics.first() once (returning an Option) and map/clone only that
first topic (or return MissingTopicsFilter if None), then use that cloned first
topic inside the address loop when building each Filter; this avoids cloning the
full topics array on every iteration and reduces allocations.

Comment on lines +107 to 118
#[error("Missing topics filter")]
MissingTopicsFilter,

#[error("Missing block filter: {0}")]
MissingBlockFilter(String),

#[error("Block number is not number: {0}")]
NonNumberBlockNumber(String),

#[error(transparent)]
FilterBlockError(#[from] FilterBlockError),
}

⚠️ Potential issue | 🟡 Minor

New filter-related error variants make sense; tiny wording nit

Good coverage of missing topics/blocks and underlying filter errors. Suggest fixing the error text grammar.

Apply this diff to improve wording:

-    #[error("Block number is not number: {0}")]
+    #[error("Block number is not a number: {0}")]
📝 Committable suggestion


Suggested change
#[error("Missing topics filter")]
MissingTopicsFilter,

#[error("Missing block filter: {0}")]
MissingBlockFilter(String),

#[error("Block number is not a number: {0}")]
NonNumberBlockNumber(String),

#[error(transparent)]
FilterBlockError(#[from] FilterBlockError),
}
🤖 Prompt for AI Agents
In crates/common/src/local_db/mod.rs around lines 107–118, the error message
wording needs a small grammar fix: change the "Block number is not number: {0}"
variant to read "Block number is not a number: {0}" (and while here, optionally
standardize punctuation/capitalization for consistency, e.g., ensure "Missing
topics filter" and "Missing block filter: {0}" match project style); update the
string literal(s) accordingly so the displayed error text uses the corrected
wording.

Comment on lines +236 to 238
#[error("Invalid block range: start {start} > end {end}")]
InvalidBlockRange { start: u64, end: u64 },
}

🧹 Nitpick | 🔵 Trivial

Unused error variant — either validate or remove

InvalidBlockRange { start, end } isn’t produced anywhere in this file. If range validation will be added elsewhere, keep; otherwise consider removing to avoid dead API surface.

🤖 Prompt for AI Agents
In crates/common/src/rpc_client.rs around lines 236–238, the Rpc error enum
defines InvalidBlockRange { start, end } but that variant is never emitted;
either remove the variant to avoid dead API surface, or add explicit validation
in the RPC methods that accept start/end ranges (check start <= end) and return
Err(RpcError::InvalidBlockRange { start, end }) when violated; update any
callers/tests accordingly so the variant is used only if range validation
exists.

@findolor findolor merged commit 48f239a into main Oct 28, 2025
19 checks passed
@findolor findolor deleted the local-db-fetch-refactor branch October 28, 2025 08:22
@github-actions
Contributor

@coderabbitai assess this PR's size classification for the totality of the PR against the following criteria and report it in your comment:

S/M/L PR Classification Guidelines:

This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)

Characteristics:

  • Simple bug fixes, typos, or minor refactoring
  • Single-purpose changes affecting 1-2 files
  • Documentation updates
  • Configuration tweaks
  • Changes that require minimal context to review

Review Effort: Would have taken 5-10 minutes

Examples:

  • Fix typo in variable name
  • Update README with new instructions
  • Adjust configuration values
  • Simple one-line bug fixes
  • Import statement cleanup

Medium (M)

Characteristics:

  • Feature additions or enhancements
  • Refactoring that touches multiple files but maintains existing behavior
  • Breaking changes with backward compatibility
  • Changes requiring some domain knowledge to review

Review Effort: Would have taken 15-30 minutes

Examples:

  • Add new feature or component
  • Refactor common utility functions
  • Update dependencies with minor breaking changes
  • Add new component with tests
  • Performance optimizations
  • More complex bug fixes

Large (L)

Characteristics:

  • Major feature implementations
  • Breaking changes or API redesigns
  • Complex refactoring across multiple modules
  • New architectural patterns or significant design changes
  • Changes requiring deep context and multiple review rounds

Review Effort: Would have taken 45+ minutes

Examples:

  • Complete new feature with frontend/backend changes
  • Protocol upgrades or breaking changes
  • Major architectural refactoring
  • Framework or technology upgrades

Additional Factors to Consider

When deciding between sizes, also consider:

  • Test coverage impact: More comprehensive test changes lean toward larger classification
  • Risk level: Changes to critical systems bump up a size category
  • Team familiarity: Novel patterns or technologies increase complexity

Notes:

  • the assessment must be for the totality of the PR, that means comparing the base branch to the last commit of the PR
  • the assessment output must be exactly one of: S, M or L (single-line comment) in format of: SIZE={S/M/L}
  • do not include any additional text, only the size classification
  • your assessment comment must not include tips or additional sections
  • do NOT tag me or anyone else on your comment

@coderabbitai
Contributor

coderabbitai Bot commented Oct 28, 2025


SIZE=L


🧠 Learnings used
Learnt from: rouzwelt
PR: rainlanguage/rain.orderbook#0
File: :0-0
Timestamp: 2025-05-21T21:24:42.275Z
Learning: For PR #1884, after each commit review, I should provide a new comment with a PR size report in the format: `TOTAL=number ADDITIONS=number DELETIONS=number`, excluding irrelevant files such as lock files (package-lock.json, cargo.lock, etc.).

Development

Successfully merging this pull request may close these issues.

  • Move Event Topic Definitions Out of the Fetch Module
  • Consolidate Local DB Event and Store Log Fetching

3 participants