Update local db cli command and add binary generation workflow #2159
Conversation
Walkthrough

Adds a new LocalDB incremental sync command to the CLI, with supporting modules for config resolution, data source integration, sync runner, window computation, token metadata handling, and SQLite helpers. Wires the command into the CLI entrypoint, updates docs, introduces Nix build workflow and artifact packaging, and updates webapp constants to use a new remote settings URL.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant U as User
  participant CLI as SyncLocalDb (CLI)
  participant SR as SyncRunner
  participant ST as Storage (schema/last_synced/SQLite)
  participant DS as DataSource (LocalDb)
  participant TF as TokenMetadataFetcher
  participant RPC as RPC URLs
  U->>CLI: invoke with db_path, chain_id, settings_yaml, api_token, [start,end]
  CLI->>CLI: load_primary_orderbook_from_settings()
  CLI->>ST: build_local_db_from_network(chain_id, network, api_token)
  CLI->>SR: new(db_path, data_source, metadata_rpcs, token_fetcher)
  CLI->>SR: run(params)
  SR->>ST: ensure_schema(db_path)
  SR->>ST: fetch_last_synced(db_path)
  SR->>DS: latest_block()
  SR->>SR: compute_sync_window()
  alt noop
    SR-->>CLI: return (no work)
  else do sync
    SR->>DS: fetch_events(orderbook, start..target)
    SR->>DS: decode_events(events)
    SR->>TF: fetch missing token metadata
    TF-->>SR: decimals + token SQL prefix
    SR->>DS: events_to_sql(decoded, target, decimals, prefix_sql)
    SR->>ST: sqlite_execute(db_path, combined SQL)
    SR-->>CLI: done
  end
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
Actionable comments posted: 26
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
crates/cli/bin/rain-orderbook-cli.tar.gz is excluded by !**/*.gz
📒 Files selected for processing (17)

- .github/workflows/build-cli-binary.yml (1 hunks)
- crates/cli/src/commands/local_db/README.md (1 hunks)
- crates/cli/src/commands/local_db/mod.rs (1 hunks)
- crates/cli/src/commands/local_db/sqlite.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/cli.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/config.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/data_source.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/mod.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/window.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/storage.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/token.rs (1 hunks)
- crates/cli/src/lib.rs (3 hunks)
- flake.nix (2 hunks)
- packages/webapp/src/lib/constants.ts (1 hunks)
- packages/webapp/src/routes/+layout.ts (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-06T11:13:29.938Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2123
File: crates/common/src/raindex_client/local_db/mod.rs:23-29
Timestamp: 2025-10-06T11:13:29.938Z
Learning: In `crates/common/src/raindex_client/local_db/mod.rs`, the `Default` implementation for `LocalDb` that creates an RPC client pointing to `http://localhost:4444` is acceptable because the RPC client must be explicitly configured before actual usage in production scenarios.
Applied to files:
crates/cli/src/commands/local_db/sync/storage.rs
📚 Learning: 2025-10-06T13:03:35.325Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2146
File: crates/cli/src/commands/local_db/README.md:22-106
Timestamp: 2025-10-06T13:03:35.325Z
Learning: In the rain.orderbook repository, `cargo run` commands work without the `--` separator (e.g., `cargo run local-db fetch-events` works directly). Do not suggest adding `--` before CLI arguments in cargo run examples for this repository.
Applied to files:
crates/cli/src/commands/local_db/README.md
🔇 Additional comments (16)
.github/workflows/build-cli-binary.yml (3)
20-25: Verify if full git history is required.

The checkout uses `fetch-depth: 0` to fetch the entire git history. For a build-only workflow, this is typically unnecessary and increases checkout time and resource usage. Please confirm if the build process requires git history (e.g., for version tags, changelog generation). If not, consider removing this option or setting a shallow depth:

```diff
 - name: Checkout repository
   uses: actions/checkout@v4
   with:
-    fetch-depth: 0
     submodules: recursive
```
14-25: Verify security scope of PUBLISH_PRIVATE_KEY.

The workflow uses a `PUBLISH_PRIVATE_KEY` secret for SSH checkout, which is a powerful credential. Ensure this key has minimal necessary permissions and is properly scoped. Please confirm:
- Does the private key have write access only to this repository?
- Is it a deploy key rather than a personal access token?
- Could this workflow use GITHUB_TOKEN instead for checkout?
If GITHUB_TOKEN can be used, it's more secure as it's automatically scoped:
```diff
 - name: Checkout repository
   uses: actions/checkout@v4
   with:
     fetch-depth: 0
     submodules: recursive
-    ssh-key: ${{ secrets.PUBLISH_PRIVATE_KEY }}
+    token: ${{ secrets.GITHUB_TOKEN }}
```

Note: If submodules are in private repositories, you may need to use a PAT with appropriate scope instead.
40-42: No changes needed for prep-all.sh error handling.
`prep-all.sh` includes `set -euxo pipefail`, ensuring failures are surfaced.

flake.nix (1)
298-298: LGTM!

Adding the `rainix-ob-cli-artifact` task to the default dev shell makes it available for developers to test the artifact generation locally, which is good for development workflow.

crates/cli/src/commands/local_db/mod.rs (1)
1-1: LGTM: module wiring and re-export

Exposure of sync and re-export of SyncLocalDb look correct; private sqlite helper scoped appropriately.
Also applies to: 7-7, 15-15
packages/webapp/src/routes/+layout.ts (1)
7-7: Good change: centralize remote URL via constant
REMOTE_SETTINGS_URL is exported in constants.ts and used correctly.

crates/cli/src/commands/local_db/sync/mod.rs (1)
1-8: Module boundaries and re-export look good.

Encapsulates internals with pub(crate) and exposes only SyncLocalDb. LGTM.
crates/cli/src/lib.rs (1)
2-2: Wiring of LocalDb::Sync is correct.

Import, variant, and execute arm align with existing pattern. No issues.
Also applies to: 25-26, 40-40
crates/cli/src/commands/local_db/sync/token.rs (1)
37-43: Confirm DB address normalization to avoid case-sensitivity misses.

addr_strings are lower-case; if erc20_tokens.address stores mixed/upper-case and the query is case-sensitive, fetch_existing_tokens may miss rows. Ensure addresses are normalized to lower-case at insert and queries use the same canonical form.
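A minimal stand-alone sketch of the normalization idea (the helper name is hypothetical, not the crate's API): canonicalize once at the boundary so inserts and lookups always compare the same form.

```rust
// Hypothetical helper: canonicalize an address string to lower-case, 0x-prefixed hex
// so case-sensitive SQLite string comparisons against erc20_tokens.address never miss.
fn canonical_address(addr: &str) -> String {
    let trimmed = addr.trim();
    let hex = trimmed
        .strip_prefix("0x")
        .or_else(|| trimmed.strip_prefix("0X"))
        .unwrap_or(trimmed);
    format!("0x{}", hex.to_ascii_lowercase())
}

fn main() {
    // Checksummed and upper-case spellings collapse to one canonical key.
    let a = canonical_address("0xAbCdEf0123456789aBcDeF0123456789AbCdEf01");
    let b = canonical_address("0XABCDEF0123456789ABCDEF0123456789ABCDEF01");
    assert_eq!(a, b);
}
```

Applying one such helper both at insert time and inside fetch_existing_tokens would make the case question moot.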
crates/cli/src/commands/local_db/sync/runner/apply.rs (2)
28-33: LGTM: fetch_events result/count handling

Clear separation of fetch and raw count. No issues spotted.

39-45: LGTM: decode path and counting

Decoding and count extraction are straightforward.
crates/cli/src/commands/local_db/sync/storage.rs (2)
20-27: LGTM: schema idempotence and last_synced default

Schema creation is idempotent; last_synced defaults to 0 cleanly.
Also applies to: 29-32
34-58: LGTM: network validation and Hyper RPC construction

Clear chain_id/RPC checks and LocalDb construction. Matches intended separation of event RPC (Hyper) vs metadata RPCs.
crates/cli/src/commands/local_db/sync/data_source.rs (1)
35-53: LGTM: DefaultTokenFetcher short-circuit and concurrent fetch

Short-circuit on empty input and delegate to concurrent metadata fetcher look good.
crates/cli/src/commands/local_db/sync/runner/mod.rs (2)
104-111: Verify that events_to_sql always updates sync_status.last_synced_block

Runner assumes the generated SQL advances last_synced_block to window.target_block. If any data source variant omits this, DB will drift.
- Ensure the data_source contract mandates updating sync_status.
- Optionally: after sqlite_execute, re-query last_synced and assert it equals target_block (fail fast).
I can add that post-condition check in the runner if you want.
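The suggested post-condition, reduced to a sketch (the function name and error type here are assumptions, not the runner's real API):

```rust
// Hypothetical fail-fast check: after sqlite_execute applies the batch,
// re-read sync_status.last_synced_block and require it to equal the target.
fn assert_advanced(last_synced_after: u64, target_block: u64) -> Result<(), String> {
    if last_synced_after != target_block {
        return Err(format!(
            "sync_status.last_synced_block = {last_synced_after}, expected {target_block}"
        ));
    }
    Ok(())
}

fn main() {
    assert!(assert_advanced(352999999, 352999999).is_ok());
    assert!(assert_advanced(352866209, 352999999).is_err());
}
```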
75-86: LGTM: fetch/decode/prepare flow and metadata RPC selection

Flow is clean, metadata_rpcs() fallback is sensible, and tests cover both cached and fetched token paths.
Also applies to: 88-103
## Incremental Sync (fetch, decode, insert)

`local-db sync` is now a single command that mirrors the browser sync logic. It reads the Rain settings from a specific repository commit, resolves the primary orderbook for the given chain, and then updates your SQLite database by fetching the missing on-chain events, decoding them, preparing any missing token metadata, and applying the combined SQL transaction.

Example:

```bash
cargo run -p rain_orderbook_cli -- local-db sync \
  --db-path "./data/orderbook.db" \
  --chain-id 42161 \
  --repo-commit f00ba47cafe1234567890abcdef1234567890ab \
  --api-token hyper-token \
  --start-block 352866209 \
  --end-block 352999999
```

Key behaviour:
- The command downloads `constants.ts`/settings for the provided commit to discover the orderbook deployment. No manual `--orderbook-address`, `--deployment-block`, or user-supplied RPC URLs are required.
- HyperRPC log fetching uses the provided `--api-token`; ERC-20 metadata reads reuse the network RPCs declared in settings. Start/end blocks are optional—when omitted the runner resumes from the last synced block and stops at the chain head.
- When the DB is empty the schema is created automatically, `sync_status` is advanced to the synced block, and existing token metadata is reused while missing addresses trigger live fetches.
🧹 Nitpick | 🔵 Trivial
Call out sqlite3 CLI requirement
The new sync path invokes the external sqlite3 binary. Note the prerequisite (sqlite3 in PATH; JSON mode supported) to prevent surprise run-time failures.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/README.md around lines 16 to 36, the
documentation doesn’t mention that the new sync flow requires the external
sqlite3 CLI binary; add a short prerequisite note stating that sqlite3 must be
installed and available on PATH and that the installed sqlite3 must support JSON
mode (so the sync can invoke the sqlite3 binary), and include a one-line
example/check command or instruction on how to verify installation (e.g., run
sqlite3 --version or ensure sqlite3 is callable) and a brief caution that
missing/unsupported sqlite3 will cause runtime failures.
this is available in our nix shell. we don't use this without nix so no issues here
@findolor, understood! Since you have a Nix-based workflow that guarantees sqlite3 availability, documenting it as a separate prerequisite isn't necessary. Thanks for the clarification.
```bash
cargo run -p rain_orderbook_cli -- local-db sync \
  --db-path "./data/orderbook.db" \
  --chain-id 42161 \
  --repo-commit f00ba47cafe1234567890abcdef1234567890ab \
  --api-token hyper-token \
  --start-block 352866209 \
  --end-block 352999999
```
Align cargo run example with repo style; drop “--” and/or show both forms
For consistency with other commands here, prefer:
- cargo run local-db sync …
- Or cargo run -p rain_orderbook_cli local-db sync …
Avoid the “--” separator in examples for this repo.
Based on learnings
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/README.md around lines 22 to 30, the cargo
run example uses the “--” separator which is inconsistent with the repo style;
update the example to drop the “--” and present the preferred forms, either
"cargo run local-db sync ..." or "cargo run -p rain_orderbook_cli local-db sync
...", keeping the same flags/arguments and values, and optionally include both
variants so readers see both supported invocation styles.
```nix
rainix-ob-cli-artifact = rainix.mkTask.${system} {
  name = "rainix-ob-cli-artifact";
  body = ''
    set -euxo pipefail

    OUTPUT_DIR=crates/cli/bin
    ARCHIVE_NAME=rain-orderbook-cli.tar.gz
    BINARY_NAME=rain-orderbook-cli

    TARGET_TRIPLE=x86_64-unknown-linux-gnu

    cargo build --release -p rain_orderbook_cli --target "$TARGET_TRIPLE"

    mkdir -p "$OUTPUT_DIR"
    rm -f "$OUTPUT_DIR/$ARCHIVE_NAME"

    cp "target/$TARGET_TRIPLE/release/rain_orderbook_cli" "$OUTPUT_DIR/$BINARY_NAME"
    chmod 755 "$OUTPUT_DIR/$BINARY_NAME"
    strip "$OUTPUT_DIR/$BINARY_NAME" || true

    tar -C "$OUTPUT_DIR" -czf "$OUTPUT_DIR/$ARCHIVE_NAME" "$BINARY_NAME"

    rm -f "$OUTPUT_DIR/$BINARY_NAME"
  '';
};
```
🧹 Nitpick | 🔵 Trivial
Consider platform portability and build validation.
The task hardcodes TARGET_TRIPLE=x86_64-unknown-linux-gnu, which limits the artifact to Linux x86_64. This may be intentional for the remote sync use case, but consider documenting this limitation.
Additionally, the build doesn't validate success before proceeding with packaging.
Apply this diff to add validation and improve error handling:
```diff
 rainix-ob-cli-artifact = rainix.mkTask.${system} {
   name = "rainix-ob-cli-artifact";
   body = ''
     set -euxo pipefail
     OUTPUT_DIR=crates/cli/bin
     ARCHIVE_NAME=rain-orderbook-cli.tar.gz
     BINARY_NAME=rain-orderbook-cli
     TARGET_TRIPLE=x86_64-unknown-linux-gnu
     cargo build --release -p rain_orderbook_cli --target "$TARGET_TRIPLE"
+
+    # Verify binary was created
+    BINARY_PATH="target/$TARGET_TRIPLE/release/rain_orderbook_cli"
+    if [ ! -f "$BINARY_PATH" ]; then
+      echo "Error: Binary not found at $BINARY_PATH"
+      exit 1
+    fi
     mkdir -p "$OUTPUT_DIR"
     rm -f "$OUTPUT_DIR/$ARCHIVE_NAME"
-    cp "target/$TARGET_TRIPLE/release/rain_orderbook_cli" "$OUTPUT_DIR/$BINARY_NAME"
+    cp "$BINARY_PATH" "$OUTPUT_DIR/$BINARY_NAME"
     chmod 755 "$OUTPUT_DIR/$BINARY_NAME"
     strip "$OUTPUT_DIR/$BINARY_NAME" || true
     tar -C "$OUTPUT_DIR" -czf "$OUTPUT_DIR/$ARCHIVE_NAME" "$BINARY_NAME"
+
+    # Verify tarball was created and generate checksum
+    if [ ! -f "$OUTPUT_DIR/$ARCHIVE_NAME" ]; then
+      echo "Error: Tarball not created"
+      exit 1
+    fi
+    sha256sum "$OUTPUT_DIR/$ARCHIVE_NAME" > "$OUTPUT_DIR/$ARCHIVE_NAME.sha256"
     rm -f "$OUTPUT_DIR/$BINARY_NAME"
   '';
 };
```

📝 Committable suggestion
};📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```nix
rainix-ob-cli-artifact = rainix.mkTask.${system} {
  name = "rainix-ob-cli-artifact";
  body = ''
    set -euxo pipefail
    OUTPUT_DIR=crates/cli/bin
    ARCHIVE_NAME=rain-orderbook-cli.tar.gz
    BINARY_NAME=rain-orderbook-cli
    TARGET_TRIPLE=x86_64-unknown-linux-gnu
    cargo build --release -p rain_orderbook_cli --target "$TARGET_TRIPLE"
    # Verify binary was created
    BINARY_PATH="target/$TARGET_TRIPLE/release/rain_orderbook_cli"
    if [ ! -f "$BINARY_PATH" ]; then
      echo "Error: Binary not found at $BINARY_PATH"
      exit 1
    fi
    mkdir -p "$OUTPUT_DIR"
    rm -f "$OUTPUT_DIR/$ARCHIVE_NAME"
    cp "$BINARY_PATH" "$OUTPUT_DIR/$BINARY_NAME"
    chmod 755 "$OUTPUT_DIR/$BINARY_NAME"
    strip "$OUTPUT_DIR/$BINARY_NAME" || true
    tar -C "$OUTPUT_DIR" -czf "$OUTPUT_DIR/$ARCHIVE_NAME" "$BINARY_NAME"
    # Verify tarball was created and generate checksum
    if [ ! -f "$OUTPUT_DIR/$ARCHIVE_NAME" ]; then
      echo "Error: Tarball not created"
      exit 1
    fi
    sha256sum "$OUTPUT_DIR/$ARCHIVE_NAME" > "$OUTPUT_DIR/$ARCHIVE_NAME.sha256"
    rm -f "$OUTPUT_DIR/$BINARY_NAME"
  '';
};
```
@findolor, understood! I'll move forward with the current implementation. 👍
🧠 Learnings used
Learnt from: CR
PR: rainlanguage/rain.orderbook#0
File: AGENTS.md:0-0
Timestamp: 2025-10-06T08:05:50.440Z
Learning: WASM bundle via nix develop -c rainix-wasm-artifacts
```typescript
  'https://raw.githubusercontent.com/rainlanguage/rain.strategies/4a91fbafccac4e791ec3fedec0b92eff7a3d4f7e/registry';
export const REMOTE_SETTINGS_URL =
  'https://raw.githubusercontent.com/rainlanguage/rain.strategies/4a91fbafccac4e791ec3fedec0b92eff7a3d4f7e/settings.yaml';
```
🧹 Nitpick | 🔵 Trivial
De-duplicate the commit hash and allow env override.
Avoid repeating the commit by factoring it into a single constant (or env var) to ease rotations and reduce drift.
Apply this refactor:
```diff
-export const REGISTRY_URL =
-  'https://raw.githubusercontent.com/rainlanguage/rain.strategies/4a91fbafccac4e791ec3fedec0b92eff7a3d4f7e/registry';
-export const REMOTE_SETTINGS_URL =
-  'https://raw.githubusercontent.com/rainlanguage/rain.strategies/4a91fbafccac4e791ec3fedec0b92eff7a3d4f7e/settings.yaml';
+// Optionally source from env (e.g. VITE_STRATEGIES_COMMIT) if your build tooling supports it.
+export const STRATEGIES_COMMIT =
+  (process.env?.VITE_STRATEGIES_COMMIT as string) ??
+  '4a91fbafccac4e791ec3fedec0b92eff7a3d4f7e';
+export const REGISTRY_URL = `https://raw.githubusercontent.com/rainlanguage/rain.strategies/${STRATEGIES_COMMIT}/registry`;
+export const REMOTE_SETTINGS_URL = `https://raw.githubusercontent.com/rainlanguage/rain.strategies/${STRATEGIES_COMMIT}/settings.yaml`;
```

🤖 Prompt for AI Agents
+export const REMOTE_SETTINGS_URL = `https://raw.githubusercontent.com/rainlanguage/rain.strategies/${STRATEGIES_COMMIT}/settings.yaml`;🤖 Prompt for AI Agents
In packages/webapp/src/lib/constants.ts around lines 2 to 4, the same commit
hash is duplicated in two hardcoded URLs; factor it out and allow an environment
override. Add a single constant (preferably reading from an env var like
process.env.REMOTE_COMMIT or a Vite equivalent) for the commit hash, fallback to
the current hash, then construct both the registry and settings URLs from that
constant so rotating the hash requires one change and users can override via
env.
no need to do this
@findolor, understood! I'll leave the constants as-is.
```rust
pub(crate) struct SyncParams<'a> {
    pub(crate) chain_id: u32,
    pub(crate) orderbook_address: &'a str,
```
why not just Address? that way no lifetime annotations would be needed here
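For illustration only (using a stand-in type; the real code would use alloy's `Address`, which is `Copy`), the owned field removes the lifetime parameter:

```rust
// Stand-in for alloy_primitives::Address: a 20-byte value type that is Copy.
#[derive(Clone, Copy, Debug, PartialEq)]
pub(crate) struct Address(pub [u8; 20]);

// No <'a> needed once the address is an owned Copy value instead of &'a str.
pub(crate) struct SyncParams {
    pub(crate) chain_id: u32,
    pub(crate) orderbook_address: Address,
}

fn main() {
    let params = SyncParams {
        chain_id: 42161,
        orderbook_address: Address([0u8; 20]),
    };
    assert_eq!(params.chain_id, 42161);
    assert_eq!(params.orderbook_address, Address([0u8; 20]));
}
```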
```rust
} = self;

let primary_orderbook = load_primary_orderbook_from_commit(chain_id, &repo_commit).await?;
let orderbook_address = format!("{:#x}", primary_orderbook.address);
```
consider alloy::hex::encode
```rust
let constants_url = format!(
    "https://raw.githubusercontent.com/rainlanguage/rain.orderbook/{}/packages/webapp/src/lib/constants.ts",
    commit_hash
);
```
the cli should just take rain.orderbook settings to populate the database with all the orderbooks. we can put the url in the github actions by assigning the url value to the appropriate env var, e.g. how order owner is set in st0x liquidity deployment pipeline
we use .yaml for a little over half of actions. clearly not a standard but maybe worth making the extension a bit more consistent
```rust
    Ok(body)
}

pub(crate) fn extract_settings_url(constants_source: &str) -> Result<String> {
```
i guess this has to go following our discussion with Josh
```rust
let mut all_token_addrs: Vec<Address> = address_set.into_iter().collect();
all_token_addrs.sort();
```

Consider itertools .sorted method:

```rust
let all_token_addrs: Vec<Address> = address_set.into_iter().sorted().collect();
```
```rust
let addr_strings: Vec<String> = all_token_addrs
    .iter()
    .map(|a| format!("0x{:x}", a))
```
consider alloy's encode_prefixed
```rust
let existing_rows = fetch_existing_tokens(db_path, chain_id, &addr_strings)?;

let mut decimals_by_addr: HashMap<Address, u8> = HashMap::new();
let mut existing_lower: HashSet<String> = HashSet::new();
for row in existing_rows.iter() {
    let key = row.address.to_ascii_lowercase();
    existing_lower.insert(key.clone());
    let address = Address::from_str(&row.address)
        .with_context(|| format!("Invalid address stored in DB: {}", row.address))?;
    decimals_by_addr.insert(address, row.decimals);
}
```
If Erc20TokenRow used Address then we wouldn't need to convert it to lowercase or convert it to Address at this point. Plus, why do we need both existing_lower and decimals_by_adds - shouldn't the keys of decimals_by_addr already be the same as a set of all addresses? And then you can write a single expression with iterator chains and get decimals_by_addr that way
adding this comment to #2182 issue so i can handle this during the type refactor. does that sound good?
```rust
let mut missing_addrs: Vec<Address> = Vec::new();
for addr in all_token_addrs.iter() {
    let key = format!("0x{:x}", addr).to_ascii_lowercase();
    if !existing_lower.contains(&key) {
        missing_addrs.push(*addr);
    }
```
also no need to convert to lowercase if everything used addresses. and no need for the loop here either, can just use .filter
also adding this comment to the type issue
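Both points can be sketched stand-alone (with a stand-in `Address` type; the real row would hold alloy's `Address`): the map's key set replaces `existing_lower`, and a `.filter` replaces the loop.

```rust
use std::collections::HashMap;

// Stand-in for alloy_primitives::Address.
type Address = [u8; 20];

struct Erc20TokenRow {
    address: Address, // typed address: no lowercase bookkeeping, no from_str
    decimals: u8,
}

fn split_known_missing(
    all_token_addrs: &[Address],
    existing_rows: &[Erc20TokenRow],
) -> (HashMap<Address, u8>, Vec<Address>) {
    // One iterator chain; the map's keys double as the "existing" set.
    let decimals_by_addr: HashMap<Address, u8> = existing_rows
        .iter()
        .map(|row| (row.address, row.decimals))
        .collect();
    let missing_addrs: Vec<Address> = all_token_addrs
        .iter()
        .filter(|addr| !decimals_by_addr.contains_key(*addr))
        .copied()
        .collect();
    (decimals_by_addr, missing_addrs)
}

fn main() {
    let known = [1u8; 20];
    let unknown = [2u8; 20];
    let rows = vec![Erc20TokenRow { address: known, decimals: 18 }];
    let (decimals, missing) = split_known_missing(&[known, unknown], &rows);
    assert_eq!(decimals.get(&known), Some(&18));
    assert_eq!(missing, vec![unknown]);
}
```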
Actionable comments posted: 6
♻️ Duplicate comments (5)
crates/cli/src/commands/local_db/README.md (1)
22-30: Align cargo run example with repo style; drop “--” separator

Per repo convention, examples should omit “--”.

```diff
-cargo run -p rain_orderbook_cli -- local-db sync \
+cargo run -p rain_orderbook_cli local-db sync \
```

Based on learnings
crates/cli/src/commands/local_db/sync/runner/window.rs (1)
130-212: Add tests for BehindLastSynced adjustment and EndClamp

Coverage lacks:
- start <= last_synced → bump to last_synced + 1 with BehindLastSynced.
- requested end > latest → clamp to latest with EndClamp.
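To make the two missing cases concrete, here is a self-contained sketch of the window rules described above (the function, enum, and exact semantics are assumptions; the crate's real `compute_sync_window` may differ):

```rust
#[derive(Debug, PartialEq)]
enum Adjustment {
    BehindLastSynced, // requested start was at/behind last_synced; bumped forward
    EndClamp,         // requested end exceeded the chain head; clamped to latest
}

// Hypothetical window computation mirroring the rules under test.
fn compute_sync_window(
    last_synced: u64,
    latest: u64,
    start: Option<u64>,
    end: Option<u64>,
) -> (u64, u64, Vec<Adjustment>) {
    let mut adjustments = Vec::new();
    let mut start_block = start.unwrap_or(last_synced + 1);
    if start_block <= last_synced {
        start_block = last_synced + 1;
        adjustments.push(Adjustment::BehindLastSynced);
    }
    let mut target_block = end.unwrap_or(latest);
    if target_block > latest {
        target_block = latest;
        adjustments.push(Adjustment::EndClamp);
    }
    (start_block, target_block, adjustments)
}

fn main() {
    // start <= last_synced -> bump to last_synced + 1 with BehindLastSynced.
    let (start, _, adj) = compute_sync_window(100, 200, Some(50), None);
    assert_eq!(start, 101);
    assert!(adj.contains(&Adjustment::BehindLastSynced));

    // requested end > latest -> clamp to latest with EndClamp.
    let (_, target, adj) = compute_sync_window(100, 200, None, Some(999));
    assert_eq!(target, 200);
    assert!(adj.contains(&Adjustment::EndClamp));
}
```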
crates/cli/src/commands/local_db/sync/token.rs (1)
100-145: Add a test for the missing_addrs path

Current tests cover existing metadata only. Add a case where the DB has none, so prepare_token_metadata fetches and emits a non-empty tokens_prefix_sql and merges decimals.
crates/cli/src/commands/local_db/sync/storage.rs (1)
69-79: Avoid massive IN clauses; chunk queries to stay under limits

String-built IN clauses can exceed OS argv limits and SQLite parameter limits for large address sets. Chunk the address list to keep queries bounded.

```diff
-    let in_clause = addresses
-        .iter()
-        .map(|addr| format!("'{}'", addr.replace('\'', "''")))
-        .collect::<Vec<_>>()
-        .join(", ");
-
-    let sql = ERC20_QUERY_TEMPLATE
-        .replace("?chain_id", &chain_id.to_string())
-        .replace("?addresses_in", &in_clause);
-
-    sqlite_query_json(db_path, &sql)
+    // Chunk to avoid oversized SQL/argv; SQLite default param limit is ~999.
+    let mut all: Vec<Erc20TokenRow> = Vec::new();
+    for chunk in addresses.chunks(900) {
+        let in_clause = chunk
+            .iter()
+            .map(|addr| format!("'{}'", addr.replace('\'', "''")))
+            .collect::<Vec<_>>()
+            .join(", ");
+        let sql = ERC20_QUERY_TEMPLATE
+            .replace("?chain_id", &chain_id.to_string())
+            .replace("?addresses_in", &in_clause);
+        let mut part: Vec<Erc20TokenRow> = sqlite_query_json(db_path, &sql)?;
+        all.append(&mut part);
+    }
+    Ok(all)
```

Longer-term, prefer bound parameters over string interpolation to eliminate injection risk. Based on learnings
.github/workflows/build-cli-binary.yaml (1)
52-64: Harden commit/push: check file exists before add and push via GITHUB_TOKEN (if keeping SSH for submodules).
- Guard git add to avoid failing when the tarball path changes or build produces no file.
- Ensure push uses HTTPS with GITHUB_TOKEN rather than the SSH key from checkout (Option B mentioned earlier).
```diff
 - name: Commit and push updated tarball
   env:
     COMMIT_MESSAGE: "refresh orderbook cli"
     GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
   run: |
-    git add crates/cli/bin/rain-orderbook-cli.tar.gz
+    if [ ! -f crates/cli/bin/rain-orderbook-cli.tar.gz ]; then
+      echo "Tarball not found at crates/cli/bin/rain-orderbook-cli.tar.gz"
+      exit 1
+    fi
+    git add crates/cli/bin/rain-orderbook-cli.tar.gz
     if git diff --staged --quiet; then
       echo "No changes to commit"
       exit 0
     fi
     git commit -m "$COMMIT_MESSAGE"
-    git push origin HEAD:${{ github.ref_name }}
+    # Force HTTPS remote for token-based push (if checkout used SSH)
+    git remote set-url origin https://x-access-token:${GITHUB_TOKEN}@github.com/${{ github.repository }}.git
+    git push origin HEAD:${{ github.ref_name }}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
- .github/workflows/build-cli-binary.yaml (1 hunks)
- Cargo.toml (1 hunks)
- crates/cli/Cargo.toml (1 hunks)
- crates/cli/src/commands/local_db/README.md (1 hunks)
- crates/cli/src/commands/local_db/sync/cli.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/config.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/data_source.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/window.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/storage.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/token.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
crates/*/{src,tests,benches,examples}/**/*.rs
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Rust code must pass clippy with all warnings denied (cargo clippy --workspace --all-targets --all-features -D warnings)
Files:
- crates/cli/src/commands/local_db/sync/token.rs
- crates/cli/src/commands/local_db/sync/runner/window.rs
- crates/cli/src/commands/local_db/sync/data_source.rs
- crates/cli/src/commands/local_db/sync/cli.rs
- crates/cli/src/commands/local_db/sync/runner/mod.rs
- crates/cli/src/commands/local_db/sync/config.rs
- crates/cli/src/commands/local_db/sync/storage.rs
- crates/cli/src/commands/local_db/sync/runner/apply.rs
crates/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)
Files:
- crates/cli/src/commands/local_db/sync/token.rs
- crates/cli/src/commands/local_db/sync/runner/window.rs
- crates/cli/src/commands/local_db/sync/data_source.rs
- crates/cli/src/commands/local_db/sync/cli.rs
- crates/cli/src/commands/local_db/sync/runner/mod.rs
- crates/cli/src/commands/local_db/sync/config.rs
- crates/cli/src/commands/local_db/sync/storage.rs
- crates/cli/src/commands/local_db/sync/runner/apply.rs
**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/*.rs: Format Rust with nix develop -c cargo fmt --all
Lint Rust with nix develop -c rainix-rs-static (preconfigured flags)
Use snake_case for Rust crates/modules and PascalCase for types
Files:
- crates/cli/src/commands/local_db/sync/token.rs
- crates/cli/src/commands/local_db/sync/runner/window.rs
- crates/cli/src/commands/local_db/sync/data_source.rs
- crates/cli/src/commands/local_db/sync/cli.rs
- crates/cli/src/commands/local_db/sync/runner/mod.rs
- crates/cli/src/commands/local_db/sync/config.rs
- crates/cli/src/commands/local_db/sync/storage.rs
- crates/cli/src/commands/local_db/sync/runner/apply.rs
🧠 Learnings (9)
📚 Learning: 2025-10-06T14:41:41.909Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2159
File: crates/cli/src/commands/local_db/sync/runner/mod.rs:52-113
Timestamp: 2025-10-06T14:41:41.909Z
Learning: The local DB sync CLI command (crates/cli/src/commands/local_db/sync/) is designed for CI-only usage, and simple println! statements are preferred over structured logging for status messages.
Applied to files:
- crates/cli/src/commands/local_db/README.md
- crates/cli/src/commands/local_db/sync/token.rs
- crates/cli/src/commands/local_db/sync/cli.rs
- crates/cli/src/commands/local_db/sync/runner/mod.rs
- crates/cli/src/commands/local_db/sync/storage.rs
📚 Learning: 2025-10-06T13:03:35.337Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2146
File: crates/cli/src/commands/local_db/README.md:22-106
Timestamp: 2025-10-06T13:03:35.337Z
Learning: In the rain.orderbook repository, `cargo run` commands work without the `--` separator (e.g., `cargo run local-db fetch-events` works directly). Do not suggest adding `--` before CLI arguments in cargo run examples for this repository.
Applied to files:
crates/cli/src/commands/local_db/README.md
📚 Learning: 2025-07-04T09:02:57.301Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1956
File: crates/common/src/fuzz/mod.rs:64-64
Timestamp: 2025-07-04T09:02:57.301Z
Learning: In rainlanguage/rain.orderbook, user findolor prefers to limit type consistency changes to only the parts directly related to the current work scope. For example, when updating chain_id fields from u64 to u32 in fuzz-related code, unrelated files like tauri-app wallet commands can remain as u64 if they serve different purposes and aren't part of the current changes.
Applied to files:
crates/cli/src/commands/local_db/sync/token.rs
📚 Learning: 2025-10-06T14:39:36.425Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2159
File: crates/cli/src/commands/local_db/sync/cli.rs:37-78
Timestamp: 2025-10-06T14:39:36.425Z
Learning: In the SyncLocalDb CLI command (crates/cli/src/commands/local_db/sync/cli.rs), block range validation (e.g., start_block <= end_block) is handled internally by the sync runner, not at the CLI entry point.
Applied to files:
- crates/cli/src/commands/local_db/sync/runner/window.rs
- crates/cli/src/commands/local_db/sync/cli.rs
- crates/cli/src/commands/local_db/sync/runner/mod.rs
📚 Learning: 2025-10-06T08:05:50.452Z
Learnt from: CR
PR: rainlanguage/rain.orderbook#0
File: AGENTS.md:0-0
Timestamp: 2025-10-06T08:05:50.452Z
Learning: Applies to crates/**/*.rs : Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)
Applied to files:
crates/cli/Cargo.toml
📚 Learning: 2025-05-19T07:14:24.219Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1838
File: crates/cli/src/output.rs:29-41
Timestamp: 2025-05-19T07:14:24.219Z
Learning: For the rainlanguage/rain.orderbook repository, findolor prefers minimal documentation for straightforward functions like `output` in crates/cli/src/output.rs where the implementation is self-explanatory.
Applied to files:
crates/cli/src/commands/local_db/sync/runner/mod.rs
📚 Learning: 2025-04-08T12:56:03.272Z
Learnt from: hardingjam
PR: rainlanguage/rain.orderbook#1512
File: packages/ui-components/src/__tests__/OrderDetail.test.ts:125-143
Timestamp: 2025-04-08T12:56:03.272Z
Learning: The OrderDetail component in the Rain orderbook UI doesn't currently have error handling tests, but issue #1605 has been created to address this in the future.
Applied to files:
crates/cli/src/commands/local_db/sync/runner/mod.rs
📚 Learning: 2025-07-31T19:03:56.594Z
Learnt from: brusherru
PR: rainlanguage/rain.orderbook#2044
File: tauri-app/src/routes/orders/[chainId]-[orderbook]-[orderHash]/+page.svelte:76-78
Timestamp: 2025-07-31T19:03:56.594Z
Learning: In the rainlanguage/rain.orderbook project, when the Tauri app has issues preventing proper testing, the team prefers to defer Tauri-specific implementations to separate PRs rather than including untested code, especially for financial operations like withdrawals.
Applied to files:
crates/cli/src/commands/local_db/sync/runner/mod.rs
📚 Learning: 2025-08-21T10:29:32.424Z
Learnt from: hardyjosh
PR: rainlanguage/rain.orderbook#2087
File: .github/workflows/rainix.yaml:52-54
Timestamp: 2025-08-21T10:29:32.424Z
Learning: In the rain.orderbook repository, the repository variable for the Sepolia metaboard address used in deployment workflows is named CI_DEPLOY_SEPOLIA_METABOARD_ADDRESS, following the CI_DEPLOY_ prefix convention for deployment-related variables.
Applied to files:
crates/cli/src/commands/local_db/sync/config.rs
🧬 Code graph analysis (8)
crates/cli/src/commands/local_db/sync/token.rs (5)
- crates/common/src/raindex_client/local_db/insert.rs (1): generate_erc20_tokens_sql (167-204)
- crates/common/src/raindex_client/local_db/tokens.rs (1): collect_token_addresses (6-34)
- crates/cli/src/commands/local_db/sync/storage.rs (1): fetch_existing_tokens (60-80)
- crates/cli/src/commands/local_db/sync/data_source.rs (4): rpc_urls (34-34), rpc_urls (107-109), fetch (39-40), fetch (47-60)
- crates/cli/src/commands/local_db/sqlite.rs (1): sqlite_execute (8-41)

crates/cli/src/commands/local_db/sync/runner/window.rs (3)
- crates/cli/src/commands/local_db/sync/storage.rs (1): fetch_last_synced (29-32)
- crates/cli/src/commands/local_db/sync/data_source.rs (2): latest_block (16-16), latest_block (65-70)
- crates/cli/src/commands/local_db/sqlite.rs (1): sqlite_execute (8-41)

crates/cli/src/commands/local_db/sync/data_source.rs (2)
- crates/common/src/raindex_client/local_db/token_fetch.rs (1): fetch_erc20_metadata_concurrent (7-55)
- crates/common/src/raindex_client/local_db/mod.rs (1): rpc_client (194-196)

crates/cli/src/commands/local_db/sync/cli.rs (3)
- crates/cli/src/commands/local_db/sync/config.rs (1): load_primary_orderbook_from_settings (10-29)
- crates/cli/src/commands/local_db/sync/storage.rs (1): build_local_db_from_network (34-58)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (1): new (38-50)

crates/cli/src/commands/local_db/sync/runner/mod.rs (5)
- crates/cli/src/commands/local_db/sqlite.rs (1): sqlite_execute (8-41)
- crates/cli/src/commands/local_db/sync/storage.rs (1): ensure_schema (20-27)
- crates/cli/src/commands/local_db/sync/data_source.rs (12): decode_events (23-26), decode_events (83-88), fetch_events (17-22), fetch_events (72-81), latest_block (16-16), latest_block (65-70), fetch (39-40), fetch (47-60), events_to_sql (27-33), events_to_sql (90-105), rpc_urls (34-34), rpc_urls (107-109)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (9): decode_events (35-48), decode_events (128-133), fetch_events (19-33), fetch_events (119-126), prepare_sql (50-84), latest_block (115-117), fetch (181-183), events_to_sql (135-168), rpc_urls (170-172)
- crates/cli/src/commands/local_db/sync/runner/window.rs (6): decode_events (159-164), fetch_events (155-157), compute_sync_window (33-96), latest_block (151-153), events_to_sql (166-174), rpc_urls (176-178)

crates/cli/src/commands/local_db/sync/config.rs (1)
- crates/common/src/dotrain_order.rs (1): orderbook_yaml (484-486)

crates/cli/src/commands/local_db/sync/storage.rs (2)
- crates/cli/src/commands/local_db/sqlite.rs (3): sqlite_execute (8-41), sqlite_has_required_tables (90-95), sqlite_query_json (44-71)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (2): metadata_rpcs (115-121), new (38-50)

crates/cli/src/commands/local_db/sync/runner/apply.rs (3)
- crates/cli/src/commands/local_db/sync/token.rs (2): prepare_token_metadata (19-82), fetch (104-106)
- crates/cli/src/commands/local_db/sync/data_source.rs (12): fetch_events (17-22), fetch_events (72-81), decode_events (23-26), decode_events (83-88), latest_block (16-16), latest_block (65-70), events_to_sql (27-33), events_to_sql (90-105), rpc_urls (34-34), rpc_urls (107-109), fetch (39-40), fetch (47-60)
- crates/cli/src/commands/local_db/sqlite.rs (1): sqlite_execute (8-41)
🔇 Additional comments (5)
crates/cli/src/commands/local_db/sync/config.rs (1)
64-72: Verify negative test expectation matches upstream error

The test asserts the error contains "chain-id: 1". This depends on OrderbookYaml::get_orderbooks_by_chain_id returning an Err with that wording (vs Ok([]) leading to your custom "No orderbooks configured…" message). Please confirm.
If upstream returns Ok([]), adjust the assertion to accept your custom error.
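If the goal is an assertion that passes under either behavior, one option is a tolerant predicate instead of an exact substring match. A minimal sketch, assuming hypothetical error strings and a hypothetical helper name (not the crate's actual test code):

```rust
// Tolerant check: accept either the upstream lookup error or the
// command's own "No orderbooks configured" message, as long as the
// chain id is surfaced. Both strings below are illustrative.
fn mentions_missing_chain(msg: &str) -> bool {
    (msg.contains("chain-id") || msg.contains("No orderbooks configured")) && msg.contains('1')
}

fn main() {
    let upstream = "no orderbooks found for chain-id: 1";
    let custom = "No orderbooks configured for chain id 1";
    assert!(mentions_missing_chain(upstream));
    assert!(mentions_missing_chain(custom));
    println!("both wordings accepted");
}
```

This keeps the test stable even if the error origin shifts between the upstream Err and the custom message.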
crates/cli/src/commands/local_db/sync/data_source.rs (3)
43-61: LGTM! DefaultTokenFetcher implementation is correct and efficient.

The early return for empty input (lines 52-54) is a good optimization. The delegation to fetch_erc20_metadata_concurrent is clean, and the error handling is appropriate for CLI code.
Note: the to_vec() on line 56 is necessary given the function signature in rain_orderbook_common expects an owned Vec<Url>.

63-110: LGTM! Previous review concerns have been properly addressed.

The implementation correctly uses fully qualified syntax (<LocalDb>::fetch_events, <LocalDb>::decode_events, <LocalDb>::decoded_events_to_sql) to disambiguate inherent method calls from trait methods. The rpc_urls implementation (line 108) appropriately delegates through rpc_client() to avoid confusion.
All error handling follows consistent patterns with anyhow, and the prefix handling logic (lines 97-101) is clear and correct.

112-125: LGTM! Test coverage is adequate for the wrapper implementation.

The test correctly verifies the empty-input short-circuit optimization. Since DefaultTokenFetcher is a thin wrapper around fetch_erc20_metadata_concurrent (which has its own tests in rain_orderbook_common), this level of testing is appropriate.

.github/workflows/build-cli-binary.yaml (1)
20-26: Avoid using a private SSH key for checkout/push; prefer GITHUB_TOKEN to reduce secret exposure and simplify auth.

Using ssh-key loads a writable private key into the runner and sets origin to SSH; your push will use that key, while GITHUB_TOKEN remains unused. Prefer HTTPS with GITHUB_TOKEN for pushes; use SSH only if strictly needed for submodules.
Option A (preferred; if submodules are accessible with token):

```diff
   - name: Checkout repository
     uses: actions/checkout@v4
     with:
       fetch-depth: 0
       submodules: recursive
-      ssh-key: ${{ secrets.PUBLISH_PRIVATE_KEY }}
+      persist-credentials: true
+      token: ${{ secrets.GITHUB_TOKEN }}
```

Option B (if SSH is required for private submodules): keep SSH for checkout, but force the push remote to HTTPS with the token (see push step diff below).
```yaml
concurrency:
  group: build-cli-binary-${{ github.ref }}
  cancel-in-progress: false
```
🧹 Nitpick | 🔵 Trivial
Prevent concurrent runs from racing on commits/pushes.
Set cancel-in-progress to true so a new manual trigger cancels the previous one on the same ref, avoiding push conflicts.
```diff
 concurrency:
   group: build-cli-binary-${{ github.ref }}
-  cancel-in-progress: false
+  cancel-in-progress: true
```

🤖 Prompt for AI Agents
.github/workflows/build-cli-binary.yaml around lines 6 to 9: the workflow
concurrency block currently sets cancel-in-progress to false which allows
multiple runs on the same ref to overlap and potentially race on pushes; change
cancel-in-progress to true so a newly-triggered run cancels any in-progress run
for the same group (e.g., group: build-cli-binary-${{ github.ref }}) to avoid
conflicts.
```yaml
    if: github.actor != 'github-actions[bot]'
    runs-on: ubuntu-latest
```
🧹 Nitpick | 🔵 Trivial
The actor guard is redundant.
With workflow_dispatch and no push trigger, commits won’t re-trigger this workflow. You can remove the if to simplify.
```diff
-    if: github.actor != 'github-actions[bot]'
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
    runs-on: ubuntu-latest
```
🤖 Prompt for AI Agents
In .github/workflows/build-cli-binary.yaml around lines 12 to 13, the
conditional "if: github.actor != 'github-actions[bot]'" is redundant given this
workflow only runs via workflow_dispatch (no push triggers); remove that if line
to simplify the workflow file and avoid unnecessary actor checks.
```yaml
permissions:
  id-token: write
  contents: write
env:
```
🧹 Nitpick | 🔵 Trivial
Drop unused id-token permission.
id-token: write isn’t used here. Remove to tighten permissions.
```diff
 permissions:
-  id-token: write
   contents: write
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
permissions:
  contents: write
env:
```
🤖 Prompt for AI Agents
.github/workflows/build-cli-binary.yaml around lines 14 to 17: the permissions
block currently includes an unused "id-token: write" entry; remove the
"id-token: write" line from the permissions so only the required "contents:
write" remains (preserve indentation and the env: block below) to tighten GitHub
Actions permissions.
```yaml
      COMMIT_SHA: ${{ github.sha }}
    steps:
```
🧹 Nitpick | 🔵 Trivial
Remove unused environment variable.
COMMIT_SHA isn’t referenced.
```diff
-    env:
-      COMMIT_SHA: ${{ github.sha }}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```yaml
    steps:
```
🤖 Prompt for AI Agents
In .github/workflows/build-cli-binary.yaml around lines 18 to 19, the workflow
sets an unused environment variable COMMIT_SHA which is never referenced; remove
the COMMIT_SHA: ${{ github.sha }} line from the env/with block to avoid clutter
and potential confusion and commit the workflow change.
```rust
use alloy::primitives::Address;
use anyhow::{anyhow, Result};
use async_trait::async_trait;
use rain_orderbook_common::erc20::TokenInfo;
use rain_orderbook_common::raindex_client::local_db::{
    decode::{DecodedEvent, DecodedEventData},
    token_fetch::fetch_erc20_metadata_concurrent,
    LocalDb,
};
use rain_orderbook_common::rpc_client::LogEntryResponse;
use std::collections::HashMap;
use url::Url;

#[async_trait]
pub(crate) trait SyncDataSource {
    async fn latest_block(&self) -> Result<u64>;
    async fn fetch_events(
        &self,
        orderbook_address: &str,
        start_block: u64,
        end_block: u64,
    ) -> Result<Vec<LogEntryResponse>>;
    fn decode_events(
        &self,
        events: &[LogEntryResponse],
    ) -> Result<Vec<DecodedEventData<DecodedEvent>>>;
    fn events_to_sql(
        &self,
        decoded_events: &[DecodedEventData<DecodedEvent>],
        end_block: u64,
        decimals_by_token: &HashMap<Address, u8>,
        prefix_sql: &str,
    ) -> Result<String>;
    fn rpc_urls(&self) -> &[Url];
}

#[async_trait]
pub(crate) trait TokenMetadataFetcher {
    async fn fetch(&self, rpcs: &[Url], missing: Vec<Address>)
        -> Result<Vec<(Address, TokenInfo)>>;
}
```
🧹 Nitpick | 🔵 Trivial
LGTM! Trait definitions are clean and well-structured.
The SyncDataSource and TokenMetadataFetcher traits provide clear abstraction boundaries for the sync functionality. Imports are minimal and appropriate.
Optional: Consider adding doc comments for maintainability.
While these traits are crate-private, adding doc comments would improve maintainability and help future contributors understand the intended usage and contracts. For example:
```diff
+/// Abstracts blockchain data retrieval and processing for orderbook event syncing.
 #[async_trait]
 pub(crate) trait SyncDataSource {
+    /// Returns the latest block number from the connected RPC endpoint.
     async fn latest_block(&self) -> Result<u64>;
     // ... etc
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Abstracts blockchain data retrieval and processing for orderbook event syncing.
#[async_trait]
pub(crate) trait SyncDataSource {
    /// Returns the latest block number from the connected RPC endpoint.
    async fn latest_block(&self) -> Result<u64>;
    async fn fetch_events(
        &self,
        orderbook_address: &str,
        start_block: u64,
        end_block: u64,
    ) -> Result<Vec<LogEntryResponse>>;
    fn decode_events(
        &self,
        events: &[LogEntryResponse],
    ) -> Result<Vec<DecodedEventData<DecodedEvent>>>;
    fn events_to_sql(
        &self,
        decoded_events: &[DecodedEventData<DecodedEvent>],
        end_block: u64,
        decimals_by_token: &HashMap<Address, u8>,
        prefix_sql: &str,
    ) -> Result<String>;
    fn rpc_urls(&self) -> &[Url];
}
```
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/data_source.rs around lines 1 to 41,
add concise doc comments to the crate-private traits and their methods to
improve maintainability: document the purpose of SyncDataSource and
TokenMetadataFetcher, describe each method’s role, expected parameters (e.g.,
units/types like block numbers, RPC URLs, token addresses), return values and
error conditions, and any important invariants (e.g., whether addresses must be
checksummed or blocks inclusive/exclusive); keep comments short, use /// style
doc comments above each trait and method, and avoid changing existing signatures
or visibility.
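Beyond doc comments, one payoff of these trait boundaries is that tests can inject doubles instead of hitting RPC endpoints. A minimal, synchronous sketch of that idea (the real traits are async and use Address/TokenInfo; the simplified types and names here are hypothetical, not the crate's API):

```rust
use std::collections::HashMap;

// Simplified, synchronous stand-in for the crate's TokenMetadataFetcher
// trait: addresses become Strings and metadata collapses to a decimals byte.
trait TokenMetadataFetcher {
    fn fetch(&self, missing: Vec<String>) -> Vec<(String, u8)>;
}

// A stub for tests: pretends every missing token has 18 decimals.
struct StubFetcher;

impl TokenMetadataFetcher for StubFetcher {
    fn fetch(&self, missing: Vec<String>) -> Vec<(String, u8)> {
        missing.into_iter().map(|addr| (addr, 18)).collect()
    }
}

// A consumer that only knows the trait, so a stub can be wired in.
fn decimals_map(fetcher: &dyn TokenMetadataFetcher, missing: Vec<String>) -> HashMap<String, u8> {
    fetcher.fetch(missing).into_iter().collect()
}

fn main() {
    let map = decimals_map(&StubFetcher, vec!["0xabc".to_string()]);
    assert_eq!(map.get("0xabc"), Some(&18));
    println!("stub fetcher wired through the trait boundary");
}
```

The sync runner can then be exercised end to end without network access, which is presumably why the trait indirection exists in the first place.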
```rust
pub(super) fn default_start_block(last_synced_block: u64, deployment_block: u64) -> u64 {
    if last_synced_block == 0 {
        deployment_block
    } else {
        last_synced_block + 1
    }
}
```
Prevent overflow in default_start_block
last_synced_block + 1 can overflow (wrap in release, panic in debug). Use saturating_add to keep behavior deterministic and let the later checked_add path surface a clear error.
```diff
 pub(super) fn default_start_block(last_synced_block: u64, deployment_block: u64) -> u64 {
     if last_synced_block == 0 {
         deployment_block
     } else {
-        last_synced_block + 1
+        last_synced_block.saturating_add(1)
     }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```rust
pub(super) fn default_start_block(last_synced_block: u64, deployment_block: u64) -> u64 {
    if last_synced_block == 0 {
        deployment_block
    } else {
        last_synced_block.saturating_add(1)
    }
}
```
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/runner/window.rs around lines 98 to
104, the expression last_synced_block + 1 can overflow; replace the plain
addition with last_synced_block.saturating_add(1) so the start block calculation
is deterministic (no wrap/panic) and allow downstream checked_add logic to
surface an explicit error; update the return path to use saturating_add and keep
the rest of the function behavior unchanged.
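The behavioral difference the suggested fix makes can be shown directly. This sketch copies the function in its suggested saturating form (a standalone reproduction, not the merged crate code):

```rust
fn default_start_block(last_synced_block: u64, deployment_block: u64) -> u64 {
    if last_synced_block == 0 {
        deployment_block
    } else {
        // saturating_add clamps at u64::MAX instead of wrapping (release
        // builds) or panicking (debug builds) the way a plain `+ 1` would.
        last_synced_block.saturating_add(1)
    }
}

fn main() {
    // Fresh database: start from the deployment block.
    assert_eq!(default_start_block(0, 12_000_000), 12_000_000);
    // Resume: start one past the last synced block.
    assert_eq!(default_start_block(100, 12_000_000), 101);
    // Pathological input: clamps instead of overflowing.
    assert_eq!(default_start_block(u64::MAX, 0), u64::MAX);
    println!("ok");
}
```

With the clamp in place, a later checked_add in the window computation can surface a clear error rather than the caller observing a wrapped block number.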
@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment: S/M/L

PR Classification Guidelines:
This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)
Characteristics:
Review Effort: Would have taken 5-10 minutes
Examples:

Medium (M)
Characteristics:
Review Effort: Would have taken 15-30 minutes
Examples:

Large (L)
Characteristics:
Review Effort: Would have taken 45+ minutes
Examples:

Additional Factors to Consider
When deciding between sizes, also consider:

Notes:
SIZE=L

🧠 Learnings used
Motivation
This PR exposes a sync command in the orderbook CLI and adds a workflow that builds the CLI binary and pushes it to the repository. The binary will later be used in our remote sync repo to sync the local DB and dump it at regular intervals.
Solution
Checks
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
New Features
Documentation
Chores