Introduce chain id and orderbook address fields to table schema #2186
findolor wants to merge 5 commits into local-db-multi-first from
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI. You can disable this status message by setting the …

Walkthrough

The PR threads chain_id and orderbook_address through the CLI, sync runner, data sources, insert/query layers, and schema. It adds an address field to decoded events, updates SQL generation and storage, migrates tables to composite keys (chain_id, orderbook_address, ...), adjusts views/queries, and simplifies webapp sync status handling.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant UI as UI/Runner
    participant SR as SyncRunner
    participant AP as apply::prepare_sql
    participant ST as Storage(fetch_last_synced)
    participant DS as SyncDataSource
    participant LD as LocalDb (insert/query)
    UI->>SR: run(params: chain_id, orderbook_address, ...)
    SR->>AP: prepare_sql(..., chain_id, orderbook_address, decoded, raw, target_block)
    AP->>ST: fetch_last_synced(db_path, chain_id, orderbook_address)
    ST-->>AP: last_synced_block
    note over AP: Build SQL with context<br/>chain_id, orderbook_address
    AP->>DS: raw_events_to_sql(raw, chain_id, orderbook_address)
    DS->>LD: raw_events_to_sql(raw, chain_id, orderbook_address)
    LD-->>DS: raw_sql
    DS-->>AP: raw_sql
    AP->>DS: events_to_sql(decoded, chain_id, orderbook_address, end_block, decimals, prefix)
    DS->>LD: decoded_events_to_sql(decoded, chain_id, orderbook_address, end_block, decimals, prefix)
    LD-->>DS: events_sql
    DS-->>AP: events_sql
    AP-->>SR: final SQL (prefixed)
    SR-->>UI: Apply/return SQL
```
```mermaid
sequenceDiagram
    autonumber
    participant Q as LocalDbQuery
    participant DB as SQLite (via callbacks)
    Q->>DB: SELECT ... FROM sync_status<br/>WHERE chain_id=? AND orderbook_address=?<br/>ORDER BY updated_at DESC LIMIT 1
    DB-->>Q: row { chain_id, orderbook_address, last_synced_block, updated_at }
    Q->>DB: UPDATE sync_status SET last_synced_block=?, updated_at=?<br/>WHERE chain_id=? AND orderbook_address=?
    DB-->>Q: ok
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
@coderabbitai review

✅ Actions performed
Review triggered.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
packages/webapp/src/routes/db-playground/+page.svelte (1)
10-10: Consider multi-chain support for the playground.

The playground hardcodes chain ID 42161 (Arbitrum One) on lines 10, 139, and 160. While this may be intentional for simplicity, it prevents demonstrating or testing the multi-chain functionality introduced by this PR. Consider adding a chain selector UI or environment variable to support multiple chains in the playground.

crates/common/src/raindex_client/local_db/query/create_tables/query.sql (3)
223-236: interpreter_store_sets PK likely too narrow; prevents multiple sets per event

PK is only (chain_id, orderbook_address, transaction_hash, log_index). If a single event writes multiple (namespace, key) pairs, inserts will conflict. Make PK include store_address, namespace, key.

Apply:

```diff
 CREATE TABLE IF NOT EXISTS interpreter_store_sets (
     chain_id INTEGER NOT NULL,
     orderbook_address TEXT NOT NULL,
     store_address TEXT NOT NULL,
     transaction_hash TEXT NOT NULL,
     log_index INTEGER NOT NULL,
     block_number INTEGER NOT NULL,
     block_timestamp INTEGER NOT NULL,
     namespace TEXT NOT NULL,
     key TEXT NOT NULL,
     value TEXT NOT NULL,
-    PRIMARY KEY (chain_id, orderbook_address, transaction_hash, log_index)
+    PRIMARY KEY (
+        chain_id,
+        orderbook_address,
+        transaction_hash,
+        log_index,
+        store_address,
+        namespace,
+        key
+    )
 );
```
64-85: Add composite index for common order lookup by (owner, nonce)

Searching for a specific order typically filters by owner+nonce. Add an index to avoid scans.

```diff
 CREATE INDEX IF NOT EXISTS idx_order_events_owner ON order_events(chain_id, orderbook_address, order_owner);
+CREATE INDEX IF NOT EXISTS idx_order_events_owner_nonce ON order_events(chain_id, orderbook_address, order_owner, order_nonce);
```
1-240: Ensure SQLite foreign keys are enforced at runtime

SQLite requires `PRAGMA foreign_keys = ON` per connection. Without it, the FKs in this schema won't enforce. Please confirm it's enabled in the client.
crates/cli/src/commands/local_db/decoded_events_to_sql.rs (2)
102-111: Ensure output directory exists before writing

Prevents failures when a custom path includes non-existent dirs.

```diff
 let output_path = self.output_file.map(PathBuf::from).unwrap_or_else(|| {
     let input_path = Path::new(&self.input_file);
     input_path
         .parent()
         .map(|dir| dir.join("events.sql"))
         .unwrap_or_else(|| PathBuf::from("events.sql"))
 });

+if let Some(parent) = output_path.parent() {
+    std::fs::create_dir_all(parent).with_context(|| {
+        format!("Failed to create output directory: {}", parent.display())
+    })?;
+}
+
 write(&output_path, sql_statements)
     .with_context(|| format!("Failed to write output file: {}", output_path.display()))?;
```
241-241: Remove multi_thread flavor; not using LocalEvm here

This test doesn't use LocalEvm; the default tokio runtime suffices. Minor speed/stability gain. Based on learnings.

```diff
-#[tokio::test(flavor = "multi_thread", worker_threads = 2)]
+#[tokio::test]
```

crates/cli/src/commands/local_db/sync/runner/window.rs (1)
98-104: Avoid potential overflow in default_start_block

Use saturating_add(1) to prevent panic/wrap in extreme edge cases.

```diff
 pub(super) fn default_start_block(last_synced_block: u64, deployment_block: u64) -> u64 {
     if last_synced_block == 0 {
         deployment_block
     } else {
-        last_synced_block + 1
+        last_synced_block.saturating_add(1)
     }
 }
```

crates/cli/src/commands/local_db/sync/runner/mod.rs (1)
283-303: Optional: assert forwarding of chain_id/orderbook in tests

Consider capturing and asserting the chain_id and orderbook_address passed to events_to_sql/raw_events_to_sql in TestDataSource to ensure the runner forwards them.
Also applies to: 319-329
crates/cli/src/commands/local_db/sync/runner/apply.rs (1)
52-64: Reduce argument count with a context struct.

prepare_sql has grown; consider a small context struct to carry chain_id and orderbook_address (and possibly target_block) to keep the signature stable as it evolves:

```rust
pub struct SyncCtx {
    pub chain_id: u32,
    pub orderbook: Address,
    pub target_block: u64,
}
```

This will also simplify test scaffolding and future additions.
crates/common/src/raindex_client/local_db/sync.rs (1)
166-173: Scope existing store addresses by chain/orderbook to reduce noise.

fetch_store_addresses returns all rows; we then fetch store set events against the current chain RPCs. While harmless, filtering by (chain_id, orderbook_address) in the query would avoid mixing addresses from other tenants and trims RPC calls.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (23)
- crates/cli/src/commands/local_db/decoded_events_to_sql.rs (12 hunks)
- crates/cli/src/commands/local_db/sync/data_source.rs (3 hunks)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (12 hunks)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (10 hunks)
- crates/cli/src/commands/local_db/sync/runner/window.rs (5 hunks)
- crates/cli/src/commands/local_db/sync/storage.rs (4 hunks)
- crates/cli/src/commands/local_db/sync/token.rs (1 hunks)
- crates/cli/src/commands/local_db/tokens_fetch.rs (2 hunks)
- crates/common/src/raindex_client/local_db/decode.rs (2 hunks)
- crates/common/src/raindex_client/local_db/insert.rs (33 hunks)
- crates/common/src/raindex_client/local_db/mod.rs (6 hunks)
- crates/common/src/raindex_client/local_db/query/create_tables/mod.rs (0 hunks)
- crates/common/src/raindex_client/local_db/query/create_tables/query.sql (10 hunks)
- crates/common/src/raindex_client/local_db/query/fetch_last_synced_block/mod.rs (6 hunks)
- crates/common/src/raindex_client/local_db/query/fetch_last_synced_block/query.sql (1 hunks)
- crates/common/src/raindex_client/local_db/query/update_last_synced_block/mod.rs (2 hunks)
- crates/common/src/raindex_client/local_db/query/update_last_synced_block/query.sql (1 hunks)
- crates/common/src/raindex_client/local_db/sync.rs (12 hunks)
- crates/common/src/raindex_client/local_db/tokens.rs (1 hunks)
- crates/common/src/raindex_client/local_db/views/vault_deltas.sql (1 hunks)
- crates/settings/src/remote/manifest.rs (2 hunks)
- packages/webapp/src/lib/services/startLocalDbSync.ts (2 hunks)
- packages/webapp/src/routes/db-playground/+page.svelte (1 hunks)
💤 Files with no reviewable changes (1)
- crates/common/src/raindex_client/local_db/query/create_tables/mod.rs
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-05-20T10:20:08.206Z
Learnt from: 0xgleb
PR: rainlanguage/rain.orderbook#1859
File: crates/quote/src/quote_debug.rs:472-492
Timestamp: 2025-05-20T10:20:08.206Z
Learning: In the Rain Orderbook codebase, the `#[tokio::test(flavor = "multi_thread")]` annotation is specifically needed for tests that use `LocalEvm`, not just for consistency across all async tests.
Applied to files:
crates/cli/src/commands/local_db/decoded_events_to_sql.rs
crates/cli/src/commands/local_db/tokens_fetch.rs
📚 Learning: 2025-07-18T10:31:05.498Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2008
File: crates/common/src/raindex_client/add_orders.rs:85-86
Timestamp: 2025-07-18T10:31:05.498Z
Learning: In the rainlanguage/rain.orderbook codebase, cfg-guarded imports like `#[cfg(not(target_family = "wasm"))] use super::*;` in test modules are sometimes needed to fix compiler warnings, even when similar imports exist in nested sub-modules. These should not be removed as they serve a specific purpose in the compilation process.
Applied to files:
crates/cli/src/commands/local_db/decoded_events_to_sql.rs
🔇 Additional comments (28)
crates/cli/src/commands/local_db/sync/token.rs (1)
128-128: LGTM! Test data correctly updated for new DecodedEventData schema.

The address field (orderbook address 0xbb...20) is appropriately distinct from the token address (0xaa...20), confirming they represent different entities. The test still validates its intended behavior: existing token metadata is reused without re-fetching.

packages/webapp/src/routes/db-playground/+page.svelte (3)
38-38: Clarify the commented-out auto-start behavior.

The commented-out `startAutoSync()` call leaves the intent unclear. If auto-sync should not start on mount, remove the comment and the line entirely. If it should start automatically, uncomment it. If this is a temporary debugging state, add a TODO or explanation.
105-151: LGTM! Simplified sync logic.

The refactored auto-sync implementation correctly removes the client-side state tracking (`isSyncing`, `lastSyncedBlock`, `lastSyncTime`) in favor of relying on the callback-driven `syncStatus` message. The 5-second interval and `isLoading` check provide sufficient protection against overlapping operations.
258-273: LGTM! Clean status display.

The UI simplification effectively presents the auto-sync state with clear visual indicators (animated dot, color-coded text) and conditionally displays the `syncStatus` message from the sync callback. This aligns well with the removal of client-side state tracking.
26-34: Ignore dead-code warning for `localDbClientResult.value`

The `getLocalDbClient` call is intentionally used to gate unsupported chains; its `value` isn't needed downstream.

Likely an incorrect or invalid review comment.
crates/common/src/raindex_client/local_db/query/create_tables/query.sql (2)
11-26: Confirm block_timestamp nullability intent

raw_events.block_timestamp is nullable; the others are NOT NULL. If all rows originate from finalized blocks, consider NOT NULL for consistency. If raw ingest can be incomplete, the current choice is fine; just confirm.

Also applies to: 158-181, 183-211
3-9: Add versioned migrations for schema changes

Using `CREATE TABLE IF NOT EXISTS` only impacts fresh installs; existing databases won't receive new columns or updated primary keys. Introduce a migration mechanism (e.g. manage `PRAGMA user_version`, write `ALTER TABLE`/backfill scripts, or recreate-and-copy tables) alongside your create-tables script to safely upgrade existing schemas.

⛔ Skipped due to learnings

Learnt from: findolor
PR: rainlanguage/rain.orderbook#2145
File: crates/common/src/raindex_client/local_db/query/create_tables/query.sql:71-72
Timestamp: 2025-10-06T11:44:07.888Z
Learning: The local DB feature in the rain.orderbook codebase is not live yet (as of PR #2145), so schema migrations for existing databases are not required when modifying table structures in `crates/common/src/raindex_client/local_db/query/create_tables/query.sql`.

crates/settings/src/remote/manifest.rs (2)
356-362: Precise YAML field error assertion looks good

Matching on ManifestError::Yaml(... Missing("chains") at "manifest") is correct and future-proof.
192-192: Import added is necessary for the new match

Brings FieldErrorKind into scope for the assertion. LGTM.
crates/common/src/raindex_client/local_db/tokens.rs (1)
88-89: Test helper update looks correct

Populating the new address field with Address::ZERO keeps tests compiling and focused.
crates/common/src/raindex_client/local_db/query/fetch_last_synced_block/query.sql (1)
1-6: Remove LOWER() refactor; unnecessary

All write and read paths normalize `orderbook_address` to lower-case hex, so case-sensitive equality already matches.

Likely an incorrect or invalid review comment.
crates/cli/src/commands/local_db/decoded_events_to_sql.rs (2)
29-31: Good addition: explicit orderbook address CLI arg

Clear, discoverable flag and help text.
43-45: Robust address parsing with context

Nice use of Address::from_str and with_context for actionable errors.
crates/cli/src/commands/local_db/tokens_fetch.rs (1)
89-115: LGTM: Address type usage in tests is consistent

Updated fields (address, sender) and imports align with the new DecodedEventData shape.
crates/cli/src/commands/local_db/sync/runner/window.rs (1)
41-41: LGTM: fetch_last_synced now keyed by chain_id + orderbook_address

Matches the composite-key schema change.
crates/common/src/raindex_client/local_db/query/update_last_synced_block/mod.rs (1)
42-45: WASM test covers the new composite-key substitutions

Good assertions on the replaced values and the absence of placeholders.
crates/common/src/raindex_client/local_db/mod.rs (2)
20-20: LGTM: ManifestError wiring and readable message

Import, error variant, and to_readable_msg arm look correct and consistent.
Also applies to: 111-113, 162-164
237-254: LGTM: chain_id/orderbook forwarded and errors mapped

New params are threaded through to the insert layer; InsertError mapping is preserved.
Also applies to: 259-267
crates/cli/src/commands/local_db/sync/data_source.rs (1)
38-50: LGTM: SyncDataSource API and LocalDb impl updated

New chain_id/orderbook params are accepted and forwarded correctly. Error wrapping unchanged.
Also applies to: 123-148, 150-158
crates/cli/src/commands/local_db/sync/runner/mod.rs (2)
136-136: LGTM: pass orderbook_address into prepare_sql

Correctly threads the new parameter into the apply phase.
202-204: LGTM: tests updated for composite key (chain_id, orderbook_address)

RAW_SQL_STUB and seed SQL now include both fields; consistent with the schema.
Also applies to: 512-516
crates/common/src/raindex_client/local_db/query/fetch_last_synced_block/mod.rs (2)
8-15: LGTM: SyncStatusResponse fields extended

New chain_id and orderbook_address fields align with the composite key design.
26-31: No changes needed for placeholder quoting

The SQL already wraps `?orderbook_address` in single quotes; with escaped input inserted, it's safe.

crates/cli/src/commands/local_db/sync/runner/apply.rs (2)
74-78: Good: validate and normalize orderbook address once.

Parsing the address up front and threading Address through the SQL generators is correct and reduces downstream parsing ambiguity.
122-123: Add a negative test for invalid orderbook address.

Cover the Address::from_str error path to lock in diagnostics and avoid regressions.
I can add a test like:
```rust
#[tokio::test]
async fn prepare_sql_rejects_invalid_address() {
    // ...build minimal mocks...
    let err = prepare_sql(&ds, &fetcher, "db", &[], 1, "not_an_address", &[], &[], 0)
        .await
        .unwrap_err();
    assert!(format!("{err}").contains("Invalid orderbook address"));
}
```

crates/common/src/raindex_client/local_db/sync.rs (2)
56-63: Correct scoping for last synced block.

Fetching last_synced_block by (chain_id, orderbook_address) aligns with the new composite keys and prevents cross-orderbook contamination.
195-201: Good: chain/orderbook threaded into SQL generation.

Passing (chain_id, orderbook address) into raw_events_to_sql and decoded_events_to_sql matches the schema changes and keeps inserts consistent.
Also applies to: 216-228
crates/common/src/raindex_client/local_db/views/vault_deltas.sql (1)
38-43: Remove incorrect TAKE delta swap suggestion.
The current view aligns with engine semantics: input diffs use `taker_output`, output diffs use negated `taker_input`.

Likely an incorrect or invalid review comment.
```rust
let sql_statements = decoded_events_to_sql(
    &decoded_events,
    self.chain_id,
    orderbook_address,
    self.end_block,
    &decimals_by_token,
    None,
)
.map_err(|e| anyhow::anyhow!("Failed to generate SQL: {}", e))?;
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain
Unify chain_id width (u32 vs u64) to avoid mismatches
Here chain_id is u32, but other paths (e.g., update_last_synced_block) use u64. This can cause subtle mismatches across DB queries/rows and JSON fixtures in multi-network scenarios. Prefer taking u64 in CLI, then safely downcast only where strictly required.
Proposed minimal change:
- Accept u64 in CLI.
- Downcast to u32 only for decoded_events_to_sql with an explicit bounds check.
Run to inventory chain_id types/signatures across the repo:
Apply this diff locally:

```diff
@@
-    #[arg(long, help = "Chain ID for erc20_tokens upserts")]
-    pub chain_id: u32,
+    #[arg(long, help = "Chain ID for erc20_tokens upserts")]
+    pub chain_id: u64,
@@
-    let sql_statements = decoded_events_to_sql(
+    let chain_id_u32 = u32::try_from(self.chain_id)
+        .map_err(|_| anyhow!("--chain-id {} does not fit into u32", self.chain_id))?;
+
+    let sql_statements = decoded_events_to_sql(
         &decoded_events,
-        self.chain_id,
+        chain_id_u32,
         orderbook_address,
         self.end_block,
         &decimals_by_token,
         None,
     )
```

🏁 Script executed:
```shell
#!/bin/bash
set -e
echo "Fields and params typed as chain_id (u32/u64):"
rg -nP -C2 '\bchain_id\s*:\s*(u32|u64)\b'
echo
echo "Function signatures carrying chain_id:"
rg -nP -C3 'fn\s+\w+\([^)]*chain_id\s*:\s*(u32|u64)[^)]*\)'
echo
echo "update_last_synced_block signature occurrences:"
rg -nP -C2 'update_last_synced_block\([^\)]*chain_id\s*:\s*u64'
```

Length of output: 51108
🏁 Script executed:

```shell
#!/bin/bash
set -e
echo "chain_id typed as u64 across repo with context:"
rg -nP '\bchain_id\s*:\s*u64\b' -C2
```

Length of output: 1035
Unify chain_id width across CLI and common crates

- Change every `chain_id: u32` in `crates/cli/src/commands/local_db/**/*.rs` to `u64` so CLI arguments match `update_last_synced_block` and other u64-based APIs.
- Downcast to `u32` only for calls into `decoded_events_to_sql` (and other DB-insert helpers) with an explicit `u32::try_from(…)` check.
- Verify that all invocations of `update_last_synced_block` and related methods now consistently take `u64`.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/decoded_events_to_sql.rs around lines 92 to
101, the CLI still uses chain_id: u32 while common APIs use u64; update the
CLI's chain_id fields and function signatures in this file (and other files
under crates/cli/src/commands/local_db/) from u32 to u64 so CLI args match
update_last_synced_block and other u64 APIs, and when calling
decoded_events_to_sql (and other DB-insert helpers that expect u32) perform an
explicit downcast using u32::try_from(self.chain_id).map_err(|e| anyhow!(...))?
to handle overflow, then pass the u32 to those functions; also audit and ensure
all calls to update_last_synced_block and related methods now accept and pass
u64 consistently.
updated it to be u32
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
set -e
echo "=== Checking chain_id types in CLI commands ==="
rg -nP '\bchain_id\s*:\s*(u32|u64)\b' crates/cli/src/commands/local_db/ -A1 -B1
echo ""
echo "=== Checking function signatures with chain_id in CLI ==="
rg -nP 'fn\s+\w+\([^)]*chain_id\s*:\s*(u32|u64)' crates/cli/src/commands/local_db/ -A2
echo ""
echo "=== Checking update_last_synced_block calls ==="
rg -nP 'update_last_synced_block' crates/cli/src/commands/local_db/ -A1 -B1
```

Length of output: 5145
@findolor, thanks for confirming! I can see that chain_id is now consistently u32 across all CLI commands and aligns with the common crate APIs like decoded_events_to_sql, LocalDb::new, and ManifestEntry::entry which all expect u32. The implementation looks good. 👍
🧬 Code Graph Analysis Results
crates/common/src/raindex_client/local_db/mod.rs
<relevant_snippet>
234-254
pub fn decoded_events_to_sql(
events: &[DecodedEventData],
chain_id: u32,
orderbook_address: Address,
end_block: u64,
decimals_by_token: &HashMap<Address, u8>,
prefix_sql: Option<&str>,
) -> Result<String, InsertError> {
let mut sql = String::new();
sql.push_str("BEGIN TRANSACTION;\n\n");
if let Some(prefix) = prefix_sql {
if !prefix.is_empty() {
sql.push_str(prefix);
if !prefix.ends_with('\n') {
sql.push('\n');
}
}
}
for event in events {
match &event.decoded_data {
DecodedEvent::DepositV2(decoded) => {
let context = event_context(event, chain_id, orderbook_address)?;
sql.push_str(&generate_deposit_sql(
&context,
decoded.as_ref(),
decimals_by_token,
)?);
}
// ... (rest of match arms omitted for brevity)
DecodedEvent::Unknown(decoded) => {
eprintln!(
"Warning: Unknown event type for transaction {}: {}",
event.transaction_hash, decoded.note
);
}
}
}
sql.push_str(&format!(
"\nINSERT INTO sync_status (chain_id, orderbook_address, last_synced_block, updated_at) VALUES ({chain_id}, {orderbook}, {last_block}, CURRENT_TIMESTAMP) ON CONFLICT(chain_id, orderbook_address) DO UPDATE SET last_synced_block = excluded.last_synced_block, updated_at = CURRENT_TIMESTAMP;\n",
chain_id = chain_id,
orderbook = orderbook_address,
last_block = end_block
));
sql.push_str("\nCOMMIT;\n");
Ok(sql)
}
</relevant_snippet>
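The sync_status upsert at the end of that snippet can be reduced to a standalone sketch. This is hypothetical and not the repo's code; note that, unlike the snippet above (which interpolates `{orderbook}` bare), the address here is rendered as a quoted TEXT literal, following the quoting concern raised elsewhere in this review:

```rust
// Hypothetical, reduced version of the sync_status upsert builder.
// The composite conflict target (chain_id, orderbook_address) is what lets
// several orderbooks on several chains share one table.
fn sync_status_upsert(chain_id: u32, orderbook_address: &str, last_block: u64) -> String {
    format!(
        "INSERT INTO sync_status (chain_id, orderbook_address, last_synced_block, updated_at) \
         VALUES ({chain_id}, '{orderbook_address}', {last_block}, CURRENT_TIMESTAMP) \
         ON CONFLICT(chain_id, orderbook_address) DO UPDATE SET \
         last_synced_block = excluded.last_synced_block, updated_at = CURRENT_TIMESTAMP;"
    )
}

fn main() {
    let sql = sync_status_upsert(1, "0xabc", 42);
    assert!(sql.contains("ON CONFLICT(chain_id, orderbook_address)"));
    assert!(sql.contains("VALUES (1, '0xabc', 42, CURRENT_TIMESTAMP)"));
    println!("ok");
}
```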
crates/settings/src/remote/manifest.rs
<relevant_snippet>
62-66
pub fn entry(&self, chain_id: u32) -> Result<&ManifestEntry, ManifestError> {
self.chains
.get(&chain_id)
.ok_or(ManifestError::ChainNotFound { chain_id })
}
</relevant_snippet>
crates/common/src/raindex_client/local_db/mod.rs
<relevant_snippet>
202-204
pub fn new(chain_id: u32, api_token: String) -> Result<Self, LocalDbError> {
Self::new_with_hyper_rpc(chain_id, api_token)
}
</relevant_snippet>
🧠 Learnings used
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1956
File: crates/common/src/fuzz/mod.rs:64-64
Timestamp: 2025-07-04T09:02:57.301Z
Learning: In rainlanguage/rain.orderbook, user findolor prefers to limit type consistency changes to only the parts directly related to the current work scope. For example, when updating chain_id fields from u64 to u32 in fuzz-related code, unrelated files like tauri-app wallet commands can remain as u64 if they serve different purposes and aren't part of the current changes.
```rust
pub(crate) fn fetch_last_synced(
    db_path: &str,
    chain_id: u32,
    orderbook_address: &str,
) -> Result<u64> {
    let escaped_address = orderbook_address.replace('\'', "''");
    let sql = SYNC_STATUS_QUERY
        .replace("?chain_id", &chain_id.to_string())
        .replace("?orderbook_address", &escaped_address);

    let rows: Vec<SyncStatusRow> = sqlite_query_json(db_path, &sql)?;
    Ok(rows.first().map(|row| row.last_synced_block).unwrap_or(0))
}
```
🛠️ Refactor suggestion | 🟠 Major
Normalize and sanitize orderbook_address before SQL interpolation
Avoid case-sensitive misses and keep escaping centralized.
Apply:

```diff
 pub(crate) fn fetch_last_synced(
     db_path: &str,
     chain_id: u32,
     orderbook_address: &str,
 ) -> Result<u64> {
-    let escaped_address = orderbook_address.replace('\'', "''");
+    let normalized_address = orderbook_address.trim().to_ascii_lowercase();
+    let escaped_address = normalized_address.replace('\'', "''");
     let sql = SYNC_STATUS_QUERY
         .replace("?chain_id", &chain_id.to_string())
         .replace("?orderbook_address", &escaped_address);
     let rows: Vec<SyncStatusRow> = sqlite_query_json(db_path, &sql)?;
     Ok(rows.first().map(|row| row.last_synced_block).unwrap_or(0))
 }
```

🤖 Prompt for AI Agents
crates/cli/src/commands/local_db/sync/storage.rs lines 32-43: the
orderbook_address should be normalized and sanitized before interpolating into
the SQL to avoid case-sensitive misses and duplicate ad-hoc escaping; convert
the address to a canonical form (trim and lowercase), then pass it through the
centralized escaping function (or use the existing helper for SQL literal
escaping) and use that escaped, normalized value in the SQL.replace calls (or
better yet switch to a parameterized query API if available) so all escaping is
centralized and case normalization prevents mismatches.
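The normalize-then-escape ordering the prompt describes can be shown with a standalone, std-only sketch (the helper name is hypothetical; the real code keeps these steps inline):

```rust
// Hypothetical helper: canonicalize an address, then escape it for a SQLite
// string literal. Normalization (trim + ASCII lowercase) happens first so that
// equality in the WHERE clause is effectively case-insensitive without LOWER();
// escaping (doubling single quotes) happens last, on the canonical form.
fn normalize_and_escape(orderbook_address: &str) -> String {
    orderbook_address
        .trim()
        .to_ascii_lowercase()
        .replace('\'', "''")
}

fn main() {
    assert_eq!(normalize_and_escape(" 0xABcD12 "), "0xabcd12");
    // A stray quote is doubled, not left to break the literal.
    assert_eq!(normalize_and_escape("o'brien"), "o''brien");
    println!("ok");
}
```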
this is unrelated. the data we use is already normalized and sanitized
@findolor, understood. Thank you for clarifying that the orderbook address is already normalized and sanitized upstream before reaching this function. I'll note that and move on.
```rust
pub address: Address,
pub decoded_data: T,
```
🧹 Nitpick | 🔵 Trivial
New address field is useful; align error semantics or handle empty addresses

Current parsing will fail the whole batch on an invalid event.address with a HexDecode error, while store-set uses AbiDecode with context. Consider aligning.

Replace:

```rust
address: Address::from_str(&event.address)?,
```

with:

```rust
address: Address::from_str(&event.address)
    .map_err(|e| DecodeError::AbiDecode(format!("Invalid log address {}: {}", event.address, e)))?,
```

Also verify upstream guarantees that address is always non-empty and valid; otherwise decide whether to skip such events or default to Address::ZERO.
Also applies to: 166-167
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/local_db/decode.rs around lines 84-85 (and
similarly at 166-167), parsing event.address currently propagates the raw
HexDecode error and can fail the whole batch; change the logic to map parsing
failures into the common DecodeError::AbiDecode with a clear message (e.g.,
"Invalid log address <address>: <err>") and return that mapped error, and
additionally verify/address upstream guarantees about empty addresses — if
addresses may be empty decide to skip the event or substitute Address::ZERO
before parsing and document that behavior.
@findolor, understood. I'll skip this suggestion.
```rust
"INSERT INTO deposits (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, sender, token, vault_id, deposit_amount, deposit_amount_uint256) VALUES ({}, {}, {}, {}, '{}', {}, '{}', '{}', '{}', '{}', '{}');\n",
context.chain_id,
context.orderbook_address,
context.block_number,
```
Critical: orderbook_address is not quoted in many INSERTs (SQL correctness/integrity risk)
orderbook_address is inserted without quotes in multiple statements (deposits, withdrawals, order_events, take_orders, contexts, clear_v3, after_clear_v2, meta_events, interpreter_store_sets, order_ios, sync_status). This is inconsistent with raw_events (which quotes/escapes) and likely breaks with TEXT columns and ON CONFLICT; SQLite may misparse 0x… as a numeric literal or overflow.
Quote and hex-encode the address consistently. Example fixes below.
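The fix reduces to one rule: render the address as a quoted TEXT literal, never bare. A minimal standalone sketch of that rule (helper name hypothetical; the diff below uses `hex::encode_prefixed` from alloy on the typed `Address` instead of a string):

```rust
// Hypothetical helper: render an address string as a quoted SQLite TEXT literal.
// Unquoted, SQLite may try to parse 0x... as a hex integer literal, which
// either errors or overflows for 20-byte addresses; quoting keeps it TEXT and
// keeps ON CONFLICT on the composite key working.
fn sql_address_literal(address: &str) -> String {
    format!(
        "'{}'",
        address.trim().to_ascii_lowercase().replace('\'', "''")
    )
}

fn main() {
    assert_eq!(sql_address_literal("0xABCD"), "'0xabcd'");
    let insert = format!(
        "INSERT INTO deposits (chain_id, orderbook_address) VALUES ({}, {});",
        42161,
        sql_address_literal("0xABCD")
    );
    assert_eq!(
        insert,
        "INSERT INTO deposits (chain_id, orderbook_address) VALUES (42161, '0xabcd');"
    );
    println!("ok");
}
```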
```diff
@@
- Ok(format!(
- "INSERT INTO deposits (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, sender, token, vault_id, deposit_amount, deposit_amount_uint256) VALUES ({}, {}, {}, {}, '{}', {}, '{}', '{}', '{}', '{}', '{}');\n",
- context.chain_id,
- context.orderbook_address,
+ Ok(format!(
+ "INSERT INTO deposits (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, sender, token, vault_id, deposit_amount, deposit_amount_uint256) VALUES ({}, '{}', {}, {}, '{}', {}, '{}', '{}', '{}', '{}', '{}');\n",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
@@
- Ok(format!(
- "INSERT INTO withdrawals (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, sender, token, vault_id, target_amount, withdraw_amount, withdraw_amount_uint256) VALUES ({}, {}, {}, {}, '{}', {}, '{}', '{}', '{}', '{}', '{}', '{}');\n",
- context.chain_id,
- context.orderbook_address,
+ Ok(format!(
+ "INSERT INTO withdrawals (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, sender, token, vault_id, target_amount, withdraw_amount, withdraw_amount_uint256) VALUES ({}, '{}', {}, {}, '{}', {}, '{}', '{}', '{}', '{}', '{}', '{}');\n",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO order_events (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, event_type, sender, interpreter_address, store_address, order_hash, order_owner, order_nonce, order_bytes) VALUES ({}, {}, {}, {}, '{}', {}, 'AddOrderV3', '{}', '{}', '{}', '{}', '{}', '{}', '{}');\n",
- context.chain_id,
- context.orderbook_address,
+ sql.push_str(&format!(
+ "INSERT INTO order_events (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, event_type, sender, interpreter_address, store_address, order_hash, order_owner, order_nonce, order_bytes) VALUES ({}, '{}', {}, {}, '{}', {}, 'AddOrderV3', '{}', '{}', '{}', '{}', '{}', '{}', '{}');\n",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO order_events (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, event_type, sender, interpreter_address, store_address, order_hash, order_owner, order_nonce, order_bytes) VALUES ({}, {}, {}, {}, '{}', {}, 'RemoveOrderV3', '{}', '{}', '{}', '{}', '{}', '{}', '{}');\n",
- context.chain_id,
- context.orderbook_address,
+ sql.push_str(&format!(
+ "INSERT INTO order_events (chain_id, orderbook_address, block_number, block_timestamp, transaction_hash, log_index, event_type, sender, interpreter_address, store_address, order_hash, order_owner, order_nonce, order_bytes) VALUES ({}, '{}', {}, {}, '{}', {}, 'RemoveOrderV3', '{}', '{}', '{}', '{}', '{}', '{}', '{}');\n",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO take_orders (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, order_owner, order_nonce, input_io_index, output_io_index, taker_input, taker_output) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{owner}', '{nonce}', {input_idx}, {output_idx}, '{taker_input}', '{taker_output}');\n",
+ sql.push_str(&format!(
+ "INSERT INTO take_orders (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, order_owner, order_nonce, input_io_index, output_io_index, taker_input, taker_output) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{owner}', '{nonce}', {input_idx}, {output_idx}, '{taker_input}', '{taker_output}');\n",
 chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO take_order_contexts (chain_id, orderbook_address, transaction_hash, log_index, context_index, context_value) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {context_index}, '{context_value}');\n",
+ sql.push_str(&format!(
+ "INSERT INTO take_order_contexts (chain_id, orderbook_address, transaction_hash, log_index, context_index, context_value) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {context_index}, '{context_value}');\n",
 chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO context_values (chain_id, orderbook_address, transaction_hash, log_index, context_index, value_index, value) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {context_index}, {value_index}, '{value_hex}');\n",
+ sql.push_str(&format!(
+ "INSERT INTO context_values (chain_id, orderbook_address, transaction_hash, log_index, context_index, value_index, value) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {context_index}, {value_index}, '{value_hex}');\n",
 chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- Ok(format!(
- "INSERT INTO clear_v3_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, alice_order_hash, alice_order_owner, alice_input_io_index, alice_output_io_index, alice_bounty_vault_id, alice_input_vault_id, alice_output_vault_id, bob_order_hash, bob_order_owner, bob_input_io_index, bob_output_io_index, bob_bounty_vault_id, bob_input_vault_id, bob_output_vault_id) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{alice_hash}', '{alice_owner}', {alice_input_idx}, {alice_output_idx}, '{alice_bounty}', '{alice_input_vault}', '{alice_output_vault}', '{bob_hash}', '{bob_owner}', {bob_input_idx}, {bob_output_idx}, '{bob_bounty}', '{bob_input_vault}', '{bob_output_vault}');\n",
```
+ Ok(format!(
+ "INSERT INTO clear_v3_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, alice_order_hash, alice_order_owner, alice_input_io_index, alice_output_io_index, alice_bounty_vault_id, alice_input_vault_id, alice_output_vault_id, bob_order_hash, bob_order_owner, bob_input_io_index, bob_output_io_index, bob_bounty_vault_id, bob_input_vault_id, bob_output_vault_id) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{alice_hash}', '{alice_owner}', {alice_input_idx}, {alice_output_idx}, '{alice_bounty}', '{alice_input_vault}', '{alice_output_vault}', '{bob_hash}', '{bob_owner}', {bob_input_idx}, {bob_output_idx}, '{bob_bounty}', '{bob_input_vault}', '{bob_output_vault}');\n",
chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- Ok(format!(
- "INSERT INTO after_clear_v2_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, alice_input, alice_output, bob_input, bob_output) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{alice_input}', '{alice_output}', '{bob_input}', '{bob_output}');\n",
+ Ok(format!(
+ "INSERT INTO after_clear_v2_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, alice_input, alice_output, bob_input, bob_output) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{alice_input}', '{alice_output}', '{bob_input}', '{bob_output}');\n",
chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- Ok(format!(
- "INSERT INTO meta_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, subject, meta) VALUES ({chain_id}, {orderbook}, '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{subject}', '{meta}');\n",
+ Ok(format!(
+ "INSERT INTO meta_events (chain_id, orderbook_address, transaction_hash, log_index, block_number, block_timestamp, sender, subject, meta) VALUES ({chain_id}, '{orderbook}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{sender}', '{subject}', '{meta}');\n",
chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- sql.push_str(&format!(
- "INSERT INTO interpreter_store_sets (chain_id, orderbook_address, store_address, transaction_hash, log_index, block_number, block_timestamp, namespace, key, value) VALUES ({chain_id}, {orderbook}, '{store_address}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{namespace}', '{key}', '{value}') ON CONFLICT(chain_id, orderbook_address, transaction_hash, log_index) DO UPDATE SET store_address = excluded.store_address, block_number = excluded.block_number, block_timestamp = excluded.block_timestamp, namespace = excluded.namespace, key = excluded.key, value = excluded.value;\n",
+ sql.push_str(&format!(
+ "INSERT INTO interpreter_store_sets (chain_id, orderbook_address, store_address, transaction_hash, log_index, block_number, block_timestamp, namespace, key, value) VALUES ({chain_id}, '{orderbook}', '{store_address}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{namespace}', '{key}', '{value}') ON CONFLICT(chain_id, orderbook_address, transaction_hash, log_index) DO UPDATE SET store_address = excluded.store_address, block_number = excluded.block_number, block_timestamp = excluded.block_timestamp, namespace = excluded.namespace, key = excluded.key, value = excluded.value;\n",
chain_id = context.chain_id,
- orderbook = context.orderbook_address,
+ orderbook = hex::encode_prefixed(context.orderbook_address),
@@
- rows.push(format!(
- "({}, {}, '{}', {}, {}, 'input', '{}', '{}')",
- context.chain_id,
- context.orderbook_address,
+ rows.push(format!(
+ "({}, '{}', '{}', {}, {}, 'input', '{}', '{}')",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
context.transaction_hash,
@@
- rows.push(format!(
- "({}, {}, '{}', {}, {}, 'output', '{}', '{}')",
- context.chain_id,
- context.orderbook_address,
+ rows.push(format!(
+ "({}, '{}', '{}', {}, {}, 'output', '{}', '{}')",
+ context.chain_id,
+ hex::encode_prefixed(context.orderbook_address),
context.transaction_hash,
@@
- sql.push_str(&format!(
- "\nINSERT INTO sync_status (chain_id, orderbook_address, last_synced_block, updated_at) VALUES ({chain_id}, {orderbook}, {last_block}, CURRENT_TIMESTAMP) ON CONFLICT(chain_id, orderbook_address) DO UPDATE SET last_synced_block = excluded.last_synced_block, updated_at = CURRENT_TIMESTAMP;\n",
- chain_id = chain_id,
- orderbook = orderbook_address,
- last_block = end_block
- ));
+ sql.push_str(&format!(
+ "\nINSERT INTO sync_status (chain_id, orderbook_address, last_synced_block, updated_at) VALUES ({chain_id}, '{orderbook}', {last_block}, CURRENT_TIMESTAMP) ON CONFLICT(chain_id, orderbook_address) DO UPDATE SET last_synced_block = excluded.last_synced_block, updated_at = CURRENT_TIMESTAMP;\n",
+ chain_id = chain_id,
+ orderbook = hex::encode_prefixed(orderbook_address),
+ last_block = end_block
+ ));

Also update the store_set test expectation:
@@
- let expected = format!(
- "INSERT INTO interpreter_store_sets (chain_id, orderbook_address, store_address, transaction_hash, log_index, block_number, block_timestamp, namespace, key, value) VALUES (1, {orderbook}, '{store}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{namespace}', '{key}', '{value}') ON CONFLICT(chain_id, orderbook_address, transaction_hash, log_index) DO UPDATE SET store_address = excluded.store_address, block_number = excluded.block_number, block_timestamp = excluded.block_timestamp, namespace = excluded.namespace, key = excluded.key, value = excluded.value;\n",
- orderbook = context.orderbook_address,
+ let expected = format!(
+ "INSERT INTO interpreter_store_sets (chain_id, orderbook_address, store_address, transaction_hash, log_index, block_number, block_timestamp, namespace, key, value) VALUES (1, '{orderbook}', '{store}', '{tx}', {log_index}, {block_number}, {block_timestamp}, '{namespace}', '{key}', '{value}') ON CONFLICT(chain_id, orderbook_address, transaction_hash, log_index) DO UPDATE SET store_address = excluded.store_address, block_number = excluded.block_number, block_timestamp = excluded.block_timestamp, namespace = excluded.namespace, key = excluded.key, value = excluded.value;\n",
+ orderbook = hex::encode_prefixed(context.orderbook_address),

Also applies to: 338-341, 364-379, 398-401, 432-446, 456-463, 467-475, 523-545, 553-565, 573-584, 592-603, 611-614, 624-627, 640-643, 170-175
  chain_id INTEGER NOT NULL,
  orderbook_address TEXT NOT NULL,
  last_synced_block INTEGER NOT NULL DEFAULT 0,
- updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
+ updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
+ PRIMARY KEY (chain_id, orderbook_address)
  );
updated_at won’t auto-refresh on UPDATE
DEFAULT CURRENT_TIMESTAMP sets only on INSERT in SQLite. If you expect it to reflect last update, add a trigger.
CREATE TABLE IF NOT EXISTS sync_status (
chain_id INTEGER NOT NULL,
orderbook_address TEXT NOT NULL,
last_synced_block INTEGER NOT NULL DEFAULT 0,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
PRIMARY KEY (chain_id, orderbook_address)
);
+-- Auto-update updated_at
+CREATE TRIGGER IF NOT EXISTS trg_sync_status_updated_at
+AFTER UPDATE ON sync_status
+FOR EACH ROW
+BEGIN
+ UPDATE sync_status
+ SET updated_at = CURRENT_TIMESTAMP
+ WHERE chain_id = OLD.chain_id AND orderbook_address = OLD.orderbook_address;
+END;
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/local_db/query/create_tables/query.sql
around lines 4 to 9, the updated_at column uses DEFAULT CURRENT_TIMESTAMP which
only sets on INSERT in SQLite and will not auto-refresh on UPDATE; add an AFTER
UPDATE trigger for this table that sets NEW.updated_at = CURRENT_TIMESTAMP (or
uses the appropriate UPDATE statement on the row) so updated_at is refreshed on
every update, and include trigger creation immediately after the CREATE TABLE
statement.
this is incorrect. we update that field
@findolor, you're absolutely right—I apologize for the incorrect suggestion. Since you're manually updating the updated_at field in your SQL statements, the trigger is unnecessary. Thank you for the clarification!
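For context, the manual refresh the author refers to is the upsert emitted by the SQL generator earlier in this diff; a minimal sketch of that pattern (chain id, address, and block number are placeholder values):

```sql
-- Sketch of the manual-refresh pattern: updated_at is set explicitly in both
-- the INSERT and the conflict branch, so no trigger is required.
INSERT INTO sync_status (chain_id, orderbook_address, last_synced_block, updated_at)
VALUES (1, '0x0000000000000000000000000000000000000001', 100, CURRENT_TIMESTAMP)
ON CONFLICT(chain_id, orderbook_address) DO UPDATE SET
    last_synced_block = excluded.last_synced_block,
    updated_at = CURRENT_TIMESTAMP;
```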
let last_synced_block =
    get_last_synced_block(&db_callback, chain_id, &orderbook_cfg.address.to_string())
        .await
        .map_err(LocalDbError::SyncStatusReadFailed)?;
send_status_message(
    &status_callback,
    format!("Last synced block: {}", last_synced_block),
)?;
🧹 Nitpick | 🔵 Trivial
Consider iterating all orderbooks for the chain.
Currently we pick the first orderbook found. For multi‑orderbook support, iterate configs and sync each (or accept an explicit address parameter) to avoid silently ignoring others.
this is out of scope
@findolor, understood! I've noted that handling multiple orderbooks per chain is out of scope for this PR. The current implementation will work fine for the single-orderbook-per-chain scenario.
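Per the sequence diagram above, the lookup behind `get_last_synced_block` resolves to a query keyed on the new composite fields; a hedged sketch (parameter binding assumed):

```sql
-- ?1/?2 stand for the bound chain id and 0x-prefixed orderbook address.
SELECT last_synced_block
FROM sync_status
WHERE chain_id = ?1
  AND orderbook_address = ?2
ORDER BY updated_at DESC
LIMIT 1;
```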
w.sender,
'WITHDRAW',
w.token,
w.vault_id,
FLOAT_NEGATE(w.withdraw_amount)
🧹 Nitpick | 🔵 Trivial
Unify aliases for clarity in withdrawals branch.
Align column aliases with the first SELECT for readability and to future‑proof column naming.
- w.sender,
- 'WITHDRAW',
+ w.sender AS owner,
+ 'WITHDRAW' AS kind,
w.token,
w.vault_id,
- FLOAT_NEGATE(w.withdraw_amount)
+ FLOAT_NEGATE(w.withdraw_amount) AS delta
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/local_db/views/vault_deltas.sql around lines
23 to 27, the second branch (withdrawals) uses different or missing column
aliases compared to the first SELECT; update these aliases to exactly match the
first SELECT’s column names (e.g., alias the action as the same literal, ensure
token, vault_id and amount use the same alias names and use the negative amount
alias consistent with the deposit branch) so both SELECTs return identical
column names and ordering for UNION/UNION ALL compatibility and clarity.
@findolor, understood! I'll skip this suggestion. The SQL will work correctly without the explicit aliases.
  JOIN order_events oe
-   ON oe.order_owner = t.order_owner
+   ON oe.chain_id = t.chain_id
+   AND oe.orderbook_address = t.orderbook_address
+   AND oe.order_owner = t.order_owner
    AND oe.order_nonce = t.order_nonce
    AND (oe.block_number < t.block_number
      OR (oe.block_number = t.block_number AND oe.log_index <= t.log_index))
    AND NOT EXISTS (
      SELECT 1 FROM order_events x
-     WHERE x.order_owner = t.order_owner AND x.order_nonce = t.order_nonce
+     WHERE x.chain_id = oe.chain_id
+       AND x.orderbook_address = oe.orderbook_address
+       AND x.order_owner = t.order_owner
+       AND x.order_nonce = t.order_nonce
      AND (x.block_number < t.block_number
        OR (x.block_number = t.block_number AND x.log_index <= t.log_index))
      AND (x.block_number > oe.block_number
        OR (x.block_number = oe.block_number AND x.log_index > oe.log_index))
    )
  JOIN order_ios io
-   ON io.transaction_hash = oe.transaction_hash
+   ON io.chain_id = oe.chain_id
+   AND io.orderbook_address = oe.orderbook_address
+   AND io.transaction_hash = oe.transaction_hash
    AND io.log_index = oe.log_index
    AND io.io_index = t.input_io_index
    AND UPPER(io.io_type) = 'INPUT'
🧹 Nitpick | 🔵 Trivial
Add supporting indexes for join predicates to avoid view scan costs.
This view joins on multi‑column keys across large tables and uses a NOT EXISTS windowing pattern. Add composite indexes to back the ON/WHERE clauses:
- order_events(chain_id, orderbook_address, order_owner, order_nonce, block_number, log_index)
- order_ios(chain_id, orderbook_address, transaction_hash, log_index, io_index, io_type)
- take_orders(chain_id, orderbook_address, order_owner, order_nonce, block_number, log_index)
- clear_v3_events(chain_id, orderbook_address, transaction_hash, log_index, alice_order_hash, bob_order_hash)
- after_clear_v2_events(chain_id, orderbook_address, transaction_hash, log_index)
- deposits(chain_id, orderbook_address, block_number, log_index)
- withdrawals(chain_id, orderbook_address, block_number, log_index)
These will materially reduce nested‑loop and subquery costs when materializing the view.
Also applies to: 84-109, 124-170, 184-200, 214-229
🤖 Prompt for AI Agents
In crates/common/src/raindex_client/local_db/views/vault_deltas.sql around lines
44 to 69, the view performs multi-column joins and a NOT EXISTS windowing
pattern that will force expensive scans; add supporting composite indexes to
back the join and filter predicates to avoid full view scans and
nested-loop/subquery costs: create composite indexes on order_events(chain_id,
orderbook_address, order_owner, order_nonce, block_number, log_index),
order_ios(chain_id, orderbook_address, transaction_hash, log_index, io_index,
io_type), take_orders(chain_id, orderbook_address, order_owner, order_nonce,
block_number, log_index), clear_v3_events(chain_id, orderbook_address,
transaction_hash, log_index, alice_order_hash, bob_order_hash),
after_clear_v2_events(chain_id, orderbook_address, transaction_hash, log_index),
deposits(chain_id, orderbook_address, block_number, log_index), and
withdrawals(chain_id, orderbook_address, block_number, log_index); apply the
same index additions for the other mentioned view ranges (lines 84-109, 124-170,
184-200, 214-229).
this is not needed now
@findolor, understood! I've noted that the index optimization can be deferred. Feel free to revisit this if query performance becomes a concern in the future.
/)_/)
( •_•)
/>🗒️
"Noted for later!"
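Should the deferred index work be revisited, the reviewer's list translates into composite indexes along these lines (index names are hypothetical; column order follows the join predicates above):

```sql
-- Hypothetical index names; not part of this PR.
CREATE INDEX IF NOT EXISTS idx_order_events_owner_nonce
    ON order_events (chain_id, orderbook_address, order_owner, order_nonce, block_number, log_index);
CREATE INDEX IF NOT EXISTS idx_order_ios_tx_log
    ON order_ios (chain_id, orderbook_address, transaction_hash, log_index, io_index, io_type);
CREATE INDEX IF NOT EXISTS idx_take_orders_owner_nonce
    ON take_orders (chain_id, orderbook_address, order_owner, order_nonce, block_number, log_index);
```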
dbSyncLastBlock.set(null);
dbSyncLastSyncTime.set(null);
🧩 Analysis chain
Verify loss of sync status tracking is intentional.
These stores are reset to null but are never updated again after sync completion. This removes user visibility into:
- The last synced block number
- The last sync timestamp
This appears to be a regression in observability unless this tracking is being reimplemented elsewhere as part of the multi-chain refactoring.
Confirm this is intentional. If sync status tracking is still needed:
🏁 Script executed:
#!/bin/bash
# Check if these stores are populated elsewhere or if there's a new sync status mechanism
rg -n --type=ts 'dbSyncLastBlock\.set|dbSyncLastSyncTime\.set' packages/webapp/

Length of output: 347
Restore last sync block & timestamp tracking
The stores are reset to null but never set on successful sync, removing visibility into the last synced block number and timestamp. Reintroduce updating them after each sync completes or remove these stores if this feature is deprecated.
🤖 Prompt for AI Agents
In packages/webapp/src/lib/services/startLocalDbSync.ts around lines 71-72 the
stores dbSyncLastBlock and dbSyncLastSyncTime are being reset to null but never
updated after a successful sync; restore last-sync tracking by setting
dbSyncLastBlock.set(<lastSyncedBlockNumber>) and
dbSyncLastSyncTime.set(<Date.now() or ISO timestamp>) immediately after each
successful sync completion (or remove these stores entirely if the feature is
deprecated), and ensure the values you set match the stores' expected types and
that any necessary imports/exports remain correct.
- const localDbClient = {
-   getSyncStatus: vi.fn()
- } as unknown as LocalDb;
+ const localDbClient = {} as unknown as LocalDb;
🧹 Nitpick | 🔵 Trivial
Test coverage gap matches removed functionality.
The localDbClient mock is now an empty object, consistent with the removal of localDbClient usage and sync status tracking in the main code. Tests no longer verify that sync status (last block, last sync time) is properly updated after sync operations.
If sync status tracking is reimplemented elsewhere, ensure corresponding tests are added.
🤖 Prompt for AI Agents
In packages/webapp/src/lib/services/startLocalDbSync.ts around line 145, the
test mock sets localDbClient to an empty object which no longer exercises or
verifies sync status (last block / last sync time); restore the mock to
implement the LocalDb interface methods/properties used by the sync flow (e.g.,
getSyncStatus(), updateSyncStatus(lastBlock, lastSyncTime) or
lastBlock/lastSyncTime fields) and make the test assert those values are updated
after a sync, or if sync status tracking was intentionally removed, update the
tests to drop those assertions and remove the mock members; ensure the mock
behavior mirrors the real LocalDb contract so tests validate sync status updates
when/where that functionality exists.
we'll revisit this some other time
@findolor, understood! I've noted your decision. The test mock will remain as-is without the sync status verification.
Motivation
See issue: #2144
This PR adds chain id and orderbook address to the local db tables to lay the groundwork for a multi-network/orderbook implementation.
Solution
Checks
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
New Features
Refactor