Add automatic sync logic on webapp for local db #2160
Conversation
Walkthrough

Adds comprehensive Rust tests and scaffolding for the local_db sync runner, and updates a dump URL. Renames sqlite-web imports to @rainlanguage/sqlite-web across packages. Introduces a webapp service to start and manage periodic local DB sync with UI stores, integrates it into the layout, and updates the dev config and pages accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Layout as +layout.svelte
    participant SyncSvc as startLocalDbSync
    participant Stores as dbSync stores
    participant Raindex as RaindexClient
    participant LocalDB as SQLiteWasmDatabase
    participant Timer as setInterval
    Layout->>SyncSvc: startLocalDbSync({ raindexClient, localDb, chainId, intervalMs })
    SyncSvc->>Stores: set active=true, running=false, status=""
    SyncSvc->>Raindex: configure local DB client (chainId, LocalDB)
    alt local DB client creation fails
        SyncSvc->>Stores: active=false, running=false, status="error"
        SyncSvc-->>Layout: return no-op cleanup
    else success
        SyncSvc->>Stores: update status from LocalDB
        SyncSvc->>SyncSvc: performSync()
        activate SyncSvc
        SyncSvc->>Stores: running=true
        SyncSvc->>Raindex: syncLocalDatabase(statusCb)
        Raindex-->>SyncSvc: success or error
        SyncSvc->>Stores: running=false, status update, lastBlock/time
        deactivate SyncSvc
        SyncSvc->>Timer: schedule performSync every intervalMs
        SyncSvc-->>Layout: return cleanup (clearInterval, flags reset)
    end
    Layout-->>SyncSvc: onDestroy -> invoke cleanup
    SyncSvc->>Timer: clearInterval
    SyncSvc->>Stores: active=false, running=false
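The start/cleanup contract in the diagram (one bootstrap run, periodic rescheduling, idempotent teardown on onDestroy) can be sketched in a few lines. This is a minimal sketch, not the PR's actual service; `startPeriodicTask` is an illustrative name, and the real implementation lives in packages/webapp/src/lib/services/startLocalDbSync.ts:

```typescript
// Minimal sketch of the lifecycle above: run once immediately,
// reschedule on an interval, return a cleanup that is safe to call twice.
type Cleanup = () => void;

export function startPeriodicTask(task: () => void, intervalMs: number): Cleanup {
  task(); // bootstrap run before the first interval elapses
  const id = setInterval(task, intervalMs);
  let stopped = false;
  return () => {
    if (stopped) return; // idempotent, mirrors onDestroy -> cleanup
    stopped = true;
    clearInterval(id);
  };
}
```

Returning the cleanup function from the starter (rather than exposing a global stop) is what lets the layout hold `stopDbSync` and invoke it in onDestroy without tracking timer IDs itself.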
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
crates/cli/src/commands/local_db/sync/runner/window.rs (1)
48-56: Defensive add: avoid potential u64 overflow on last_synced_block + 1

Use saturating_add to guard the unlikely u64::MAX edge case.
```diff
- let new_start = last_synced_block + 1;
+ let new_start = last_synced_block.saturating_add(1);
```

crates/cli/src/commands/local_db/sync/runner/mod.rs (1)
104-107: Optional: guard against executing empty SQL

If prepare_sql can return an empty string, skip sqlite_execute to avoid no-op calls.
```diff
-println!("Applying SQL to {}", self.db_path);
-sqlite_execute(self.db_path, &sql)?;
+println!("Applying SQL to {}", self.db_path);
+if !sql.trim().is_empty() {
+    sqlite_execute(self.db_path, &sql)?;
+}
```

crates/cli/src/commands/local_db/sync/runner/apply.rs (1)
50-58: Unify “end_block” naming for consistency.

Other modules use end_block; this function uses target_block. Rename for consistency and to avoid confusion.
```diff
 pub(super) async fn prepare_sql<D, T>(
@@
     decoded_events: &[DecodedEventData<DecodedEvent>],
-    target_block: u64,
+    end_block: u64,
 ) -> Result<String>
@@
     data_source.events_to_sql(
         decoded_events,
-        target_block,
+        end_block,
         &token_prep.decimals_by_addr,
         prefix_sql,
     )
```

Also applies to: 84-90
crates/cli/src/commands/local_db/sync/token.rs (1)
69-71: Use tracing::info! instead of println!

Replace the stdout print with structured logging via the tracing crate to integrate with the existing tracing-subscriber setup.
```diff
- println!("Fetching metadata for {} new token(s)", missing_addrs.len());
+ tracing::info!("Fetching metadata for {} new token(s)", missing_addrs.len());
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)
- crates/cli/bin/rain-orderbook-cli.tar.gz is excluded by !**/*.gz
- package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (17)
- crates/cli/src/commands/local_db/sync/data_source.rs (4 hunks)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (6 hunks)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (12 hunks)
- crates/cli/src/commands/local_db/sync/runner/window.rs (2 hunks)
- crates/cli/src/commands/local_db/sync/storage.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/token.rs (7 hunks)
- crates/common/src/raindex_client/local_db/sync.rs (1 hunks)
- crates/common/src/raindex_client/vaults.rs (2 hunks)
- package.json (1 hunks)
- packages/ui-components/package.json (0 hunks)
- packages/ui-components/src/lib/hooks/useLocalDb.ts (1 hunks)
- packages/ui-components/src/lib/providers/LocalDbProvider.svelte (1 hunks)
- packages/webapp/src/lib/services/startLocalDbSync.ts (1 hunks)
- packages/webapp/src/lib/stores/dbSync.ts (1 hunks)
- packages/webapp/src/routes/+layout.svelte (2 hunks)
- packages/webapp/src/routes/+layout.ts (2 hunks)
- packages/webapp/vite.config.ts (1 hunks)
💤 Files with no reviewable changes (1)
- packages/ui-components/package.json
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-07-16T10:40:05.717Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2000
File: crates/common/src/raindex_client/vaults.rs:183-183
Timestamp: 2025-07-16T10:40:05.717Z
Learning: In the rainlanguage/rain.orderbook codebase, user findolor considers breaking changes from Option<U256> to U256 for required fields like decimals in RaindexVaultToken to be acceptable and safe, even when they affect multiple usage sites across the codebase.
Applied to files:
crates/common/src/raindex_client/vaults.rs
📚 Learning: 2025-07-16T05:52:05.576Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#1994
File: crates/common/src/raindex_client/vaults.rs:59-59
Timestamp: 2025-07-16T05:52:05.576Z
Learning: User findolor prefers to handle documentation updates for getter methods in batch via dedicated PRs rather than addressing them individually during feature development, as mentioned for the formatted amount string fields in crates/common/src/raindex_client/vaults.rs.
Applied to files:
crates/common/src/raindex_client/vaults.rs
📚 Learning: 2025-10-06T11:13:29.938Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2123
File: crates/common/src/raindex_client/local_db/mod.rs:23-29
Timestamp: 2025-10-06T11:13:29.938Z
Learning: In `crates/common/src/raindex_client/local_db/mod.rs`, the `Default` implementation for `LocalDb` that creates an RPC client pointing to `http://localhost:4444` is acceptable because the RPC client must be explicitly configured before actual usage in production scenarios.
Applied to files:
crates/cli/src/commands/local_db/sync/storage.rs
📚 Learning: 2025-05-20T10:20:08.206Z
Learnt from: 0xgleb
PR: rainlanguage/rain.orderbook#1859
File: crates/quote/src/quote_debug.rs:472-492
Timestamp: 2025-05-20T10:20:08.206Z
Learning: In the Rain Orderbook codebase, the `#[tokio::test(flavor = "multi_thread")]` annotation is specifically needed for tests that use `LocalEvm`, not just for consistency across all async tests.
Applied to files:
crates/cli/src/commands/local_db/sync/token.rs
🔇 Additional comments (20)
packages/webapp/vite.config.ts (1)
19-19: LGTM! Package migration is consistent.

The update to exclude @rainlanguage/sqlite-web from optimization aligns with the project-wide migration from sqlite-web to @rainlanguage/sqlite-web.

package.json (1)
80-80: Verify intentional use of alpha version.

The dependency uses version ^0.0.1-alpha.6, a pre-release version. While the caret (^) range allows updates within the same alpha series, alpha versions are typically unstable and may introduce breaking changes.

Is this alpha version intentional for production use, or should it be pinned to a specific alpha version to prevent unexpected breaking changes?
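If pinning is preferred, dropping the caret freezes the dependency at the exact alpha build; an illustrative package.json fragment (not the PR's full dependency list):

```json
{
  "dependencies": {
    "@rainlanguage/sqlite-web": "0.0.1-alpha.6"
  }
}
```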
packages/ui-components/src/lib/hooks/useLocalDb.ts (1)
2-2: LGTM! Import update aligns with package migration.

The type import update from sqlite-web to @rainlanguage/sqlite-web is consistent with the project-wide package migration.

packages/webapp/src/routes/+layout.ts (2)
6-6: LGTM! Import update aligns with package migration.

The import source update from sqlite-web to @rainlanguage/sqlite-web is consistent with the project-wide migration.
57-57: Verify explicit database name is intentional
Hardcoding 'worker.db' instead of using the default may impact other environments; ensure this filename is correct for your use case and update any relevant docs or upgrade guide in @rainlanguage/sqlite-web to reflect the API change.
3-3: LGTM! Import update aligns with package migration.

The import source update from sqlite-web to @rainlanguage/sqlite-web is consistent with the project-wide migration.

packages/webapp/src/routes/+layout.svelte (2)
21-22: LGTM! Proper lifecycle imports added.

The imports for startLocalDbSync and the Svelte lifecycle hooks (onMount, onDestroy) are appropriate for managing the DB sync lifecycle.
49-51: LGTM! Proper cleanup in onDestroy.

The optional chaining (stopDbSync?.()) ensures cleanup is safely handled even if the sync was never started.

crates/common/src/raindex_client/vaults.rs (2)
848-850: LGTM! Consistent formatting change.

Same formatting improvement as above: using generic format() instead of format18() for better handling of varying token decimals.
790-792: LGTM! Decimal-agnostic formatting via Float::format().

Verify that:
- Float::format() respects the decimals field set by from_fixed_decimal.
- All existing consumers correctly handle strings with varying decimal precision.
crates/common/src/raindex_client/local_db/sync.rs (1)
22-22: Confirm commit exists & schema compatibility
URL returns 200 for commit 07d48a0dd5136d42a29f2b0d8950cc9d77dfb1c9; ensure the new dump’s schema matches current migrations (load it locally or run integration tests) and that this change is documented and tested.

packages/webapp/src/lib/services/startLocalDbSync.ts (1)
115-194: Tests look solid and cover failure, bootstrap+periodic, and error paths

Good use of fake timers and store assertions. No changes requested.
crates/cli/src/commands/local_db/sync/storage.rs (1)
114-118: LGTM: rpc_urls access through rpc_client aligns with refactor

Change is consistent with the new API and keeps URL exposure scoped to the RPC client.
crates/cli/src/commands/local_db/sync/runner/window.rs (1)
130-167: Trait/test updates align with typed events pipeline

Switch to LogEntryResponse/DecodedEventData types and updated signatures looks correct.
Also applies to: 150-170, 176-204
crates/cli/src/commands/local_db/sync/runner/mod.rs (2)
75-107: Run pipeline wiring looks correct

End-to-end flow with typed fetch/decode and SQL generation/apply is sound.
146-155: Tests robustly validate fetch ranges, token metadata paths, and SQL contents

Good assertions on prefixes, decimals map, and final DB rows. Use of LocalDb::default in tests is acceptable here (no real RPC needed). Based on learnings.
Also applies to: 172-231, 259-296, 310-347, 363-409, 414-418
crates/cli/src/commands/local_db/sync/runner/apply.rs (1)
9-17: Typed event flow and explicit counts look good.

Switch to concrete types and count derivation via len() is clean and clear.
Also applies to: 35-48
crates/cli/src/commands/local_db/sync/token.rs (2)
62-67: What if some tokens fail to fetch? Ensure downstream handles missing decimals.

If fetched < missing, decimals_by_addr will be incomplete. Confirm LocalDb::decoded_events_to_sql tolerates missing entries (e.g., defaults, skips, or errors clearly). Otherwise, consider erroring early.
You can check for incomplete coverage and bail:
```diff
- let fetched = token_fetcher.fetch(rpc_urls, missing_addrs).await?;
+ let fetched = token_fetcher.fetch(rpc_urls, missing_addrs.clone()).await?;
@@
  for (addr, info) in fetched.into_iter() {
      decimals_by_addr.insert(addr, info.decimals);
  }
+ // Optional: ensure all missing are covered
+ if !missing_addrs.iter().all(|a| decimals_by_addr.contains_key(a)) {
+     anyhow::bail!("Missing decimals for some token addresses; aborting SQL preparation");
+ }
```

If downstream already handles this safely, please point to it or add a test case to demonstrate.
Also applies to: 69-76
14-17: Address-keyed decimals map: solid improvement.

Switching to HashMap<Address, u8> and typed decoded events removes stringly-typed pitfalls.
Also applies to: 19-31
crates/cli/src/commands/local_db/sync/data_source.rs (1)
14-35: Adopting concrete types across the trait is the right call.

The updated SyncDataSource/LocalDb signatures align with the new typed pipeline.
Also applies to: 63-110
```diff
 fn events_to_sql(
     &self,
-    decoded_events: Value,
+    decoded_events: &[DecodedEventData<DecodedEvent>],
     end_block: u64,
-    prefix_sql: &str,
+    decimals_by_token: &HashMap<Address, u8>,
+    prefix_sql: Option<&str>,
 ) -> Result<String> {
-    self.decoded_events_to_sql_with_prefix(decoded_events, end_block, prefix_sql)
-        .map_err(|e| anyhow!("Failed to generate SQL: {}", e))
+    LocalDb::decoded_events_to_sql(
+        self,
+        decoded_events,
+        end_block,
+        decimals_by_token,
+        prefix_sql,
+    )
+    .map_err(|e| anyhow!("Failed to generate SQL: {}", e))
 }
```
🧹 Nitpick | 🔵 Trivial
Add context to SQL generation errors.
You already add context; mirror this pattern elsewhere for consistency (optional).
For fetch_events/decode_events/latest_block, prefer context() from anyhow:
```diff
-use anyhow::{anyhow, Result};
+use anyhow::{anyhow, Context, Result};
@@
-    self.rpc_client()
-        .get_latest_block_number()
-        .await
-        .map_err(|e| anyhow!(e))
+    self.rpc_client()
+        .get_latest_block_number()
+        .await
+        .context("Failed to get latest block number")
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/data_source.rs around lines 90 to 105,
several error returns (in fetch_events, decode_events, latest_block) lack the
same contextual wrapping used in events_to_sql; update those error paths to use
anyhow::Context (i.e., .context("...") ) with concise, specific messages (for
example "Failed to fetch events for block range X..Y", "Failed to decode events
at block N", "Failed to fetch latest block") so the errors mirror
events_to_sql's pattern; replace direct map_err/anyhow! usages with
.context(...) where possible and include relevant variables (block numbers,
addresses, or source) in the message for clarity.
```diff
 let mut decimals_by_addr: HashMap<Address, u8> = HashMap::new();
 let mut existing_set: HashSet<Address> = HashSet::new();
 for row in existing_rows.iter() {
-    let key = row.address.to_ascii_lowercase();
-    existing_lower.insert(key.clone());
-    decimals_by_addr.insert(key, row.decimals);
+    if let Ok(addr) = Address::from_str(&row.address) {
+        existing_set.insert(addr);
+        decimals_by_addr.insert(addr, row.decimals);
+    }
 }
```
🧹 Nitpick | 🔵 Trivial
Log invalid DB addresses instead of silently skipping.
Helps diagnose data corruption or mismatched formats.
```diff
-for row in existing_rows.iter() {
-    if let Ok(addr) = Address::from_str(&row.address) {
-        existing_set.insert(addr);
-        decimals_by_addr.insert(addr, row.decimals);
-    }
-}
+for row in existing_rows.iter() {
+    match Address::from_str(&row.address) {
+        Ok(addr) => {
+            existing_set.insert(addr);
+            decimals_by_addr.insert(addr, row.decimals);
+        }
+        Err(_) => {
+            #[allow(clippy::print_stdout)]
+            {
+                // Use tracing/log if available; fallback suppressed in tests.
+                #[cfg(feature = "logging")]
+                tracing::warn!("Invalid token address in DB: {}", row.address);
+            }
+        }
+    }
+}
```
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/token.rs around lines 46 to 53, the
loop silently skips rows whose row.address fails to parse into Address; update
the loop to log a warning (including the raw row.address and any row identifier
or index available) when Address::from_str(&row.address) returns Err so invalid
DB addresses are recorded for diagnostics, while still continuing to the next
row.
```typescript
const { raindexClient, localDb, chainId = 42161, intervalMs = 10_000 } = options;
```
🧹 Nitpick | 🔵 Trivial
Clamp interval to a sane minimum (and optional jitter) to avoid tight loops
Prevent 0ms/very low intervals and reduce lockstep by adding a small random jitter.
```diff
-const { raindexClient, localDb, chainId = 42161, intervalMs = 10_000 } = options;
+const { raindexClient, localDb, chainId = 42161, intervalMs = 10_000 } = options;
+const baseInterval = Math.max(1_000, intervalMs); // clamp to >=1s
@@
-  if (!stopped) {
-    intervalId = setInterval(() => {
-      void performSync();
-    }, intervalMs);
-  }
+  if (!stopped) {
+    const withJitter = () => baseInterval + Math.floor(Math.random() * 500);
+    intervalId = setInterval(() => {
+      void performSync();
+    }, withJitter());
+  }
```

Also applies to: 96-100
🤖 Prompt for AI Agents
In packages/webapp/src/lib/services/startLocalDbSync.ts around lines 20-21 (and
also apply same change at lines 96-100), the intervalMs option is not clamped or
jittered which allows 0ms/very-low intervals and lockstep behaviour; clamp
intervalMs to a sane minimum (e.g., Math.max(intervalMs ?? 1000, 250) or choose
a project-appropriate MIN_INTERVAL_MS constant) and add small random jitter
before each wait (e.g., +/- up to 10% or a random 0..JITTER_MS) so loops cannot
run at zero/too-fast frequency and independent timers avoid lockstep; ensure you
validate and coerce the incoming option to a number, use the clamped value
throughout the function, and document the MIN_INTERVAL_MS and JITTER_MS
constants near the top of the file.
```typescript
const queryFn = localDb.query.bind(localDb);
// Ensure the Raindex client uses the WASM SQLite DB for its local queries
raindexClient.setDbCallback(queryFn);
```
🧹 Nitpick | 🔵 Trivial
Confirm lifecycle of setDbCallback and cleanup behavior
setDbCallback sets a global-ish dependency. If the client provides a way to unset/restore it, consider doing so in the cleanup to avoid cross-instance leakage.
If available:
```diff
 return () => {
   stopped = true;
   if (intervalId) clearInterval(intervalId);
   dbSyncIsActive.set(false);
   dbSyncIsRunning.set(false);
+  // raindexClient.clearDbCallback?.(); // or restore previous if API supports it
 };
```
🤖 Prompt for AI Agents
In packages/webapp/src/lib/services/startLocalDbSync.ts around lines 22 to 25,
setDbCallback installs a global-ish DB callback on the raindexClient but there
is no teardown; update the function to capture any previous callback (if
raindexClient exposes a getter) or note the API that unsets it (e.g.,
raindexClient.clearDbCallback or setDbCallback(null/undefined)), and ensure the
cleanup path (returned disposer or finally block) restores the previous callback
or clears it to prevent cross-instance leakage; if the client has no unset API,
add a safe guard comment and open a follow-up to the raindexClient API to
provide one, then use that API in the cleanup.
```typescript
const syncTime = latestStatus.updated_at ? new Date(latestStatus.updated_at) : new Date();
dbSyncLastSyncTime.set(syncTime);
} else if (statusResult.error) {
```
Don’t default last sync time to “now” when DB has no timestamp
Defaulting to current time can mislead the UI. Prefer null when updated_at is missing.
```diff
-const syncTime = latestStatus.updated_at ? new Date(latestStatus.updated_at) : new Date();
-dbSyncLastSyncTime.set(syncTime);
+dbSyncLastSyncTime.set(
+  latestStatus.updated_at ? new Date(latestStatus.updated_at) : null
+);
```
🤖 Prompt for AI Agents
In packages/webapp/src/lib/services/startLocalDbSync.ts around lines 51 to 53,
the code defaults dbSyncLastSyncTime to the current time when
latestStatus.updated_at is missing; change this so that when updated_at is
absent you set dbSyncLastSyncTime to null (or clear/reset the value) instead of
new Date(), and update any typings or callers if necessary to accept null so the
UI can distinguish "no timestamp" from a real sync time.
```typescript
export const dbSyncStatus = writable<string>('');
export const dbSyncLastBlock = writable<string | null>(null);
export const dbSyncLastSyncTime = writable<Date | null>(null);
export const dbSyncIsActive = writable<boolean>(false);
export const dbSyncIsRunning = writable<boolean>(false);
```
🧹 Nitpick | 🔵 Trivial
Narrow store types and add a small reset helper
- Consider a discriminated union for status and bigint/number for last block to avoid string parsing downstream.
- Optional: export a reset function to centralize clearing in tests/UI.
Example:
```diff
+export type DbSyncStatus = 'idle' | 'starting' | 'syncing' | 'ok' | 'error';
-export const dbSyncStatus = writable<string>('');
+export const dbSyncStatus = writable<DbSyncStatus | string>('idle');
-export const dbSyncLastBlock = writable<string | null>(null);
+export const dbSyncLastBlock = writable<bigint | number | null>(null);
+export function resetDbSyncStores() {
+  dbSyncStatus.set('idle');
+  dbSyncLastBlock.set(null);
+  dbSyncLastSyncTime.set(null);
+  dbSyncIsActive.set(false);
+  dbSyncIsRunning.set(false);
+}
```
```svelte
onMount(() => {
  if (!browser || !raindexClient || !localDb) return;

  stopDbSync = startLocalDbSync({
    raindexClient,
    localDb,
    chainId: 42161,
    intervalMs: 5_000
  });
});
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Verify hard-coded chain ID and sync interval.
The implementation hard-codes chainId: 42161 (Arbitrum One) and uses a 5-second sync interval.
Considerations:
- Hard-coded chain ID: Should this be configurable or derived from application state (e.g., user-selected networks)?
- Sync interval: Is 5 seconds appropriate, or could this cause performance issues? Consider:
- Server load if many users sync simultaneously
- Battery/resource usage on mobile devices
- Whether a longer interval (e.g., 30-60 seconds) would suffice
🏁 Script executed:
```shell
#!/bin/bash
# Check if chain ID is referenced elsewhere or should be configurable
rg -n "42161" --type ts --type svelte -C2
# Check if there are any constants or configuration for sync intervals
rg -n "intervalMs|sync.*interval" --type ts --type svelte
```

Length of output: 61
🏁 Script executed:
```shell
#!/bin/bash
# Search for hard-coded chain ID occurrences across .svelte and .ts files
rg -n "42161" -g "*.svelte" -g "*.ts" -C2
# Search for any existing constants or configuration for sync intervals
rg -n "intervalMs" -g "*.svelte" -g "*.ts" -C2
rg -n "sync.*interval" -g "*.svelte" -g "*.ts" -C2
```

Length of output: 4838
Extract network and sync settings into configurable constants or derived state
- Multiple hard-coded chainId: 42161 in:
  - packages/webapp/src/routes/+layout.svelte (line 44)
  - packages/webapp/src/routes/db-playground/+page.svelte (lines 10, 172, 197)
  - default in startLocalDbSync.ts
- Sync interval override intervalMs: 5_000 in +layout.svelte vs default 10_000 ms; consider a longer default (30–60 s) to reduce load and battery usage
Centralize these in a config or derive from the user-selected network to avoid duplication and improve flexibility.
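The centralization the comment asks for can be sketched as a small shared module that also folds in the interval-clamping concern raised earlier in the review. This is a hedged sketch; the module path, constant names, and the 30-second default are illustrative assumptions, not part of the PR:

```typescript
// Illustrative shared config for the webapp's DB sync (names are assumptions).
export const DB_SYNC = {
  defaultChainId: 42161, // Arbitrum One, currently duplicated across pages
  minIntervalMs: 1_000,
  defaultIntervalMs: 30_000, // longer default per the review's suggestion
} as const;

// Coerce and clamp a requested interval so callers cannot create tight loops.
export function resolveSyncInterval(requested?: number): number {
  const base =
    typeof requested === 'number' && Number.isFinite(requested)
      ? requested
      : DB_SYNC.defaultIntervalMs;
  return Math.max(DB_SYNC.minIntervalMs, base);
}
```

Call sites would then pass `DB_SYNC.defaultChainId` (or a user-selected network) and `resolveSyncInterval(intervalMs)` instead of repeating literals.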
Force-pushed: 6edde58 to 963f9be
Actionable comments posted: 4
♻️ Duplicate comments (6)
packages/webapp/src/lib/services/startLocalDbSync.ts (3)
20-20: The interval clamping and jitter suggestion from the previous review has not been addressed.

The intervalMs option still lacks validation and jitter. This allows zero or very low intervals, which could cause tight loops and lock-step behavior across multiple sync instances.
22-24: The DB callback lifecycle concern from the previous review has not been addressed.

The setDbCallback installs a global-ish dependency on raindexClient without cleanup. The returned disposer function doesn't unset or restore the callback, which could cause cross-instance leakage.
50-51: The default sync time concern from the previous review has not been addressed.

The code still defaults dbSyncLastSyncTime to the current time when updated_at is missing, which can mislead the UI into thinking a sync just occurred.

crates/cli/src/commands/local_db/sync/runner/apply.rs (1)
218-240: Strengthen assertion to validate end_block substitution.

The test asserts that the result contains "INSERT INTO sync", but doesn't verify that the ?end_block placeholder was replaced with the expected value 100. A bug in substitution logic could go undetected.

Apply this diff to add a concrete assertion:
```diff
     .await
     .expect("prepare sql");
-    assert!(result.contains("INSERT INTO sync"));
+    assert!(result.contains("INSERT INTO sync"));
+    assert!(result.contains("100"), "end_block placeholder should be replaced with 100");
```

packages/webapp/src/routes/+layout.svelte (1)
38-47: Hard-coded network and sync interval should be configurable.

The implementation hard-codes chainId: 42161 (Arbitrum One) and intervalMs: 5_000 (5 seconds).

Concerns:
- Chain ID: Hard-coding a specific chain prevents multi-chain support. Consider deriving this from user-selected network state or a configuration file.
- Sync interval: A 5-second interval may be too aggressive:
- Increases server load if many users sync simultaneously
- Drains battery/resources on mobile devices
- Consider a longer default (30-60 seconds) or make it user-configurable
As per coding guidelines and past review feedback, these values should be centralized in a config or derived from application state. For example:
```typescript
const config = {
  chainId: $page.data.selectedChainId || 42161,
  intervalMs: 30_000 // More reasonable default
};
stopDbSync = startLocalDbSync({ raindexClient, localDb, ...config });
```

packages/webapp/src/lib/stores/dbSync.ts (1)
1-7: Improve type safety and add reset helper.

The stores use loose types that could cause issues downstream:

- Status store: Plain string allows any value. Consider a discriminated union:

```typescript
type DbSyncStatus = 'idle' | 'starting' | 'syncing' | 'ok' | 'error';
export const dbSyncStatus = writable<DbSyncStatus>('idle');
```

- Last block store: Using string | null requires parsing. Consider bigint | number | null for direct numeric operations.
- Reset helper: Add a centralized function to clear all stores, useful for cleanup and testing:

```typescript
export function resetDbSyncStores() {
  dbSyncStatus.set('idle');
  dbSyncLastBlock.set(null);
  dbSyncLastSyncTime.set(null);
  dbSyncIsActive.set(false);
  dbSyncIsRunning.set(false);
}
```

Based on learnings
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (2)
- crates/cli/bin/rain-orderbook-cli.tar.gz is excluded by !**/*.gz
- package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (14)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/mod.rs (1 hunks)
- crates/cli/src/commands/local_db/sync/runner/window.rs (4 hunks)
- crates/common/src/raindex_client/local_db/sync.rs (1 hunks)
- package.json (1 hunks)
- packages/ui-components/package.json (0 hunks)
- packages/ui-components/src/lib/hooks/useLocalDb.ts (1 hunks)
- packages/ui-components/src/lib/providers/LocalDbProvider.svelte (1 hunks)
- packages/webapp/src/lib/services/startLocalDbSync.ts (1 hunks)
- packages/webapp/src/lib/stores/dbSync.ts (1 hunks)
- packages/webapp/src/routes/+layout.svelte (2 hunks)
- packages/webapp/src/routes/+layout.ts (2 hunks)
- packages/webapp/src/routes/db-playground/+page.svelte (2 hunks)
- packages/webapp/vite.config.ts (1 hunks)
💤 Files with no reviewable changes (1)
- packages/ui-components/package.json
🧰 Additional context used
📓 Path-based instructions (7)
crates/*/{src,tests,benches,examples}/**/*.rs
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Rust code must pass clippy with all warnings denied (cargo clippy --workspace --all-targets --all-features -D warnings)
Files:
- `crates/common/src/raindex_client/local_db/sync.rs`
- `crates/cli/src/commands/local_db/sync/runner/apply.rs`
- `crates/cli/src/commands/local_db/sync/runner/mod.rs`
- `crates/cli/src/commands/local_db/sync/runner/window.rs`
crates/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Rust code lives in the workspace under crates/* (e.g., cli, common, bindings, js_api, quote, subgraph, settings, math, integration_tests)
Files:
- `crates/common/src/raindex_client/local_db/sync.rs`
- `crates/cli/src/commands/local_db/sync/runner/apply.rs`
- `crates/cli/src/commands/local_db/sync/runner/mod.rs`
- `crates/cli/src/commands/local_db/sync/runner/window.rs`
**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/*.rs:
- Format Rust with `nix develop -c cargo fmt --all`
- Lint Rust with `nix develop -c rainix-rs-static` (preconfigured flags)
- Use snake_case for Rust crates/modules and PascalCase for types
Files:
- `crates/common/src/raindex_client/local_db/sync.rs`
- `crates/cli/src/commands/local_db/sync/runner/apply.rs`
- `crates/cli/src/commands/local_db/sync/runner/mod.rs`
- `crates/cli/src/commands/local_db/sync/runner/window.rs`
packages/webapp/**/*.{svelte,ts}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Webapp must pass Svelte lint/format checks (npm run svelte-lint-format-check -w @rainlanguage/webapp)
Files:
- `packages/webapp/src/routes/+layout.ts`
- `packages/webapp/vite.config.ts`
- `packages/webapp/src/lib/stores/dbSync.ts`
- `packages/webapp/src/routes/+layout.svelte`
- `packages/webapp/src/lib/services/startLocalDbSync.ts`
- `packages/webapp/src/routes/db-playground/+page.svelte`
packages/{webapp,ui-components}/**/*.{svelte,ts}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
If you modify frontend code, you must include a screenshot of the built webapp reflecting the change
Files:
- `packages/webapp/src/routes/+layout.ts`
- `packages/webapp/vite.config.ts`
- `packages/webapp/src/lib/stores/dbSync.ts`
- `packages/webapp/src/routes/+layout.svelte`
- `packages/webapp/src/lib/services/startLocalDbSync.ts`
- `packages/ui-components/src/lib/providers/LocalDbProvider.svelte`
- `packages/ui-components/src/lib/hooks/useLocalDb.ts`
- `packages/webapp/src/routes/db-playground/+page.svelte`
packages/**
📄 CodeRabbit inference engine (AGENTS.md)
JavaScript/Svelte projects live under packages/* (webapp, ui-components, orderbook)
Files:
- `packages/webapp/src/routes/+layout.ts`
- `packages/webapp/vite.config.ts`
- `packages/webapp/src/lib/stores/dbSync.ts`
- `packages/webapp/src/routes/+layout.svelte`
- `packages/webapp/src/lib/services/startLocalDbSync.ts`
- `packages/ui-components/src/lib/providers/LocalDbProvider.svelte`
- `packages/ui-components/src/lib/hooks/useLocalDb.ts`
- `packages/webapp/src/routes/db-playground/+page.svelte`
packages/ui-components/**/*.{svelte,ts}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
UI components must pass Svelte lint/format checks (npm run svelte-lint-format-check -w @rainlanguage/ui-components)
Files:
- `packages/ui-components/src/lib/providers/LocalDbProvider.svelte`
- `packages/ui-components/src/lib/hooks/useLocalDb.ts`
🧠 Learnings (2)
📚 Learning: 2025-10-06T14:39:36.425Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2159
File: crates/cli/src/commands/local_db/sync/cli.rs:37-78
Timestamp: 2025-10-06T14:39:36.425Z
Learning: In the SyncLocalDb CLI command (crates/cli/src/commands/local_db/sync/cli.rs), block range validation (e.g., start_block <= end_block) is handled internally by the sync runner, not at the CLI entry point.
Applied to files:
crates/cli/src/commands/local_db/sync/runner/window.rs
📚 Learning: 2025-10-06T10:23:57.479Z
Learnt from: findolor
PR: rainlanguage/rain.orderbook#2111
File: packages/webapp/src/routes/db-playground/+page.svelte:149-153
Timestamp: 2025-10-06T10:23:57.479Z
Learning: The db-playground page at packages/webapp/src/routes/db-playground/+page.svelte is used for testing purposes, and production-level concerns (such as invalid RPC tokens) are acceptable in this context.
Applied to files:
packages/webapp/src/routes/db-playground/+page.svelte
🧬 Code graph analysis (4)
crates/cli/src/commands/local_db/sync/runner/apply.rs (3)

- crates/cli/src/commands/local_db/sync/runner/mod.rs (3): `new` (38-50), `fetch_events` (179-191), `decode_events` (193-198)
- crates/cli/src/commands/local_db/sync/data_source.rs (4): `fetch_events` (17-22, 72-81), `decode_events` (23-26, 83-88)
- crates/cli/src/commands/local_db/sqlite.rs (1): `sqlite_execute` (8-41)
crates/cli/src/commands/local_db/sync/runner/mod.rs (2)

- crates/cli/src/commands/local_db/sqlite.rs (2): `sqlite_execute` (8-41), `sqlite_query_json` (44-71)
- crates/cli/src/commands/local_db/sync/data_source.rs (12): `fetch` (39-40, 47-60), `latest_block` (16, 65-70), `fetch_events` (17-22, 72-81), `decode_events` (23-26, 83-88), `events_to_sql` (27-33, 90-105), `rpc_urls` (34, 107-109)
packages/webapp/src/lib/services/startLocalDbSync.ts (1)

- packages/webapp/src/lib/stores/dbSync.ts (5): `dbSyncStatus` (3), `dbSyncLastBlock` (4), `dbSyncLastSyncTime` (5), `dbSyncIsActive` (6), `dbSyncIsRunning` (7)
crates/cli/src/commands/local_db/sync/runner/window.rs (3)

- crates/cli/src/commands/local_db/sqlite.rs (1): `sqlite_execute` (8-41)
- crates/cli/src/commands/local_db/sync/runner/apply.rs (5): `fetch_events` (19-33, 119-126), `decode_events` (35-48, 128-133), `events_to_sql` (135-168)
- crates/cli/src/commands/local_db/sync/data_source.rs (6): `fetch_events` (17-22, 72-81), `decode_events` (23-26, 83-88), `events_to_sql` (27-33, 90-105)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: test
- GitHub Check: test
- GitHub Check: build-tauri (ubuntu-22.04, true)
- GitHub Check: Deploy-Preview-Push
- GitHub Check: Deploy-Docs-Preview
🔇 Additional comments (20)
packages/webapp/src/lib/services/startLocalDbSync.ts (2)
63-91: LGTM!

The `performSync` function has proper concurrency control with the `isSyncing` guard, appropriate error handling, and correctly manages the `dbSyncIsRunning` store state in the finally block.
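The guard pattern praised here can be sketched in isolation. The names below (`performSync`, `isSyncing`) mirror the review's description, but the bodies are illustrative, not the actual service code:

```typescript
let isSyncing = false;
let completedRuns = 0;

// Skip overlapping ticks: if a previous sync is still in flight, return early,
// and always release the guard in `finally` so an error cannot wedge it.
async function performSync(doSync: () => Promise<void>): Promise<void> {
  if (isSyncing) return;
  isSyncing = true;
  try {
    await doSync();
    completedRuns += 1;
  } finally {
    isSyncing = false;
  }
}

async function demo(): Promise<number> {
  let release!: () => void;
  const blocked = new Promise<void>((resolve) => (release = resolve));

  const first = performSync(() => blocked); // acquires the guard
  await performSync(async () => {});        // skipped: guard still held
  release();
  await first;
  await performSync(async () => {});        // runs: guard was released
  return completedRuns;                     // 2, not 3
}
```

Because the guard is a plain module-level boolean rather than a queue, a tick that fires mid-sync is dropped instead of deferred, which is the right trade-off for idempotent periodic work.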
117-287: LGTM!

The test suite is comprehensive, covering error scenarios, bootstrap behavior, periodic syncing, and state management. The use of fake timers and proper cleanup in beforeEach/afterEach hooks demonstrates good testing practices.
packages/ui-components/src/lib/hooks/useLocalDb.ts (1)
2-2: LGTM!

The import path update to `@rainlanguage/sqlite-web` aligns with the package migration across the codebase. The type interface remains unchanged, ensuring backward compatibility.

packages/webapp/vite.config.ts (1)
19-19: LGTM!

The update to exclude `@rainlanguage/sqlite-web` from Vite's dependency optimization is correct and necessary for the package migration. This ensures the WASM/worker code is not pre-bundled.

packages/webapp/src/routes/+layout.ts (1)
6-6: Verify the database naming strategy across routes.

The import has been updated to `@rainlanguage/sqlite-web` and the initialization now uses a static `new` method with a required database name. The layout uses `'worker.db'` while the db-playground route (packages/webapp/src/routes/db-playground/+page.svelte:34) uses `'rainlanguage_orderbook_webapp'`.

Confirm whether using different database names for different routes is intentional. If they should share the same database, consider extracting the database name to a constant to ensure consistency:
```ts
// In a constants file or at the top of the module
const LOCAL_DB_NAME = 'worker.db'; // or 'rainlanguage_orderbook_webapp'

// Then use it consistently:
const localDbRes = SQLiteWasmDatabase.new(LOCAL_DB_NAME);
```

If separate databases are intentional, consider adding a comment explaining why different names are used in different contexts.
Also applies to: 57-57
packages/ui-components/src/lib/providers/LocalDbProvider.svelte (1)
3-3: LGTM!

The import path update to `@rainlanguage/sqlite-web` is consistent with the package migration. No functional changes to the provider component.

packages/webapp/src/routes/db-playground/+page.svelte (1)
6-6: LGTM! Different database name appears intentional for the playground.

The import path has been updated to `@rainlanguage/sqlite-web` and the initialization uses `'rainlanguage_orderbook_webapp'` as the database name. This differs from the main layout which uses `'worker.db'`, but given this is a testing/playground page (based on learnings), using a separate database name provides isolation from the main application data, which is appropriate.

Also applies to: 34-34
crates/common/src/raindex_client/local_db/sync.rs (1)
22-22: Validate SQL dump schema and data integrity. URL is reachable (HTTP 200) and file is a valid gzip; confirm the dump’s schema and data match expectations.packages/webapp/src/routes/+layout.svelte (1)
49-51: LGTM!

The cleanup logic properly uses optional chaining to invoke the stop function when the component is destroyed, ensuring resources are released correctly.
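A reduced sketch of that start/stop contract: start returns a cleanup function, and the caller invokes it with optional chaining on destroy. The names here are illustrative, not the real `startLocalDbSync`:

```typescript
// Start a periodic task and return a cleanup function that closes over the
// timer handle, so the caller never touches the interval id directly.
function startPeriodicTask(tick: () => void, intervalMs: number): () => void {
  const id = setInterval(tick, intervalMs);
  return () => clearInterval(id);
}

let ticks = 0;
let stop: (() => void) | undefined;
stop = startPeriodicTask(() => {
  ticks += 1;
}, 10);

// onDestroy-style cleanup: optional chaining keeps this safe even if
// startup failed and `stop` was never assigned.
stop?.();
```

Returning a no-op cleanup from the error path (as the service does when local DB client creation fails) means the destroy hook can call `stop?.()` unconditionally.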
crates/cli/src/commands/local_db/sync/runner/window.rs (5)
137-137: LGTM!

The added imports support the updated test infrastructure, bringing in `HashMap` for type signatures and SQLite utilities for proper test database initialization.

Also applies to: 141-142
155-179: LGTM!

The updated method signatures with named parameters and expanded event-to-SQL arguments align with the trait definition and improve code clarity. The changes are consistent with the broader refactoring across the sync runner module.
186-194: LGTM!

The `params()` helper function follows good DRY principles by centralizing default test parameter construction. This makes tests more maintainable and reduces duplication.
196-290: LGTM!

The test suite comprehensively covers sync window computation scenarios:
- Default startup behavior
- Start block adjustment when behind last synced
- End block clamping to chain head
- No-op detection when start exceeds target
Each test properly initializes a temporary SQLite database and validates the expected window state. The tests are well-structured, focused, and use the new `params()` helper effectively.
293-296: LGTM!

The test validates both branches of the `default_start_block` function logic, ensuring correct behavior for both initial sync (returning deployment block) and subsequent syncs (returning last synced + 1).

crates/cli/src/commands/local_db/sync/runner/mod.rs (6)
127-142: LGTM!

The expanded imports support the enhanced test infrastructure, bringing in concrete types for generating realistic test data (DepositV2 events, addresses, amounts) and utilities for in-memory state tracking.
144-159: LGTM!

The `TestFetcher` implementation provides excellent test observability by recording all fetch calls. This enables tests to verify that the token metadata fetcher was invoked with the expected addresses, strengthening test coverage.
161-235: LGTM!

The `TestDataSource` implementation provides comprehensive observability by recording all interactions:
- Fetch calls with orderbook address and block ranges
- SQL generation calls with event counts and end blocks
- Token decimals and prefix SQL
- Proper handling of SQL prefix concatenation and placeholder substitution
This enables tests to validate the complete data flow through the sync pipeline.
237-266: LGTM!

The helper functions `sample_decoded_event` and `sample_log` provide clean, reusable test data builders. Parameterizing by token address while using realistic values for other fields strikes a good balance between flexibility and simplicity.
268-340: LGTM!

This comprehensive end-to-end test validates the entire sync pipeline:
- Database initialization
- Event fetching with correct parameters
- Token metadata retrieval
- SQL generation with proper decimals and prefixes
- Database state update
The test's thorough assertions on captured interactions (fetch calls, SQL calls, token fetcher calls) and final database state provide strong confidence in the runner's correctness.
342-374: LGTM!

The test thoroughly validates the metadata RPC selection logic, covering both fallback to data source RPCs (when no override is provided) and the use of override RPCs (when provided). The assertions clearly verify the expected behavior in both scenarios.
```rust
#[tokio::test]
async fn fetch_events_counts_results() {
    let data_source = MockDataSource {
        sql_result: String::new(),
        rpc_urls: vec![],
        captured_prefixes: Mutex::new(vec![]),
        captured_events: Mutex::new(vec![]),
        captured_decimals: Mutex::new(vec![]),
    };

    let result = fetch_events(&data_source, "0xorder", 1, 10)
        .await
        .expect("fetch events");
    assert_eq!(result.events.len(), 0);
    assert_eq!(result.raw_count, 0);
}
```
🧹 Nitpick | 🔵 Trivial
Consider consolidating trivial test cases.
This test validates that fetch_events returns zero events when the data source has no events, which is a straightforward pass-through. Consider merging this with a more comprehensive test that exercises both empty and non-empty scenarios, or adding assertions that verify the function properly delegates to the data source's fetch_events method.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/runner/apply.rs around lines 186 to
201, the test fetch_events_counts_results is trivial and should be consolidated
or expanded; either merge it into an existing comprehensive test that covers
both empty and non-empty data source responses or extend this test to also
assert that fetch_events delegates correctly to the data source by verifying the
MockDataSource recorded the expected call (e.g., inspect its captured_* fields
for the expected prefixes/events/decimals and that fetch_events was invoked with
the correct arguments), then remove the redundant standalone assertion-only
test.
```rust
#[tokio::test]
async fn decode_events_counts_results() {
    let data_source = MockDataSource {
        sql_result: String::new(),
        rpc_urls: vec![],
        captured_prefixes: Mutex::new(vec![]),
        captured_events: Mutex::new(vec![]),
        captured_decimals: Mutex::new(vec![]),
    };

    let decoded = decode_events(&data_source, vec![]).expect("decode events");
    assert_eq!(decoded.decoded.len(), 0);
    assert_eq!(decoded.decoded_count, 0);
}
```
🧹 Nitpick | 🔵 Trivial
Consider consolidating trivial test cases.
Similar to the previous test, this validates the empty case for decode_events. Consider merging with a richer test that includes both empty and populated event arrays to reduce test redundancy.
🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/runner/apply.rs around lines 203 to
216, the test decode_events_counts_results is a trivial empty-case duplicate of
an earlier test; remove this standalone empty-only test and consolidate its
assertions into the richer decode_events test (or replace both with a single
table-driven/parameterized test) so one test covers both the empty input and a
populated-event scenario; ensure the consolidated test calls decode_events with
an empty vec and with a populated vec, asserts decoded.decoded.len() and
decoded.decoded_count for each case, and remove the now-redundant test function.
```rust
#[tokio::test]
async fn prepare_sql_uses_metadata_rpcs_if_provided() {
    let temp_dir = TempDir::new().unwrap();
    let db_path = temp_dir.path().join("sync.db");
    let db_path_str = db_path.to_string_lossy();

    sqlite_execute(&db_path_str, DEFAULT_SCHEMA_SQL).unwrap();

    let data_source = MockDataSource {
        sql_result: String::new(),
        rpc_urls: vec![Url::parse("http://default.com").unwrap()],
        captured_prefixes: Mutex::new(vec![]),
        captured_events: Mutex::new(vec![]),
        captured_decimals: Mutex::new(vec![]),
    };

    let token_fetcher = MockFetcher { metadata: vec![] };
    prepare_sql(
        &data_source,
        &token_fetcher,
        &db_path_str,
        &[Url::parse("http://override.com").unwrap()],
        1,
        &[],
        100,
    )
    .await
    .expect("prepare sql");
}
```
🛠️ Refactor suggestion | 🟠 Major
Add assertions to validate metadata RPC override behavior.
The test name suggests it verifies that the override RPC URL is used, but the test only confirms the function completes. To properly validate this behavior, add assertions that check the override URL was passed to the token fetcher or captured in the mock's state.
If the `MockFetcher` were enhanced to capture the RPC URLs passed to it (similar to `TestFetcher` in mod.rs lines 144-159), you could assert:

```rust
// After the prepare_sql call:
let fetcher_rpc_calls = token_fetcher.captured_rpcs.lock().unwrap();
assert_eq!(fetcher_rpc_calls[0][0].as_str(), "http://override.com/");
```

🤖 Prompt for AI Agents
assert_eq!(fetcher_rpc_calls[0][0].as_str(), "http://override.com/");🤖 Prompt for AI Agents
In crates/cli/src/commands/local_db/sync/runner/apply.rs around lines 242 to
270, the test prepare_sql_uses_metadata_rpcs_if_provided currently only ensures
success but doesn't assert the override RPC was used; modify the MockFetcher (or
add a captured_rpcs Mutex<Vec<Vec<Url>>> to the existing MockFetcher) to record
RPC URL lists it receives, then after the prepare_sql call lock that
captured_rpcs and assert that the first recorded RPC list contains the override
URL "http://override.com/" (or compare .as_str()), ensuring the override takes
precedence.
| "dependencies": { | ||
| "@fast-check/vitest": "0.2.0", | ||
| "@rainlanguage/orderbook": "workspace:*", | ||
| "@rainlanguage/sqlite-web": "^0.0.1-alpha.6", |
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
Consider pinning the exact alpha version for stability.
The @rainlanguage/sqlite-web dependency uses an alpha pre-release version with a caret range (^0.0.1-alpha.6). Alpha versions can have breaking changes, and the caret allows automatic updates to newer alpha versions (e.g., alpha.7, alpha.8) which might introduce instability.
Consider:
- Pinning to the exact version: `"@rainlanguage/sqlite-web": "0.0.1-alpha.6"` (without the caret) to prevent unexpected updates.
- If this is for production, plan to upgrade to a stable release version when available.
Additionally, verify the package exists and is accessible:
🏁 Script executed:
```shell
#!/bin/bash
# Description: Verify the package exists on npm and check available versions
echo "Checking @rainlanguage/sqlite-web on npm..."
npm view @rainlanguage/sqlite-web versions --json
echo ""
echo "Checking version 0.0.1-alpha.6 specifically..."
npm view @rainlanguage/sqlite-web@0.0.1-alpha.6 version
```

Length of output: 241
Pin alpha dependency to exact version.
The caret range (^0.0.1-alpha.6) allows automatic updates to future alpha releases; replace it with "0.0.1-alpha.6" to prevent unintended breakages.
🤖 Prompt for AI Agents
In package.json around line 80, the dependency "@rainlanguage/sqlite-web" is
currently using a caret range "^0.0.1-alpha.6" which allows unintended updates;
change the version specifier to the exact version "0.0.1-alpha.6" (remove the
leading ^) to pin the alpha dependency and prevent automatic updates.
@coderabbitai assess this PR size classification for the totality of the PR with the following criteria and report it in your comment: S/M/L

PR Classification Guidelines:

This guide helps classify merged pull requests by effort and complexity rather than just line count. The goal is to assess the difficulty and scope of changes after they have been completed.

Small (S)

Characteristics:
Review Effort: Would have taken 5-10 minutes

Examples:

Medium (M)

Characteristics:

Review Effort: Would have taken 15-30 minutes

Examples:

Large (L)

Characteristics:

Review Effort: Would have taken 45+ minutes

Examples:

Additional Factors to Consider

When deciding between sizes, also consider:

Notes:
Tip: For best results, initiate chat on the files or code changes.

SIZE=M

🧠 Learnings used
Motivation
This PR enables automatic local db syncing for the webapp. Previously we needed to trigger a sync with the push of a button; now syncing starts automatically when the raindex client has the local db enabled.
Solution
Checks
By submitting this for review, I'm confirming I've done the following:
Summary by CodeRabbit
New Features
Improvements
Chores