
Conversation


@PastaPastaPasta PastaPastaPasta commented Aug 27, 2025

Summary by CodeRabbit

  • New Features

    • Wallet-driven monitoring and block/filter processing; new APIs to view masternode lists and quorums and access the wallet.
  • Refactor

    • Removed manual watch-item/watch-manager APIs; tracking now relies on wallet integration.
    • Mempool/filter matching and per-block balance reporting simplified: only address-based mempool checks remain; script/outpoint matching is disabled/placeholder.
  • Bug Fixes

    • Status display now reads filter tip height from storage for more accurate progress.
  • Chores

    • Removed watch-item tests and error variant; startup now initializes a wallet from a mnemonic.


coderabbitai bot commented Aug 27, 2025

Walkthrough

Removes WatchItem/watch-manager APIs and error variant; threads a WalletInterface generic and Arc<RwLock> through sync and client components; replaces local watch-item matching with wallet-driven placeholders; updates mempool/filter/block processing, client API, and initialization to use wallet integration points.
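The generic threading described above can be reduced to a minimal sketch. Note the assumptions: this uses std's `RwLock` for a dependency-free example (the PR threads `tokio::sync::RwLock`), and the trait methods shown are illustrative stand-ins rather than the exact dash-spv API.

```rust
use std::sync::{Arc, RwLock};

// Stand-in for the wallet trait the PR threads through client and sync.
trait WalletInterface {
    fn check_filter_for_matches(&self, filter: &[u8]) -> bool;
}

// The sync manager gains a wallet generic and holds it behind
// Arc<RwLock<W>> so client and sync share a single instance.
struct SequentialSyncManager<W: WalletInterface> {
    wallet: Arc<RwLock<W>>,
}

impl<W: WalletInterface> SequentialSyncManager<W> {
    fn new(wallet: Arc<RwLock<W>>) -> Self {
        Self { wallet }
    }

    // Filter checks delegate to the wallet instead of a local watch list.
    fn filter_matches(&self, filter: &[u8]) -> bool {
        self.wallet.read().unwrap().check_filter_for_matches(filter)
    }
}

struct DummyWallet;
impl WalletInterface for DummyWallet {
    fn check_filter_for_matches(&self, filter: &[u8]) -> bool {
        !filter.is_empty() // trivial stand-in for real GCS matching
    }
}

fn main() {
    let wallet = Arc::new(RwLock::new(DummyWallet));
    let sync = SequentialSyncManager::new(Arc::clone(&wallet));
    assert!(sync.filter_matches(&[1, 2, 3]));
    assert!(!sync.filter_matches(&[]));
    println!("wallet-backed filter checks OK");
}
```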

Changes

  • Remove WatchItem type and APIs (dash-spv/src/types.rs, dash-spv/src/client/watch_manager.rs, dash-spv/src/client/watch_manager_test.rs, dash-spv/src/error.rs, dash-spv/src/lib.rs, dash-spv/src/client/config.rs): deletes the WatchItem enum and serde impls; removes the watch_manager module and its tests; drops SpvError::WatchItem; removes the WatchItem re-export; removes the ClientConfig watch_items field and watch builder methods.
  • Wallet generic integration across client and sync (dash-spv/src/sync/sequential/mod.rs, dash-spv/src/client/mod.rs, dash-spv/src/client/message_handler.rs, dash-spv/src/client/filter_sync.rs, dash-spv/src/sync/filters.rs): adds a generic W: WalletInterface and Arc<RwLock<W>> to SequentialSyncManager and its consumers; updates structs, impls, and constructors to accept the wallet; removes watch_items parameters from many public APIs; exposes client.wallet() and new wallet-driven monitoring/query methods.
  • Block and mempool processing updates (dash-spv/src/client/block_processor.rs, dash-spv/src/mempool_filter.rs): removes watch_items from BlockProcessor; per-transaction watch matching is commented out (TODO) and replaced by wallet-derived relevant-transaction counts; MempoolFilter now stores a HashSet<Address> (watched_addresses) and matches transactions by address only (script/outpoint matching removed for now).
  • Filter sync and processor API shift (dash-spv/src/sync/filters.rs): removes watch_items parameters from public filter APIs; many matching functions return placeholders pending wallet wiring; adds a wallet-based check_filter_for_matches<W: WalletInterface> entry point; simplifies the spawn_filter_processor signature.
  • Client status and main init changes (dash-spv/src/client/status_display.rs, dash-spv/src/main.rs): reads filter tip heights from storage (get_filter_tip_height) instead of a calculated helper; initializes the SPV wallet from a mnemonic and passes it to the client; replaces active watch-item adds with log-only messages and displays monitored addresses from the wallet.
  • Formatting-only storage changes (dash-spv/src/storage/disk.rs): formatting and assertion reflows only; no logic or API changes.
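The address-only mempool matching summarized above can be sketched as follows. The types here are simplified stand-ins for dashcore's `Address` and `Transaction`; the real filter inspects actual output scripts.

```rust
use std::collections::HashSet;

// Simplified stand-ins for dashcore::Address and dashcore::Transaction.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Address(String);

struct TxOut {
    address: Option<Address>,
}

struct Transaction {
    outputs: Vec<TxOut>,
}

// After the refactor the mempool filter keeps only a set of watched
// addresses; script- and outpoint-based matching paths were removed.
struct MempoolFilter {
    watched_addresses: HashSet<Address>,
}

impl MempoolFilter {
    // A transaction is relevant iff any output pays a watched address.
    fn is_relevant(&self, tx: &Transaction) -> bool {
        tx.outputs.iter().any(|out| {
            out.address
                .as_ref()
                .map_or(false, |a| self.watched_addresses.contains(a))
        })
    }
}

fn main() {
    let mut watched = HashSet::new();
    watched.insert(Address("addr1".to_string()));
    let filter = MempoolFilter { watched_addresses: watched };

    let hit = Transaction {
        outputs: vec![TxOut { address: Some(Address("addr1".to_string())) }],
    };
    let miss = Transaction { outputs: vec![TxOut { address: None }] };
    assert!(filter.is_relevant(&hit));
    assert!(!filter.is_relevant(&miss));
    println!("address-only matching OK");
}
```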

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant Main as main.rs
  participant Client as DashSpvClient<S,N,W>
  participant Sync as SequentialSyncManager<S,N,W>
  participant Net as Network
  participant Store as Storage
  participant Wallet as WalletInterface (W)

  User->>Main: start()
  Main->>Wallet: create_wallet_from_mnemonic(...)
  Main->>Client: new(..., wallet)
  Client->>Sync: new(config, ..., wallet)

  Note over Client,Sync: header & filter sync start

  Sync->>Store: get header/filter tip heights
  Sync->>Net: request filters / cfilters
  Net-->>Sync: cfilter(block_hash, filter)
  Sync->>Wallet: check_filter_for_matches(filter, block_hash)
  alt match found
    Sync->>Net: request block(block_hash)
  else no match
    Note right of Sync: continue
  end

  Net-->>Sync: block(block_hash, block)
  Sync->>Wallet: process_block(block)
  Wallet-->>Sync: relevant tx count/details
  Sync-->>Client: emit BlockProcessed / logs
sequenceDiagram
  autonumber
  participant Client
  participant BlockProc as BlockProcessor
  participant Wallet as WalletInterface (W)

  Client->>BlockProc: process_block(...)
  note over BlockProc: WatchItem scanning removed (TODO: wallet integration)
  BlockProc-->>Client: BlockProcessed{ relevant_transactions: <wallet-derived> }
  Note over Client,Wallet: future enrichment via wallet.process_block
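The match branch in the first diagram amounts to: check each received cfilter against the wallet and request the full block only on a match. A dependency-free sketch with stand-in types (the real check goes through the wallet's `check_filter_for_matches`):

```rust
// Illustrative stand-in for dashcore's BlockHash.
#[derive(Clone, PartialEq, Debug)]
struct BlockHash(u64);

// Stand-in for wallet.check_filter_for_matches(filter, block_hash).
fn wallet_matches(filter: &[u8]) -> bool {
    filter.contains(&0x2a)
}

// On each cfilter: queue a block request only when the wallet matches.
fn handle_cfilter(block_hash: BlockHash, filter: &[u8], to_request: &mut Vec<BlockHash>) {
    if wallet_matches(filter) {
        // Match: fetch the full block so the wallet can process its txs.
        to_request.push(block_hash);
    }
    // No match: continue with the next filter.
}

fn main() {
    let mut queue = Vec::new();
    handle_cfilter(BlockHash(1), &[0x2a], &mut queue);
    handle_cfilter(BlockHash(2), &[0x00], &mut queue);
    assert_eq!(queue, vec![BlockHash(1)]);
    println!("queued {} block request(s)", queue.len());
}
```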

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes


Suggested reviewers

  • QuantumExplorer

Poem

In burrows of bytes I hop and weave,
Watch-lists gone, new wallets I cleave.
Filters whisper, “match or not?”
I twitch an ear—block fetched on the spot.
With keys in paws and syncs anew,
I thump the ground: wallet leads the crew. 🐇✨



PastaPastaPasta and others added 3 commits August 26, 2025 21:36
… sync

This change simplifies the SPV client by removing the separate watch manager
infrastructure and directly integrating wallet functionality into the sync process.

Changes:
- Remove WatchManager and related test files
- Remove WatchItem types from types.rs
- Pass wallet reference directly to SequentialSyncManager
- Simplify filter checking to use wallet's built-in check_compact_filter
- Clean up client module to remove watch-related complexity
- Download full filter range instead of limited recent blocks
- Streamline mempool filter and filter sync logic

This refactoring reduces code complexity and better leverages the wallet's
existing filter checking capabilities.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
@PastaPastaPasta PastaPastaPasta force-pushed the feat/integrate-keywalletmanager branch from 98e0e6a to 928ea28 on August 27, 2025 02:36

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
dash-spv/src/storage/disk.rs (2)

343-346: Fix underflow when computing tip heights with empty segments.

If valid_count == 0 or a filter segment has zero headers, ... - 1 underflows (u32::MAX). Guard before subtracting and skip updating the tip in that case.

Apply this diff:

-                    let tip_height =
-                        segment_id * HEADERS_PER_SEGMENT + segment.valid_count as u32 - 1;
-                    *self.cached_tip_height.write().await = Some(tip_height);
+                    if segment.valid_count > 0 {
+                        let tip_height =
+                            segment_id * HEADERS_PER_SEGMENT + (segment.valid_count as u32) - 1;
+                        *self.cached_tip_height.write().await = Some(tip_height);
+                    } else {
+                        tracing::debug!(
+                            "Segment {} has zero valid headers; not updating tip height",
+                            segment_id
+                        );
+                    }
-                    let storage_index =
-                        segment_id * HEADERS_PER_SEGMENT + segment.filter_headers.len() as u32 - 1;
+                    let storage_index_base = segment_id * HEADERS_PER_SEGMENT;
+                    if segment.filter_headers.is_empty() {
+                        tracing::debug!(
+                            "Filter segment {} has zero headers; not updating filter tip",
+                            segment_id
+                        );
+                        return Ok(());
+                    }
+                    let storage_index =
+                        storage_index_base + (segment.filter_headers.len() as u32) - 1;

Also applies to: 350-367


451-466: Avoid concurrent double-writes on eviction; don’t sync-save segments in Saving state.

Evict paths save synchronously whenever state != Clean. If state == Saving, a background write may be in-flight, causing racy, interleaved writes to the same file.

Apply this diff:

-                // Save if dirty or saving before evicting - do it synchronously to ensure data consistency
-                if oldest_segment.state != SegmentState::Clean {
+                // Save only if Dirty; skip if Saving to avoid concurrent double-writes
+                if oldest_segment.state == SegmentState::Dirty {
                     tracing::debug!(
                         "Synchronously saving segment {} before eviction (state: {:?})",
                         oldest_segment.segment_id,
                         oldest_segment.state
                     );
                     let segment_path = self
                         .base_path
                         .join(format!("headers/segment_{:04}.dat", oldest_segment.segment_id));
                     save_segment_to_disk(&segment_path, &oldest_segment.headers).await?;
                     tracing::debug!(
                         "Successfully saved segment {} to disk",
                         oldest_segment.segment_id
                     );
+                } else if oldest_segment.state == SegmentState::Saving {
+                    tracing::trace!(
+                        "Evicting header segment {} in Saving state; skipping sync save",
+                        oldest_segment.segment_id
+                    );
                 }
-            // Save if dirty or saving before evicting - do it synchronously to ensure data consistency
-            if oldest_segment.state != SegmentState::Clean {
+            // Save only if Dirty; skip if Saving to avoid concurrent double-writes
+            if oldest_segment.state == SegmentState::Dirty {
                 tracing::trace!(
                     "Synchronously saving filter segment {} before eviction (state: {:?})",
                     oldest_segment.segment_id,
                     oldest_segment.state
                 );
                 let segment_path = self
                     .base_path
                     .join(format!("filters/filter_segment_{:04}.dat", oldest_segment.segment_id));
                 save_filter_segment_to_disk(&segment_path, &oldest_segment.filter_headers).await?;
                 tracing::debug!(
                     "Successfully saved filter segment {} to disk",
                     oldest_segment.segment_id
                 );
+            } else if oldest_segment.state == SegmentState::Saving {
+                tracing::trace!(
+                    "Evicting filter segment {} in Saving state; skipping sync save",
+                    oldest_segment.segment_id
+                );
             }

Also applies to: 526-541

dash-spv/src/sync/filters.rs (3)

1623-1639: Complete the wallet integration for filter matching.

The check_filters_for_matches method currently returns an empty Vec with a TODO comment. This is a critical function for the wallet integration. The empty implementation could cause downstream issues where filter matches are expected.

Would you like me to help implement the wallet integration for this method or create an issue to track this work?
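As a sketch of one possible shape for the missing body: return the heights whose filters the wallet matches rather than an unconditional empty Vec. The wallet call name mirrors the PR's `check_filter_for_matches`; everything else here is hypothetical.

```rust
// Hypothetical stand-in for the wallet; the real matching would run the
// wallet's scripts through the BIP158 filter.
struct Wallet;

impl Wallet {
    fn check_filter_for_matches(&self, filter: &[u8]) -> bool {
        filter.iter().any(|b| *b != 0)
    }
}

// One possible body for check_filters_for_matches: map (height, filter)
// pairs to the heights whose filter matched.
fn check_filters_for_matches(wallet: &Wallet, filters: &[(u32, Vec<u8>)]) -> Vec<u32> {
    filters
        .iter()
        .filter(|(_, f)| wallet.check_filter_for_matches(f))
        .map(|(h, _)| *h)
        .collect()
}

fn main() {
    let filters = vec![(100u32, vec![0u8, 0]), (101, vec![1]), (102, vec![])];
    assert_eq!(check_filters_for_matches(&Wallet, &filters), vec![101]);
    println!("matched heights computed");
}
```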


1772-1811: Remove unnecessary TODO comment and clarify method behavior.

The download_and_check_filter method has a TODO comment but the implementation appears incomplete. The method always returns false regardless of the filter check result, which could be misleading.

Apply this diff to clarify the method's behavior:

 pub async fn download_and_check_filter(
     &mut self,
     block_hash: BlockHash,
     network: &mut N,
     storage: &mut S,
 ) -> SyncResult<bool> {
-    // TODO: Will check with wallet once integrated
-
     // Get the block height for this hash by scanning headers
     let header_tip_height = storage
         .get_tip_height()
         .await
         .map_err(|e| SyncError::Storage(format!("Failed to get header tip height: {}", e)))?
         .unwrap_or(0);

     let height = self
         .find_height_for_block_hash(&block_hash, storage, 0, header_tip_height)
         .await?
         .ok_or_else(|| {
             SyncError::Validation(format!(
                 "Cannot find height for block {} - header not found",
                 block_hash
             ))
         })?;

     tracing::info!(
-        "📥 Requesting compact filter for block {} at height {}",
+        "📥 Requesting compact filter for block {} at height {} (check will be performed when filter is received)",
         block_hash,
-        height
+        height
     );

     // Request the compact filter using getcfilters
     self.request_filters(network, height, block_hash).await?;

     // Note: The actual filter checking will happen when we receive the CFilter message
-    // This method just initiates the download. The client will need to handle the response.
+    // This method just initiates the download and always returns false.
+    // The actual match result will be determined when the CFilter message is processed.

-    Ok(false) // Return false for now, will be updated when we process the response
+    Ok(false) // Always returns false - actual matching happens asynchronously
 }

2304-2336: Remove unused parameters in spawn_filter_processor.

The function has unused parameters that trigger compiler warnings. Since wallet integration is pending, these parameters might be used in the future, but for now they should either be prefixed with underscore or removed.

Apply this diff to fix the warnings:

 pub fn spawn_filter_processor(
-    network_message_sender: mpsc::Sender<NetworkMessage>,
-    processing_thread_requests: std::sync::Arc<
+    _network_message_sender: mpsc::Sender<NetworkMessage>,
+    _processing_thread_requests: std::sync::Arc<
         std::sync::Mutex<std::collections::HashSet<BlockHash>>,
     >,
     stats: std::sync::Arc<tokio::sync::RwLock<crate::types::SpvStats>>,
 ) -> FilterNotificationSender {
dash-spv/src/sync/sequential/mod.rs (1)

1899-1899: Remove unused parameter in handle_post_sync_cfilter.

The network parameter is unused and triggers a compiler warning.

Apply this diff to fix the warning:

 async fn handle_post_sync_cfilter(
     &mut self,
     cfilter: dashcore::network::message_filter::CFilter,
-    network: &mut N,
+    _network: &mut N,
     storage: &mut S,
 ) -> SyncResult<()> {
dash-spv/src/client/mod.rs (2)

2178-2191: Use declarations occur after use within the same function.
Version and CompactTarget are used before they’re imported; move the imports earlier or fully-qualify.

Apply either:

  • Move the use dashcore::{ block::{Header as BlockHeader, Version}, pow::CompactTarget }; to the top of the function, before first usage; or
  • Fully-qualify here:
-                        version: Version::from_consensus(536870912),
+                        version: dashcore::block::Version::from_consensus(536870912),
...
-                        bits: CompactTarget::from_consensus(
+                        bits: dashcore::pow::CompactTarget::from_consensus(

And remove the late use if fully-qualifying:

-        use dashcore::{
-            block::{Header as BlockHeader, Version},
-            pow::CompactTarget,
-        };
-        use dashcore_hashes::Hash;
+        use dashcore_hashes::Hash;

Also applies to: 2258-2263


2386-2388: Fix E0107: SequentialSyncManager now has 3 generics.
This test helper signature is the CI error source.

Apply:

-    pub fn sync_manager_mut(&mut self) -> &mut SequentialSyncManager<S, N> {
+    pub fn sync_manager_mut(&mut self) -> &mut SequentialSyncManager<S, N, W> {
         &mut self.sync_manager
     }
🧹 Nitpick comments (31)
dash-spv/src/storage/disk.rs (2)

867-871: Use storage index for periodic save cadence checks to keep cadence stable across checkpoints.

Using blockchain height makes cadence dependent on sync_base_height. Prefer storage index.

Apply this diff:

-        if headers.len() >= 1000 || blockchain_height % 1000 == 0 {
+        if headers.len() >= 1000 || final_storage_index % 1000 == 0 {
             self.save_dirty_segments().await?;
         }
-        if headers.len() >= 1000 || next_height % 1000 == 0 {
+        if headers.len() >= 1000 || final_height % 1000 == 0 {
             self.save_dirty_segments().await?;
         }
-        if headers.len() >= 1000 || next_blockchain_height % 1000 == 0 {
+        if headers.len() >= 1000
+            || (next_blockchain_height.saturating_sub(sync_base_height)) % 1000 == 0
+        {
             self.save_dirty_segments().await?;
-        }
+        }

Also applies to: 1076-1080, 1247-1250


171-171: Use tracing::error! instead of eprintln! for consistency.

Keep logging unified with tracing.

-                            eprintln!("Failed to save segment {}: {}", segment_id, e);
+                            tracing::error!("Failed to save segment {}: {}", segment_id, e);
-                            eprintln!("Failed to save filter segment {}: {}", segment_id, e);
+                            tracing::error!("Failed to save filter segment {}: {}", segment_id, e);
-                            eprintln!("Failed to save index: {}", e);
+                            tracing::error!("Failed to save index: {}", e);

Also applies to: 191-191, 209-209

dash-spv/src/client/status_display.rs (1)

88-93: Don’t swallow storage errors when reading filter tip; log and default.

Currently .ok().flatten().unwrap_or(0) hides failures. Log at least warn to aid ops.

-        let storage = self.storage.lock().await;
-        let filter_header_height =
-            storage.get_filter_tip_height().await.ok().flatten().unwrap_or(0);
-        drop(storage);
+        let storage = self.storage.lock().await;
+        let filter_header_height = match storage.get_filter_tip_height().await {
+            Ok(Some(h)) => h,
+            Ok(None) => 0,
+            Err(e) => {
+                tracing::warn!("Failed to read filter tip height from storage: {}", e);
+                0
+            }
+        };
+        drop(storage);

Apply the same pattern in both UI and non-UI paths.

Also applies to: 131-135, 182-186

dash-spv/src/types.rs (1)

529-531: Update docstring to reflect wallet-driven matching (no “watch items”).

-    /// Number of compact filters that matched watch items.
+    /// Number of compact filters that matched wallet-tracked items.
dash-spv/src/client/config.rs (2)

7-10: Remove unused imports to keep clippy clean.

Address and ScriptBuf are unused and trigger warnings. Trim the import to only Network.

-use dashcore::{Address, Network, ScriptBuf};
+use dashcore::Network;

375-416: Consider using a typed error for validate().

Returning Result<(), String> makes error handling brittle and violates the codebase guideline to use thiserror-based types.

I recommend introducing a ConfigValidationError (thiserror) and changing validate() to return Result<(), ConfigValidationError>, propagating it where mapped into SpvError::Config.
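A minimal dependency-free sketch of that suggestion. In the repository this would be a `#[derive(thiserror::Error)]` enum rather than the manual `Display`/`Error` impls below, and the variants and fields shown are purely illustrative, not the real validation rules.

```rust
use std::fmt;

// Typed validation error instead of Result<(), String>.
#[derive(Debug)]
#[allow(dead_code)]
enum ConfigValidationError {
    InvalidNetwork(String),
    ZeroPeers,
}

impl fmt::Display for ConfigValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigValidationError::InvalidNetwork(n) => write!(f, "invalid network: {}", n),
            ConfigValidationError::ZeroPeers => write!(f, "max_peers must be greater than zero"),
        }
    }
}

impl std::error::Error for ConfigValidationError {}

// Illustrative slice of ClientConfig with a typed validate().
struct ClientConfig {
    max_peers: u32,
}

impl ClientConfig {
    fn validate(&self) -> Result<(), ConfigValidationError> {
        if self.max_peers == 0 {
            return Err(ConfigValidationError::ZeroPeers);
        }
        Ok(())
    }
}

fn main() {
    assert!(ClientConfig { max_peers: 8 }.validate().is_ok());
    let err = ClientConfig { max_peers: 0 }.validate().unwrap_err();
    println!("{}", err);
}
```

Callers can then map the typed error into SpvError::Config once at the boundary instead of threading strings through.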

dash-spv/src/client/message_handler.rs (2)

27-33: Tighten W bounds to match downstream requirements.

SequentialSyncManager stores Arc<RwLock> and its methods often require Send + Sync + 'static. Add these bounds here to avoid surprising compile errors when calling sync_manager methods.

-impl<
-        'a,
-        S: StorageManager + Send + Sync + 'static,
-        N: NetworkManager + Send + Sync + 'static,
-        W: WalletInterface,
-    > MessageHandler<'a, S, N, W>
+impl<
+        'a,
+        S: StorageManager + Send + Sync + 'static,
+        N: NetworkManager + Send + Sync + 'static,
+        W: WalletInterface + Send + Sync + 'static,
+    > MessageHandler<'a, S, N, W>

353-369: Avoid double-awaiting should_fetch_transaction().

The current code awaits should_fetch_transaction twice. Compute once and reuse.

-                        if self.config.fetch_mempool_transactions
-                            && filter.should_fetch_transaction(&txid).await
-                        {
+                        let should_fetch = self.config.fetch_mempool_transactions
+                            && filter.should_fetch_transaction(&txid).await;
+                        if should_fetch {
                             tracing::info!("📥 Requesting transaction {}", txid);
                             // Request the transaction
                             let getdata = NetworkMessage::GetData(vec![item]);
                             if let Err(e) = self.network.send_message(getdata).await {
                                 tracing::error!("Failed to request transaction {}: {}", txid, e);
                             }
                         } else {
                             tracing::debug!("Not fetching transaction {} (fetch_mempool_transactions={}, should_fetch={})", 
                                 txid,
                                 self.config.fetch_mempool_transactions,
-                                filter.should_fetch_transaction(&txid).await
+                                should_fetch
                             );
                         }
dash-spv/src/client/block_processor.rs (4)

3-3: Drop unused HashMap import (clippy warning).

HashMap is unused after removing watch-item logic.

-use std::collections::{HashMap, HashSet};
+use std::collections::HashSet;

9-11: Prune unused AddressBalance import.

AddressBalance is only referenced by an unused method below. Removing both keeps warnings at zero.

-use crate::types::{AddressBalance, SpvEvent, SpvStats};
+use crate::types::{SpvEvent, SpvStats};

577-587: Remove unused get_address_balance() helper.

This method is unused and triggers warnings. Remove it (or gate with #[cfg(test)] if kept for tests).

-    /// Get the balance for a specific address.
-    async fn get_address_balance(&self, _address: &dashcore::Address) -> Result<AddressBalance> {
-        // WalletInterface doesn't expose per-address balance
-        // Return empty balance for now
-        Ok(AddressBalance {
-            confirmed: dashcore::Amount::from_sat(0),
-            unconfirmed: dashcore::Amount::from_sat(0),
-            pending: dashcore::Amount::from_sat(0),
-            pending_instant: dashcore::Amount::from_sat(0),
-        })
-    }

615-619: Stat name likely incorrect (requested vs processed).

Updating blocks_requested after processing a block is confusing; consider a separate counter (e.g., blocks_state_updated) or increment blocks_requested only on request dispatch.

dash-spv/src/main.rs (2)

397-442: Disable --add-example-addresses on mainnet to avoid privacy/UX surprises.

Currently it logs mainnet example addresses. Guard this flag to non-mainnet networks.

-    if matches.get_flag("add-example-addresses") {
-        let network = config.network;
+    if matches.get_flag("add-example-addresses") {
+        let network = config.network;
+        if network == dashcore::Network::Dash {
+            tracing::warn!("--add-example-addresses is disabled on mainnet; ignoring.");
+        } else {
             let example_addresses = match network {
                 dashcore::Network::Dash => vec![
                     // Some example mainnet addresses (these are from block explorers/faucets)
                     "Xesjop7V9xLndFMgZoCrckJ5ZPgJdJFbA3", // Crowdnode
                 ],
                 dashcore::Network::Testnet => vec![
                     // Testnet addresses
                     "yNEr8u4Kx8PTH9A9G3P7NwkJRmqFD7tKSj", // Example testnet address
                     "yMGqjKTqr2HKKV6zqSg5vTPQUzJNt72h8h", // Another testnet example
                 ],
                 dashcore::Network::Regtest => vec![
                     // Regtest addresses (these would be from local testing)
                     "yQ9J8qK3nNW8JL8h5T6tB3VZwwH9h5T6tB", // Example regtest address
                     "yeRZBWYfeNE4yVUHV4ZLs83Ppn9aMRH57A", // Another regtest example
                 ],
                 _ => vec![],
             };
             for addr_str in example_addresses {
                 // ...
             }
-    }
+        }
+    }

444-459: Avoid logging full wallet addresses on mainnet (privacy).

Suppress or redact addresses on mainnet to prevent PII leakage in logs.

-        if !monitored.is_empty() {
-            tracing::info!("Wallet monitoring {} addresses:", monitored.len());
-            for (i, addr) in monitored.iter().take(10).enumerate() {
-                tracing::info!("  {}: {}", i + 1, addr);
-            }
+        if !monitored.is_empty() {
+            tracing::info!("Wallet monitoring {} addresses:", monitored.len());
+            if config.network == dashcore::Network::Dash {
+                tracing::info!("  (address list suppressed on mainnet for privacy)");
+            } else {
+                for (i, addr) in monitored.iter().take(10).enumerate() {
+                    tracing::info!("  {}: {}", i + 1, addr);
+                }
+            }
dash-spv/src/client/filter_sync.rs (2)

72-75: Derive earliest height from wallet state.

Defaulting to last 100 blocks is fine short-term, but integrating wallet birth height or creation time (ClientConfig.wallet_creation_time) will avoid missing early matches.
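A hypothetical helper for that suggestion: derive the start height from the wallet's creation time and Dash's ~2.5-minute target block interval, with a safety margin. The function name, the 144-block margin, and the timestamp plumbing are all assumptions for illustration.

```rust
// Dash targets roughly one block every 150 seconds.
const TARGET_BLOCK_SPACING_SECS: u64 = 150;

// Conservative filter-sync start height from the wallet creation time
// (e.g. ClientConfig.wallet_creation_time) instead of "tip - 100".
fn earliest_sync_height(tip_height: u32, tip_time: u64, wallet_creation_time: u64) -> u32 {
    if wallet_creation_time >= tip_time {
        return tip_height; // wallet newer than the tip: start at the tip
    }
    let elapsed = tip_time - wallet_creation_time;
    let blocks_back = (elapsed / TARGET_BLOCK_SPACING_SECS) as u32;
    // Overshoot by a margin so no early matches are missed.
    tip_height.saturating_sub(blocks_back.saturating_add(144))
}

fn main() {
    // Wallet created 86_400 s (~576 blocks) before the tip, plus margin.
    assert_eq!(earliest_sync_height(1_000_000, 2_000_000, 1_913_600), 999_280);
    println!("start height computed");
}
```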


97-117: Clamp requested count to available tip range.

When count overshoots the available range from start to tip, clamp it to prevent unnecessary queueing and confusing logs.

-        let start = start_height.unwrap_or(filter_tip_height.saturating_sub(99));
-        let num_blocks = count.unwrap_or(100);
+        let start = start_height.unwrap_or(filter_tip_height.saturating_sub(99));
+        let num_blocks = count.unwrap_or(100);
+        let available = filter_tip_height.saturating_sub(start).saturating_add(1);
+        let num_blocks = num_blocks.min(available);
dash-spv/src/sync/filters.rs (3)

325-327: Simplify redundant blockchain-to-storage height conversions.

There are multiple instances where blockchain_to_storage_height is called and the result is immediately stored in a variable that's only used once. Consider inlining these conversions for better readability.

For example, simplify this pattern:

-let storage_height =
-    self.blockchain_to_storage_height(next_batch_end_height);
-match storage.get_header(storage_height).await {
+match storage.get_header(self.blockchain_to_storage_height(next_batch_end_height)).await {

Also applies to: 409-411, 420-422, 523-525, 681-692, 746-752, 802-806


1990-1993: Improve code formatting for better readability.

The checkpoint filter header storage code has poor formatting that makes it harder to read.

Apply this diff to improve formatting:

 if self.sync_base_height > 0
     && start_height == self.sync_base_height + 1
     && current_filter_tip < self.sync_base_height
 {
     // Store the previous_filter_header as the filter header for the checkpoint block
     let checkpoint_header = vec![cfheaders.previous_filter_header];
-    storage.store_filter_headers(&checkpoint_header).await.map_err(
-        |e| {
-            SyncError::Storage(format!(
-                "Failed to store checkpoint filter header: {}",
-                e
-            ))
-        },
-    )?;
+    storage
+        .store_filter_headers(&checkpoint_header)
+        .await
+        .map_err(|e| {
+            SyncError::Storage(format!("Failed to store checkpoint filter header: {}", e))
+        })?;

Also applies to: 1996-2003


2338-2473: Consider removing large commented-out code blocks.

There are extensive commented-out code sections for wallet integration. These should either be removed and tracked in an issue, or converted to proper documentation explaining the future integration plan.

Consider either:

  1. Remove the commented code and create a GitHub issue with the implementation details
  2. Convert to a documentation comment explaining the planned wallet integration approach

This will keep the codebase cleaner while preserving the implementation strategy.

dash-spv/src/sync/sequential/mod.rs (1)

1917-1920: Track wallet integration TODO appropriately.

The TODO comment indicates incomplete wallet integration for post-sync filter checking. This should be tracked properly.

The post-sync filter checking is currently disabled. Would you like me to:

  1. Help implement the wallet integration for this path?
  2. Create a GitHub issue to track this work?
  3. Add more detailed documentation about what needs to be done?
dash-spv/src/client/mod.rs (11)

104-107: Return an Arc clone instead of &Arc to avoid leaking the field’s reference type.
Returning a cloned Arc is idiomatic and more ergonomic for callers (can be moved into tasks).

Apply:

-    pub fn wallet(&self) -> &Arc<RwLock<W>> {
-        &self.wallet
-    }
+    pub fn wallet(&self) -> Arc<RwLock<W>> {
+        self.wallet.clone()
+    }

211-214: Remove unnecessary mut; address clippy warning.
Variable is never mutated after initialization.

Apply:

-        let mut sync_manager =
-            SequentialSyncManager::new(&config, received_filter_heights, wallet.clone())
-                .map_err(SpvError::Sync)?;
+        let sync_manager =
+            SequentialSyncManager::new(&config, received_filter_heights, wallet.clone())
+                .map_err(SpvError::Sync)?;

270-279: MempoolFilter initialized with empty address set – likely ineffective.
Until wallet integration populates addresses, filtering won’t match anything.

Consider centralizing via update_mempool_filter() and wiring it to wallet address updates. Minimal change now:

-        if self.config.enable_mempool_tracking {
-            // TODO: Get monitored addresses from wallet
-            self.mempool_filter = Some(Arc::new(MempoolFilter::new(
+        if self.config.enable_mempool_tracking {
+            // Initialize from wallet (will be empty until wallet integration is complete)
+            self.update_mempool_filter().await;
+            if self.mempool_filter.is_none() {
+                self.mempool_filter = Some(Arc::new(MempoolFilter::new(
                 self.config.mempool_strategy,
                 Duration::from_secs(self.config.recent_send_window_secs),
                 self.config.max_mempool_transactions,
                 self.mempool_state.clone(),
-                vec![], // Will be populated from wallet's monitored addresses
+                vec![],
                 self.config.network,
-            )));
+            )));
+            }

Follow-up: does WalletInterface expose a read-only `monitored_addresses() -> Vec<Address>`? If yes, we can implement it inside update_mempool_filter().


451-459: Same as above: enable_mempool_tracking initializes filter with empty addresses.
Call update_mempool_filter() instead for a single source of truth.

Apply:

-        if self.mempool_filter.is_none() {
-            // TODO: Get monitored addresses from wallet
-            self.mempool_filter = Some(Arc::new(MempoolFilter::new(
+        if self.mempool_filter.is_none() {
+            self.update_mempool_filter().await;
+            if self.mempool_filter.is_none() {
+                self.mempool_filter = Some(Arc::new(MempoolFilter::new(
                 self.config.mempool_strategy,
                 Duration::from_secs(self.config.recent_send_window_secs),
                 self.config.max_mempool_transactions,
                 self.mempool_state.clone(),
-                vec![], // Will be populated from wallet's monitored addresses
+                vec![],
                 self.config.network,
-            )));
+            )));
+            }

556-569: update_mempool_filter() still builds an empty filter.
Wire this to wallet as soon as WalletInterface exposes addresses; otherwise, document the limitation explicitly.

Proposed structure when wallet API is available:

let addresses = {
    let w = self.wallet.read().await;
    // e.g., w.monitored_addresses(self.config.network)
    Vec::<dashcore::Address>::new()
};
self.mempool_filter = Some(Arc::new(MempoolFilter::new(
    self.config.mempool_strategy,
    Duration::from_secs(self.config.recent_send_window_secs),
    self.config.max_mempool_transactions,
    self.mempool_state.clone(),
    addresses,
    self.config.network,
)));

1191-1193: Balance reporting TODO.
Once wallet integration lands, use wallet’s address set and balances to report changes.

I can wire this to the wallet interface once the address/balance APIs are confirmed.


1218-1221: Returning Ok(empty) may mask missing wallet integration.
Prefer a clear error to avoid silent misbehavior.

Apply:

-        // TODO: Get balances from wallet instead of tracking separately
-        // Will be implemented when wallet integration is complete
-        Ok(std::collections::HashMap::new())
+        Err(SpvError::Config(
+            "get_all_balances is handled by the external wallet; call the wallet API instead"
+                .to_string(),
+        ))

14-14: Remove unused import (HashSet).
Cleans clippy warning.

Apply:

- use std::collections::HashSet;

939-951: Duplicate timeout checks share the same last_timeout_check timer.
Both blocks gate on the same Instant; only the second resets it. Consider consolidating to a single check to avoid skewed cadence.
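A minimal sketch of the consolidated gate (struct and method names are illustrative, not the actual client fields): run both checks from a single cadence and reset the shared timer exactly once, after both have run.

```rust
use std::time::{Duration, Instant};

// Illustrative driver: one Instant gates both timeout checks, reset once per tick.
struct TimeoutDriver {
    last_timeout_check: Instant,
    interval: Duration,
    header_checks: u32,
    filter_checks: u32,
}

impl TimeoutDriver {
    fn tick(&mut self, now: Instant) {
        if now.duration_since(self.last_timeout_check) >= self.interval {
            self.header_checks += 1; // e.g. check_header_timeouts()
            self.filter_checks += 1; // e.g. check_filter_timeouts()
            self.last_timeout_check = now; // reset once, after both checks ran
        }
    }
}

fn main() {
    let start = Instant::now();
    let mut d = TimeoutDriver {
        last_timeout_check: start,
        interval: Duration::from_secs(30),
        header_checks: 0,
        filter_checks: 0,
    };
    d.tick(start + Duration::from_secs(31)); // past interval: both checks run
    d.tick(start + Duration::from_secs(40)); // within interval: no-op
    assert_eq!(d.header_checks, 1);
    assert_eq!(d.filter_checks, 1);
}
```

This keeps the two checks from skewing each other's cadence, since neither can observe a timer the other has already reset.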


1234-1245: Downcast to MultiPeerNetworkManager ties API to a concrete impl.
If possible, expose a disconnect_peer on NetworkManager to avoid downcasting.
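Sketch of the trait-level alternative (assumed trait shape, not the crate's actual `NetworkManager` definition): adding the method to the trait lets callers drop a peer through `dyn NetworkManager` with no downcast.

```rust
use std::net::SocketAddr;

// Assumed shape: disconnect_peer lives on the trait, so callers never need
// to know about MultiPeerNetworkManager.
trait NetworkManager {
    fn peer_count(&self) -> usize;
    fn disconnect_peer(&mut self, addr: SocketAddr) -> bool;
}

struct MultiPeerNetworkManager {
    peers: Vec<SocketAddr>,
}

impl NetworkManager for MultiPeerNetworkManager {
    fn peer_count(&self) -> usize {
        self.peers.len()
    }
    fn disconnect_peer(&mut self, addr: SocketAddr) -> bool {
        let before = self.peers.len();
        self.peers.retain(|p| *p != addr);
        self.peers.len() != before
    }
}

fn main() {
    let a: SocketAddr = "127.0.0.1:9999".parse().unwrap();
    let mut mgr: Box<dyn NetworkManager> =
        Box::new(MultiPeerNetworkManager { peers: vec![a] });
    assert!(mgr.disconnect_peer(a)); // works through the trait object
    assert_eq!(mgr.peer_count(), 0);
}
```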


1235-1238: Two peer count APIs (peer_count and get_peer_count).
Redundant; consider keeping only one to simplify public surface.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d2b7008 and 928ea28.

📒 Files selected for processing (16)
  • dash-spv/src/client/block_processor.rs (7 hunks)
  • dash-spv/src/client/config.rs (1 hunks)
  • dash-spv/src/client/filter_sync.rs (2 hunks)
  • dash-spv/src/client/message_handler.rs (2 hunks)
  • dash-spv/src/client/mod.rs (10 hunks)
  • dash-spv/src/client/status_display.rs (3 hunks)
  • dash-spv/src/client/watch_manager.rs (0 hunks)
  • dash-spv/src/client/watch_manager_test.rs (0 hunks)
  • dash-spv/src/error.rs (0 hunks)
  • dash-spv/src/lib.rs (1 hunks)
  • dash-spv/src/main.rs (7 hunks)
  • dash-spv/src/mempool_filter.rs (6 hunks)
  • dash-spv/src/storage/disk.rs (7 hunks)
  • dash-spv/src/sync/filters.rs (23 hunks)
  • dash-spv/src/sync/sequential/mod.rs (9 hunks)
  • dash-spv/src/types.rs (1 hunks)
💤 Files with no reviewable changes (3)
  • dash-spv/src/client/watch_manager.rs
  • dash-spv/src/client/watch_manager_test.rs
  • dash-spv/src/error.rs
🧰 Additional context used
📓 Path-based instructions (5)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with the thiserror crate and propagate errors appropriately
Use the tokio runtime for async operations
Adhere to MSRV: code must compile on Rust 1.89 (avoid newer language/features)
Keep code formatted with rustfmt (cargo fmt)
Keep the codebase clippy-clean (clippy with -D warnings)

Files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
  • dash-spv/src/client/status_display.rs
  • dash-spv/src/client/config.rs
  • dash-spv/src/storage/disk.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/message_handler.rs
  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/sequential/mod.rs
  • dash-spv/src/client/filter_sync.rs
  • dash-spv/src/sync/filters.rs
{**/*.rs,**/Cargo.toml}

📄 CodeRabbit inference engine (CLAUDE.md)

Use feature flags and conditional compilation for optional features

Files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
  • dash-spv/src/client/status_display.rs
  • dash-spv/src/client/config.rs
  • dash-spv/src/storage/disk.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/message_handler.rs
  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/sequential/mod.rs
  • dash-spv/src/client/filter_sync.rs
  • dash-spv/src/sync/filters.rs
{dash-network,rpc-client,rpc-json,dash-spv}/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Always validate inputs from untrusted sources

Files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
  • dash-spv/src/client/status_display.rs
  • dash-spv/src/client/config.rs
  • dash-spv/src/storage/disk.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/message_handler.rs
  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/sequential/mod.rs
  • dash-spv/src/client/filter_sync.rs
  • dash-spv/src/sync/filters.rs
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
  • dash-spv/src/client/status_display.rs
  • dash-spv/src/client/config.rs
  • dash-spv/src/storage/disk.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/message_handler.rs
  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/sequential/mod.rs
  • dash-spv/src/client/filter_sync.rs
  • dash-spv/src/sync/filters.rs
dash-spv/src/storage/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

Provide both MemoryStorageManager and DiskStorageManager behind the StorageManager trait

Files:

  • dash-spv/src/storage/disk.rs
🧠 Learnings (9)
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`

Applied to files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
  • dash-spv/src/storage/disk.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/mod.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Expose complex Rust types to C as opaque pointers (e.g., FFIDashSpvClient*)

Applied to files:

  • dash-spv/src/types.rs
  • dash-spv/src/lib.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes

Applied to files:

  • dash-spv/src/lib.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/SPVClient.swift : Do not place type definitions in SPVClient.swift; keep types in dedicated files instead

Applied to files:

  • dash-spv/src/lib.rs
  • dash-spv/src/client/mod.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed

Applied to files:

  • dash-spv/src/lib.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/mod.rs
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.

Applied to files:

  • dash-spv/src/client/config.rs
  • dash-spv/src/main.rs
  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/storage/**/*.rs : Provide both `MemoryStorageManager` and `DiskStorageManager` behind the `StorageManager` trait

Applied to files:

  • dash-spv/src/storage/disk.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : Enforce Rust formatting via `cargo fmt --check` on all Rust source files

Applied to files:

  • dash-spv/src/storage/disk.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Coordinate synchronization phases with `SequentialSyncManager`; each phase must complete before the next begins

Applied to files:

  • dash-spv/src/sync/sequential/mod.rs
🧬 Code graph analysis (8)
dash-spv/src/client/status_display.rs (1)
dash-spv/src/client/mod.rs (1)
  • storage (2396-2398)
dash-spv/src/storage/disk.rs (3)
dash-spv/src/storage/mod.rs (1)
  • get_filter_header (130-130)
dash-spv/src/storage/memory.rs (1)
  • get_filter_header (132-135)
dash-spv/tests/error_handling_test.rs (1)
  • get_filter_header (265-270)
dash-spv/src/main.rs (3)
key-wallet-manager/src/spv_wallet_manager.rs (1)
  • with_base (105-114)
dash-spv/src/client/mod.rs (4)
  • new (192-255)
  • network (436-438)
  • network (1236-1239)
  • wallet (105-107)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
dash-spv/src/client/message_handler.rs (4)
dash-spv/src/client/block_processor.rs (1)
  • new (46-64)
dash-spv/src/client/filter_sync.rs (1)
  • new (30-44)
dash-spv/src/client/mod.rs (1)
  • new (192-255)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
dash-spv/src/client/mod.rs (4)
dash-spv/src/client/block_processor.rs (1)
  • new (46-64)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
dash-spv/src/mempool_filter.rs (2)
  • new (31-47)
  • new (266-272)
dash-spv/src/sync/sequential/mod.rs (2)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/client/mod.rs (5)
  • new (192-255)
  • wallet (105-107)
  • network (436-438)
  • network (1236-1239)
  • storage (2396-2398)
dash-spv/src/client/filter_sync.rs (3)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/client/mod.rs (1)
  • new (192-255)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
dash-spv/src/sync/filters.rs (2)
dash-spv/src/client/mod.rs (5)
  • storage (2396-2398)
  • new (192-255)
  • network (436-438)
  • network (1236-1239)
  • wallet (105-107)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
🪛 GitHub Actions: Continuous integration
dash-spv/src/client/config.rs

[warning] 7-7: warning: unused imports: Address and ScriptBuf

dash-spv/src/client/block_processor.rs

[warning] 578-578: warning: method get_address_balance is never used


[warning] 3-3: warning: unused import: HashMap

dash-spv/src/client/mod.rs

[error] 2386-2386: E0107: struct takes 3 generic arguments but 2 generic arguments were supplied


[warning] 14-14: warning: unused import: std::collections::HashSet


[warning] 211-211: warning: variable does not need to be mutable

dash-spv/src/mempool_filter.rs

[warning] 205-205: warning: unexpected cfg condition name: test_disabled

dash-spv/src/sync/sequential/mod.rs

[warning] 1899-1899: warning: unused variable: network


[warning] 28-28: warning: unused import: WalletInterface

dash-spv/src/sync/filters.rs

[warning] 2304-2304: warning: unused variable: network_message_sender


[warning] 2305-2305: warning: unused variable: processing_thread_requests

🔇 Additional comments (21)
dash-spv/src/storage/disk.rs (3)

680-711: I/O offloading is solid; error mapping looks good.

Blocking file I/O is correctly offloaded with spawn_blocking and mapped to domain errors.

Also applies to: 713-744, 746-759


1951-1997: Great coverage for checkpoint indexing and index rebuild.

The tests validate storage-index mapping, reverse index, and rebuild behavior across instances.


1303-1327: get_filter_header semantics are already aligned across backends

Both implementations interpret the height parameter as the blockchain height, internally mapping it to whatever storage index is appropriate:

  • MemoryStorageManager stores every header starting at genesis (height 0), so calling
    get_filter_header(h) simply does self.filter_headers[h as usize] (vector index = blockchain height).
  • DiskStorageManager must account for a non-zero sync_base_height, so it converts the blockchain height into an internal index via blockchain_height – sync_base_height and then reads the entry at that index.

Because both treat the argument as a blockchain height—and simply perform the necessary offset only within the disk backend—there is no mismatch in the public API. Renaming the parameter or changing the trait signature to “storage_index” would break or confuse clients that expect to pass a blockchain height.

If anything, you may optionally clarify the trait documentation to explicitly state “height is the blockchain height; implementations map to internal storage indices as needed.” Beyond that, no code changes are required here.

Likely an incorrect or invalid review comment.

dash-spv/src/client/status_display.rs (1)

39-65: Header height calculation looks correct with checkpoint offset.

Derivation via sync_base_height + storage_tip is sound and logged appropriately.

dash-spv/src/types.rs (2)

503-503: Public WatchItem removal: call out API break and migration guidance.

Replacing the type with a comment is fine internally, but this is an API removal. Ensure downstream crates are updated and changelog/semver reflect the breaking change (e.g., bump minor/major as appropriate).

Would you like me to scan the repo/workspace for lingering WatchItem references and open a migration checklist?
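Such a sweep could look like the following (demonstrated on a scratch tree; the same grep runs from the real repo root):

```shell
# Hypothetical check for leftover references after the WatchItem removal.
mkdir -p /tmp/watchitem-scan/src
printf 'pub enum WatchItem { Address }\n' > /tmp/watchitem-scan/src/old.rs
printf 'pub struct Clean;\n' > /tmp/watchitem-scan/src/new.rs
grep -rln --include='*.rs' 'WatchItem' /tmp/watchitem-scan
```

Any file the grep lists still needs migration to the wallet-driven APIs.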


241-251: ChainState::tip_height semantics are consistent with checkpointing.

Additive base plus contiguous headers is correct.

dash-spv/src/mempool_filter.rs (1)

49-68: Fetch policy logic is reasonable; capacity checks are correct.

Nothing to change here.

dash-spv/src/lib.rs (2)

77-77: Public re-export removal of WatchItem is a breaking change; bump version and update docs.

Ensure semver bump and changelog entry. Also consider updating crate-level docs mentioning script/outpoint monitoring to clarify wallet-driven tracking.

I can open a follow-up PR to adjust docs and bump version. Do you want that?


94-119: Logging init helper looks good.

Consistent mapping to tracing levels with graceful error handling.

dash-spv/src/client/message_handler.rs (1)

287-300: Confirm generic selection for FilterSyncManager.

You retained FilterSyncManager::<S, N> (no W), which is fine if FilterSyncManager itself remains wallet-agnostic. If it now consults the wallet, ensure the generic params stay consistent across modules.

dash-spv/src/client/block_processor.rs (1)

178-183: Avoid holding write lock across await if possible.

wallet.check_compact_filter is awaited while a write lock is held. If that method awaits internally or re-enters wallet locks, this can deadlock or reduce concurrency.

If the API allows, refactor wallet to expose a read-only check or perform the check without holding the lock across await.
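The general shape of the fix can be sketched like this (shown with `std::sync::RwLock` and a synchronous stand-in for the slow work; the same scoping applies to `tokio::sync::RwLock` and `.await`): snapshot what you need inside a short scope so the guard is dropped before the long-running call.

```rust
use std::sync::{Arc, RwLock};

// Pattern sketch: release the guard before the slow (awaited) work.
fn check_filter(wallet: &Arc<RwLock<Vec<String>>>) -> bool {
    // Scope the read guard so it drops before anything slow runs.
    let snapshot: Vec<String> = {
        let w = wallet.read().unwrap();
        w.clone()
    }; // guard dropped here
    // ...long-running work (e.g. an awaited filter check) proceeds
    // without blocking other wallet users.
    snapshot.iter().any(|s| s == "match")
}

fn main() {
    let wallet = Arc::new(RwLock::new(vec!["match".to_string()]));
    assert!(check_filter(&wallet));
    // The lock is already free again after the call returns.
    assert!(wallet.try_write().is_ok());
}
```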

dash-spv/src/main.rs (1)

314-326: Client construction with wallet clone looks good.

Generic wiring of W, N, S aligns with the broader refactor.

dash-spv/src/client/filter_sync.rs (1)

120-149: Flow-control initiation looks good; ensure background processor is running.

This relies on the monitoring loop to process arriving CFilters. Verify the filter processor task is started before invoking this to avoid lost progress events.

dash-spv/src/sync/filters.rs (1)

1813-1834: LGTM! Wallet integration properly implemented.

The new check_filter_for_matches method correctly integrates with the wallet's check_compact_filter method. The implementation properly creates a BlockFilter, calls the wallet method, and logs matches appropriately.

dash-spv/src/sync/sequential/mod.rs (4)

40-44: LGTM! Clean wallet integration in SequentialSyncManager.

The wallet integration is well-designed with:

  • Proper generic type parameter W with WalletInterface bound
  • Arc<RwLock> for thread-safe shared access
  • Clean initialization in the new method

Also applies to: 80-81, 83-87, 113-114


336-338: Good improvement to filter download range calculation.

Using header_sync.get_sync_base_height().max(1) ensures proper handling of checkpoint-based sync and prevents downloading filters for blocks before the sync base.


1307-1330: Excellent wallet integration for filter checking!

The implementation properly:

  1. Acquires a write lock on the wallet
  2. Calls the wallet's filter checking method with appropriate parameters
  3. Drops the lock after use
  4. Requests the full block when a match is found

The use of deref_mut() to get the mutable reference is correct.


1445-1470: Well-implemented wallet-based block processing.

The block processing correctly:

  1. Gets the block height from storage
  2. Calls wallet.process_block with all necessary parameters
  3. Logs relevant transaction information appropriately
  4. Properly manages the wallet lock lifecycle
dash-spv/src/client/mod.rs (3)

25-26: Import of SyncProgress looks good.
Aligns with new return type usage.


75-75: SequentialSyncManager now requires 3 generics (S, N, W).
Good change; remember to fix the test helper signature below (see Line 2386 comment).


1379-1379: Watch-item removal noted.
Public surface reduction is clear; ensure downstreams have migrated to wallet-driven APIs.

Comment on lines +226 to 239
let mut spv_wallet =
key_wallet_manager::spv_wallet_manager::SPVWalletManager::with_base(WalletManager::<
ManagedWalletInfo,
>::new());
spv_wallet.base.create_wallet_from_mnemonic(
WalletId::default(),
"Default".to_string(),
"enemy check owner stumble unaware debris suffer peanut good fabric bleak outside",
"",
Some(network),
None,
key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
)?;
let wallet = Arc::new(tokio::sync::RwLock::new(spv_wallet));

⚠️ Potential issue

Do not hard-code a mnemonic (security risk). Gate mainnet and read from env.

Bundling a fixed mnemonic is dangerous, especially with default mainnet. Require DASH_SPV_MNEMONIC on mainnet and only fall back to a demo mnemonic on testnet/regtest.

-    spv_wallet.base.create_wallet_from_mnemonic(
-        WalletId::default(),
-        "Default".to_string(),
-        "enemy check owner stumble unaware debris suffer peanut good fabric bleak outside",
-        "",
-        Some(network),
-        None,
-        key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
-    )?;
+    // Use env var on mainnet; allow demo mnemonic only on non-mainnet
+    let mnemonic_env = std::env::var("DASH_SPV_MNEMONIC").ok();
+    let mnemonic = match (network, mnemonic_env) {
+        (dash_spv::Network::Dash, Some(m)) => m,
+        (dash_spv::Network::Dash, None) => {
+            eprintln!("DASH_SPV_MNEMONIC must be set when running on mainnet; refusing to use a built-in mnemonic.");
+            process::exit(1);
+        }
+        (_, Some(m)) => m,
+        (_, None) => "enemy check owner stumble unaware debris suffer peanut good fabric bleak outside".to_string(),
+    };
+    spv_wallet.base.create_wallet_from_mnemonic(
+        WalletId::default(),
+        "Default".to_string(),
+        &mnemonic,
+        "",
+        Some(network),
+        None,
+        key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
+    )?;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let mut spv_wallet =
key_wallet_manager::spv_wallet_manager::SPVWalletManager::with_base(WalletManager::<
ManagedWalletInfo,
>::new());
spv_wallet.base.create_wallet_from_mnemonic(
WalletId::default(),
"Default".to_string(),
"enemy check owner stumble unaware debris suffer peanut good fabric bleak outside",
"",
Some(network),
None,
key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
)?;
let wallet = Arc::new(tokio::sync::RwLock::new(spv_wallet));
let mut spv_wallet =
key_wallet_manager::spv_wallet_manager::SPVWalletManager::with_base(WalletManager::<
ManagedWalletInfo,
>::new());
// Use env var on mainnet; allow demo mnemonic only on non-mainnet
let mnemonic_env = std::env::var("DASH_SPV_MNEMONIC").ok();
let mnemonic = match (network, mnemonic_env) {
(dash_spv::Network::Dash, Some(m)) => m,
(dash_spv::Network::Dash, None) => {
eprintln!(
"DASH_SPV_MNEMONIC must be set when running on mainnet; refusing to use a built-in mnemonic."
);
std::process::exit(1);
}
(_, Some(m)) => m,
(_, None) => {
// Demo mnemonic is only allowed on testnet/regtest
"enemy check owner stumble unaware debris suffer peanut good fabric bleak outside"
.to_string()
}
};
spv_wallet.base.create_wallet_from_mnemonic(
WalletId::default(),
"Default".to_string(),
&mnemonic,
"",
Some(network),
None,
key_wallet::wallet::initialization::WalletAccountCreationOptions::default(),
)?;
let wallet = Arc::new(tokio::sync::RwLock::new(spv_wallet));
🤖 Prompt for AI Agents
In dash-spv/src/main.rs around lines 226 to 239, the code currently hard-codes a
mnemonic string into create_wallet_from_mnemonic; remove the literal and instead
read the mnemonic from an environment variable (DASH_SPV_MNEMONIC) using a
secure env read (trim and validate non-empty), require that the env var is
present when running on mainnet and exit with an error if missing, and only
allow falling back to a demo/test mnemonic when network is testnet or regtest;
also ensure any secrets are not logged and treat the value as sensitive (do not
print it).

Comment on lines +203 to 206
// Tests temporarily disabled during WatchItem removal
// TODO: Rewrite tests to work with wallet integration
#[cfg(test_disabled)]
mod tests {

⚠️ Potential issue

Replace nonstandard cfg gate causing CI warning.

#[cfg(test_disabled)] triggers “unexpected cfg condition name” and will fail with clippy -D warnings. Gate legacy tests behind a feature instead.

-// Tests temporarily disabled during WatchItem removal
-// TODO: Rewrite tests to work with wallet integration
-#[cfg(test_disabled)]
+// Tests temporarily disabled during WatchItem removal
+// Enable with `--features legacy_watchitem_tests`
+#[cfg(all(test, feature = "legacy_watchitem_tests"))]
 mod tests {
🧰 Tools
🪛 GitHub Actions: Continuous integration

[warning] 205-205: warning: unexpected cfg condition name: test_disabled

🤖 Prompt for AI Agents
In dash-spv/src/mempool_filter.rs around lines 203-206, replace the nonstandard
#[cfg(test_disabled)] with a proper feature gate such as #[cfg(feature =
"legacy-tests")] to avoid the unexpected cfg name warning; then add the
corresponding optional feature "legacy-tests" to Cargo.toml (under [features])
so the tests are only compiled when the feature is enabled (and update any
CI/test invocations to enable the feature if you need to run them).

Comment on lines +1204 to 1210
tracing::warn!(
"Attempting to store filter header at height {} below sync_base_height {}",
next_blockchain_height,
sync_base_height
);
next_blockchain_height
}

⚠️ Potential issue

Do not write filter headers “below” sync_base_height; current code misindexes.

When next_blockchain_height < sync_base_height, the code warns but still uses next_blockchain_height as a storage index, corrupting layout. Skip storing that header (or return an error) to avoid out-of-range writes.

Apply this restructuring:

-            // Convert blockchain height to storage index
-            let storage_index = if sync_base_height > 0 {
-                // For checkpoint sync, storage index is relative to sync_base_height
-                if next_blockchain_height >= sync_base_height {
-                    next_blockchain_height - sync_base_height
-                } else {
-                    // This shouldn't happen in normal operation
-                    tracing::warn!(
-                        "Attempting to store filter header at height {} below sync_base_height {}",
-                        next_blockchain_height,
-                        sync_base_height
-                    );
-                    next_blockchain_height
-                }
-            } else {
-                // For genesis sync, storage index equals blockchain height
-                next_blockchain_height
-            };
+            // Convert blockchain height to storage index
+            let storage_index;
+            if sync_base_height > 0 {
+                if next_blockchain_height >= sync_base_height {
+                    storage_index = next_blockchain_height - sync_base_height;
+                } else {
+                    // Skip to avoid corrupting storage indices
+                    tracing::warn!(
+                        "Skipping filter header at height {} below sync_base_height {}",
+                        next_blockchain_height,
+                        sync_base_height
+                    );
+                    next_blockchain_height += 1;
+                    continue;
+                }
+            } else {
+                storage_index = next_blockchain_height;
+            }

Also applies to: 1196-1215

PastaPastaPasta and others added 3 commits August 26, 2025 22:17
Replace hardcoded 0 with txids.len() to accurately report the number of relevant transactions found when processing blocks. The wallet.process_block() call already returns the list of relevant transaction IDs, so we now use that count in the event.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
Replace silent failure pattern with explicit error handling:
- Log warning for missing headers with storage height context
- Log error for storage failures with full debugging information
- Continue processing remaining batches instead of breaking on errors
- Ensure no batches are silently dropped due to storage issues

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
…rk param

- Replace Vec<Address> with HashSet<Address> for O(1) address lookups
- Remove redundant network parameter from is_transaction_relevant()
- Update all callers to use new signature
- Convert linear scans to efficient HashSet::contains() lookups
- Update construction to accept HashSet<Address> parameter
- Fix test constructors to use HashSet::new() instead of vec![]

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
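The lookup change in this commit can be sketched as follows (plain `String`s stand in for `dashcore::Address`, and the function name mirrors the one mentioned above):

```rust
use std::collections::HashSet;

// O(1) membership test per output address via HashSet::contains,
// replacing a linear scan over a Vec.
fn is_transaction_relevant(outputs: &[String], watched: &HashSet<String>) -> bool {
    outputs.iter().any(|addr| watched.contains(addr))
}

fn main() {
    let watched: HashSet<String> = ["Xabc".to_string()].into_iter().collect();
    assert!(is_transaction_relevant(&["Xabc".to_string()], &watched));
    assert!(!is_transaction_relevant(&["Xzzz".to_string()], &watched));
}
```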
@PastaPastaPasta PastaPastaPasta changed the title Feat/integrate keywalletmanager @coderabbitai Aug 27, 2025
@QuantumExplorer QuantumExplorer changed the title @coderabbitai feat: integrate key wallet manager Aug 27, 2025
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
dash-spv/src/client/mod.rs (2)

467-471: Don’t hold and drop an unused wallet lock (clippy::let_underscore_lock).

This holds the read lock for the whole function and will trip clippy -D warnings. Remove it.

-        let _wallet = self.wallet.read().await;

2386-2388: Fix generic signature mismatch in tests.

Method returns the wrong SequentialSyncManager type; won’t compile with W added.

-    pub fn sync_manager_mut(&mut self) -> &mut SequentialSyncManager<S, N> {
+    pub fn sync_manager_mut(&mut self) -> &mut SequentialSyncManager<S, N, W> {
         &mut self.sync_manager
     }
♻️ Duplicate comments (2)
dash-spv/src/sync/filters.rs (1)

3081-3129: Good fix: robust header lookup on batched retries.

Handles Ok(Some)/Ok(None)/Err with logging and continues per-batch. This addresses the earlier silent break.

dash-spv/src/mempool_filter.rs (1)

198-201: Replace nonstandard cfg gate; will fail clippy -D warnings.

#[cfg(test_disabled)] triggers “unexpected cfg condition name”. Gate behind a feature.

-// Tests temporarily disabled during WatchItem removal
-// TODO: Rewrite tests to work with wallet integration
-#[cfg(test_disabled)]
+// Tests temporarily disabled during WatchItem removal
+// Enable with `--features legacy_watchitem_tests`
+#[cfg(all(test, feature = "legacy_watchitem_tests"))]
 mod tests {

Also add to Cargo.toml:

[features]
legacy_watchitem_tests = []
🧹 Nitpick comments (5)
dash-spv/src/sync/filters.rs (2)

475-485: Use Storage error for storage access failure (consistency).

Erroneously maps a storage read failure to Validation. Align with nearby calls using SyncError::Storage.

-                            Err(e) => {
-                                return Err(SyncError::Validation(format!(
-                                    "Failed to get tip header: {}",
-                                    e
-                                )));
-                            }
+                            Err(e) => {
+                                return Err(SyncError::Storage(format!(
+                                    "Failed to get tip header: {}",
+                                    e
+                                )));
+                            }

1682-1698: Stubbed wallet matching path returns no results.

Method unconditionally returns empty matches; ensure call sites don’t depend on it or wire it to wallet soon.

Want me to open a follow-up issue to track replacing this with wallet.check_compact_filter-based batching?

dash-spv/src/mempool_filter.rs (2)

44-45: Remove needless collect; move the set directly.

Avoid re-building an identical HashSet.

-            mempool_state,
-            watched_addresses: watched_addresses.into_iter().collect(),
+            mempool_state,
+            watched_addresses,

79-82: Avoid unnecessary clone.

Address is owned; insert by value.

-            if let Ok(address) = Address::from_script(&output.script_pubkey, self.network) {
-                addresses.insert(address.clone());
+            if let Ok(address) = Address::from_script(&output.script_pubkey, self.network) {
+                addresses.insert(address);
dash-spv/src/client/mod.rs (1)

556-569: DRY: factor mempool filter construction.

Filter initialization is duplicated (here and in start/enable_mempool_tracking). Extract a helper.

+    fn make_mempool_filter(&self) -> MempoolFilter {
+        MempoolFilter::new(
+            self.config.mempool_strategy,
+            Duration::from_secs(self.config.recent_send_window_secs),
+            self.config.max_mempool_transactions,
+            self.mempool_state.clone(),
+            HashSet::new(), // TODO: fill from wallet
+            self.config.network,
+        )
+    }
@@
-        self.mempool_filter = Some(Arc::new(MempoolFilter::new(
-            self.config.mempool_strategy,
-            Duration::from_secs(self.config.recent_send_window_secs),
-            self.config.max_mempool_transactions,
-            self.mempool_state.clone(),
-            HashSet::new(), // Will be populated from wallet's monitored addresses
-            self.config.network,
-        )));
+        self.mempool_filter = Some(Arc::new(self.make_mempool_filter()));
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 928ea28 and cf89720.

📒 Files selected for processing (4)
  • dash-spv/src/client/block_processor.rs (7 hunks)
  • dash-spv/src/client/mod.rs (10 hunks)
  • dash-spv/src/mempool_filter.rs (23 hunks)
  • dash-spv/src/sync/filters.rs (37 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with the thiserror crate and propagate errors appropriately
Use the tokio runtime for async operations
Adhere to MSRV: code must compile on Rust 1.89 (avoid newer language/features)
Keep code formatted with rustfmt (cargo fmt)
Keep the codebase clippy-clean (clippy with -D warnings)

Files:

  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
{**/*.rs,**/Cargo.toml}

📄 CodeRabbit inference engine (CLAUDE.md)

Use feature flags and conditional compilation for optional features

Files:

  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
{dash-network,rpc-client,rpc-json,dash-spv}/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Always validate inputs from untrusted sources

Files:

  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
🧠 Learnings (4)
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.

Applied to files:

  • dash-spv/src/client/block_processor.rs
  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
  • dash-spv/src/sync/filters.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed

Applied to files:

  • dash-spv/src/client/mod.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`

Applied to files:

  • dash-spv/src/client/mod.rs
  • dash-spv/src/mempool_filter.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/SPVClient.swift : Do not place type definitions in SPVClient.swift; keep types in dedicated files instead

Applied to files:

  • dash-spv/src/client/mod.rs
🧬 Code graph analysis (3)
dash-spv/src/client/mod.rs (4)
dash-spv/src/mempool_filter.rs (2)
  • new (31-47)
  • new (261-267)
dash-spv/src/client/block_processor.rs (1)
  • new (46-64)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
dash-spv/src/mempool_filter.rs (3)
dash-spv/src/client/mod.rs (3)
  • network (436-438)
  • network (1236-1239)
  • new (192-255)
dash-spv/src/types.rs (2)
  • new (197-199)
  • new (893-913)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/sync/filters.rs (3)
dash-spv/src/client/mod.rs (7)
  • network (436-438)
  • network (1236-1239)
  • storage (2396-2398)
  • start (258-422)
  • new (192-255)
  • wallet (105-107)
  • stats (2344-2365)
dash-spv/src/client/filter_sync.rs (1)
  • new (30-44)
dash-spv/src/sync/sequential/mod.rs (1)
  • new (90-117)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: fuzz (hashes_sha1)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: Strict Warnings and Clippy Checks
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: RPC Tests (stable, true)
🔇 Additional comments (4)
dash-spv/src/sync/filters.rs (1)

1832-1870: Download path defers matching to message handler — OK.

Initiates getcfilters and correctly documents deferred matching. No issues.

dash-spv/src/client/block_processor.rs (1)

236-242: relevant_transactions is already usize – no cast required

The relevant_transactions field on SpvEvent::BlockProcessed is defined as usize (dash-spv/src/types.rs:791), so using txids.len() (which returns a usize) matches the field’s type exactly. No additional casting is necessary.

dash-spv/src/client/mod.rs (2)

104-107: Wallet accessor looks good.


211-214: SequentialSyncManager wiring with wallet cloned — OK.

@QuantumExplorer QuantumExplorer merged commit 674325e into v0.40-dev Aug 27, 2025
15 of 25 checks passed
@QuantumExplorer QuantumExplorer deleted the feat/integrate-keywalletmanager branch August 27, 2025 03:36

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/sync/sequential/mod.rs (1)

334-345: Fix potential underflow/panic in filter count calculation (checkpoint case).

If filter_header_tip < start_height (e.g., sparse filter headers or high checkpoint), filter_header_tip - start_height + 1 underflows and can panic in debug builds. Guard and early-transition.

-                if filter_header_tip > 0 {
-                    // Download all filters for complete blockchain history
-                    // This ensures the wallet can find transactions from any point in history
-                    let start_height = self.header_sync.get_sync_base_height().max(1);
-                    let count = filter_header_tip - start_height + 1;
+                if filter_header_tip > 0 {
+                    // Start from sync base (or 1 if no checkpoint)
+                    let start_height = self.header_sync.get_sync_base_height().max(1);
+                    if filter_header_tip < start_height {
+                        tracing::info!(
+                            "No filters to download: tip {} < start {}",
+                            filter_header_tip,
+                            start_height
+                        );
+                        self.transition_to_next_phase(
+                            storage,
+                            network,
+                            "No filter range to download",
+                        ).await?;
+                        return Ok(());
+                    }
+                    let count = filter_header_tip - start_height + 1;
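
The underflow the guard above prevents can be shown directly with `u32` arithmetic; `checked_sub` makes the empty-range case explicit instead of panicking in debug builds (or silently wrapping in release):

```rust
fn main() {
    let filter_header_tip: u32 = 5;
    let start_height: u32 = 10;

    // `filter_header_tip - start_height + 1` would underflow here,
    // panicking in debug builds; checked_sub yields None instead.
    let count = filter_header_tip.checked_sub(start_height).map(|d| d + 1);
    assert_eq!(count, None); // nothing to download

    // Normal case: tip 10, start 5 → 6 filters inclusive.
    let count = 10u32.checked_sub(5).map(|d| d + 1);
    assert_eq!(count, Some(6));
    println!("ok");
}
```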
🧹 Nitpick comments (4)
dash-spv/src/sync/sequential/mod.rs (4)

12-12: Avoid explicit deref; remove DerefMut import and use deref coercion to satisfy clippy.

- use std::ops::DerefMut;
-        // Check filter against wallet if available
+        // Check filter against wallet
         let matches = self
             .filter_sync
             .check_filter_for_matches(
                 &cfilter.filter,
                 &cfilter.block_hash,
-                wallet.deref_mut(),
+                &mut *wallet,
                 self.config.network,
             )
             .await?;

Also applies to: 1307-1319


1061-1063: Don’t swallow storage errors; propagate instead of unwrap_or(0).

-        let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+        let blockchain_height = self.get_blockchain_height_from_storage(storage).await?;
-        let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+        let blockchain_height = self.get_blockchain_height_from_storage(storage).await?;

Also applies to: 1107-1109
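
The difference between masking and propagating the error is small in code but large in behavior: `unwrap_or(0)` makes a storage failure indistinguishable from genesis height. A toy sketch (`height_from_storage` is a hypothetical stand-in for the storage call):

```rust
fn height_from_storage(available: bool) -> Result<u32, String> {
    if available {
        Ok(1000)
    } else {
        Err("storage read failed".to_string())
    }
}

// With `?`, a storage failure surfaces to the caller instead of
// silently becoming height 0 via `unwrap_or(0)`.
fn blockchain_height(available: bool) -> Result<u32, String> {
    let height = height_from_storage(available)?;
    Ok(height)
}

fn main() {
    assert_eq!(blockchain_height(true), Ok(1000));
    assert!(blockchain_height(false).is_err());
    // Masked variant: the error collapses to 0.
    assert_eq!(height_from_storage(false).unwrap_or(0), 0);
    println!("ok");
}
```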


1104-1135: Avoid cloning headers; precompute metrics and pass by value.

-        let continue_sync =
-            self.header_sync.handle_headers_message(headers.clone(), storage, network).await?;
+        let headers_len = headers.len() as u32;
+        let is_empty = headers_len == 0;
+        let continue_sync =
+            self.header_sync.handle_headers_message(headers, storage, network).await?;
...
-            *headers_downloaded += headers.len() as u32;
+            *headers_downloaded += headers_len;
...
-            if headers.is_empty() {
+            if is_empty {
                 *received_empty_response = true;
             }

838-845: Align “expected message” logic with actual handling; don’t mark Block as expected in MnList.

Blocks are ignored during MnList earlier in handle_message; this arm is misleading.

-            (
-                SyncPhase::DownloadingMnList {
-                    ..
-                },
-                NetworkMessage::Block(_),
-            ) => true, // Allow blocks during masternode sync
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between cf89720 and 240fc5d.

📒 Files selected for processing (1)
  • dash-spv/src/sync/sequential/mod.rs (9 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with the thiserror crate and propagate errors appropriately
Use the tokio runtime for async operations
Adhere to MSRV: code must compile on Rust 1.89 (avoid newer language/features)
Keep code formatted with rustfmt (cargo fmt)
Keep the codebase clippy-clean (clippy with -D warnings)

Files:

  • dash-spv/src/sync/sequential/mod.rs
{**/*.rs,**/Cargo.toml}

📄 CodeRabbit inference engine (CLAUDE.md)

Use feature flags and conditional compilation for optional features

Files:

  • dash-spv/src/sync/sequential/mod.rs
{dash-network,rpc-client,rpc-json,dash-spv}/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Always validate inputs from untrusted sources

Files:

  • dash-spv/src/sync/sequential/mod.rs
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/sync/sequential/mod.rs
🧠 Learnings (3)
📓 Common learnings
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`

Applied to files:

  • dash-spv/src/sync/sequential/mod.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Coordinate synchronization phases with `SequentialSyncManager`; each phase must complete before the next begins

Applied to files:

  • dash-spv/src/sync/sequential/mod.rs
🧬 Code graph analysis (1)
dash-spv/src/sync/sequential/mod.rs (3)
dash-spv/src/client/message_handler.rs (1)
  • new (35-59)
dash-spv/src/client/mod.rs (5)
  • new (192-255)
  • wallet (105-107)
  • network (436-438)
  • network (1236-1239)
  • storage (2396-2398)
dash-spv/src/sync/filters.rs (1)
  • new (139-173)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (16)
  • GitHub Check: fuzz (hashes_sha1)
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: fuzz (dash_deserialize_script)
  • GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: fuzz (dash_deserialize_witness)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: Strict Warnings and Clippy Checks
🔇 Additional comments (2)
dash-spv/src/sync/sequential/mod.rs (2)

28-28: Keep WalletInterface import; it’s used in generic bounds.

The bound W: WalletInterface requires the trait in scope; this import is necessary. The prior “unused import” bot note is now obsolete.


396-419: Confirm wallet block-processing path

It appears that handle_block_message is only invoked while the sync manager is in the DownloadingBlocks phase—but in execute_current_phase this phase immediately transitions out with the “No blocks to download” reason, before any blocks are requested or downloaded. As a result, the wallet’s process_block (in handle_block_message) may never be called, and all block processing would instead fall through to the client’s block processor.

Please verify and adjust as needed:

  • In dash-spv/src/sync/sequential/mod.rs around lines 370–378, the SyncPhase::DownloadingBlocks arm calls
    transition_to_next_phase(..., "No blocks to download") immediately
  • In the same file around lines 402–418, handle_message only dispatches to handle_block_message when in DownloadingBlocks
  • The implementation of handle_block_message (lines 1437–1500) invokes wallet.process_block but never forwards the block to the client processor
  • Block requests (RequestType::GetBlock) are enqueued via request control (see dash-spv/src/sync/sequential/request_control.rs:379), yet pending_blocks is initialized empty and may never be populated before exiting the phase

Ensure that:

  • Filter matches populate pending_blocks and trigger block download requests before transitioning out of DownloadingBlocks, or
  • Unconditionally process wallet updates (e.g., in the client block processor), to avoid missing or duplicating wallet state updates

Comment on lines +1917 to 1920
// TODO: Check filter against wallet instead of watch items
// This will be integrated with wallet's check_compact_filter method
tracing::debug!("Filter checking disabled until wallet integration is complete");


🛠️ Refactor suggestion

Enable wallet-based matching for post-sync filters (misses relevant txs otherwise).

-        // TODO: Check filter against wallet instead of watch items
-        // This will be integrated with wallet's check_compact_filter method
-        tracing::debug!("Filter checking disabled until wallet integration is complete");
+        // Check filter against wallet and request block on match
+        let mut wallet = self.wallet.write().await;
+        let matches = self
+            .filter_sync
+            .check_filter_for_matches(
+                &cfilter.filter,
+                &cfilter.block_hash,
+                &mut *wallet,
+                self.config.network,
+            )
+            .await?;
+        drop(wallet);
+
+        if matches {
+            let inv = Inventory::Block(cfilter.block_hash);
+            network
+                .send_message(NetworkMessage::GetData(vec![inv]))
+                .await
+                .map_err(|e| SyncError::Network(format!("Failed to request block: {}", e)))?;
+        }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-        // TODO: Check filter against wallet instead of watch items
-        // This will be integrated with wallet's check_compact_filter method
-        tracing::debug!("Filter checking disabled until wallet integration is complete");
+        // Check filter against wallet and request block on match
+        let mut wallet = self.wallet.write().await;
+        let matches = self
+            .filter_sync
+            .check_filter_for_matches(
+                &cfilter.filter,
+                &cfilter.block_hash,
+                &mut *wallet,
+                self.config.network,
+            )
+            .await?;
+        drop(wallet);
+
+        if matches {
+            let inv = Inventory::Block(cfilter.block_hash);
+            network
+                .send_message(NetworkMessage::GetData(vec![inv]))
+                .await
+                .map_err(|e| SyncError::Network(format!("Failed to request block: {}", e)))?;
+        }
🤖 Prompt for AI Agents
In dash-spv/src/sync/sequential/mod.rs around lines 1917 to 1920, replace the
placeholder debug and TODO with a real wallet-based compact-filter check: call
the wallet's check_compact_filter (or equivalent) to match filters against the
wallet state instead of using watch items, wire any necessary wallet reference
into this sync context, and handle the wallet response (match/miss) to include
matching transactions in post-sync processing; remove the debug message and
ensure proper error handling and tests cover cases where wallet matching changes
which txs are considered relevant.
