feat: initial implementation of SPV client in rust-dashcore #75
Walkthrough

This change introduces a new `dash-spv` Rust package implementing a modular, async Dash SPV client library, including its configuration, documentation, CLI, core modules, and comprehensive test suite. It also updates the Dash mainnet genesis block parameters, fixes version-aware coinbase payload decoding, and adds support for the MNHF Signal special transaction type. The update includes major new modules for networking, storage, sync, validation, wallet, and error handling.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI/Main
    participant DashSpvClient
    participant NetworkManager
    participant StorageManager
    participant Wallet
    participant SyncManager
    User->>CLI/Main: Run dash-spv with config
    CLI/Main->>DashSpvClient: new(config)
    DashSpvClient->>NetworkManager: connect()
    DashSpvClient->>StorageManager: load headers, state
    DashSpvClient->>Wallet: load watched addresses, UTXOs
    CLI/Main->>DashSpvClient: start()
    DashSpvClient->>SyncManager: sync_to_tip()
    SyncManager->>NetworkManager: request headers/filters
    NetworkManager->>DashSpvClient: deliver network messages
    DashSpvClient->>SyncManager: handle headers/filters
    SyncManager->>StorageManager: store headers/filters
    DashSpvClient->>Wallet: update UTXOs/balances on relevant txs
    DashSpvClient->>CLI/Main: report sync progress
    User->>CLI/Main: Ctrl-C (shutdown)
    CLI/Main->>DashSpvClient: stop()
    DashSpvClient->>NetworkManager: disconnect
    DashSpvClient->>StorageManager: shutdown
```
- Replace placeholder `filter_matches_scripts` with a real BIP158 GCS implementation
- Add comprehensive integration test framework with Docker support
- Implement network monitoring for ChainLocks and InstantLocks with signature verification
- Enhance masternode engine with proper block header feeding and state management
- Add watch item persistence and improved transaction discovery
- Increase filter search range from 50 to 1000 blocks for better coverage
- Enable X11 hashing and BLS signature verification in dependencies
- Add proper error handling and logging throughout the sync pipeline
- Ping and Pong Handling: added mechanisms to send periodic pings and handle incoming pings/pongs, enhancing network reliability.
- Block Processing: implemented functions to process new block hashes immediately and manage block headers and filters effectively.
- Filter Headers and Filters: added logic to handle CFHeaders and CFilter network messages and check them against watch items.
- Logging Enhancements: improved logging for better traceability, including filter matches and network message receipt.
- Error Handling: strengthened error handling for network messages and block processing errors.

This update enhances network responsiveness and block synchronization, enabling better SPV client performance.
- Add `get_header_height_by_hash()` method for O(1) hash-to-height lookups
- Add `get_headers_batch()` method for efficient bulk header loading
- Implement a reverse index in both disk and memory storage
- Add `as_any_mut()` trait for storage downcasting
- Leverage the existing segmented file structure for batch operations

These optimizations enable efficient masternode sync by reducing individual storage reads from millions to thousands.
Replace the inefficient strategy that fed all 2.2+ million headers individually with selective feeding of only the required headers:

- Use the reverse index for O(1) hash-to-height lookups
- Feed only target, base, and quorum block hashes
- Use batch loading for recent header ranges (~1000 headers)
- Eliminate the “Feeding 2278524 block headers” bottleneck

Performance improvement: ~2.2M individual reads → ~1K batch operations.
Implement a status bar showing sync progress at the bottom of the terminal:

- Headers count and filter headers count
- Latest ChainLock height and peer count
- Network name (Dash/Testnet/Regtest)
- Updates every 100 ms without interfering with log output

Features:

- Uses crossterm for cross-platform terminal control
- RAII cleanup with TerminalGuard
- Logs stream normally above the persistent status bar
- Optional --no-terminal-ui flag to disable
Add comprehensive terminal UI integration to the SPV client:

- `enable_terminal_ui()` and `get_terminal_ui()` methods
- Real-time status updates after network connections
- Status updates after header processing and ChainLock events
- `update_status_display()` method with storage data integration
- Proper shutdown sequence ensuring storage persistence
- Network configuration getter for UI display

The client now displays live sync progress including header counts from storage, peer connections, and ChainLock heights.
CLI improvements:

- Add --no-terminal-ui flag to disable the status bar
- Proper terminal UI initialization timing
- Network name display integration
- Remove unused Arc import

Logging improvements:

- Fix log level handling in `init_logging()`
- Improve tracing-subscriber configuration
- Remove thread IDs for cleaner output

The CLI now provides a modern terminal experience with an optional real-time status display alongside streaming logs.
Small enhancements to header and filter sync:

- Improve logging and error handling
- Better progress reporting during sync operations
- Consistent formatting across sync modules

These changes support the terminal UI integration and provide better visibility into sync progress.
Force-pushed from 8d4a501 to 4364cf9.
- Add a thread-safe Mutex wrapper around BufReader to prevent race conditions
- Implement sticky peer selection for sync consistency during operations
- Increase peer count limits (2-5 peers) for better network resilience
- Add single-peer message routing for sync operations requiring consistency
- Improve connection error handling and peer disconnection detection
- Add timeout-based message receiving to prevent indefinite blocking
- Reduce log verbosity for common sync messages to improve readability
- Add comprehensive sync state management with timeout detection
- Implement overlapping header handling for improved sync reliability
- Add coordinated message routing between sync managers and the main client
- Enhance filter sync with batch processing and progress tracking
- Add sync timeout detection and recovery mechanisms
- Improve masternode sync coordination and state management
- Add detailed sync progress logging and error handling
- Implement proper chain validation during sync operations
- Add centralized network message handling to prevent race conditions
- Implement message routing between the monitoring loop and sync operations
- Add comprehensive sync timeout detection and recovery mechanisms
- Enhance filter sync coordination with monitoring loop management
- Add detailed documentation for the network message architecture
- Improve sync progress reporting and status updates
- Reduce debug noise from transaction input checking
- Add `sync_and_check_filters_with_monitoring` method for better coordination
- Add a filter header verification test for chain validation
- Enhance the multi-peer test with better error handling and timeouts
- Add a checksum utility for data integrity verification
- Improve consensus encoding with better error messages
- Add test infrastructure for sync coordination scenarios
Remove premature `finish_sync()` calls that were marking header and filter header synchronization as complete immediately after starting. The sync should only be marked as finished when `handle_*_message()` returns false, indicating actual sync completion.

- Remove `finish_sync()` calls after starting header sync
- Remove `finish_sync()` calls after starting filter header sync
- Add a `sync_state_mut()` accessor for proper state management
- Add proper sync completion in the client message handlers

This fixes the issue where sync would complete with 0 headers because the sync state was marked as finished before any headers were processed.
          
Actionable comments posted: 57
🔭 Outside diff range comments (3)
dash/src/blockdata/transaction/special_transaction/coinbase.rs (3)
45-47: Type regression: `best_cl_height` must remain `u64`

The field was downgraded to `u32`, but all consensus helpers (`write_compact_size`, `compact_size_len`, `read_compact_size`) work with `u64`. This currently fails to compile and truncates values ≥ 4,294,967,295.

```diff
- pub best_cl_height: Option<u32>,
+ pub best_cl_height: Option<u64>,
```

55-67: Fix compile errors in size/encode helpers after type change

`compact_size_len` and `write_compact_size` expect `u64`. Cast the value:

```diff
- size += compact_size_len(best_cl_height);
+ size += compact_size_len(best_cl_height as u64);
```

and

```diff
- len += write_compact_size(w, best_cl_height)?;
+ len += write_compact_size(w, best_cl_height as u64)?;
```

108-117: Decode path: preserve full 64-bit height

`read_compact_size` returns `u64`; assigning it to `Option<u32>` overflows. Keep the full `u64`:

```rust
let best_cl_height = if version >= 3 {
    Some(read_compact_size(r)?)
} else {
    None
};
```

(with the struct field already switched to `u64`).
🧹 Nitpick comments (68)
Cargo.toml (1)
2-2: Workspace entry looks good – remember to keep the list sorted alphabetically next time

Pure nit. The new `dash-spv` member is correctly added; nothing else to flag.

dash/src/blockdata/transaction/mod.rs (1)
675-677: SegWit flag override duplicated – propose consolidating the checks

`MnhfSignal` now gets its own `if`, but the three consecutive `if` statements (AssetUnlock, QuorumCommitment, MnhfSignal) do exactly the same thing. Consider collapsing them into a single `matches!` to avoid copy-paste growth:

```diff
- if special_transaction_type == TransactionType::AssetUnlock {
-     segwit = false;
- }
- if special_transaction_type == TransactionType::QuorumCommitment {
-     segwit = false;
- }
- if special_transaction_type == TransactionType::MnhfSignal {
-     segwit = false;
- }
+ if matches!(
+     special_transaction_type,
+     TransactionType::AssetUnlock
+         | TransactionType::QuorumCommitment
+         | TransactionType::MnhfSignal
+ ) {
+     segwit = false;
+ }
```

Same readability, fewer branches.
dash-spv/src/filters/mod.rs (1)
1-14: Placeholder acknowledged – add `#[derive(Default)]` for ergonomic construction

Since `FilterManager::new()` just returns `Self {}`, deriving `Default` lets callers write `FilterManager::default()` and eases future mocking.

```diff
-pub struct FilterManager {
-    // Placeholder for future filter management functionality
-}
+#[derive(Default)]
+pub struct FilterManager {
+    // Placeholder for future filter management functionality
+}
```

Minor, but costs nothing.
dash-spv/examples/simple_sync.rs (2)
18-33: Prefer structured logging over `println!` in examples

`println!` sidesteps the `tracing`/`log` ecosystem you already wired up with `init_logging`. Replacing these calls with `tracing::info!()` keeps the example consistent with the rest of the crate and allows users to control verbosity via env-filters.

```diff
- println!("Starting header synchronization...");
+ tracing::info!("Starting header synchronization...");
  ...
- println!("Synchronization completed!");
- println!("Synced {} headers", progress.header_height);
+ tracing::info!("Synchronization completed");
+ tracing::info!("Synced {} headers", progress.header_height);
  ...
- println!("Headers downloaded: {}", stats.headers_downloaded);
- println!("Bytes received: {}", stats.bytes_received);
+ tracing::info!("Headers downloaded: {}", stats.headers_downloaded);
+ tracing::info!("Bytes received: {}", stats.bytes_received);
  ...
- println!("Done!");
+ tracing::info!("Done!");
```
18-20: Handle shutdown on early errors

If any awaited call after `client.start()` fails, the example returns early and leaves background tasks running until Tokio drops them. Wrapping `client.start()` and subsequent calls in a `match`/`if let`, or using a guard type that stops the client on `Drop`, would guarantee clean shutdown even on errors; a minimal guard sketch follows.
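A minimal sketch of such a guard, assuming a multi-thread Tokio runtime and that `DashSpvClient::stop` is async (the type and method come from this PR; the guard itself is illustrative, not part of the crate):

```rust
// Illustrative RAII guard: stops the client when it goes out of scope,
// even if an earlier `?` returned early. Assumes a multi-thread runtime,
// since `block_in_place` panics on a current-thread runtime.
struct ClientGuard {
    client: Option<DashSpvClient>,
}

impl Drop for ClientGuard {
    fn drop(&mut self) {
        if let Some(mut client) = self.client.take() {
            tokio::task::block_in_place(|| {
                tokio::runtime::Handle::current().block_on(async {
                    let _ = client.stop().await; // best-effort shutdown
                });
            });
        }
    }
}
```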
dash-spv/tests/simple_segmented_test.rs (2)

10-20: Duplicate `create_test_header` helper across tests

This exact helper already exists in `segmented_storage_test.rs` and `segmented_storage_debug.rs`. Consider moving it to `tests/common/mod.rs` (or similar) and `pub use`-ing it to avoid drift.

24-50: Excessive `println!` output in unit test

Unit tests should stay quiet unless they fail. Replace the debug prints with `tracing::debug!` (behind `RUST_LOG`) or remove them entirely to keep CI logs clean.

dash-spv/Cargo.toml (1)
33-47: Both `log` and `tracing` enabled without a bridge

You depend on `tracing`, `tracing-subscriber`, and `log`, but no `tracing-log`/`tracing-appender` feature is enabled to route `log` records into `tracing`. This can lead to missing or duplicated messages; a sketch of the bridge follows.
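A minimal sketch of wiring the bridge, assuming `tracing-log` is added as a dependency (the exact subscriber setup in this crate may differ):

```rust
use tracing_log::LogTracer;
use tracing_subscriber::FmtSubscriber;

fn init_logging() {
    // Forward `log` records into the `tracing` ecosystem so that
    // dependencies using the `log` facade are not silently dropped.
    LogTracer::init().expect("log bridge already installed");

    // Install a plain fmt subscriber as the global default.
    let subscriber = FmtSubscriber::builder().finish();
    tracing::subscriber::set_global_default(subscriber)
        .expect("global subscriber already set");
}
```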
dash-spv/examples/filter_sync.rs (2)

13-18: `unwrap()` on network check can panic

`require_network()` already guarantees correctness in tests, but in example code a panic is user-visible. Propagate the error instead:

```diff
- let config = ClientConfig::mainnet()
-     .watch_address(watch_address.clone().require_network(Network::Dash).unwrap())
+ let watch_address = watch_address.require_network(Network::Dash)?;
+ let config = ClientConfig::mainnet()
+     .watch_address(watch_address)
```

24-42: Consistent logging style

Same recommendation as `simple_sync.rs` – prefer `tracing::info!` over `println!` for progress messages.

block_with_pro_reg_tx.data (1)
1-1: Large raw test vectors should be compressed or moved under `tests/data/`

Committing ~500 KB of hex inflates the repository and hurts diff readability. Consider:

- Compressing with gzip and decoding at test time (see the sketch after this list), or
- Using `include_bytes!("tests/data/pro_reg_tx.bin")`.

This keeps the workspace lean while preserving deterministic test data.
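A sketch of the gzip route, assuming `flate2` as a dev-dependency and a compressed fixture checked in under `tests/data/` (the file name is illustrative):

```rust
use std::io::Read;

fn load_test_block() -> Vec<u8> {
    // Compressed fixture; `include_bytes!` embeds it at compile time.
    let compressed: &[u8] = include_bytes!("data/block_with_pro_reg_tx.bin.gz");
    let mut decoder = flate2::read::GzDecoder::new(compressed);
    let mut raw = Vec::new();
    decoder
        .read_to_end(&mut raw)
        .expect("test vector must be valid gzip");
    raw
}
```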
dash-spv/tests/segmented_storage_debug.rs (1)
24-41: Reduce noisy `println!` in tests

Excessive `println!` calls can bloat CI logs and slow test runs. Consider replacing them with `tracing::{debug,info}!` (captured with `--nocapture` when needed) or remove them entirely.

dash-spv/tests/test_plan.md (2)

70-80: Fix duplicated word

A repeated word appears (“validation validation”). Drop the duplicate to keep the plan crisp.

95-105: Typo: duplicate “filter”

In “Compact filter download”, the word “filter” is repeated. Remove the extra occurrence.
dash-spv/src/storage/mod.rs (1)
105-114: Remove redundant `AsAnyMut` helper

`StorageManager` already requires `as_any_mut`; the separate `AsAnyMut` trait duplicates functionality and adds confusion.

```diff
-/// Helper trait to provide as_any_mut for all StorageManager implementations
-pub trait AsAnyMut {
-    fn as_any_mut(&mut self) -> &mut dyn Any;
-}
-
-impl<T: 'static> AsAnyMut for T {
-    fn as_any_mut(&mut self) -> &mut dyn Any {
-        self
-    }
-}
```

Instead, give `as_any_mut` a default implementation inside `StorageManager`:

```diff
 #[async_trait]
 pub trait StorageManager: Send + Sync + 'static {
-    fn as_any_mut(&mut self) -> &mut dyn Any;
+    fn as_any_mut(&mut self) -> &mut dyn Any
+    where
+        Self: Sized,
+    {
+        self
+    }
```

This keeps the API leaner and avoids extra blanket impls.
dash-spv/src/validation/chainlock.rs (1)
39-44: Implement null-hash check

A zero `block_hash` is as invalid as height 0. Add the check now; it requires no quorum data.

```diff
-// Check block hash is not zero (we'll skip this check for now)
-// TODO: Implement proper null hash check
+if chain_lock.block_hash.is_all_zeros() {
+    return Err(ValidationError::InvalidChainLock(
+        "ChainLock block hash cannot be zero".to_string(),
+    ));
+}
```

Completing this small validation step tightens security at negligible cost.
dash-spv/README.md (1)
75-85: Add a language identifier to the architecture fenced block

markdownlint (MD040) warns because the fenced block that lists the directory tree is missing a language identifier. Adding `text` (or `bash`) after the opening triple back-ticks silences the linter and renders consistently across tooling.

````diff
-```
+```text
 dash-spv/
 ├── client/   # High-level client API and configuration
 ...
````

dash-spv/src/network/tests.rs (2)
41-49: Shut down the `MultiPeerNetworkManager` to avoid background-task leaks

`MultiPeerNetworkManager::new` spawns background tasks (heartbeat, peer monitors, etc.). Because the manager is dropped without an explicit shutdown, those tasks can outlive the test and interfere with subsequent tests running in the same Tokio runtime.

```diff
 let manager = MultiPeerNetworkManager::new(&config).await.unwrap();

 // Should start with zero peers
 assert_eq!(manager.peer_count_async().await, 0);

 // clean-up
-manager // implicitly dropped
+manager.shutdown().await.expect("failed to shut down manager");
```

(Replace `shutdown` with the actual async teardown method if it has a different name.)

45-48: Duplicate assertion – can be removed

`peer_count_async()` is called twice in succession with no state change in between, yielding the same result.

```diff
-// Note: is_connected() still uses sync approach, so we'll check async
-assert_eq!(manager.peer_count_async().await, 0);
```

dash-spv/tests/simple_header_test.rs (1)
59-60: Graceful shutdown of the client

`client.start()` spawns multiple background tasks; consider calling `client.stop().await` before the test returns to ensure all tasks are torn down cleanly.

dash-spv/src/network/discovery.rs (2)
28-36: Propagate DNS-lookup errors instead of silently swallowing them

`discover_peers` returns an empty vector on any failure, which is indistinguishable from “no peers found”. Returning `Result<Vec<SocketAddr>, Error>` lets callers differentiate between a network/DNS problem and an honest lack of peers.

```diff
-pub async fn discover_peers(&self, network: Network) -> Vec<SocketAddr> {
+pub async fn discover_peers(&self, network: Network) -> Result<Vec<SocketAddr>, Error> {
     ...
     for seed in seeds {
         ...
-        match self.resolver.lookup_ip(*seed).await {
-            Ok(lookup) => { ... }
-            Err(e) => {
-                log::warn!(...);
-            }
-        }
+        let lookup = self
+            .resolver
+            .lookup_ip(*seed)
+            .await
+            .map_err(|e| Error::Network(format!("DNS lookup failed: {}", e)))?;
+        ...
     }
     ...
-    addresses
+    Ok(addresses)
 }
```

Migrating the two call-sites (`discover_peers` and `discover_peers_limited`) is mechanical and makes upstream error handling far more robust.

40-56: Parallelise DNS look-ups for better latency

Sequential `for`-loop look-ups block on each seed. Using `futures::stream::FuturesUnordered` (or `join_all`) reduces overall discovery time, especially on high-latency links. This is an optimisation; keep it in mind once correctness is locked down. A sketch follows.
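A sketch of the concurrent variant using `futures::future::join_all` and tokio's built-in resolver (the real code uses `self.resolver`; this is illustrative only):

```rust
use std::net::SocketAddr;
use tokio::net::lookup_host;

async fn resolve_all(seeds: &[&str], port: u16) -> Vec<SocketAddr> {
    // Fire off one lookup per seed; all queries are in flight at once,
    // so total latency is roughly the slowest seed, not the sum.
    let lookups = seeds.iter().map(|seed| lookup_host((*seed, port)));
    futures::future::join_all(lookups)
        .await
        .into_iter()
        .filter_map(Result::ok) // drop seeds that failed to resolve
        .flatten()
        .collect()
}
```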
dash-spv/src/storage/types.rs (2)
20-36: Consider derivingSerialize/DeserializeforStorageStatsUnlike
MasternodeState,StorageStatsisn’t (de)serialisable.
Persisting or emitting stats via RPC/metrics currently requires manual conversion.-#[derive(Debug, Clone, Default)] +#[derive(Debug, Clone, Default, Serialize, Deserialize)] pub struct StorageStats {Only a small dependency footprint increase, but hugely improves observability.
40-66: Expose a builder forStorageConfig
StorageConfighas many knobs; constructing it ad-hoc is verbose and error-prone.
A fluent builder (or#[derive(Default)]+..Default::default()) improves ergonomics and guarantees defaults for unspecified fields.No immediate bug, just a usability enhancement.
dash-spv/tests/storage_test.rs (1)
158-162: Redundant assertion – consider validating content instead of callingis_some()twice
retrieved_stateis checked withis_some()on two consecutive lines.
Only one of those checks is necessary. Moreover, it would be more valuable to assert on the actual state (e.g., compare heights / hashes) rather than just its existence.-assert!(retrieved_state.is_some()); -// Note: ChainState doesn't store network directly, but we can verify it was created properly -assert!(retrieved_state.is_some()); +let retrieved_state = retrieved_state.expect("ChainState should be persisted"); +// TODO: add fine-grained assertions once ChainState exposes fields (e.g. best_height)dash-spv/src/sync/state.rs (1)
8-18: `Clone` impl can mislead – each clone resets internal mutability

Because `Clone` creates a copy of the `HashSet`/`HashMap`, two `SyncState` instances diverge immediately after cloning. If cloning is only for tests, consider removing the derive; otherwise document this clearly.

dash-spv/run_integration_tests.md (1)
119-125: Minor wording/typography nits

- “with success” → just “successfully”.
- Use an en dash in ranges: 30–120 seconds.

Not critical, but improves polish.
dash-spv/tests/handshake_test.rs (1)
74-81: Timing assertion brittle across OS/network stacks

`assert!(elapsed >= 2s)` may fail on platforms where TCP connect returns “connection refused” immediately. Better: only assert the upper bound (respecting the timeout) and drop the lower-bound check.

```diff
-assert!(elapsed >= Duration::from_secs(2), ...);
 assert!(elapsed < Duration::from_secs(15), ...);
```

dash-spv/src/wallet/utxo.rs (2)
40-46: Prefer a constant for coinbase maturity

Hard-coding `99`/`100` scatters consensus rules. Expose a `const COINBASE_MATURITY: u32 = 100;` near the top (or reuse one from `dashcore`) and reference it from both `is_spendable` and the tests for clarity and future maintenance; a sketch follows.
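A sketch of the suggested constant; the `UtxoEntry` fields below are illustrative:

```rust
/// Consensus rule: coinbase outputs need 100 confirmations before spending.
pub const COINBASE_MATURITY: u32 = 100;

pub struct UtxoEntry {
    pub height: u32, // block height where the UTXO was created
    pub is_coinbase: bool,
}

impl UtxoEntry {
    pub fn is_spendable(&self, current_height: u32) -> bool {
        if self.is_coinbase {
            // Confirmations = current_height - height + 1, so the output
            // matures once current_height >= height + 99.
            current_height >= self.height + COINBASE_MATURITY - 1
        } else {
            true
        }
    }
}
```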
52-54: Micro-nit: avoid `Amount::from_sat` allocation

`TxOut.value` is already `u64`; returning it directly (or via `amount_sat()` helpers) avoids constructing an `Amount` on each call if callers only need the sats.

dash-spv/src/client/watch_manager.rs (1)
69-73: Inefficient cloning of the watch list

You rebuild and clone the whole `watch_list` twice per update. After dropping the lock you can reuse the original `Vec` without cloning:

```diff
- if let Some(updater) = self.watch_item_updater {
-     if let Err(e) = updater.send(watch_list.clone()) {
+ if let Some(updater) = self.watch_item_updater {
+     if let Err(e) = updater.send(watch_list) {
```

dash-spv/src/network/persist.rs (1)
84-90: Ignoring invalid peer strings – consider logging

`filter_map(|p| p.address.parse().ok())` silently skips malformed addresses. Emit a debug log so operators know some peers were discarded.

dash-spv/src/validation/mod.rs (1)
52-66: `validate_pow` parameter ignored in `Basic` mode

`validate_header_chain` accepts `validate_pow`, but the flag is only honoured in `Full`. Either:

- Document that PoW checks are always skipped in `Basic`, or
- Forward `validate_pow` to `validate_chain_basic` so callers control it.

Otherwise callers may assume the flag is respected.
dash/src/blockdata/transaction/special_transaction/mnhf_signal.rs (2)
40-46: `size()` can be a `const` and avoid `&self`

The size is fixed; make it an associated `pub const SIZE: usize = 130;` and use it in tests and `size()`:

```diff
 impl MnhfSignalPayload {
-    /// The size of the payload in bytes.
-    /// version(1) + version_bit(1) + quorum_hash(32) + sig(96) = 130 bytes
-    pub fn size(&self) -> usize {
-        130
-    }
+    /// The size of the payload in bytes.
+    /// version(1) + version_bit(1) + quorum_hash(32) + sig(96) = 130 bytes
+    pub const SIZE: usize = 130;
+
+    pub fn size(&self) -> usize {
+        Self::SIZE
+    }
 }
```

This removes the magic number and lets tests reference `MnhfSignalPayload::SIZE` without constructing a value.

154-175: Test helper re-implements hex parsing

The manual `hex_decode`/`hex_digit` is error-prone and slower. The `hex` crate (already in the dep-tree for many projects) or `rustc_hex` converts safely in one line:

```rust
let payload_bytes = hex::decode(payload_hex).unwrap();
```

Consider replacing the custom code.
dash-spv/tests/reverse_index_test.rs (1)
98-112: Duplicate test helpers – extract to a shared util

`create_test_header()` is now defined in three separate test modules with slightly different implementations. Move a single canonical version to `tests/common/mod.rs` (or similar) and reuse it to keep the tests DRY and consistent.

dash-spv/src/network/constants.rs (1)
42-49: Tight 10 ms poll interval can burn CPU

`MESSAGE_POLL_INTERVAL = 10 ms` results in 100 wake-ups per second even when idle. Consider backing this off (e.g. 50–100 ms) or making it configurable so battery-powered/mobile environments aren’t penalised.

dash-spv/src/main.rs (1)
305-315: Hard panic on no-peer condition

`panic!("SPV client failed to connect …")` aborts the whole process. Prefer returning an error so callers (or integration tests) can handle startup failure gracefully.

dash-spv/src/network/peer.rs (1)
82-87: Height should be `u32`, not signed

Block heights are never negative; using `Option<u32>` avoids an unnecessary sign bit and prevents accidental negative values.

dash-spv/src/network/addrv2.rs (2)
47-85: Avoid O(n log n) sort + full-vector dedup on every AddrV2 batch

`handle_addrv2` sorts the entire `known_peers` list and then performs `retain`-based dedup for every incoming batch. With `MAX_ADDR_TO_STORE` in the thousands this quickly becomes a hot spot.

Consider keeping the collection keyed by `SocketAddr` (e.g. `IndexMap<SocketAddr, AddrV2Message>` or an `LruCache`) so that:

- insertion is O(1),
- dedup happens implicitly, and
- you only touch the single updated entry instead of the whole vector.

This also lets you drop the extra `seen` `HashSet` and the second traversal; a sketch follows.
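A sketch using a plain `HashMap` keyed by `SocketAddr` (the field names are illustrative stand-ins for the real `AddrV2Message` payload):

```rust
use std::collections::HashMap;
use std::net::SocketAddr;

struct AddrRecord {
    last_seen: u64,
    services: u64,
}

struct PeerBook {
    known_peers: HashMap<SocketAddr, AddrRecord>,
}

impl PeerBook {
    fn upsert(&mut self, addr: SocketAddr, record: AddrRecord) {
        // Newer announcement wins; no sort or full-vector retain needed,
        // and duplicates are impossible by construction.
        self.known_peers
            .entry(addr)
            .and_modify(|existing| {
                if record.last_seen > existing.last_seen {
                    existing.last_seen = record.last_seen;
                    existing.services = record.services;
                }
            })
            .or_insert(record);
    }
}
```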
47-48: Remove unused local variables

`_initial_count` and `_processed_count` are never used after assignment. They can be safely removed, or used in the log statement to make the metrics useful.

dash-spv/src/client/wallet_utils.rs (1)
148-156: Avoid N × lock/unlock inside the sync loop

Inside `sync_watch_items_with_wallet` a lock is taken for every address. Acquire a single write-lock outside the loop to amortise locking cost:

```rust
let mut wallet = self.wallet.write().await;
for item in watch_items {
    if let WatchItem::Address { address, .. } = item {
        if let Err(e) = wallet.add_watched_address(address.clone()).await {
            …
        } else {
            synced_count += 1;
        }
    }
}
```

dash-spv/src/client/status_display.rs (1)
44-50: Blocking mutex inside async context

`received_filter_heights` is protected by `std::sync::Mutex` (`lock()` is sync). In an async environment this can block the entire executor. Prefer `tokio::sync::Mutex` or refactor to avoid long critical sections; a sketch follows.
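A sketch of the async-aware alternative; the field name comes from the comment above, the rest is illustrative:

```rust
use std::collections::BTreeSet;
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(Clone)]
struct FilterTracker {
    received_filter_heights: Arc<Mutex<BTreeSet<u32>>>,
}

impl FilterTracker {
    async fn record(&self, height: u32) {
        // `.lock().await` suspends only this task while waiting;
        // a std::sync::Mutex would park the whole worker thread.
        self.received_filter_heights.lock().await.insert(height);
    }
}
```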
dash-spv/CLAUDE.md (1)

140-145: Eliminate accidental word duplication and punctuation glitches

The phrase “Tests gracefully handle node unavailability” is repeated in two consecutive bullet points, and a stray dash ends one of the items. Tighten the wording to avoid noise in the doc and keep the bullet list crisp.

```diff
- - Dash Core node at `127.0.0.1:9999` for integration tests
- - Tests gracefully handle node unavailability
+ - Dash Core node at `127.0.0.1:9999` for integration tests – tests gracefully handle node unavailability
```

dash-spv/tests/multi_peer_test.rs (1)
40-44: `env_logger::init()` called in three async tests may panic

`env_logger` can only be initialised once per process; running tests in parallel will cause `set_logger` panics. Guard initialisation with `once_cell`:

```rust
static LOGGER: once_cell::sync::OnceCell<()> = once_cell::sync::OnceCell::new();

fn init_logger() {
    LOGGER.get_or_init(|| {
        env_logger::init();
    });
}
```

and call `init_logger()` instead of `env_logger::init()`.

Also applies to: 73-77, 119-123
dash-spv/src/validation/headers.rs (2)
149-161: Genesis-link check too strict for mid-chain segments

`validate_connects_to_genesis` mandates that the first header’s `prev_blockhash` equals the genesis hash. This fails for common cases where the caller validates a window that starts after height 1 (e.g. reorg checks, IBD catch-up). Recommend accepting any known ancestor, or exposing the height as a parameter instead of hard-coding genesis.
165-179: Difficulty adjustment stub needs follow-up

`validate_difficulty_adjustment` is effectively a no-op. If that’s intentional for the MVP, please add a clear `TODO:` with an issue reference; otherwise this leaves a silent security gap (difficulty spoofing).

dash-spv/tests/transaction_calculation_test.rs (1)
5-15: Dash, not BTC

All comments label amounts as “BTC” while the library targets Dash. Purely cosmetic, but could confuse readers when debugging amounts vs. duffs.

dash-spv/src/sync/masternodes.rs (1)
355-363: Engine state discarded on persist

`engine_state` is stored as an empty vec with a TODO. On restart the client will resync from scratch, negating the benefit of persistence and stretching initial sync time. If full serialization is complex, at least store a minimal checkpoint (e.g. the last known merkle root / masternode hash) to skip redundant diffs.

dash-spv/src/network/message_handler.rs (1)
dash-spv/src/network/message_handler.rs (1)
33-39: Losenoncein Pong pathFor
Pingyou forward the nonce, butPongdiscards it. Keeping the nonce lets the caller correlate RTTs the same wayhandle_pongdoes.- MessageHandleResult::Pong + MessageHandleResult::Pong(nonce)(and adjust enum accordingly)
dash-spv/tests/block_download_test.rs (1)
114-118: `get_message_sender()` returns a new channel every call – breaks real sender semantics

`get_message_sender()` creates and returns a fresh `mpsc::channel(1)` each time. Anything relying on the original sender (e.g. background writers) will silently drop messages because they never share the same channel. Consider returning a clone of a `Sender` that is stored in the struct:

```diff
 struct MockNetworkManager {
     sent_messages: Arc<RwLock<Vec<NetworkMessage>>>,
     received_messages: Arc<RwLock<Vec<NetworkMessage>>>,
     connected: bool,
+    msg_tx: tokio::sync::mpsc::Sender<NetworkMessage>,
 }

 impl MockNetworkManager {
     fn new() -> Self {
+        let (tx, _rx) = tokio::sync::mpsc::channel(32);
         Self {
             sent_messages: Arc::new(RwLock::new(Vec::new())),
             received_messages: Arc::new(RwLock::new(Vec::new())),
             connected: true,
+            msg_tx: tx,
         }
     }
     ...
     fn get_message_sender(&self) -> tokio::sync::mpsc::Sender<NetworkMessage> {
-        let (tx, _rx) = tokio::sync::mpsc::channel(1);
-        tx
+        self.msg_tx.clone()
     }
```

dash-spv/tests/integration_real_node_test.rs (1)
118-145: Timeout too generous for CI; consider parametrising

`HEADER_SYNC_TIMEOUT` is set to 2 minutes; later tests use 5 minutes and 3 minutes. CI jobs frequently impose per-test or per-job limits; these numbers risk hitting those limits and masking real regressions. Expose them via an env var (e.g. `DASH_SLOW_TEST_TIMEOUT_SECS`; a sketch follows) or reduce the defaults.
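A sketch of the env-var override, using the hypothetical variable name suggested above:

```rust
use std::time::Duration;

fn slow_test_timeout() -> Duration {
    // Fall back to the current 2-minute default when the variable is
    // unset or unparsable.
    let secs = std::env::var("DASH_SLOW_TEST_TIMEOUT_SECS")
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(120);
    Duration::from_secs(secs)
}
```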
dash-spv/tests/segmented_storage_test.rs (1)

440-479: `rand` used without deterministic seed – perf numbers fluctuate

The performance test relies on 1000 random look-ups. On each run different blocks are hit, affecting cache behaviour and timing. Seed the RNG so results are comparable:

```rust
let mut rng = rand::rngs::StdRng::seed_from_u64(42);
for _ in 0..1000 {
    let height = rng.gen_range(0..200_000);
    ...
}
```

Requires `rand::SeedableRng`.

dash-spv/src/client/block_processor.rs (2)
419-423: Stat counter looks wrong – updating `blocks_requested` after processing

In `update_chain_state_with_block` you increment `stats.blocks_requested`; logically this should be `blocks_processed` or another dedicated field. Double-check the intended metric.

254-256: Log message duplicates txid

`"TX {} input {}:{}"` prints `txid` twice; the second placeholder should be `vin`.

```diff
-tracing::info!("💸 TX {} input {}:{} ...", txid, txid, vin, ...);
+tracing::info!("💸 TX {} input {}:{} ...", txid, vin, input.previous_output, ...);
```

dash-spv/src/types.rs (1)
214-226: Consider deriving `Serialize`/`Deserialize` for `FilterMatch`

`FilterMatch` travels through async channels and is logged; adding serde derives avoids ad-hoc wrappers elsewhere.

```diff
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
```

dash-spv/src/network/pool.rs (1)
115-138: Holding `connections.read()` while awaiting each peer health check hurts scalability

You hold the pool read-lock for the entire loop and await on per-connection locks – blocking writers unnecessarily. Snapshot the addresses first:

```rust
let addrs: Vec<_> = {
    let conns = self.connections.read().await;
    conns.iter().map(|(a, c)| (*a, c.clone())).collect()
};
for (addr, conn) in addrs {
    ...
}
```

dash-spv/src/network/multi_peer.rs (2)
470-519: Incorrect error mapping in `send_to_single_peer`

The final `map_err` wraps a `NetworkError` inside another `NetworkError::ProtocolError`, losing type information and making pattern matching harder.

```diff
- conn_guard.send_message(message).await
-     .map_err(|e| NetworkError::ProtocolError(format!("Failed to send to {}: {}", addr, e)))
+ conn_guard
+     .send_message(message)
+     .await
+     .map_err(NetworkError::from)
```

This preserves the original variant and backtrace.

360-367: Cloning the whole manager inside the maintenance loop is heavy

`let this = self.clone();` duplicates every Arc and large data field on every tick; you only need a lightweight handle (e.g. `Arc<Self>` or even just `Arc<ConnectionPool>`). This bloats memory and increases contention. Refactor the closure to take the minimal shared state instead of the full `MultiPeerNetworkManager`.

dash-spv/tests/storage_consistency_test.rs (1)
75-77: Replace fixed sleep with deterministic flush

Relying on `sleep(1s)` to wait for the background saver is brittle and lengthens the test suite. Expose/await a `storage.flush().await` (or `await_background_tasks()`) API in `DiskStorageManager`, then call it here instead of sleeping.

dash-spv/src/wallet/mod.rs (1)
205-214: Default tip-height placeholder will skew balances

`get_current_tip_height()` falls back to `1_000_000`, which silently inflates confirmations and marks almost everything as confirmed. Until real height plumbing is in place, fail fast instead of guessing:

```rust
let current_height = self
    .get_current_tip_height()
    .await
    .ok_or(SpvError::Sync("Tip height unavailable".into()))?;
```

dash-spv/src/client/filter_sync.rs (1)
46-50: Remove stale comment

The “real fix is in sync_filters_coordinated” note is historical and no longer helps future readers.
dash-spv/src/network/connection.rs (1)
137-154: Socket-buffer back-pressure ignored

When `write_all` returns `WouldBlock` the code treats it as an error (`NetworkError::Timeout`) instead of registering writable interest and retrying later, which will drop healthy peers under load. Refactor `send_message` to use async writable readiness (via `tokio::io::Interest::WRITABLE` or `AsyncWriteExt::write_all`).
158-166: Drop the unusedstorageparameter.
_storageis never referenced, generating a warn-by-default lint.
Unless you plan to persist per-tx data shortly, remove the parameter to keep the API clean.- wallet: &Wallet, - _storage: &mut dyn StorageManager, + wallet: &Wallet,dash-spv/src/storage/disk.rs (2)
510-515: UTXO cache may never flush during wallet-only activity

`save_dirty_segments` triggers UTXO persistence only when header/filter segments are already being saved. If the node runs in watch-only mode (no new headers) the cache stays dirty until shutdown, risking data loss on crash. Consider:

- A periodic flush task (e.g., every 30 s if dirty; see the sketch after this list), or
- Forcing a `SaveUtxoCache` whenever `utxo_cache_dirty` flips to `true`.
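A sketch of the periodic option; `flush_utxo_cache` is a hypothetical stand-in for whatever persistence call `DiskStorageManager` exposes:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::Duration;

fn spawn_utxo_flush_task(dirty: Arc<AtomicBool>) {
    tokio::spawn(async move {
        let mut tick = tokio::time::interval(Duration::from_secs(30));
        loop {
            tick.tick().await;
            // Only write when the cache actually changed since the last pass.
            if dirty.swap(false, Ordering::AcqRel) {
                // flush_utxo_cache().await; // hypothetical persistence call
            }
        }
    });
}
```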
1124-1133: `stats()` under-counts storage size

`read_dir` scans only the root directory and skips nested sub-directories (`headers`, `filters`, `state`). Use `walkdir` or a recursive `read_dir` to provide accurate totals; a sketch follows.
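A sketch of the recursive accounting with the `walkdir` crate:

```rust
use std::path::Path;
use walkdir::WalkDir;

fn total_size_bytes(root: &Path) -> u64 {
    WalkDir::new(root)
        .into_iter()
        .filter_map(Result::ok)                      // skip unreadable entries
        .filter(|entry| entry.file_type().is_file()) // count files only
        .filter_map(|entry| entry.metadata().ok())
        .map(|meta| meta.len())
        .sum()
}
```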
dash-spv/tests/wallet_integration_test.rs (1)

37-51: Use a real merkle root for robustness

The helper builds blocks with `merkle_root = Hash::all_zeros()`. If future validation logic checks merkle roots, these tests will break unexpectedly. Compute the merkle root from `transactions` instead of hard-coding zeros.

dash-spv/src/sync/mod.rs (1)
60-70: Avoid redundant DB hits in header → filter-header catch-up

Inside `handle_headers_message` you fetch heights for the first and last header hash separately, issuing two storage queries. Both heights are available during header processing; returning them from `HeaderSyncManager::handle_headers_message` (or caching locally) would cut the latency in half for large batches. [performance]

Also applies to: 71-77, 81-99
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`data-test/state/watch_items.dat` is excluded by `!**/*.dat`
📒 Files selected for processing (74)
- `Cargo.toml` (1 hunks)
- `block_with_pro_reg_tx.data` (1 hunks)
- `dash-network/src/lib.rs` (1 hunks)
- `dash-spv/CLAUDE.md` (1 hunks)
- `dash-spv/Cargo.toml` (1 hunks)
- `dash-spv/README.md` (1 hunks)
- `dash-spv/examples/filter_sync.rs` (1 hunks)
- `dash-spv/examples/simple_sync.rs` (1 hunks)
- `dash-spv/run_integration_tests.md` (1 hunks)
- `dash-spv/src/client/block_processor.rs` (1 hunks)
- `dash-spv/src/client/config.rs` (1 hunks)
- `dash-spv/src/client/consistency.rs` (1 hunks)
- `dash-spv/src/client/filter_sync.rs` (1 hunks)
- `dash-spv/src/client/message_handler.rs` (1 hunks)
- `dash-spv/src/client/mod.rs` (1 hunks)
- `dash-spv/src/client/status_display.rs` (1 hunks)
- `dash-spv/src/client/wallet_utils.rs` (1 hunks)
- `dash-spv/src/client/watch_manager.rs` (1 hunks)
- `dash-spv/src/error.rs` (1 hunks)
- `dash-spv/src/filters/mod.rs` (1 hunks)
- `dash-spv/src/lib.rs` (1 hunks)
- `dash-spv/src/main.rs` (1 hunks)
- `dash-spv/src/network/addrv2.rs` (1 hunks)
- `dash-spv/src/network/connection.rs` (1 hunks)
- `dash-spv/src/network/constants.rs` (1 hunks)
- `dash-spv/src/network/discovery.rs` (1 hunks)
- `dash-spv/src/network/handshake.rs` (1 hunks)
- `dash-spv/src/network/message_handler.rs` (1 hunks)
- `dash-spv/src/network/mod.rs` (1 hunks)
- `dash-spv/src/network/multi_peer.rs` (1 hunks)
- `dash-spv/src/network/peer.rs` (1 hunks)
- `dash-spv/src/network/persist.rs` (1 hunks)
- `dash-spv/src/network/pool.rs` (1 hunks)
- `dash-spv/src/network/tests.rs` (1 hunks)
- `dash-spv/src/storage/disk.rs` (1 hunks)
- `dash-spv/src/storage/memory.rs` (1 hunks)
- `dash-spv/src/storage/mod.rs` (1 hunks)
- `dash-spv/src/storage/types.rs` (1 hunks)
- `dash-spv/src/sync/headers.rs` (1 hunks)
- `dash-spv/src/sync/masternodes.rs` (1 hunks)
- `dash-spv/src/sync/mod.rs` (1 hunks)
- `dash-spv/src/sync/state.rs` (1 hunks)
- `dash-spv/src/terminal.rs` (1 hunks)
- `dash-spv/src/types.rs` (1 hunks)
- `dash-spv/src/validation/chainlock.rs` (1 hunks)
- `dash-spv/src/validation/headers.rs` (1 hunks)
- `dash-spv/src/validation/instantlock.rs` (1 hunks)
- `dash-spv/src/validation/mod.rs` (1 hunks)
- `dash-spv/src/wallet/mod.rs` (1 hunks)
- `dash-spv/src/wallet/transaction_processor.rs` (1 hunks)
- `dash-spv/src/wallet/utxo.rs` (1 hunks)
- `dash-spv/tests/block_download_test.rs` (1 hunks)
- `dash-spv/tests/cfheader_gap_test.rs` (1 hunks)
- `dash-spv/tests/edge_case_filter_sync_test.rs` (1 hunks)
- `dash-spv/tests/filter_header_verification_test.rs` (1 hunks)
- `dash-spv/tests/handshake_test.rs` (1 hunks)
- `dash-spv/tests/header_sync_test.rs` (1 hunks)
- `dash-spv/tests/integration_real_node_test.rs` (1 hunks)
- `dash-spv/tests/multi_peer_test.rs` (1 hunks)
- `dash-spv/tests/reverse_index_test.rs` (1 hunks)
- `dash-spv/tests/segmented_storage_debug.rs` (1 hunks)
- `dash-spv/tests/segmented_storage_test.rs` (1 hunks)
- `dash-spv/tests/simple_gap_test.rs` (1 hunks)
- `dash-spv/tests/simple_header_test.rs` (1 hunks)
- `dash-spv/tests/simple_segmented_test.rs` (1 hunks)
- `dash-spv/tests/storage_consistency_test.rs` (1 hunks)
- `dash-spv/tests/storage_test.rs` (1 hunks)
- `dash-spv/tests/test_plan.md` (1 hunks)
- `dash-spv/tests/transaction_calculation_test.rs` (1 hunks)
- `dash-spv/tests/wallet_integration_test.rs` (1 hunks)
- `dash/src/blockdata/constants.rs` (1 hunks)
- `dash/src/blockdata/transaction/mod.rs` (1 hunks)
- `dash/src/blockdata/transaction/special_transaction/coinbase.rs` (6 hunks)
- `dash/src/blockdata/transaction/special_transaction/mnhf_signal.rs` (1 hunks)
🪛 LanguageTool

dash-spv/tests/test_plan.md

- [duplication] ~75: Possible typo: you repeated a word. Context: “…aders accepted - [ ] ValidationMode::Basic - Basic structure validation - Timestamp vali…” (ENGLISH_WORD_REPEAT_RULE)
- [duplication] ~100: Possible typo: you repeated a word. Context: “…filter headers - [ ] Compact filter download - Download filters for specific blocks - Validat…” (ENGLISH_WORD_REPEAT_RULE)

dash-spv/README.md

- [grammar] ~93: It appears that a hyphen is missing in the plural noun “to-dos”. Context: “…esting against live network -” (TO_DO_HYPHEN)
- [style] ~99: ‘mixed together’ might be wordy. Consider a shorter alternative. Context: “…s** in single file - ❌ 28 functions mixed together - ❌ No separation of concerns - ❌…” (EN_WORDINESS_PREMIUM_MIXED_TOGETHER)

dash-spv/run_integration_tests.md

- [style] ~117: ‘with success’ might be wordy. Consider a shorter alternative. Context: “…thout failing 3. Return immediately with success. Sample output: test test_real_node…” (EN_WORDINESS_PREMIUM_WITH_SUCCESS)
- [typographical] ~134: If specifying a range, consider using an en dash instead of a hyphen. Context: “…on node and network) - 10k headers: 30-120 seconds (full sync from genesis)” (HYPHEN_TO_EN)

dash-spv/CLAUDE.md

- [uncategorized] ~21: Loose punctuation mark. Context: “…de, WatchItem, etc.) - **`error.rs`**: Unified error handling with domain-spec…” (UNLIKELY_OPENING_PUNCTUATION)
- [duplication] ~142: Possible typo: you repeated a word. Context: “…ode at 127.0.0.1:9999 for integration tests - Tests gracefully handle node unavailability -…” (ENGLISH_WORD_REPEAT_RULE)

🪛 markdownlint-cli2 (0.17.2)

dash-spv/README.md

- 75: Fenced code blocks should have a language specified (MD040, fenced-code-language)
🔇 Additional comments (13)
dash/src/blockdata/constants.rs (1)
123-125: Genesis constants changed – update assertions or the test suite will break

`time`, `bits`, and `nonce` for mainnet were updated, but `tests::bitcoin_genesis_full_block` (lines 261-263) still asserts the old Bitcoin values (`1231006505`, `0x1d00ffff`, `2083236893`). Either:

- Update the expected values to the new Dash genesis header, or
- Gate those assertions on `Network::Dash` vs the legacy Bitcoin block.

Failing to do so will cause `cargo test` to panic.

dash-spv/Cargo.toml (1)
8-9: Verify minimum supported Rust version

`rust-version = "1.80"` points to a compiler that does not yet exist on stable (current stable is 1.79). Either lower the MSRV or gate CI on nightly until 1.80 lands.

dash-spv/tests/simple_gap_test.rs (1)
31-34: [web_search] What are the enum variants of `dashcore::Network` in the dashcore crate?

dash-spv/tests/handshake_test.rs (1)
34-56: Test passes even when handshake fails – consider explicit skip instead

When `network.connect()` fails, the test only logs a warning and still returns `Ok(())`, so CI can silently ignore a real regression. Recommend using `tokio::test(flavor = "multi_thread", worker_threads = 1)` + an early `return` with `#[cfg_attr]`, or the “assume” pattern:

```rust
if let Err(e) = result {
    println!("Skipping – peer unavailable: {e}");
    return; // mark as skipped
}
```

Alternatively gate the test behind `#[ignore]` and run it explicitly.

dash-spv/src/lib.rs (1)
20-23: Docs reference non-existent `ClientConfig::mainnet()`

The public `ClientConfig` API (used in `main.rs`) exposes `ClientConfig::new(network)` but no `mainnet()` constructor. Update the example or add the missing helper to avoid confusing users; a sketch of the helper follows.
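If the helper route is chosen, a sketch (assuming `ClientConfig::new(Network)` exists as described, and that `Network::Dash` is the mainnet variant used elsewhere in this PR):

```rust
impl ClientConfig {
    /// Convenience constructor matching the documented example.
    pub fn mainnet() -> Self {
        Self::new(Network::Dash)
    }
}
```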
dash-spv/tests/cfheader_gap_test.rs (1)

166-204: MockNetworkManager lacks `Send + Sync` bounds

The blanket `#[async_trait]` impl compiles only if the type is `Send`. Add `#[derive(Debug)]` and `Send + Sync` impls, or constrain the trait, to avoid build breaks on some toolchains.

dash/src/blockdata/transaction/special_transaction/coinbase.rs (1)
151-165: Unit-test constants must follow the type fix

After widening `best_cl_height` to `u64`, the test literals need an explicit `u64` suffix to avoid inference errors.

```diff
- best_cl_height: Some(900),
+ best_cl_height: Some(900u64),
```

Apply to all test cases.
dash-spv/CLAUDE.md (1)
170-176: Re-check the stated MSRV (Rust 1.80)

Rust 1.80 is not released as of June 2025; the current stable is 1.79. Pinning to a non-existent version will break CI for anyone installing stable via rustup. Verify the true minimum version (probably 1.70–1.79) and update the doc and `Cargo.toml`’s `rust-version` field accordingly.

dash-spv/tests/integration_real_node_test.rs (1)
38-80: I’d like to confirm all the real-node tests that need `#[ignore]`. Let’s list them:

```bash
#!/bin/bash
set -e
# List all async test functions in the real-node integration test file
rg -n "async fn test_" dash-spv/tests/integration_real_node_test.rs
```

dash-spv/tests/header_sync_test.rs (1)
298-311: `DashSpvClient::sync_progress()` initial height assumption may be wrong

The assertion `assert_eq!(stats.header_height, 0);` only holds when the client starts from empty storage. If a previous test polluted the default storage path (or persistence is enabled by default later), this will break. Either create the client with in-memory storage or explicitly clear persistence before this test.
dash-spv/src/client/config.rs (1)
260-266: Duplicate / localhost entries in default mainnet peers

Including `127.0.0.1` twice and shipping localhost as a public seed harms out-of-box connectivity. Replace the duplicates with real public seeds, or leave the vector empty so the caller must supply peers.
dash-spv/src/client/consistency.rs (1)
174-183: `addresses_synced` stays 0 – metric is wrong

You never modify `recovery.addresses_synced`. Either increment it after each successful sync or remove the field.

```diff
+// after successfully adding an address:
+recovery.addresses_synced += 1;
```

Likely an incorrect or invalid review comment.
dash-spv/src/wallet/transaction_processor.rs (1)
256-273: To locate any existing chain-height accessor on the `Wallet` type, let’s inspect its definition and impl block:

```bash
#!/bin/bash
set -e
# Search for the Wallet struct and its impl in the wallet module
rg -n "struct Wallet" -C2 dash-spv/src/wallet
rg -n "impl Wallet" -C2 dash-spv/src/wallet
```
```rust
            Network::Regtest => 0xDCB7C1FC,
        }
```

(dash-network/src/lib.rs)
Magic-byte update is only half-done – networking will mis-detect Regtest

You updated `Network::magic()` to `0xDCB7C1FC`, but:

- `Network::from_magic()` (line 39) still returns the old value (`0xDAB5BFFA`).
- The unit tests below expect the old constant and now fail.

```diff
-            0xDAB5BFFA => Some(Network::Regtest),
+            0xDCB7C1FC => Some(Network::Regtest),
```

Don’t forget to adjust the corresponding expectations in `tests::test_network_magic` and `tests::test_network_from_magic`; otherwise the crate will not compile.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 // dash-network/src/lib.rs
 impl Network {
     pub fn from_magic(magic: u32) -> Option<Network> {
         match magic {
             0xF9BEB4D9 => Some(Network::Mainnet),
             0x0F1C2E3A => Some(Network::Testnet),
-            0xDAB5BFFA => Some(Network::Regtest),
+            0xDCB7C1FC => Some(Network::Regtest),
             _ => None,
         }
     }

     pub fn magic(&self) -> u32 {
         match self {
             Network::Mainnet => 0xF9BEB4D9,
             Network::Testnet => 0x0F1C2E3A,
             Network::Regtest => 0xDCB7C1FC,
         }
     }
 }
```
🤖 Prompt for AI Agents
In dash-network/src/lib.rs around lines 39 and 61-62, the magic byte for
Network::Regtest was updated to 0xDCB7C1FC in Network::magic() but not in
Network::from_magic(), which still returns the old value 0xDAB5BFFA. Update
Network::from_magic() to return the new magic byte 0xDCB7C1FC for Regtest as
well. Also, update the expected values in the unit tests test_network_magic and
test_network_from_magic to reflect the new magic byte so the tests pass and the
crate compiles.
```toml
dashcore = { path = "../dash", features = ["std", "serde", "core-block-hash-use-x11", "message_verification"] }
dashcore_hashes = { path = "../hashes" }
```
               | 
          
🛠️ Refactor suggestion
path dependencies block publishing
Crates with path = "../dash" / "../hashes" cannot be published to crates.io.
If public release is a goal, switch to versioned dependencies or add:
```toml
[package]
publish = false
```

🤖 Prompt for AI Agents
In dash-spv/Cargo.toml around lines 13 to 15, the dependencies dashcore and
dashcore_hashes use local path references which prevent publishing to crates.io.
To fix this, either replace these path dependencies with versioned dependencies
from crates.io if available, or add 'publish = false' under the [package]
section to explicitly mark the crate as non-publishable.
```rust
// Create fresh storage
let mut storage = MemoryStorageManager::new().await
    .expect("Failed to create storage");

// Verify starting from empty state
assert_eq!(storage.get_tip_height().await.unwrap(), None);

let mut client = DashSpvClient::new(config.clone()).await
    .expect("Failed to create SPV client");
```
MemoryStorageManager instance is unused – test does not inspect the client’s real storage
A fresh MemoryStorageManager is created and queried, but the DashSpvClient constructed below instantiates its own storage internally.
Consequently, the assert_eq!(storage.get_tip_height() …) always yields None, and the later final_height check never reflects the sync result.
Remove the dead variable or query the storage held inside client (e.g. through an accessor).
```diff
-// Create fresh storage
-let mut storage = MemoryStorageManager::new().await
-    .expect("Failed to create storage");
-
-// Verify starting from empty state
-assert_eq!(storage.get_tip_height().await.unwrap(), None);
```
🤖 Prompt for AI Agents
In dash-spv/tests/simple_header_test.rs around lines 48 to 57, the
MemoryStorageManager instance is created and queried but not used by the
DashSpvClient, which has its own internal storage. To fix this, either remove
the unused MemoryStorageManager variable or modify the test to access and query
the storage inside the DashSpvClient (for example, via a getter method) so that
the assertions reflect the actual client state.
```rust
let final_height = storage.get_tip_height().await
    .expect("Failed to get tip height");

info!("Final header height: {:?}", final_height);
```
🛠️ Refactor suggestion
final_height still reads from the stale storage instance
If you keep the explicit storage removal above, also delete this block or fetch the tip height from the client’s storage:
```diff
-// Check final state
-let final_height = storage.get_tip_height().await
-    .expect("Failed to get tip height");
-
-info!("Final header height: {:?}", final_height);
```
🤖 Prompt for AI Agents
In dash-spv/tests/simple_header_test.rs around lines 85 to 88, the variable
final_height is obtained from a stale storage instance after explicit storage
removal. To fix this, either remove this block entirely if the storage removal
is kept, or update the code to fetch the tip height from the client's current
storage instance to ensure the data is fresh and consistent.
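Both fixes above assume the client exposes the storage it actually syncs into. A minimal sketch of the corrected assertions, assuming a hypothetical `storage()` accessor on DashSpvClient (the method name is illustrative, not part of the current API):

```rust
// Query the storage the client really writes to, not a detached instance.
let final_height = client.storage().get_tip_height().await
    .expect("Failed to get tip height");

info!("Final header height: {:?}", final_height);
assert!(final_height.is_some(), "sync should have stored at least one header");
```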
```rust
// Create a minimal valid header for testing
// Note: These are not real headers, just valid structures for testing
let header = BlockHeader {
    version: Version::from_consensus(1),
    prev_blockhash: if i == 0 {
        dashcore::BlockHash::all_zeros()
    } else {
        // In real implementation, this would be the hash of the previous header
        dashcore::BlockHash::from_byte_array([i as u8; 32])
    },
    merkle_root: dashcore::TxMerkleNode::from_byte_array([(i + 1) as u8; 32]),
    time: 1234567890 + i as u32,
    bits: dashcore::CompactTarget::from_consensus(0x1d00ffff),
    nonce: i as u32,
};
```
🛠️ Refactor suggestion
create_test_headers may produce duplicate prev_blockhash after height 255
The helper fills the entire 32-byte array with i as u8. Once i ≥ 256 the value wraps, producing identical hashes and breaking uniqueness assumptions in longer tests.
```diff
- dashcore::BlockHash::from_byte_array([i as u8; 32])
+ dashcore::BlockHash::from_byte_array(blake3::hash(&i.to_le_bytes()).into())
```

(or use any other deterministic hash)
This keeps tests lightweight while guaranteeing distinct hashes for arbitrarily large count.
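If adding a blake3 dev-dependency just for this is unwanted, a dependency-free sketch achieves the same uniqueness by embedding the full counter in the array instead of repeating one byte (assuming `i: usize` as in the helper above):

```rust
// Spread the counter across the first 8 bytes so hashes stay distinct
// for any i, not just i < 256.
let mut bytes = [0u8; 32];
bytes[..8].copy_from_slice(&(i as u64).to_le_bytes());
let prev_blockhash = dashcore::BlockHash::from_byte_array(bytes);
```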
🤖 Prompt for AI Agents
In dash-spv/tests/storage_test.rs around lines 270 to 284, the prev_blockhash is
generated by filling the 32-byte array with i as u8, which wraps after 255
causing duplicate hashes. To fix this, replace the current prev_blockhash
generation with a deterministic method that produces unique 32-byte arrays for
all i values, such as hashing i or using a counter-based approach that ensures
distinctness beyond 255, maintaining test validity for large counts.
```rust
let mut state = serializer.serialize_struct("WatchItem", 2)?;
state.serialize_field("type", "Script")?;
state.serialize_field("value", &script.to_hex_string())?;
state.end()
```
to_hex_string will not compile
dashcore::ScriptBuf has no to_hex_string() method.
Use hex::encode(script.as_bytes()) (or bitcoin::hex::encode) instead.
```diff
- state.serialize_field("value", &script.to_hex_string())?;
+ state.serialize_field("value", &hex::encode(script.as_bytes()))?;
```

🤖 Prompt for AI Agents
In dash-spv/src/types.rs around lines 286 to 289, the code calls to_hex_string()
on a ScriptBuf, but ScriptBuf does not have this method. Replace the call to
to_hex_string() with hex::encode(script.as_bytes()) to correctly convert the
script bytes to a hex string.
```rust
stdout.queue(cursor::MoveTo(0, height - 2))?;
stdout.queue(terminal::Clear(ClearType::CurrentLine))?;
stdout.queue(Print("─".repeat(width as usize).dark_grey()))?;

// Draw status bar
stdout.queue(cursor::MoveTo(0, height - 1))?;
stdout.queue(terminal::Clear(ClearType::CurrentLine))?;
```
Potential u16 underflow on very small terminals
height - 2 / height - 1 will panic if the terminal is only 0 or 1 rows tall (CI and some dumb TTYs report 1×1).
Guard first:
```diff
-stdout.queue(cursor::MoveTo(0, height - 2))?;
+let baseline = height.saturating_sub(2);
+stdout.queue(cursor::MoveTo(0, baseline))?;
 ...
-stdout.queue(cursor::MoveTo(0, height - 1))?;
+stdout.queue(cursor::MoveTo(0, height.saturating_sub(1)))?;
```
🤖 Prompt for AI Agents
In dash-spv/src/terminal.rs around lines 92 to 99, the code subtracts from the
terminal height without checking if height is at least 2, which can cause a
panic on very small terminals. Add a guard to ensure height is greater than 1
before subtracting 1 or 2 from it, and handle the case when the terminal is too
small to safely perform these operations.
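A minimal sketch of the early-exit guard the prompt describes, assuming `height: u16` comes from crossterm's `terminal::size()` and the drawing function returns a Result:

```rust
// Too small to draw the separator plus status bar: bail out instead of
// underflowing the u16 row arithmetic.
if height < 2 {
    return Ok(());
}
let separator_row = height - 2; // safe: height >= 2
let status_row = height - 1;
```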
```rust
/// Add a peer address.
pub fn add_peer(&mut self, address: SocketAddr) -> &mut Self {
    self.peers.push(address);
    self
}

/// Set storage path.
pub fn with_storage_path(mut self, path: PathBuf) -> Self {
    self.storage_path = Some(path);
    self.enable_persistence = true;
```
🛠️ Refactor suggestion
Builder API is inconsistent
add_peer returns &mut Self whereas the rest of the fluent API consumes & returns Self.
Pick one style – most Rust builders return Self:
```diff
-pub fn add_peer(&mut self, address: SocketAddr) -> &mut Self {
-    self.peers.push(address);
-    self
-}
+pub fn add_peer(mut self, address: SocketAddr) -> Self {
+    self.peers.push(address);
+    self
+}
```

🤖 Prompt for AI Agents
In dash-spv/src/client/config.rs around lines 159 to 168, the add_peer method
returns &mut Self while other builder methods consume self and return Self,
causing inconsistency in the builder API. To fix this, change add_peer to
consume self (not &mut self) and return Self, aligning it with the rest of the
builder methods for a consistent fluent interface.
```rust
let wallet = self.wallet.read().await;

// Sync UTXOs from storage to wallet
let storage_utxos = self.storage.get_all_utxos().await
    .map_err(|e| SpvError::Storage(e))?;
let wallet_utxos = wallet.get_utxos().await;

// Add missing UTXOs to wallet
for (outpoint, storage_utxo) in &storage_utxos {
    if !wallet_utxos.iter().any(|wu| &wu.outpoint == outpoint) {
        if let Err(e) = wallet.add_utxo(storage_utxo.clone()).await {
            tracing::error!("Failed to sync UTXO {} to wallet: {}", outpoint, e);
            recovery.success = false;
        } else {
            recovery.utxos_synced += 1;
        }
    }
}

// Remove UTXOs from wallet that aren't in storage
for wallet_utxo in &wallet_utxos {
    if !storage_utxos.contains_key(&wallet_utxo.outpoint) {
        if let Err(e) = wallet.remove_utxo(&wallet_utxo.outpoint).await {
            tracing::error!("Failed to remove UTXO {} from wallet: {}", wallet_utxo.outpoint, e);
            recovery.success = false;
        } else {
            recovery.utxos_removed += 1;
        }
    }
}
```
🛠️ Refactor suggestion
Lock held across await – risk of dead-lock & contention
let wallet = self.wallet.read().await; is kept while calling async methods (get_utxos, add_utxo, remove_utxo).
If these methods internally try to acquire self.wallet again (e.g. write-lock) the task will dead-lock.
Pattern:
```rust
let wallet_utxos = {
    let wallet = self.wallet.read().await;
    wallet.get_utxos().await
}; // lock released before await-points that mutate wallet
```

Apply the same pattern for add/remove loops.
🤖 Prompt for AI Agents
In dash-spv/src/client/consistency.rs around lines 166 to 195, the read lock on
self.wallet is held across await points when calling async methods like
get_utxos, add_utxo, and remove_utxo, which risks deadlocks if those methods try
to acquire the wallet lock again. To fix this, restructure the code to acquire
the read lock only briefly to get the wallet_utxos, then release it before the
loops. Similarly, inside the add and remove loops, acquire the necessary lock
only when calling add_utxo or remove_utxo, releasing it immediately after each
call to avoid holding the lock across await points.
```rust
let mut connections = self.connections.write().await;
let mut connecting = self.connecting.write().await;

// Remove from connecting set
connecting.remove(&addr);

// Check if we're at capacity
if connections.len() >= MAX_PEERS {
    return Err(Error::Network(NetworkError::ConnectionFailed(
        format!("Maximum peers ({}) reached", MAX_PEERS)
    )));
}
```
🛠️ Refactor suggestion
Lock-ordering may dead-lock
add_connection takes connections.write() first, then connecting.write().
Another task calling mark_connecting (takes connecting.write() only) and later needing connections.write() can dead-lock.
Acquire in a single, consistent order or split the critical sections.
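A minimal sketch of one consistent ordering (always `connecting` first, then `connections`); the types come from the snippet above, but the body is illustrative rather than the crate's actual code:

```rust
// Every code path acquires `connecting` first, then `connections`;
// no task can then hold one lock while waiting on the other in the
// opposite order.
let mut connecting = self.connecting.write().await;   // lock 1
let mut connections = self.connections.write().await; // lock 2

connecting.remove(&addr);

if connections.len() >= MAX_PEERS {
    return Err(Error::Network(NetworkError::ConnectionFailed(
        format!("Maximum peers ({}) reached", MAX_PEERS)
    )));
}
// … insert the new connection here while both guards are still held …
```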
🤖 Prompt for AI Agents
In dash-spv/src/network/pool.rs around lines 37 to 48, the current lock
acquisition order is inconsistent, with add_connection acquiring
connections.write() before connecting.write(), while other tasks like
mark_connecting acquire connecting.write() first and then connections.write(),
risking deadlocks. To fix this, ensure all code acquires these locks in the same
order, either always acquiring connections.write() before connecting.write() or
vice versa, or refactor to split critical sections so that locks are not held
simultaneously, preventing lock-ordering deadlocks.
Review continued from previous batch...
```rust
for (i, header) in headers.iter().enumerate() {
    // Get the previous header for validation
    let prev_header = if i == 0 {
        // First header in batch - get from storage
        let current_tip_height = storage.get_tip_height().await
            .map_err(|e| SyncError::SyncFailed(format!("Failed to get tip height: {}", e)))?;

        if let Some(height) = current_tip_height {
            storage.get_header(height).await
                .map_err(|e| SyncError::SyncFailed(format!("Failed to get previous header: {}", e)))?
        } else {
            None
        }
    } else {
        Some(headers[i - 1])
    };

    // Validate the header
    // tracing::trace!("Validating header {} at index {}", header.block_hash(), i);
    // if let Some(prev) = prev_header.as_ref() {
    //     tracing::trace!("Previous header: {}", prev.block_hash());
    // }

    self.validation.validate_header(header, prev_header.as_ref())
        .map_err(|e| SyncError::SyncFailed(format!("Header validation failed for block {}: {}", header.block_hash(), e)))?;

    validated.push(*header);
}
```
Fix ownership errors in validate_headers()
BlockHeader isn’t Copy; dereferencing or moving it out of the slice won’t compile.
```diff
-            let prev_header = if i == 0 {
-                /* … */
-            } else {
-                Some(headers[i - 1])
-            };
+            let prev_header = if i == 0 {
+                /* … */
+            } else {
+                Some(headers[i - 1].clone())
+            };
 …
-            validated.push(*header);
+            validated.push(header.clone());
```

(Or carry &BlockHeader throughout and clone only once before storage.)
🤖 Prompt for AI Agents
In dash-spv/src/sync/headers.rs around lines 288 to 316, the code attempts to
dereference or move BlockHeader values from a slice, but BlockHeader is not
Copy, causing ownership errors. To fix this, change the code to work with
references (&BlockHeader) throughout the validation loop instead of moving or
copying headers. Only clone the BlockHeader once when pushing it into the
validated vector to maintain ownership rules and avoid compilation errors.
```rust
/// Start reading messages from a peer
async fn start_peer_reader(
    addr: SocketAddr,
    pool: Arc<ConnectionPool>,
    message_tx: mpsc::Sender<(SocketAddr, NetworkMessage)>,
    addrv2_handler: Arc<AddrV2Handler>,
    shutdown: Arc<AtomicBool>,
) {
    tokio::spawn(async move {
        log::debug!("Starting peer reader loop for {}", addr);
        let mut loop_iteration = 0;

        while !shutdown.load(Ordering::Relaxed) {
            loop_iteration += 1;
            log::trace!("Peer reader loop iteration {} for {}", loop_iteration, addr);

            // Check shutdown signal first with detailed logging
            if shutdown.load(Ordering::Relaxed) {
                log::info!("Breaking peer reader loop for {} - shutdown signal received (iteration {})", addr, loop_iteration);
                break;
            }

            // Get connection
            let conn = match pool.get_connection(&addr).await {
                Some(conn) => conn,
                None => {
                    log::warn!("Breaking peer reader loop for {} - connection no longer in pool (iteration {})", addr, loop_iteration);
                    break;
                }
            };

            // Read message with minimal lock time
            let msg_result = {
                // Try to get a read lock first to check if connection is available
                let conn_guard = conn.read().await;
                if !conn_guard.is_connected() {
                    log::warn!("Breaking peer reader loop for {} - connection no longer connected (iteration {})", addr, loop_iteration);
                    drop(conn_guard);
                    break;
                }
                drop(conn_guard);

                // Now get write lock only for the duration of the read
                let mut conn_guard = conn.write().await;
                conn_guard.receive_message().await
            };

            match msg_result {
                Ok(Some(msg)) => {
                    log::trace!("Received {:?} from {}", msg.cmd(), addr);

                    // Handle some messages directly
                    match &msg {
                        NetworkMessage::SendAddrV2 => {
                            addrv2_handler.handle_sendaddrv2(addr).await;
                            continue; // Don't forward to client
                        }
                        NetworkMessage::AddrV2(addresses) => {
                            addrv2_handler.handle_addrv2(addresses.clone()).await;
                            continue; // Don't forward to client
                        }
                        NetworkMessage::GetAddr => {
                            log::trace!("Received GetAddr from {}, sending known addresses", addr);
                            // Send our known addresses
                            let response = addrv2_handler.build_addr_response().await;
                            let mut conn_guard = conn.write().await;
                            if let Err(e) = conn_guard.send_message(response).await {
                                log::error!("Failed to send addr response to {}: {}", addr, e);
                            }
                            continue; // Don't forward GetAddr to client
                        }
                        NetworkMessage::Ping(nonce) => {
                            // Handle ping directly
                            let mut conn_guard = conn.write().await;
                            if let Err(e) = conn_guard.handle_ping(*nonce).await {
                                log::error!("Failed to handle ping from {}: {}", addr, e);
                                // If we can't send pong, connection is likely broken
                                if matches!(e, NetworkError::ConnectionFailed(_)) {
                                    log::warn!("Breaking peer reader loop for {} - failed to send pong response (iteration {})", addr, loop_iteration);
                                    break;
                                }
                            }
                            continue; // Don't forward ping to client
                        }
                        NetworkMessage::Pong(nonce) => {
                            // Handle pong directly
                            let mut conn_guard = conn.write().await;
                            if let Err(e) = conn_guard.handle_pong(*nonce) {
                                log::error!("Failed to handle pong from {}: {}", addr, e);
                            }
                            continue; // Don't forward pong to client
                        }
                        NetworkMessage::Version(_) | NetworkMessage::Verack => {
                            // These are handled during handshake, ignore here
                            log::trace!("Ignoring handshake message {:?} from {}", msg.cmd(), addr);
                            continue;
                        }
                        NetworkMessage::Addr(_) => {
                            // Handle legacy addr messages (convert to AddrV2 if needed)
                            log::trace!("Received legacy addr message from {}", addr);
                            continue;
                        }
                        _ => {
                            // Forward other messages to client
                            log::trace!("Forwarding {:?} from {} to client", msg.cmd(), addr);
                        }
                    }

                    // Forward message to client
                    if message_tx.send((addr, msg)).await.is_err() {
                        log::warn!("Breaking peer reader loop for {} - failed to send message to client channel (iteration {})", addr, loop_iteration);
                        break;
                    }
                }
                Ok(None) => {
                    // No message available, brief pause to avoid aggressive polling but stay responsive
                    time::sleep(MESSAGE_POLL_INTERVAL).await;
                }
                Err(e) => {
                    match e {
                        NetworkError::PeerDisconnected => {
                            log::info!("Peer {} disconnected", addr);
                            break;
                        }
                        NetworkError::Timeout => {
                            log::debug!("Timeout reading from {}, continuing...", addr);
                            continue;
                        }
                        _ => {
                            log::error!("Fatal error reading from {}: {}", addr, e);

                            // Check if this is a serialization error that might have context
                            if let NetworkError::Serialization(ref decode_error) = e {
                                let error_msg = decode_error.to_string();
                                if error_msg.contains("unknown special transaction type") {
                                    log::warn!("Peer {} sent block with unsupported transaction type: {}", addr, decode_error);
                                    log::error!("BLOCK DECODE FAILURE - Error details: {}", error_msg);
                                } else if error_msg.contains("Failed to decode transactions for block") {
                                    // The error now includes the block hash
                                    log::error!("Peer {} sent block that failed transaction decoding: {}", addr, decode_error);
                                    // Try to extract the block hash from the error message
                                    if let Some(hash_start) = error_msg.find("block ") {
                                        if let Some(hash_end) = error_msg[hash_start + 6..].find(':') {
                                            let block_hash = &error_msg[hash_start + 6..hash_start + 6 + hash_end];
                                            log::error!("FAILING BLOCK HASH: {}", block_hash);
                                        }
                                    }
                                } else if error_msg.contains("IO error") {
                                    // This might be our wrapped error - log it prominently
                                    log::error!("BLOCK DECODE FAILURE - IO error (possibly unknown transaction type) from peer {}", addr);
                                    log::error!("Serialization error from {}: {}", addr, decode_error);
                                } else {
                                    log::error!("Serialization error from {}: {}", addr, decode_error);
                                }
                            }

                            // For other errors, wait a bit then break
                            tokio::time::sleep(Duration::from_secs(1)).await;
                            break;
                        }
                    }
                }
            }
        }

        // Remove from pool
        log::warn!("Disconnecting from {} (peer reader loop ended)", addr);
        pool.remove_connection(&addr).await;
    });
}
```
🛠️ Refactor suggestion
Track and await peer-reader tasks
start_peer_reader spawns detached tasks, so shutdown() never waits for them, leaving sockets alive beyond shutdown and causing flappy tests.
Add them to the existing JoinSet:
```diff
-        tokio::spawn(async move {
+        let join_handle = tokio::spawn(async move {
             /* reader loop … */
         });
+
+        // push into JoinSet held in `self.tasks`
+        if let Some(set) = pool.task_set() {   // pseudo-helper returning Arc<Mutex<JoinSet<()>>>
+            set.lock().await.spawn(join_handle);
+        }
```

Or pass a JoinSet reference when spawning.
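For the second option, a minimal sketch using tokio's actual JoinSet API — note that `JoinSet::spawn` takes a future directly rather than a JoinHandle, and `join_next` drains the set at shutdown; the surrounding structure is illustrative:

```rust
use tokio::task::JoinSet;

// The caller owns the set and passes it in instead of detaching the task.
fn start_peer_reader(tasks: &mut JoinSet<()>) {
    tasks.spawn(async move {
        // reader loop …
    });
}

// During shutdown: await every reader task before tearing down sockets.
async fn shutdown(tasks: &mut JoinSet<()>) {
    while let Some(res) = tasks.join_next().await {
        if let Err(e) = res {
            log::warn!("peer reader task failed: {}", e);
        }
    }
}
```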
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dash-spv/src/network/multi_peer.rs between lines 173 and 343, the
start_peer_reader function spawns peer reader tasks without tracking or awaiting
them, causing these tasks to remain alive after shutdown and leading to flappy
tests. Modify the function to accept a mutable reference to a JoinSet and add
each spawned task to this JoinSet instead of spawning detached tasks. This
ensures that shutdown can await all peer reader tasks properly, allowing clean
termination of connections and preventing lingering sockets.
```rust
/// Storage manager for persistence.
storage: Arc<RwLock<dyn StorageManager>>,

/// Set of addresses being watched.
watched_addresses: Arc<RwLock<HashSet<Address>>>,

/// Current UTXO set indexed by outpoint.
utxo_set: Arc<RwLock<HashMap<OutPoint, Utxo>>>,
}
```
Trait object must be Send + Sync for cross-task use
All accesses to storage occur across tokio tasks; the trait object should be bounded accordingly:
```diff
-    storage: Arc<RwLock<dyn StorageManager>>,
+    storage: Arc<RwLock<dyn StorageManager + Send + Sync>>,
```

Mirror this bound wherever the trait is taken by reference.
🤖 Prompt for AI Agents
In dash-spv/src/wallet/mod.rs around lines 27 to 35, the trait object for
storage must be bounded by Send + Sync to allow safe access across tokio tasks.
Update the declaration of storage to use Arc<RwLock<dyn StorageManager + Send +
Sync>> and ensure that all references to this trait object also include these
bounds to maintain thread safety.
| println!("🐛 BUG REPRODUCTION TEST - This test should FAIL to demonstrate the bug!"); | ||
| 
               | 
          ||
| // This test simulates the REAL production scenario that causes crashes: | ||
| // - Peer A sends heights 1000-2000 | ||
| // - Peer B sends heights 1500-2500 (overlapping!) | ||
| // Each peer provides different (but potentially valid) previous_filter_header values | ||
| // | ||
| // The system should handle this gracefully, but currently it crashes. | ||
| // This test will FAIL until we implement the fix. | ||
| 
               | 
          ||
| let mut storage = MemoryStorageManager::new().await | ||
| .expect("Failed to create storage"); | 
🛠️ Refactor suggestion
Failing-by-design tests will break CI
The test is intended to fail (panic!("🚨 BUG REPRODUCED …")).
Mark it #[ignore] or #[should_panic] (with an explanatory message) so the suite can pass until the underlying fix lands.
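A minimal sketch of either annotation; the test name is illustrative and the expected string must match the test's actual panic message:

```rust
// Option 1: skipped by default, run explicitly with `cargo test -- --ignored`.
#[tokio::test]
#[ignore = "reproduces known filter-header bug; un-ignore once fixed"]
async fn overlapping_filter_headers_bug() { /* … */ }

// Option 2: the panic itself counts as a pass.
#[tokio::test]
#[should_panic(expected = "🚨 BUG REPRODUCED")]
async fn overlapping_filter_headers_bug_panics() { /* … */ }
```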
🤖 Prompt for AI Agents
In dash-spv/tests/filter_header_verification_test.rs around lines 322 to 333,
the test is designed to fail to demonstrate a bug but currently causes CI
failures. To fix this, annotate the test function with #[ignore] or
#[should_panic(expected = "explanatory message")] to indicate the failure is
intentional, allowing the test suite to pass until the bug is fixed.
```rust
let initial_headers = create_test_headers_range(1000, 5000); // Headers 1000-4999
storage.store_headers(&initial_headers).await
    .expect("Failed to store initial headers");

let tip_height = storage.get_tip_height().await.unwrap().unwrap();
println!("Initial header chain stored: tip height = {}", tip_height);
assert_eq!(tip_height, 4999);
```
Off-by-one: tip_height assertion is wrong
create_test_headers_range(1000, 5000) generates headers 1000 … 5999 (5000 total), therefore the tip should be 5999, not 4999.
The assertion will always fail.
```diff
-assert_eq!(tip_height, 4999);
+assert_eq!(tip_height, 5999);
```
🤖 Prompt for AI Agents
In dash-spv/tests/filter_header_verification_test.rs around lines 181 to 188,
the assertion for tip_height is off by one because
create_test_headers_range(1000, 5000) generates headers from 1000 to 5999, not
4999. Update the assert_eq line to expect 5999 as the tip height instead of 4999
to correctly reflect the last header in the range.
```rust
let (block_processor_tx, _block_processor_rx) = mpsc::unbounded_channel();

// Create a placeholder wallet - will be properly initialized in start()
let placeholder_storage = Arc::new(RwLock::new(crate::storage::MemoryStorageManager::new().await.map_err(|e| SpvError::Storage(e))?));
let wallet = Arc::new(RwLock::new(crate::wallet::Wallet::new(placeholder_storage)));
```
🛠️ Refactor suggestion
Wallet is wired to volatile storage even when persistence is enabled
DashSpvClient::new builds the main storage (which may be a DiskStorageManager) but then creates
```rust
let placeholder_storage = Arc::new(RwLock::new(MemoryStorageManager::new().await?));
let wallet = Wallet::new(placeholder_storage);
```

The wallet therefore always uses in-memory storage and will lose data between runs, defeating enable_persistence. Pass the already-created self.storage (or a wallet-specific on-disk store) instead of a fresh memory backend.

```diff
- let placeholder_storage = Arc::new(RwLock::new(...MemoryStorageManager::new()...));
- let wallet = Arc::new(RwLock::new(Wallet::new(placeholder_storage)));
+ let wallet_storage = Arc::clone(&self.storage); // ensure StorageManager is Arc<RwLock<dyn>>
+ let wallet = Arc::new(RwLock::new(Wallet::new(wallet_storage)));
```

Without this, users will observe empty wallets after restart.
🤖 Prompt for AI Agents
In dash-spv/src/client/mod.rs around lines 231 to 236, the wallet is incorrectly
initialized with a new in-memory storage instead of the existing persistent
storage. To fix this, replace the placeholder in-memory storage with the
already-created persistent storage instance (self.storage or equivalent) when
constructing the wallet, ensuring the wallet uses the correct storage backend
and retains data across restarts.
```rust
// Check if this input spends a UTXO from our watched addresses
if let Ok(Some(spent_utxo)) = self.wallet.read().await.remove_utxo(&input.previous_output).await {
    transaction_relevant = true;
    let amount = spent_utxo.value();

    tracing::info!("💸 Found relevant input: {}:{} spending UTXO {} (value: {})",
        txid, vin, input.previous_output, amount);

    // Update balance change for this address (subtract)
    *balance_changes.entry(spent_utxo.address.clone()).or_insert(0) -= amount.to_sat() as i64;
}

// Also check against explicitly watched outpoints
```
🛠️ Refactor suggestion
RwLock guard is held across .await – code will not compile
Expressions like
`self.wallet.read().await.remove_utxo(&input.previous_output).await` hold the RwLockReadGuard returned by read().await across another .await.
The guard is not Send, so the compiler will reject this (borrow of guard across await).
Even if it compiled, it would risk dead-locks by keeping the lock while waiting on I/O.
Apply a pattern that releases the guard before the await, e.g.:
```rust
let removed = {
    let wallet = self.wallet.clone();      // Arc clone, no lock yet
    wallet                                 // `Wallet` internal APIs do their own locking
        .remove_utxo(&input.previous_output)
        .await
}?;  // guard free while awaiting
```

Refactor every similar occurrence (add_utxo, get_balance*, update_confirmation_status, etc.) in this file.
This is a build-blocking issue.
Also applies to: 968-978, 1228-1236, 1284-1294, 1624-1634
🤖 Prompt for AI Agents
In dash-spv/src/client/mod.rs around lines 920 to 932, the code holds an
RwLockReadGuard across an await point when calling
self.wallet.read().await.remove_utxo(...).await, which is not allowed and causes
compilation errors. To fix this, refactor the code to acquire the lock and
extract the needed data before the await, or clone the Arc to the wallet and
call async methods that internally handle locking without holding the guard
across await. Apply this pattern to all similar cases in the file, including
lines 968-978, 1228-1236, 1284-1294, and 1624-1634, ensuring no RwLock guards
are held across await points.
```rust
let mut storage = MemoryStorageManager::new().await.unwrap();
let block_result = processor.process_block(&block, 100, &wallet, &mut storage).await.unwrap();

// Verify block processing results
assert_eq!(block_result.height, 100);
assert_eq!(block_result.relevant_transaction_count, 1);
assert_eq!(block_result.total_utxos_added, 1);
assert_eq!(block_result.total_utxos_spent, 0);
```
🛠️ Refactor suggestion
Processor and wallet use different MemoryStorageManager instances
Each process_block call receives a fresh MemoryStorageManager, while the Wallet was constructed with its own storage handle. This means block/chain state written during processing is invisible to the wallet, potentially masking bugs and producing false positives in the tests. Re-use the wallet’s internal storage (e.g. wallet.storage.clone()) for the processor to ensure the same view of chain data.
Also applies to: 198-201
🤖 Prompt for AI Agents
In dash-spv/tests/wallet_integration_test.rs around lines 126 to 134, the
processor and wallet are using different MemoryStorageManager instances, causing
the processor's block/chain state changes to be invisible to the wallet. To fix
this, replace the new MemoryStorageManager instance passed to process_block with
a clone of the wallet's internal storage (wallet.storage.clone()) so both share
the same storage view. Apply the same fix to lines 198-201 as well.
```rust
assert!(!utxo.is_confirmed); // Should start unconfirmed
assert!(!utxo.is_instantlocked);

// Verify wallet state after payment discovery
let final_balance = wallet.get_balance().await.unwrap();
assert_eq!(final_balance.confirmed, Amount::from_sat(payment_amount)); // Will be confirmed due to high mock current height
assert_eq!(final_balance.pending, Amount::ZERO);
assert_eq!(final_balance.instantlocked, Amount::ZERO);
assert_eq!(final_balance.total(), Amount::from_sat(payment_amount));
```
Inconsistent confirmation expectations – test will likely fail
You assert !utxo.is_confirmed (line 150) yet immediately afterwards expect the same amount to appear in final_balance.confirmed (line 155). Both cannot be true at the same time with a single-block height of 100. Decide whether the payment should be treated as confirmed or pending and align both assertions accordingly.
🤖 Prompt for AI Agents
In dash-spv/tests/wallet_integration_test.rs around lines 150 to 158, the test
inconsistently asserts that the UTXO is unconfirmed while expecting the final
balance to show the payment amount as confirmed. To fix this, decide if the
payment should be confirmed or pending at the test height, then update the
assertion on utxo.is_confirmed to match the expected final_balance.confirmed
state, ensuring both reflect the same confirmation status.
```rust
// Start masternode sync
let sync_started = self.masternode_sync.start_sync(network, storage).await?;

if !sync_started {
    // Already up to date
    self.state.finish_sync(SyncComponent::Masternodes);

    let final_height = match storage.load_masternode_state().await {
        Ok(Some(state)) => state.last_height,
        _ => 0,
    };

    return Ok(SyncProgress {
        masternode_height: final_height,
        masternodes_synced: true,
        ..SyncProgress::default()
    });
}

// Note: The actual sync now happens through the monitoring loop
// calling handle_mnlistdiff_message() and check_sync_timeout()
tracing::info!("Masternode sync started - will be completed through monitoring loop");

self.state.finish_sync(SyncComponent::Masternodes);
```
sync_masternodes marks the component finished immediately
self.state.finish_sync(SyncComponent::Masternodes) is called right after start_sync, even when the sync is still running asynchronously. This clears the “syncing” flag, so is_syncing() will return false while work is in progress, breaking timeout/recovery logic. Defer finish_sync until the masternode sync manager reports completion (e.g. when handle_mnlistdiff_message returns false).
🤖 Prompt for AI Agents
In dash-spv/src/sync/mod.rs around lines 420 to 444, the call to
self.state.finish_sync(SyncComponent::Masternodes) happens immediately after
starting the masternode sync, which incorrectly clears the syncing flag while
the sync is still running asynchronously. To fix this, remove or defer this
finish_sync call here and instead call finish_sync only when the masternode sync
manager signals completion, such as when handle_mnlistdiff_message returns
false, ensuring the syncing state accurately reflects ongoing work and preserves
timeout/recovery logic.
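A minimal sketch of deferring the call to the completion signal, assuming (as the comment above suggests) that handle_mnlistdiff_message returns false once no further diffs are expected; the wiring is illustrative:

```rust
// In the monitoring loop: only clear the syncing flag once the
// masternode sync manager reports it has caught up.
let more_expected = self
    .masternode_sync
    .handle_mnlistdiff_message(diff, storage)
    .await?;

if !more_expected {
    self.state.finish_sync(SyncComponent::Masternodes);
}
```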
Good job!
feat: Add Complete dash-spv SPV Client + Modular Wallet Architecture
Summary
This PR introduces a brand new dash-spv SPV client implementation (26,000+ lines) along with a modular wallet architecture. The dash-spv crate provides a complete SPV (Simplified Payment Verification) client for the Dash network with advanced features like filter synchronization, masternode management, and wallet functionality. Additionally, wallet functionality has been extracted into standalone key-wallet components with cross-platform FFI bindings.
Key Features Added
• Complete dash-spv SPV Client - Full-featured SPV client with 26,000+ lines of new code
• Advanced Filter Synchronization - BIP157 compact block filters with gap detection and auto-restart
• Wallet Integration - UTXO tracking, address monitoring, and transaction processing
• Masternode Support - Synchronization and validation of Dash masternode lists
• Multi-peer Networking - Robust P2P networking with connection management and message routing
• Comprehensive Storage - Both memory and disk-based storage with segmented architecture
New dash-spv Architecture
Client Layer:
DashSpvClient with comprehensive configuration options
Network Layer:
Storage Layer:
Synchronization Layer:
Validation Layer:
Wallet Layer:
Core Library Enhancements
Enhanced Special Transactions:
Network Protocol Improvements:
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Documentation
Chores