@PastaPastaPasta PastaPastaPasta commented Oct 30, 2025

Summary by CodeRabbit

  • Bug Fixes

    • Robust framing and resynchronization for network streams, with length-prefixed frame handling and per-frame checksum validation to recover or disconnect cleanly on corruption
  • Chores

    • Added diagnostic logging for framing and payload checksum issues
    • Safer FFI memory deallocation adjustments
    • Minor enum defaulting change for validation mode

Improved logging and proactive disconnection for peers sending messages with invalid checksums. This change ensures that potential message corruption is addressed promptly, enhancing network stability.
Enhanced TCP connection handling by introducing a stateful framing buffer to ensure complete message frames are read before decoding. This change improves error handling for invalid checksums and stream desynchronization, providing better diagnostics and stability in network communication.
Updated the read_some closure to remove unnecessary mutability, enhancing code clarity and maintainability. This change contributes to the ongoing improvements in TCP connection handling.

coderabbitai bot commented Oct 30, 2025

Walkthrough

Adds stateful TCP framing with a per-connection framing_buffer, 24-byte header parsing (magic, cmd, length, checksum), per-frame checksum validation and resynchronization using a consecutive_resyncs counter, diagnostic logging for payload/checksum issues, minor enum Default derivation change, and small FFI deallocation helper updates.
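The 24-byte header mentioned above follows the Bitcoin-style wire layout: 4-byte network magic, 12-byte NUL-padded ASCII command, 4-byte little-endian payload length, 4-byte checksum. A minimal parsing sketch (illustrative type and function names, not the PR's actual API; the magic bytes are example values):

```rust
const HEADER_LEN: usize = 24;

#[derive(Debug, PartialEq)]
struct FrameHeader {
    magic: [u8; 4],
    command: String,
    payload_len: u32,
    checksum: [u8; 4],
}

/// Parse a 24-byte P2P message header from the front of `buf`, or return
/// None if not enough bytes have been buffered yet.
fn parse_header(buf: &[u8]) -> Option<FrameHeader> {
    if buf.len() < HEADER_LEN {
        return None; // caller should keep reading
    }
    let magic: [u8; 4] = buf[0..4].try_into().ok()?;
    // Command is ASCII, padded to 12 bytes with trailing NULs.
    let command = buf[4..16].iter().take_while(|&&b| b != 0).map(|&b| b as char).collect();
    let payload_len = u32::from_le_bytes(buf[16..20].try_into().ok()?);
    let checksum: [u8; 4] = buf[20..24].try_into().ok()?;
    Some(FrameHeader { magic, command, payload_len, checksum })
}

fn main() {
    // Build a fake "ping" header with example magic bytes.
    let mut hdr = Vec::new();
    hdr.extend_from_slice(&[0xbf, 0x0c, 0x6b, 0xbd]); // example magic
    hdr.extend_from_slice(b"ping\0\0\0\0\0\0\0\0");   // 12-byte command
    hdr.extend_from_slice(&8u32.to_le_bytes());       // announced payload length
    hdr.extend_from_slice(&[0xde, 0xad, 0xbe, 0xef]); // checksum (dummy)

    let parsed = parse_header(&hdr).expect("24 bytes present");
    assert_eq!(parsed.command, "ping");
    assert_eq!(parsed.payload_len, 8);
}
```

Only once the header parses and `payload_len` more bytes are buffered can the frame be handed to the decoder, which is exactly the "ensure full frame bytes present" step in the walkthrough.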

Changes

Cohort / File(s) — Summary

  • TCP Connection Framing & Resync — dash-spv/src/network/connection.rs
    Adds framing_buffer: Vec<u8> to the connection state and initializes it; replaces direct BufReader decoding with a framing-based receive path that accumulates bytes, aligns to the magic, parses the 24-byte header, validates the per-frame checksum (treating an all-zeros checksum as corruption), decodes framed payloads into messages, implements resynchronization logic with a per-call resync cap, and adds consecutive_resyncs: u32 telemetry, logging, and adjusted error propagation.
  • Network Message Checksum Logging — dash/src/network/message.rs
    Adjusts RawNetworkMessage deserialization to explicitly match payload checksum failures: logs diagnostic details (command and magic) on InvalidChecksum { expected, actual } and re-returns that error; other deserialization paths are unchanged.
  • SML Test Key Construction — dash/src/sml/quorum_entry/validation.rs
    Test updated: operator_keys changed from a Vec to a fixed-size array literal, and from_bytes_with_mode calls now use unwrap() inline in test_verify_secure_with_real_operators.
  • FFI Deallocation Helper Updates — key-wallet-ffi/src/address.rs, key-wallet-ffi/src/address_pool.rs, key-wallet-ffi/src/keys.rs
    Replaces std::slice::from_raw_parts_mut usages with std::ptr::slice_from_raw_parts_mut when reconstructing boxed slices from raw pointers for deallocation (address_array_free, address_info_free, address_info_array_free, derivation_path_free). Behavior and ownership semantics are unchanged.
  • ValidationMode Default Derivation — dash-spv/src/types.rs
    ValidationMode now derives Default with #[default] on Full; the prior manual impl Default was removed.

Sequence Diagram(s)

sequenceDiagram
    participant Socket as TCP Socket
    participant Conn as Connection (framing_buffer)
    participant Parser as Frame Parser
    participant Decoder as Message Decoder
    participant App as Application

    loop receive loop
        Socket->>Conn: read bytes
        Conn->>Conn: append to framing_buffer
        Conn->>Parser: scan for 4-byte magic
        alt magic not found
            Parser-->>Conn: drop/skip stray bytes (resync++)
            Conn->>Socket: continue reading
        else magic found
            Parser->>Conn: ensure 24-byte header present
            Parser->>Conn: ensure full frame bytes present (length)
            Parser->>Parser: validate frame checksum
            alt checksum OK
                Parser->>Decoder: hand full frame bytes
                Decoder->>Decoder: decode RawNetworkMessage (validate magic inside)
                Decoder->>App: deliver message
                Conn-->>Conn: drain frame, reset resyncs
            else checksum mismatch
                Parser-->>Conn: increment consecutive_resyncs, skip bytes to next candidate
            end
        end
    end
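The "drop/skip stray bytes" branch of the diagram boils down to a buffer-alignment step. A sketch of that step, assuming a `Vec<u8>` framing buffer and a 4-byte magic (illustrative helper, not the PR's actual code):

```rust
/// Drop bytes from the front of `buf` until it starts with `magic`
/// (or is too short to tell). Returns the number of stray bytes skipped,
/// which the caller can use to bump a consecutive_resyncs-style counter.
fn align_to_magic(buf: &mut Vec<u8>, magic: &[u8; 4]) -> usize {
    let pos = buf
        .windows(4)
        .position(|w| w == magic)
        // No magic found: keep the last 3 bytes, since a magic sequence
        // may be split across two reads.
        .unwrap_or_else(|| buf.len().saturating_sub(3));
    buf.drain(..pos);
    pos
}

fn main() {
    let magic = [0xbf, 0x0c, 0x6b, 0xbd]; // example magic bytes
    let mut buf = vec![0x00, 0xff, 0xbf, 0x0c, 0x6b, 0xbd, 0x01];
    let skipped = align_to_magic(&mut buf, &magic);
    assert_eq!(skipped, 2);          // two stray bytes dropped
    assert_eq!(&buf[..4], &magic);   // buffer now starts at a frame boundary
}
```

Capping how many such skips happen per receive call (the per-call resync limit in the walkthrough) prevents a hostile or corrupt peer from making the loop spin indefinitely.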

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Inspect dash-spv/src/network/connection.rs framing loop for correct handling of partial frames, buffer drains, bounds checks, and per-call resync limit.
  • Verify checksum validation semantics (including all-zeros handling) and whether checksum failures should resync or disconnect.
  • Confirm consecutive_resyncs telemetry integration and side effects.
  • Review added logging in dash/src/network/message.rs for verbosity and any sensitive data exposure.
  • Spot-check FFI pointer-to-box conversions in key-wallet-ffi files to ensure proper reconstruction and safe drops.
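The FFI pointer-to-box conversion flagged in the last bullet can be sketched as follows. Using `std::ptr::slice_from_raw_parts_mut` builds a raw `*mut [T]` without materializing an intermediate `&mut [T]` reference, which is the pattern the PR switches to (hypothetical helper name and element type, not the crate's real API):

```rust
use std::ptr;

/// Free a heap-allocated slice previously leaked to C via Box::into_raw.
/// The raw *mut [i32] is handed straight to Box::from_raw, so no &mut
/// reference is ever created during deallocation.
unsafe fn slice_free(data: *mut i32, len: usize) {
    if data.is_null() {
        return; // tolerate NULL like a C free()
    }
    let raw: *mut [i32] = ptr::slice_from_raw_parts_mut(data, len);
    unsafe {
        drop(Box::from_raw(raw)); // reclaims the allocation
    }
}

fn main() {
    // Simulate handing a boxed slice across an FFI boundary…
    let boxed: Box<[i32]> = vec![1, 2, 3].into_boxed_slice();
    let len = boxed.len();
    let data = Box::into_raw(boxed) as *mut i32;
    // …and freeing it from the "C side".
    unsafe { slice_free(data, len) };
}
```

The behavioral outcome is identical to the `slice::from_raw_parts_mut` version; the difference is that no `&mut [T]` (with its validity and aliasing assertions) exists during deallocation, which is friendlier to strict aliasing models such as stacked borrows.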

Poem

🐇 I nibble bytes into a careful heap,
I hunt for magic where the streams sleep,
A checksum twitch, a resync hop,
I stitch each frame and never stop,
Hooray — the frames are safe to keep! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The pull request title "fix: resolve flakey connection errors due to TCP buffer issues" directly reflects the primary change: dash-spv/src/network/connection.rs now implements a framing-based receive path with buffering, length-prefixed message handling, checksum validation, and resync logic to address TCP buffering and connection-reliability issues. Supporting changes to message checksum handling and minor FFI/test cleanup are secondary, which is normal for a changeset. The title is concise, clear, and specific.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b864dd4 and bd12e57.

📒 Files selected for processing (2)
  • dash-spv/src/network/connection.rs (8 hunks)
  • dash/src/network/message.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)

**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable

Files:

  • dash/src/network/message.rs
  • dash-spv/src/network/connection.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]

Files:

  • dash/src/network/message.rs
  • dash-spv/src/network/connection.rs
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/network/connection.rs
dash-spv/src/network/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided

Files:

  • dash-spv/src/network/connection.rs
🧠 Learnings (2)
📚 Learning: 2025-02-25T06:19:32.230Z
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#51
File: dash/src/sml/masternode_list_entry/hash.rs:7-12
Timestamp: 2025-02-25T06:19:32.230Z
Learning: The `consensus_encode` method on `MasternodeListEntry` writing to a `Vec` buffer cannot fail, so using `.expect()` is appropriate rather than propagating the error with the `?` operator.

Applied to files:

  • dash/src/network/message.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes

Applied to files:

  • dash-spv/src/network/connection.rs
🧬 Code graph analysis (2)
dash/src/network/message.rs (1)
dash/src/consensus/encode.rs (4)
  • consensus_decode_from_finite_reader (438-445)
  • consensus_decode_from_finite_reader (813-821)
  • consensus_decode_from_finite_reader (833-835)
  • consensus_decode_from_finite_reader (856-890)
dash-spv/src/network/connection.rs (2)
dash/src/network/message.rs (5)
  • consensus_decode (133-143)
  • consensus_decode (473-475)
  • consensus_decode (664-666)
  • cmd (291-343)
  • cmd (363-365)
dash-network/src/lib.rs (1)
  • magic (58-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (hashes_sha1)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: SPV Components Tests
  • GitHub Check: fuzz (dash_deserialize_witness)
  • GitHub Check: fuzz (dash_deserialize_script)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: Core Components Tests
  • GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: Key Wallet Components Tests
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: Strict Warnings and Clippy Checks

Implemented checks to validate the announced payload length against the maximum message size, preventing overflow and ensuring robust error handling. This enhancement improves the stability and security of TCP connection handling.
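The length-validation commit described above boils down to bounding the announced payload length before allocating or waiting for that many bytes. A sketch, assuming a 32 MiB cap in the style of the usual MAX_SIZE limit (the real constant lives in the dash codebase and may differ):

```rust
/// Protocol cap on payload size; frames announcing more are corrupt.
const MAX_PAYLOAD_LEN: u32 = 32 * 1024 * 1024; // assumed cap, 32 MiB

/// Total frame size (24-byte header + payload), or None if the announced
/// length is implausible and the caller should resync or disconnect.
fn total_frame_len(payload_len: u32) -> Option<usize> {
    if payload_len > MAX_PAYLOAD_LEN {
        return None; // reject before buffering gigabytes of garbage
    }
    // checked_add guards against usize overflow on 32-bit targets.
    (payload_len as usize).checked_add(24)
}

fn main() {
    assert_eq!(total_frame_len(100), Some(124));
    assert_eq!(total_frame_len(u32::MAX), None);
}
```

Rejecting the frame here means a single corrupted length field cannot force the connection to stall waiting for bytes that will never arrive.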
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bd12e57 and 86fdc9e.

📒 Files selected for processing (1)
  • dash-spv/src/network/connection.rs (8 hunks)
🧰 Additional context used
📓 Path-based instructions: same four path-scoped guideline sets as the previous review (dash-spv/**/*.rs, dash-spv/src/network/**/*.rs, **/*.rs, **/src/**/*.rs), applied to dash-spv/src/network/connection.rs.
🧠 Learnings (2)
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes

Applied to files:

  • dash-spv/src/network/connection.rs
📚 Learning: 2025-09-01T14:44:53.683Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: AGENTS.md:0-0
Timestamp: 2025-09-01T14:44:53.683Z
Learning: Not for consensus-critical validation; do not rely on exact Dash Core consensus behavior

Applied to files:

  • dash-spv/src/network/connection.rs
🧬 Code graph analysis: same as the previous review for dash-spv/src/network/connection.rs.
⏰ Context from checks skipped due to timeout of 90000ms (18): same fuzz/CI check list as the previous review.

Comment on lines +403 to +413
let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
    let mut tmp = [0u8; 8192];
    match state.read_buffer.read(&mut tmp) {
        Ok(0) => Ok(0),
        Ok(n) => {
            state.framing_buffer.extend_from_slice(&tmp[..n]);
            Ok(n)
        }
        Err(e) => Err(e),
    }
};
@coderabbitai coderabbitai bot Oct 30, 2025


⚠️ Potential issue | 🟠 Major

Retry on EINTR to avoid spurious disconnects.

If read() is interrupted we currently bubble the ErrorKind::Interrupted out, which immediately turns into NetworkError::ConnectionFailed and drops the peer. The old decode path retried on EINTR (via read_exact), so this is a regression in reliability. Please retry the read when interrupted.

-            let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
-                let mut tmp = [0u8; 8192];
-                match state.read_buffer.read(&mut tmp) {
-                    Ok(0) => Ok(0),
-                    Ok(n) => {
-                        state.framing_buffer.extend_from_slice(&tmp[..n]);
-                        Ok(n)
-                    }
-                    Err(e) => Err(e),
-                }
-            };
+            let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
+                let mut tmp = [0u8; 8192];
+                loop {
+                    match state.read_buffer.read(&mut tmp) {
+                        Ok(0) => return Ok(0),
+                        Ok(n) => {
+                            state.framing_buffer.extend_from_slice(&tmp[..n]);
+                            return Ok(n);
+                        }
+                        Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
+                        Err(e) => return Err(e),
+                    }
+                }
+            };
🤖 Prompt for AI Agents
In dash-spv/src/network/connection.rs around lines 403 to 413, the closure
returns Interrupted errors directly causing a disconnect; change it to retry the
read when ErrorKind::Interrupted. Implement a small loop inside the closure that
repeatedly calls state.read_buffer.read(&mut tmp) and on Ok return as before,
but on Err(e) check if e.kind() == std::io::ErrorKind::Interrupted and continue
the loop (retry); for any other error return Err(e). Ensure the buffer extend
logic and returning of the byte count remain identical when a read succeeds.

PastaPastaPasta (Member, Author) replied:

Please expand; let's think about the architecture and the circumstances in which we may run into an ErrorKind::Interrupted. Is the desired behavior to disconnect in this case, or is it something we can reasonably expect to recover from?

coderabbitai bot (Contributor) replied:

@PastaPastaPasta, great question! Let me break down the ErrorKind::Interrupted (EINTR) scenario in the context of this architecture:

What is ErrorKind::Interrupted?

ErrorKind::Interrupted corresponds to the POSIX EINTR error, which occurs when a system call (like read()) is interrupted by a signal handler before it could complete any data transfer. This is a temporary condition, not an actual I/O failure.

When Does EINTR Occur?

Common scenarios include:

  • Signal handlers: The process receives a signal (SIGALRM, SIGCHLD, SIGUSR1, etc.) while blocked in a read
  • Debugger interaction: Attaching/detaching debuggers can interrupt syscalls
  • Container/orchestration signals: In Docker/Kubernetes environments, signals for health checks or graceful shutdown
  • System activity: Various OS-level events can trigger signals

Architecture Context

In your current setup:

  1. Blocking reads with timeout: You're using TcpStream with set_read_timeout(Some(read_timeout))
  2. BufReader wrapper: Adds buffering layer on top of the socket
  3. Synchronous I/O: The reads are blocking (not using async I/O primitives)
  4. Long-lived connections: Peer connections that should remain stable for extended periods

Should We Disconnect on EINTR?

No, we should NOT disconnect. Here's why:

  1. EINTR is recoverable: The interrupted syscall did nothing wrong - it was simply asked to pause by the OS. The socket is still valid and the connection is intact.

  2. Spurious disconnects harm the network: Dropping a perfectly good peer connection because a signal was delivered wastes resources and hurts sync performance.

  3. Standard practice: The Rust standard library's Read::read_exact() and similar helpers automatically retry on EINTR. By manually handling the read loop, you've lost this automatic retry behavior.

  4. Previous code was better: The old RawNetworkMessage::consensus_decode(&mut state.read_buffer) path would internally use read_exact(), which retries EINTR transparently.

Recommended Behavior

The closure should retry the read when interrupted:

let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
    let mut tmp = [0u8; 8192];
    loop {
        match state.read_buffer.read(&mut tmp) {
            Ok(0) => return Ok(0),
            Ok(n) => {
                state.framing_buffer.extend_from_slice(&tmp[..n]);
                return Ok(n);
            }
            Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => {
                // Signal interrupted the read - just retry
                continue;
            }
            Err(e) => return Err(e),
        }
    }
};

This approach:

  • ✅ Maintains the connection on transient signal delivery
  • ✅ Matches standard library behavior
  • ✅ Aligns with the previous implementation's reliability
  • ✅ Has zero performance cost (signals are rare)

The only errors that should trigger disconnection are real I/O failures like ConnectionReset, ConnectionAborted, BrokenPipe, or timeout/would-block conditions that are already explicitly handled elsewhere in your code.
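The error taxonomy in that last paragraph can be captured as a small classifier over `std::io::ErrorKind` (illustrative only; the real connection code folds this logic into its own NetworkError handling):

```rust
use std::io::ErrorKind;

/// What the receive loop should do for a given I/O error kind.
#[derive(Debug, PartialEq)]
enum ReadAction {
    Retry,      // transient: EINTR, the syscall was merely interrupted
    Idle,       // read timeout / would-block: no data yet, not a failure
    Disconnect, // genuine connection failure
}

fn classify(kind: ErrorKind) -> ReadAction {
    match kind {
        ErrorKind::Interrupted => ReadAction::Retry,
        ErrorKind::WouldBlock | ErrorKind::TimedOut => ReadAction::Idle,
        ErrorKind::ConnectionReset
        | ErrorKind::ConnectionAborted
        | ErrorKind::BrokenPipe => ReadAction::Disconnect,
        // Conservative default: treat unknown kinds as fatal.
        _ => ReadAction::Disconnect,
    }
}

fn main() {
    assert_eq!(classify(ErrorKind::Interrupted), ReadAction::Retry);
    assert_eq!(classify(ErrorKind::WouldBlock), ReadAction::Idle);
    assert_eq!(classify(ErrorKind::ConnectionReset), ReadAction::Disconnect);
}
```

The key property is that `Interrupted` never reaches the disconnect path, matching the retry behavior that `Read::read_exact` provides internally.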

Updated the ValidationMode enum to include the Default trait, setting the default value to Full. This change streamlines the code and enhances usability by providing a clear default validation mode.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
dash-spv/src/types.rs (1)

513-524: LGTM! Clean refactor to idiomatic Rust.

The conversion from a manual impl Default to using the derive macro with #[default] attribute is a nice improvement. This is the idiomatic approach for specifying default enum variants and reduces boilerplate while maintaining identical behavior.
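Concretely, the refactor looks like this: the derived `Default` with a `#[default]` attribute (stable since Rust 1.62, well within the crate's MSRV of 1.89) replaces a hand-written impl. Variant names beyond `Full` are placeholders here, not necessarily the real enum's:

```rust
// Before: a manual impl Default { fn default() -> Self { Self::Full } }.
// After: the derive macro with the #[default] attribute on the chosen variant.
#[derive(Debug, Default, PartialEq)]
enum ValidationMode {
    #[default]
    Full,
    HeadersOnly, // placeholder variant for illustration
}

fn main() {
    // Behavior is identical to the old manual impl.
    assert_eq!(ValidationMode::default(), ValidationMode::Full);
}
```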

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0571d49 and 45ddb42.

📒 Files selected for processing (1)
  • dash-spv/src/types.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions: same path-scoped guideline sets as the earlier reviews (dash-spv/**/*.rs, **/*.rs, **/src/**/*.rs), applied to dash-spv/src/types.rs.
🧠 Learnings (5)
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`

Applied to files:

  • dash-spv/src/types.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes

Applied to files:

  • dash-spv/src/types.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/Cargo.toml : Set MSRV to 1.89 using the `rust-version` field and keep the code compatible with Rust 1.89

Applied to files:

  • dash-spv/src/types.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Add cbindgen annotations for complex types so headers are generated correctly

Applied to files:

  • dash-spv/src/types.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Expose complex Rust types to C as opaque pointers (e.g., FFIDashSpvClient*)

Applied to files:

  • dash-spv/src/types.rs
🧬 Code graph analysis (1)
dash-spv/src/types.rs (1)
dash-spv/src/client/config.rs (1)
  • default (197-257)
⏰ Context from checks skipped due to timeout of 90000ms (18): same fuzz/CI check list as the earlier reviews.

@PastaPastaPasta PastaPastaPasta merged commit 8e9d94d into v0.41-dev Nov 1, 2025
29 checks passed
@PastaPastaPasta PastaPastaPasta deleted the feat/investigate-fix-checksum-error branch November 1, 2025 01:17