fix: resolve flakey connection errors due to TCP buffer issues #167
Conversation
Improved logging and proactive disconnection for peers sending messages with invalid checksums. This change ensures that potential message corruption is addressed promptly, enhancing network stability.
Enhanced TCP connection handling by introducing a stateful framing buffer to ensure complete message frames are read before decoding. This change improves error handling for invalid checksums and stream desynchronization, providing better diagnostics and stability in network communication.
Updated the read_some closure to remove unnecessary mutability, enhancing code clarity and maintainability. This change contributes to the ongoing improvements in TCP connection handling.
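For context on the checksum-based disconnects described in the first change above: in the Dash P2P protocol the header checksum is the first four bytes of a double SHA-256 over the payload. A minimal sketch of that comparison, using the sha2 crate purely for illustration (the PR itself validates frames inside the connection and decoding code, not with this hypothetical helper):

```rust
// Hypothetical helper, for illustration only: compute SHA-256d over the
// payload and compare its first four bytes with the checksum from the header.
use sha2::{Digest, Sha256};

fn checksum_matches(payload: &[u8], header_checksum: [u8; 4]) -> bool {
    let first = Sha256::digest(payload); // first SHA-256 pass
    let second = Sha256::digest(first); // second pass (SHA-256d)
    second[..4] == header_checksum[..] // checksum = first 4 bytes of the digest
}
```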
Walkthrough
Adds stateful TCP framing with a per-connection framing_buffer, 24-byte header parsing (magic, cmd, length, checksum), per-frame checksum validation and resynchronization using a consecutive_resyncs counter, diagnostic logging for payload/checksum issues, a minor enum Default derivation change, and small FFI deallocation helper updates. A sketch of the header-parsing step appears after the sequence diagram below.
Sequence Diagram(s)
sequenceDiagram
participant Socket as TCP Socket
participant Conn as Connection (framing_buffer)
participant Parser as Frame Parser
participant Decoder as Message Decoder
participant App as Application
loop receive loop
Socket->>Conn: read bytes
Conn->>Conn: append to framing_buffer
Conn->>Parser: scan for 4-byte magic
alt magic not found
Parser-->>Conn: drop/skip stray bytes (resync++)
Conn->>Socket: continue reading
else magic found
Parser->>Conn: ensure 24-byte header present
Parser->>Conn: ensure full frame bytes present (length)
Parser->>Parser: validate frame checksum
alt checksum OK
Parser->>Decoder: hand full frame bytes
Decoder->>Decoder: decode RawNetworkMessage (validate magic inside)
Decoder->>App: deliver message
Conn-->>Conn: drain frame, reset resyncs
else checksum mismatch
Parser-->>Conn: increment consecutive_resyncs, skip bytes to next candidate
end
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
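To make the walkthrough concrete, here is a minimal sketch of the header-parse step under the layout described above (4-byte magic, 12-byte command, 4-byte length, 4-byte checksum). The function name is hypothetical, and the checksum check plus the consecutive_resyncs bookkeeping are deliberately omitted; the real connection.rs logic handles both.

```rust
// Sketch only: find the magic, wait for a complete 24-byte header, then wait
// for the full frame before handing it to the decoder.
const HEADER_LEN: usize = 24; // 4 magic + 12 command + 4 length + 4 checksum

fn try_extract_frame(framing_buffer: &mut Vec<u8>, magic: [u8; 4]) -> Option<Vec<u8>> {
    // Scan for the 4-byte magic; stray bytes before it are dropped (resync).
    let start = framing_buffer.windows(4).position(|w| w == &magic[..])?;
    framing_buffer.drain(..start);

    // Need the full 24-byte header before the length field can be read.
    if framing_buffer.len() < HEADER_LEN {
        return None;
    }

    // Announced payload length sits at header bytes 16..20 (little-endian).
    let len = u32::from_le_bytes(framing_buffer[16..20].try_into().ok()?) as usize;
    let frame_len = HEADER_LEN + len;

    // Wait until header + payload are both buffered before decoding.
    if framing_buffer.len() < frame_len {
        return None;
    }

    // Drain and return the complete frame; the caller decodes and validates it.
    Some(framing_buffer.drain(..frame_len).collect())
}
```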
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
dash-spv/src/network/connection.rs (8 hunks)
dash/src/network/message.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
dash/src/network/message.rs
dash-spv/src/network/connection.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
dash/src/network/message.rsdash-spv/src/network/connection.rs
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings
Files:
dash-spv/src/network/connection.rs
dash-spv/src/network/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided
Files:
dash-spv/src/network/connection.rs
🧠 Learnings (2)
📚 Learning: 2025-02-25T06:19:32.230Z
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#51
File: dash/src/sml/masternode_list_entry/hash.rs:7-12
Timestamp: 2025-02-25T06:19:32.230Z
Learning: The `consensus_encode` method on `MasternodeListEntry` writing to a `Vec` buffer cannot fail, so using `.expect()` is appropriate rather than propagating the error with the `?` operator.
Applied to files:
dash/src/network/message.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes
Applied to files:
dash-spv/src/network/connection.rs
🧬 Code graph analysis (2)
dash/src/network/message.rs (1)
dash/src/consensus/encode.rs (4)
consensus_decode_from_finite_reader (438-445), consensus_decode_from_finite_reader (813-821), consensus_decode_from_finite_reader (833-835), consensus_decode_from_finite_reader (856-890)
dash-spv/src/network/connection.rs (2)
dash/src/network/message.rs (5)
consensus_decode (133-143), consensus_decode (473-475), consensus_decode (664-666), cmd (291-343), cmd (363-365)
dash-network/src/lib.rs (1)
magic (58-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (hashes_sha1)
- GitHub Check: fuzz (hashes_json)
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: SPV Components Tests
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: Core Components Tests
- GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
- GitHub Check: fuzz (dash_deser_net_msg)
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: Key Wallet Components Tests
- GitHub Check: fuzz (dash_deserialize_amount)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: Strict Warnings and Clippy Checks
Implemented checks to validate the announced payload length against the maximum message size, preventing overflow and ensuring robust error handling. This enhancement improves the stability and security of TCP connection handling.
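A rough sketch of the kind of guard that commit describes is below; the constant name and the 32 MiB cap are illustrative assumptions rather than the exact values used in the PR.

```rust
// Illustrative only: reject an announced payload length that exceeds the cap
// before buffering the payload or computing header + payload sizes.
const MAX_MSG_SIZE: usize = 32 * 1024 * 1024; // assumed cap for this sketch

fn check_payload_len(announced: u32) -> std::io::Result<usize> {
    let len = announced as usize;
    if len > MAX_MSG_SIZE {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            format!("announced payload length {len} exceeds maximum {MAX_MSG_SIZE}"),
        ));
    }
    Ok(len)
}
```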
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
dash-spv/src/network/connection.rs (8 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings
Files:
dash-spv/src/network/connection.rs
dash-spv/src/network/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided
Files:
dash-spv/src/network/connection.rs
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
dash-spv/src/network/connection.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
dash-spv/src/network/connection.rs
🧠 Learnings (2)
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes
Applied to files:
dash-spv/src/network/connection.rs
📚 Learning: 2025-09-01T14:44:53.683Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: AGENTS.md:0-0
Timestamp: 2025-09-01T14:44:53.683Z
Learning: Not for consensus-critical validation; do not rely on exact Dash Core consensus behavior
Applied to files:
dash-spv/src/network/connection.rs
🧬 Code graph analysis (1)
dash-spv/src/network/connection.rs (2)
dash/src/network/message.rs (5)
consensus_decode (133-143), consensus_decode (473-475), consensus_decode (664-666), cmd (291-343), cmd (363-365)
dash-network/src/lib.rs (1)
magic (58-66)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: Strict Warnings and Clippy Checks
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: Core Components Tests
- GitHub Check: fuzz (hashes_json)
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: fuzz (hashes_sha1)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: SPV Components Tests
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (dash_deserialize_amount)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: fuzz (dash_deser_net_msg)
let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
    let mut tmp = [0u8; 8192];
    match state.read_buffer.read(&mut tmp) {
        Ok(0) => Ok(0),
        Ok(n) => {
            state.framing_buffer.extend_from_slice(&tmp[..n]);
            Ok(n)
        }
        Err(e) => Err(e),
    }
};
Retry on EINTR to avoid spurious disconnects.
If read() is interrupted we currently bubble the ErrorKind::Interrupted out, which immediately turns into NetworkError::ConnectionFailed and drops the peer. The old decode path retried on EINTR (via read_exact), so this is a regression in reliability. Please retry the read when interrupted.
- let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
- let mut tmp = [0u8; 8192];
- match state.read_buffer.read(&mut tmp) {
- Ok(0) => Ok(0),
- Ok(n) => {
- state.framing_buffer.extend_from_slice(&tmp[..n]);
- Ok(n)
- }
- Err(e) => Err(e),
- }
- };
+ let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
+ let mut tmp = [0u8; 8192];
+ loop {
+ match state.read_buffer.read(&mut tmp) {
+ Ok(0) => return Ok(0),
+ Ok(n) => {
+ state.framing_buffer.extend_from_slice(&tmp[..n]);
+ return Ok(n);
+ }
+ Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
+ Err(e) => return Err(e),
+ }
+ }
+ };
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
    let mut tmp = [0u8; 8192];
    loop {
        match state.read_buffer.read(&mut tmp) {
            Ok(0) => return Ok(0),
            Ok(n) => {
                state.framing_buffer.extend_from_slice(&tmp[..n]);
                return Ok(n);
            }
            Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => continue,
            Err(e) => return Err(e),
        }
    }
};
🤖 Prompt for AI Agents
In dash-spv/src/network/connection.rs around lines 403 to 413, the closure
returns Interrupted errors directly, causing a disconnect; change it to retry the
read when ErrorKind::Interrupted. Implement a small loop inside the closure that
repeatedly calls state.read_buffer.read(&mut tmp) and on Ok return as before,
but on Err(e) check if e.kind() == std::io::ErrorKind::Interrupted and continue
the loop (retry); for any other error return Err(e). Ensure the buffer extend
logic and returning of the byte count remain identical when a read succeeds.
Please expand; let's think about the architecture and the circumstances in which we may run into an ErrorKind::Interrupted. Is the desired behavior to disconnect in this case, or is it something we can reasonably expect to recover from?
@PastaPastaPasta, great question! Let me break down the ErrorKind::Interrupted (EINTR) scenario in the context of this architecture:
What is ErrorKind::Interrupted?
ErrorKind::Interrupted corresponds to the POSIX EINTR error, which occurs when a system call (like read()) is interrupted by a signal handler before it could complete any data transfer. This is a temporary condition, not an actual I/O failure.
When Does EINTR Occur?
Common scenarios include:
- Signal handlers: The process receives a signal (SIGALRM, SIGCHLD, SIGUSR1, etc.) while blocked in a read
- Debugger interaction: Attaching/detaching debuggers can interrupt syscalls
- Container/orchestration signals: In Docker/Kubernetes environments, signals for health checks or graceful shutdown
- System activity: Various OS-level events can trigger signals
Architecture Context
In your current setup:
- Blocking reads with timeout: You're using TcpStream with set_read_timeout(Some(read_timeout))
- BufReader wrapper: Adds a buffering layer on top of the socket
- Synchronous I/O: The reads are blocking (not using async I/O primitives)
- Long-lived connections: Peer connections that should remain stable for extended periods
Should We Disconnect on EINTR?
No, we should NOT disconnect. Here's why:
- EINTR is recoverable: The interrupted syscall did nothing wrong - it was simply asked to pause by the OS. The socket is still valid and the connection is intact.
- Spurious disconnects harm the network: Dropping a perfectly good peer connection because a signal was delivered wastes resources and hurts sync performance.
- Standard practice: The Rust standard library's Read::read_exact() and similar helpers automatically retry on EINTR. By manually handling the read loop, you've lost this automatic retry behavior.
- Previous code was better: The old RawNetworkMessage::consensus_decode(&mut state.read_buffer) path would internally use read_exact(), which retries EINTR transparently.
Recommended Behavior
The closure should retry the read when interrupted:
let read_some = |state: &mut ConnectionState| -> std::io::Result<usize> {
let mut tmp = [0u8; 8192];
loop {
match state.read_buffer.read(&mut tmp) {
Ok(0) => return Ok(0),
Ok(n) => {
state.framing_buffer.extend_from_slice(&tmp[..n]);
return Ok(n);
}
Err(ref e) if e.kind() == std::io::ErrorKind::Interrupted => {
// Signal interrupted the read - just retry
continue;
}
Err(e) => return Err(e),
}
}
};
This approach:
- ✅ Maintains the connection on transient signal delivery
- ✅ Matches standard library behavior
- ✅ Aligns with the previous implementation's reliability
- ✅ Has zero performance cost (signals are rare)
The only errors that should trigger disconnection are real I/O failures like ConnectionReset, ConnectionAborted, BrokenPipe, or timeout/would-block conditions that are already explicitly handled elsewhere in your code.
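As a rough illustration of that split, the sketch below classifies io::ErrorKind values the way the comment describes; the ReadOutcome enum is hypothetical and not a type from this codebase.

```rust
use std::io::ErrorKind;

// Hypothetical classification: retry on EINTR, treat timeout/would-block as
// "no data yet", and only surface everything else as a fatal connection error.
enum ReadOutcome {
    Retry,
    NoDataYet,
    Fatal(std::io::Error),
}

fn classify(err: std::io::Error) -> ReadOutcome {
    match err.kind() {
        ErrorKind::Interrupted => ReadOutcome::Retry,
        ErrorKind::WouldBlock | ErrorKind::TimedOut => ReadOutcome::NoDataYet,
        // ConnectionReset, ConnectionAborted, BrokenPipe, UnexpectedEof, ...
        _ => ReadOutcome::Fatal(err),
    }
}
```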
Updated the ValidationMode enum to include the Default trait, setting the default value to Full. This change streamlines the code and enhances usability by providing a clear default validation mode.
Actionable comments posted: 0
🧹 Nitpick comments (1)
dash-spv/src/types.rs (1)
513-524: LGTM! Clean refactor to idiomatic Rust. The conversion from a manual impl Default to using the derive macro with the #[default] attribute is a nice improvement. This is the idiomatic approach for specifying default enum variants and reduces boilerplate while maintaining identical behavior.
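For reference, the derived-Default pattern the review is praising looks roughly like this; only ValidationMode and its Full default are taken from the PR, and the second variant is illustrative.

```rust
// #[derive(Default)] on an enum picks the variant marked #[default]
// (stable since Rust 1.62, well within the MSRV of 1.89 noted above).
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum ValidationMode {
    /// Validate everything; the default mode.
    #[default]
    Full,
    /// Illustrative additional variant, not from the PR.
    HeadersOnly,
}
```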
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
dash-spv/src/types.rs (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings
Files:
dash-spv/src/types.rs
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
dash-spv/src/types.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
dash-spv/src/types.rs
🧠 Learnings (5)
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Applied to files:
dash-spv/src/types.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/error.rs : Define and maintain domain-specific error types in `src/error.rs` (NetworkError, StorageError, SyncError, ValidationError, SpvError) and update them when adding new failure modes
Applied to files:
dash-spv/src/types.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/Cargo.toml : Set MSRV to 1.89 using the `rust-version` field and keep the code compatible with Rust 1.89
Applied to files:
dash-spv/src/types.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Add cbindgen annotations for complex types so headers are generated correctly
Applied to files:
dash-spv/src/types.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
Repo: dashpay/rust-dashcore PR: 0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Expose complex Rust types to C as opaque pointers (e.g., FFIDashSpvClient*)
Applied to files:
dash-spv/src/types.rs
🧬 Code graph analysis (1)
dash-spv/src/types.rs (1)
dash-spv/src/client/config.rs (1)
default (197-257)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (hashes_json)
- GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (dash_deserialize_amount)
- GitHub Check: fuzz (dash_deser_net_msg)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: Core Components Tests
- GitHub Check: Strict Warnings and Clippy Checks
- GitHub Check: SPV Components Tests