
Conversation

@TheBlueMatt
Collaborator

We recently ran into a race condition on macOS where `read_event`
would return `Ok(true)` (implying reads should be paused) but calls
to `send_data` which flushed the buffer completed before the
`read_event` caller was able to set the read-pause flag.

This should be fairly rare, but not unheard of - the `pause_read`
flag in `read_event` is calculated before handling the last
message, so there's some time between when it's calculated and when
it's returned. However, that has to race with multiple calls to
`send_data` to send all the pending messages, which all have to
complete before the `read_event` return happens. We've (as far as I
can tell) never hit this on Linux, but a benchmark HTLC-flood test
managed to hit it somewhat reliably within a few minutes on macOS.

Ultimately we can't fix this with the current API (though we could
make it more rare). Thus, here, we stick to a single "stream" of
pause-read events from `PeerManager` to user code via `send_data`
calls, dropping the read-pause flag return from `read_event`
entirely.

Technically this adds risk that someone can flood us with enough
messages fast enough to bloat our outbound buffer for a peer before
`PeerManager::process_events` gets called and can flush the pause
flag via `read_event` calls to all descriptors. This isn't ideal
but it should still be relatively hard to do as `process_events`
calls are pretty quick and should be triggered immediately after
each `read_event` call completes.
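
For illustration, here is a rough before/after sketch of the shape of the change. These are simplified stand-ins, not LDK's actual definitions, and the exact flag names in the new API may differ:

```rust
// Before: two independent "streams" of read-pause signals. The socket driver
// is told to *resume* reads via the flag on send_data...
trait SocketDescriptorBefore {
    fn send_data(&mut self, data: &[u8], resume_read: bool) -> usize;
}

// ...but is told to *pause* reads via the bool returned from read_event, which
// the driver only applies once the read_event caller runs again. A
// send_data(.., true) can land in that window and then be clobbered by the
// stale pause flag.
fn read_event_before(_data: &[u8]) -> Result<bool /* pause_read */, ()> {
    Ok(true)
}

// After: a single stream. Every send_data call carries the currently desired
// read state, and read_event no longer returns a pause flag at all.
trait SocketDescriptorAfter {
    fn send_data(&mut self, data: &[u8], continue_read: bool) -> usize;
}

fn read_event_after(_data: &[u8]) -> Result<(), ()> {
    Ok(())
}

fn main() {}
```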

@TheBlueMatt added this to the 0.2 milestone on Oct 22, 2025
@ldk-reviews-bot

ldk-reviews-bot commented Oct 22, 2025

👋 I see @tankyleo was un-assigned.
If you'd like another reviewer assignment, please click here.

@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch from aa5f64e to ad1e948 on October 22, 2025 21:18
@codecov

codecov bot commented Oct 22, 2025

Codecov Report

❌ Patch coverage is 83.01887% with 9 lines in your changes missing coverage. Please review.
✅ Project coverage is 88.81%. Comparing base (05f2848) to head (5e93838).
⚠️ Report is 42 commits behind head on main.

Files with missing lines | Patch % | Lines
lightning/src/ln/peer_handler.rs | 84.78% | 4 Missing and 3 partials ⚠️
lightning-background-processor/src/lib.rs | 0.00% | 1 Missing ⚠️
lightning-net-tokio/src/lib.rs | 83.33% | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4168      +/-   ##
==========================================
+ Coverage   88.78%   88.81%   +0.03%     
==========================================
  Files         180      180              
  Lines      137004   137261     +257     
  Branches   137004   137261     +257     
==========================================
+ Hits       121642   121914     +272     
+ Misses      12538    12535       -3     
+ Partials     2824     2812      -12     
Flag | Coverage Δ
fuzzing | 21.50% <50.00%> (+0.54%) ⬆️
tests | 88.66% <83.01%> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.


@ldk-reviews-bot

👋 The first review has been submitted!

Do you think this PR is ready for a second reviewer? If so, click here to assign a second reviewer.

@joostjager
Contributor

Ultimately we can't fix this with the current API (though we could
make it more rare).

What API change would be required to fix it completely? And is a reason not to do it to avoid breaking external usage of this code?

@TheBlueMatt
Collaborator Author

The API change in this PR should fix it completely. My comment was about backporting to 0.1, where we aren't allowed to remove the returned bool from read_event, and probably shouldn't silently change the semantics of the resume_read/continue_read bool (which now implies we should stop reading if it's false).
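
To make the new `send_data` semantics concrete, here is a minimal sketch of how a socket driver might track the flag it receives; the `PauseHandle` type and everything else below is made up for illustration and is not lightning-net-tokio's actual code:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Hypothetical driver-side state: the flag passed to send_data is the single
// source of truth for whether the driver should keep reading from the socket.
struct PauseHandle {
    read_paused: Arc<AtomicBool>,
}

impl PauseHandle {
    // Called from the driver's send_data implementation with the flag the
    // PeerManager passed in; pauses and unpauses now flow through one place.
    fn apply(&self, continue_read: bool) {
        self.read_paused.store(!continue_read, Ordering::Release);
    }

    // Checked by the read loop before issuing the next socket read.
    fn should_read(&self) -> bool {
        !self.read_paused.load(Ordering::Acquire)
    }
}

fn main() {
    let handle = PauseHandle { read_paused: Arc::new(AtomicBool::new(false)) };
    handle.apply(false); // PeerManager asked us to stop reading
    assert!(!handle.should_read());
    handle.apply(true); // a later send_data unpauses reads
    assert!(handle.should_read());
}
```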

@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch 2 times, most recently from e4a70b9 to bd4356a on October 25, 2025 14:15
@TheBlueMatt
Collaborator Author

Dropped the first commit as it makes it more annoying to remove the spurious Box::pins now that our MSRV is higher.

@ldk-reviews-bot

✅ Added second reviewer: @wpaulino

@ldk-reviews-bot

✅ Added second reviewer: @jkczyz

@ldk-reviews-bot

✅ Added second reviewer: @tankyleo

@ldk-reviews-bot

✅ Added second reviewer: @valentinewallace

@valentinewallace
Contributor

The API change in this PR should fix it completely. My comment was about backporting to 0.1, where we aren't allowed to remove the returned bool from read_event, and probably shouldn't silently change the semantics of the resume_read/continue_read bool (which now implies we should stop reading if it's false).

Can you clarify this in the commit message? It isn't clear atm

@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch from bd4356a to 254673b on October 27, 2025 15:51
@TheBlueMatt
Collaborator Author

Okay, I rewrote that paragraph.

@valentinewallace
Contributor

We recently ran into a race condition on macOS where read_event
would return Ok(true) (implying reads should be paused) but calls
to send_data which flushed the buffer completed before the
read_event caller was able to set the read-pause flag.

^ from the commit message

I found this unclear at first; I think adding a sentence saying "This race condition caused us to unnecessarily halt reads from a peer, even after we got out some writes that should have unpaused the reads. Halting reads in this way triggered a hang in testing." would go a long way (assuming what I wrote is correct)

@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch from 254673b to 028d64c on October 27, 2025 19:57
@TheBlueMatt
Collaborator Author

I found this unclear at first, I think adding a sentence saying "This race condition caused us to unnecessarily halt reads from a peer, even after we got out some writes that should have unpaused the reads. Halting reads in this way triggered a hang in testing." would go a long way (assuming what I wrote is correct)

I found the proposed wording confusing, as it seems to imply that the write events happen after the read event, which is only half-true. I tried again with the following:

We recently ran into a race condition on macOS where `read_event`
would return `Ok(true)` (implying reads should be paused) due to
many queued outbound messages but before the caller was able to
set the read-pause flag, the `send_data` calls to flush the
buffered messages completed. Thus, when the `read_event` caller got
scheduled again, the buffer was empty and we should be reading, but
it is finally processing the read-pause flag and we end up hanging,
unwilling to read messages and unable to learn that we should start
reading again as there are no messages to `send_data` for.
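
To spell the ordering out, here is a toy, single-threaded model of that interleaving (purely illustrative, not LDK code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let read_paused = AtomicBool::new(false);
    let mut outbound_buffer_len = 10usize;

    // 1. read_event computes pause_read = true because many messages are
    //    queued, but its caller hasn't applied that result yet.
    let pause_read_returned = outbound_buffer_len > 0;

    // 2. Meanwhile, send_data calls flush the whole buffer; under the old API
    //    those calls were the signal to resume reads.
    while outbound_buffer_len > 0 {
        outbound_buffer_len -= 1;
        read_paused.store(false, Ordering::SeqCst); // resume_read = true
    }

    // 3. Only now does the read_event caller apply the stale pause flag.
    if pause_read_returned {
        read_paused.store(true, Ordering::SeqCst);
    }

    // Result: the buffer is empty, reads are paused, and no further send_data
    // call will ever arrive to unpause them - a hang.
    assert_eq!(outbound_buffer_len, 0);
    assert!(read_paused.load(Ordering::SeqCst));
}
```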

@valentinewallace
Contributor

valentinewallace commented Oct 27, 2025

Much clearer, thanks. I'm good with a squash (and CI fix)

@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch 2 times, most recently from 06ad948 to 6dd0836 on October 27, 2025 20:39
@TheBlueMatt
Collaborator Author

TheBlueMatt commented Oct 27, 2025

Squashed with an additional doc fix:

$ git diff-tree -U1 06ad9485c 6dd0836c8
diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index 9ac39a6fbc..3c7f4a845b 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -783,3 +783,3 @@ struct Peer {
 	awaiting_write_event: bool,
-	/// Set to true if the last call to [`Descriptor::send_data`] for this peer had the
+	/// Set to true if the last call to [`SocketDescriptor::send_data`] for this peer had the
 	/// `should_read` flag unset, indicating we've told the driver to stop reading from this peer.

@valentinewallace
Contributor

fuzz is still sad

We recently ran into a race condition on macOS where `read_event`
would return `Ok(true)` (implying reads should be paused) due to
many queued outbound messages but before the caller was able to
set the read-pause flag, the `send_data` calls to flush the
buffered messages completed. Thus, when the `read_event` caller got
scheduled again, the buffer was empty and we should be reading, but
it is finally processing the read-pause flag and we end up hanging,
unwilling to read messages and unable to learn that we should start
reading again as there are no messages to `send_data` for.

This should be fairly rare, but not unheard of - the `pause_read`
flag in `read_event` is calculated before handling the last
message, so there's some time between when it's calculated and when
it's returned. However, that has to race with multiple calls to
`send_data` to send all the pending messages, which all have to
complete before the `read_event` return happens. We've (as far as I
recall) never hit this in prod, but a benchmark HTLC-flood test
managed to hit it somewhat reliably within a few minutes on macOS
and when a synthetic few-ms sleep was added to each message
handling call.

Ultimately this is an issue with the API - we pause reads via a
returned flag but unpause them via a called method, creating two
independent "stream"s of pause/unpauses which can get out of sync.
Thus, here, we stick to a single "stream" of pause-read events from
`PeerManager` to user code via `send_data` calls, dropping the
read-pause flag return from `read_event` entirely.

Technically this adds risk that someone can flood us with enough
messages fast enough to bloat our outbound buffer for a peer before
`PeerManager::process_events` gets called and can flush the pause
flag via `read_event` calls to all descriptors. This isn't ideal
but it should still be relatively hard to do as `process_events`
calls are pretty quick and should be triggered immediately after
each `read_event` call completes.
In the previous commit, we moved the `send_data` `resume_read` flag
to also indicate that we should pause if it's unset. This should
work as we mostly only set the flag when we're sending but may
cause us to fail to pause if we are blocked on gossip validation
but `awaiting_write_event` wasn't set as we had previously failed
to fully flush a buffer (which no longer implies read-pause).

Here we make this logic much more robust by ensuring we always make
at least one `send_data` call in `do_attempt_write_data` if we
need to pause read (or unpause read).
@TheBlueMatt force-pushed the 2025-10-net-race-fixes branch from 6dd0836 to 5e93838 on October 27, 2025 20:47
@TheBlueMatt
Collaborator Author

Ha, beat me to it.

$ git diff-tree -U1 06ad9485c 5e93838f3
diff --git a/fuzz/src/full_stack.rs b/fuzz/src/full_stack.rs
index 186dce0933..97a74871ea 100644
--- a/fuzz/src/full_stack.rs
+++ b/fuzz/src/full_stack.rs
@@ -697,3 +697,3 @@ pub fn do_test(mut data: &[u8], logger: &Arc<dyn Logger>) {
 				match loss_detector.handler.read_event(&mut peer, get_slice!(get_slice!(1)[0])) {
-					Ok(res) => assert!(!res),
+					Ok(()) => {},
 					Err(_) => {
diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index 9ac39a6fbc..3c7f4a845b 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -783,3 +783,3 @@ struct Peer {
 	awaiting_write_event: bool,
-	/// Set to true if the last call to [`Descriptor::send_data`] for this peer had the
+	/// Set to true if the last call to [`SocketDescriptor::send_data`] for this peer had the
 	/// `should_read` flag unset, indicating we've told the driver to stop reading from this peer.

Contributor

@valentinewallace left a comment


One suggestion but LGTM

Comment on lines -1541 to +1553
-		let mut have_written = false;
-		while !peer.awaiting_write_event {
+		if !self.should_read_from(peer) {
+			if !peer.sent_pause_read {
+				force_one_write = true;
+			}
+		} else {
+			if peer.sent_pause_read {
+				force_one_write = true;
+			}
+		}
Contributor


I think this can be simplified, and I think a comment would be helpful in explaining what's going on:

diff --git a/lightning/src/ln/peer_handler.rs b/lightning/src/ln/peer_handler.rs
index 3c7f4a845..76737ea67 100644
--- a/lightning/src/ln/peer_handler.rs
+++ b/lightning/src/ln/peer_handler.rs
@@ -1542,15 +1542,12 @@ where
 	fn do_attempt_write_data(
 		&self, descriptor: &mut Descriptor, peer: &mut Peer, mut force_one_write: bool,
 	) {
-		if !self.should_read_from(peer) {
-			if !peer.sent_pause_read {
-				force_one_write = true;
-			}
-		} else {
-			if peer.sent_pause_read {
-				force_one_write = true;
-			}
-		}
+		// If we detect that we should be reading from the peer but reads are currently paused, or vice
+		// versa, then we need to tell the peer to update their internal flag indicating whether or not
+		// reads are paused. Do this by forcing a write to the peer with the desired `continue_read`
+		// flag set, even if no outbound write is currently pending.
+		force_one_write |= self.should_read_from(peer) == peer.sent_pause_read;
+
 		while force_one_write || !peer.awaiting_write_event {
 			if peer.should_buffer_onion_message() {
 				if let Some((peer_node_id, _)) = peer.their_node_id {
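
As a sanity check, the suggested one-liner agrees with the original nested branches for all four flag combinations; here is a tiny standalone program (not part of the PR) demonstrating the equivalence:

```rust
fn main() {
    for should_read in [false, true] {
        for sent_pause_read in [false, true] {
            // Original logic: force a write when we want to pause reads but
            // haven't told the driver, or want to read but previously told
            // the driver to pause.
            let original = if !should_read { !sent_pause_read } else { sent_pause_read };
            // Suggested simplification.
            let simplified = should_read == sent_pause_read;
            assert_eq!(original, simplified);
        }
    }
    println!("original and simplified force_one_write conditions agree");
}
```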

Contributor

@joostjager left a comment


I'd indeed say this fix is also a simplification. Win-win.

I still don't have a full understanding of this part of the code, so consider my approval secondary.
