
Fix lightning-net-tokio sometimes dropping messages #2832

Conversation

Tibo-lg (Contributor) commented on Jan 17, 2024:

I was having some issues sending custom messages with v0.0.118: when trying to send a rather large number of them (~25), sending would stop after ~14. After debugging I found that the issue was coming from lightning-net-tokio; checking the documentation, it seems that when a write returns WouldBlock it should simply be retried. I confirmed that this change fixed the issue I was having.
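For context, the fix is the WouldBlock arm quoted in the review comment further down. Below is a deliberately simplified, synchronous sketch of the idea, assuming a plain std::io::Write sink; the real send_data drives a tokio stream via readiness polling, and the names here are illustrative, not the library's code.

use std::io::{ErrorKind, Write};

// Hypothetical stand-in for SocketDescriptor::send_data: write as much of
// `data` as possible, retrying on WouldBlock instead of treating it as a
// fatal error (the behaviour this PR fixes).
fn send_all<W: Write>(stream: &mut W, data: &[u8]) -> usize {
    let mut written = 0;
    while written < data.len() {
        match stream.write(&data[written..]) {
            Ok(0) => return written, // peer closed the connection
            Ok(n) => written += n,
            // WouldBlock just means "not ready yet" -- retry rather than
            // bail out and silently drop the remaining bytes.
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => continue,
            Err(_) => return written, // a "real" error: socket is unusable
        }
    }
    written
}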

coderabbitai (bot) commented on Jan 17, 2024:

Walkthrough

The update to the lightning-net-tokio library enhances the SocketDescriptor's error handling. It now gracefully handles the WouldBlock error, a common, non-critical condition that occurs when a socket operation cannot complete immediately, by allowing the operation to be retried rather than treated as a failure.

Changes

File Path: .../lib.rs
Change Summary: Added error handling for std::io::ErrorKind::WouldBlock within SocketDescriptor to permit continuation instead of failure.



coderabbitai (bot) left a comment:


Review Status

Actionable comments generated: 1

Configuration used: CodeRabbit UI

Commits: files that changed from the base of the PR and between ea5de93 and 4cd0e6a.
Files selected for processing (1)
  • lightning-net-tokio/src/lib.rs (1 hunks)

Comment on lines +505 to +507
Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => {
continue;
}
coderabbitai (bot) commented on Jan 17, 2024:


The handling of std::io::ErrorKind::WouldBlock in the send_data method of SocketDescriptor uses a continue statement in a loop without any delay or backoff mechanism. This could lead to a busy-wait loop that consumes CPU resources unnecessarily when the socket is not ready to send more data.

To address this, consider implementing a backoff strategy or using an async waiting mechanism to allow other tasks to run while waiting for the socket to become writable again.

Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => {
    // Implement a delay or async wait before continuing
}

TheBlueMatt (Collaborator) replied:

If we busy-loop here it's ultimately a bug in tokio, as it should eventually return Poll::Pending for us.

coderabbitai (bot) replied:

@TheBlueMatt Understood, if tokio's poll_write_ready behaves as expected, it should indeed prevent a busy loop by returning Poll::Pending when appropriate. Thank you for the clarification.
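For readers following the thread: the point being made is that a WouldBlock retry cannot spin, because write readiness is awaited before each attempt. A sketch of that pattern using tokio's async writable()/try_write API is below; lightning-net-tokio drives poll_write_ready from a manual poll context rather than .await, but the shape is the same. This is an illustration, not the library's actual code.

use tokio::net::TcpStream;

// Wait for write readiness, then try_write. A spurious readiness event can
// still yield WouldBlock, in which case we loop and wait again; because
// writable() parks the task (Poll::Pending) until the socket is actually
// ready, retrying does not busy-wait.
async fn send_all(stream: &TcpStream, data: &[u8]) -> std::io::Result<()> {
    let mut written = 0;
    while written < data.len() {
        stream.writable().await?;
        match stream.try_write(&data[written..]) {
            Ok(n) => written += n,
            Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => continue,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}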

codecov-commenter commented:

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (ea5de93) 88.52% compared to head (4cd0e6a) 88.53%.
Report is 10 commits behind head on main.

Files | Patch % | Lines
lightning-net-tokio/src/lib.rs | 0.00% | 2 Missing ⚠️


Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2832   +/-   ##
=======================================
  Coverage   88.52%   88.53%           
=======================================
  Files         114      114           
  Lines       92090    92092    +2     
  Branches    92090    92092    +2     
=======================================
+ Hits        81526    81534    +8     
  Misses       8058     8058           
+ Partials     2506     2500    -6     


TheBlueMatt (Collaborator) left a comment:

Yea, makes sense. We ultimately have to keep trying until tokio gives us a Poll::Pending, otherwise we'll hang, like you're seeing. Strange that no one else has ever run into this (that we know of). Certainly I've sent a ton of HTLC messages back to back to peers so you'd think I'd have hit it. Wonder if you have a small socket buffer size or something.

This patch is pretty safe, so I'm just going to land it. Ultimately we have to keep going here: the Err return is for "real" errors that imply the socket is closed, since we return early and assume the read task will quit and close up the socket. If we hit a WouldBlock we're supposed to (according to the docs) treat it as "try your poll/await again", which is exactly what this patch does.
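As a side note on the convention being cited, here is a tiny, hypothetical helper (not part of the library) spelling out the distinction: WouldBlock means "poll/await and try the write again", while any other error is treated as the connection being unusable, so send_data returns early and the read task tears the socket down.

use std::io::{Error, ErrorKind};

// Purely illustrative classification of write errors.
enum WriteOutcome {
    Retry,        // WouldBlock: poll/await and attempt the write again
    Fatal(Error), // anything else: assume the socket is closed
}

fn classify(err: Error) -> WriteOutcome {
    match err.kind() {
        ErrorKind::WouldBlock => WriteOutcome::Retry,
        _ => WriteOutcome::Fatal(err),
    }
}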

TheBlueMatt merged commit a97f945 into lightningdevkit:main on Jan 17, 2024
13 of 15 checks passed
TheBlueMatt (Collaborator) commented:

Thanks for tracking this down!

Tibo-lg (Contributor, Author) commented on Jan 17, 2024:

> Certainly I've sent a ton of HTLC messages back to back to peers so you'd think I'd have hit it. Wonder if you have a small socket buffer size or something.

I'm basically sending a very large message (containing a lot of adaptor signatures) in chunks so that each chunk fits within the noise limit. That means all the messages sent are "full"; maybe that's one difference from what you experienced.
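For background on the constraint: the BOLT 8 noise transport caps a single Lightning message at 65535 bytes, so an oversized payload has to be split across several custom messages. A hypothetical sketch of that kind of chunking (the constant, the headroom left for the message type and framing, and the function name are all illustrative assumptions, not part of this PR):

// Split a large payload into per-message chunks that stay under the
// 65535-byte noise message cap, leaving some headroom for the message type
// and any application-level framing.
const MAX_CHUNK_LEN: usize = 65_000;

fn split_into_chunks(payload: &[u8]) -> Vec<Vec<u8>> {
    payload
        .chunks(MAX_CHUNK_LEN)
        .map(|chunk| chunk.to_vec())
        .collect()
}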
