
DynComms [1/n]: Implement Quiescence Protocol #8270

Open

wants to merge 29 commits into master
Conversation

@ProofOfKeags ProofOfKeags (Collaborator) commented Dec 12, 2023

NOTE: This PR is part of a series implementing Dynamic Commitments. This PR does not directly implement any Dynamic Commitments specific logic but quiescence is a protocol gadget that is a prerequisite for Dynamic Commitments.

Change Description

This change implements the behavior described in the Quiescence Specification. It allows us to respond to our peer's request to quiesce the channel, and it implements some ChannelUpdateHandler operations that allow us to initiate the process ourselves.

Some commits towards the end of the series have been included to allow us to initiate quiescence via RPC for the purposes of integration and interop testing. These commits should be removed before this PR is considered ready to merge. They will ultimately be replaced by RPCs that initiate the Dynamic Commitments protocol itself which will implicitly initiate quiescence as part of its process.

NOTE: This PR does NOT include a mechanism for timing out a quiescence session. This means that if we have an intentionally or unintentionally uncooperative peer, the channel will remain quiesced indefinitely. This is not desirable and will be addressed either in later commits to this PR or in a subsequent PR. However, this PR is submitted without it as it is "complete" in its own right.
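
For reviewers who have not yet read the spec, a minimal sketch of the handshake this change implements is below (names are illustrative, not the types used in the diff): each side may send stfu only once it has no pending updates, and the channel is quiescent once stfu has been both sent and received.

// Illustrative sketch of the stfu handshake from the quiescence spec
// draft; names here are hypothetical and do not match the PR's types.
type quiescerSketch struct {
	sent     bool // we have sent stfu
	received bool // we have received the peer's stfu
	weInit   bool // we set the initiator flag in the stfu we sent
}

// canSendStfu reports whether we may send stfu: the spec only permits it
// once we have no pending channel updates left to resolve.
func (q *quiescerSketch) canSendStfu(numPendingUpdates uint64) bool {
	return numPendingUpdates == 0 && !q.sent
}

// isQuiescent reports whether the channel is quiescent: stfu has gone in
// both directions, so neither side may send further update messages.
func (q *quiescerSketch) isQuiescent() bool {
	return q.sent && q.received
}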

Steps to Test

Steps for reviewers to follow to test the change.

Pull Request Checklist

Testing

  • Your PR passes all CI checks.
  • Tests covering the positive and negative (error) paths are included.
  • Bug fixes contain tests triggering the bug to prevent regressions.

Code Style and Documentation

📝 Please see our Contribution Guidelines for further guidance.

@Roasbeef Roasbeef (Member) left a comment

Pretty straightforward diff! Missing some context from the other PR, and I also still need to catch up w/ the latest state of the spec.

The main thing I think we need to zoom in on re unit tests is the assumption that if we don't ACK a new settle/fail from the mailbox, then upon reconnection all those items are retransmitted once again. If this is the case, then we can just force a disconnection after the stfu cycle is complete (see the comment there about needing to send a special internal error to make that happen).

One other question I have is: is it the expected flow that a disconnect restores the channel lifecycle back to "active"? Or do we really want another protocol level message here so we can go back to normal w/o needing to re-create the peer connection?


// Initiator is a bool that indicates whether we are the initiator of
// this process.
Initiator bool
Member

Would assume, given the existing connection context, both sides already know who the initiator is when things are being sent?

Collaborator Author

If you look at the spec, it is included as a means of determining who holds the session.
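
For context, the wire message itself is tiny. A rough sketch of its shape per the spec draft follows; the actual lnwire.Stfu definition in this PR may differ in naming and encoding details.

// Rough sketch of the stfu message per the quiescence spec draft; this is
// not the exact lnwire.Stfu definition from the diff.
type stfuSketch struct {
	// ChanID identifies the channel being quiesced.
	ChanID [32]byte

	// Initiator is set by the side that initiated quiescence. If both
	// sides send stfu with the initiator flag set at the same time, the
	// spec breaks the tie so that exactly one side holds the session.
	Initiator bool
}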

lnwire/stfu.go
if q.received {
return false,
fmt.Errorf(
"stfu already received for channel %s",
Member

If we promote these to either an error variable or a simple error struct, then unit tests can use errors.As and errors.Is, etc.

The caller/driver of the state machine would then also be able to use a switch to handle the error case.
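
A hedged sketch of that pattern (identifiers here are illustrative, not the ones used in the PR; assumes the standard "errors" and "fmt" packages are imported):

// Sketch of the sentinel-error pattern suggested above; identifiers are
// illustrative only.
var ErrStfuAlreadyRcvd = errors.New("stfu already received")

type quiescerStub struct {
	received bool
	chanID   string
}

func (q *quiescerStub) recvStfu() error {
	if q.received {
		// Wrap the sentinel so the message keeps the channel context
		// while callers can still match on the error identity.
		return fmt.Errorf("%w for channel %s", ErrStfuAlreadyRcvd, q.chanID)
	}
	q.received = true
	return nil
}

Unit tests and the state-machine driver can then branch with errors.Is(err, ErrStfuAlreadyRcvd) (or a switch over wrapped sentinels) rather than matching on error strings.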

@@ -5340,6 +5340,39 @@ func (lc *LightningChannel) NumLocalUpdatesPendingOnRemote() uint64 {
return lc.localUpdateLog.logIndex - lastRemoteCommit.ourMessageIndex
}

// NumLocalUpdatesPendingOnLocal returns the number of local updates that still
// need to be applied to the local commitment tx.
func (lc *LightningChannel) NumLocalUpdatesPendingOnLocal() uint64 {
Member

Isn't it enough to simply know if we owe a commitment or if they do? OweCommitment and NeedCommitment seem to cover those cases. Still getting through the diff however.

Collaborator Author

Unfortunately, no. See this. There is still some instability in the spec discussion, so the details of this PR aren't ultra high priority, but in the interest of not wasting time I've opted to implement it to the best of my understanding, knowing full well that we may wish to throw some of this away.

I would prefer that this code change wasn't needed, however, as written, the best interpretation I have of the spec requires this.
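
In other words, under this reading of the spec the readiness check needs both counters. A minimal hedged sketch of the predicate (not the actual code in the diff):

// Hedged sketch of the readiness check implied above: under this reading
// of the spec, stfu may only be sent once no updates are pending on
// either commitment transaction, hence the need for a local-side counter
// in addition to NumLocalUpdatesPendingOnRemote.
func readyToSendStfu(pendingOnLocal, pendingOnRemote uint64) bool {
	return pendingOnLocal == 0 && pendingOnRemote == 0
}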

htlcswitch/link.go (outdated)
if finished {
return
}

Member

I would've thought we'd have some logic here to signal to the switch we're not eligible to forward. Maybe that's later on, still getting through the diff.

Collaborator Author

This is handled implicitly in the EligibleToForward query.

@@ -638,7 +638,8 @@ func (l *channelLink) EligibleToForward() bool {
 	return l.channel.RemoteNextRevocation() != nil &&
 		l.ShortChanID() != hop.Source &&
 		l.isReestablished() &&
-		!l.IsDraining(Outgoing)
+		!l.IsDraining(Outgoing) &&
+		l.quiescer.canSendUpdates()
Member

👍

Seems like IsDraining could just call canSendUpdates within the impl? FWIW, missing some partial context from that other PR.

Collaborator Author

There is some slight nuance here where adds are a subset of updates. Intuitively, "draining" means we can't send adds, but we can send removes (fulfill/fail) and fee updates. canSendUpdates refers to all channel updates. If we change IsDraining (now IsFlushing) to CanSendAdds then there are some ways we can consolidate this. I generally try and name things to maximize future readers' ability to understand.

@@ -1652,6 +1653,17 @@ func (l *channelLink) handleDownstreamUpdateAdd(pkt *htlcPacket) error {
)
}

// If the channel is quiescent then we issue a temporary channel failure
// and bounce it.
if !l.quiescer.canSendUpdates() {
Member

Same above re folding into IsDraining.

Member

I also wonder if we can fashion things s.t. underneath everything uses the quiescer, but depending on the input signal/event, it may or may not send/expect the stfu message.

Collaborator Author

I think your other idea of having a more flexible TrafficControl interface is the better route. Quiescence is all or nothing. Flushing is about preventing adds only. If you want to consolidate we should choose a flexible core and then we can add some convenience language over the top of it.
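
A hedged sketch of what such a consolidated core could look like (the TrafficControl name comes from this thread; the method set is illustrative, not what this PR implements):

// Hypothetical TrafficControl-style interface; the method set below is
// illustrative only.
type TrafficControl interface {
	// CanSendAdds is false while the channel is flushing or quiescent:
	// flushing only blocks new update_add_htlc messages.
	CanSendAdds() bool

	// CanSendUpdates is false only while quiescent: quiescence blocks
	// all update messages, including settles, fails and fee updates.
	CanSendUpdates() bool

	// CanRecvUpdates mirrors CanSendUpdates for the receive direction.
	CanRecvUpdates() bool
}

Convenience helpers like IsFlushing could then be layered on top of this flexible core.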

@@ -1738,6 +1750,14 @@ func (l *channelLink) handleDownstreamPkt(pkt *htlcPacket) {
_ = l.handleDownstreamUpdateAdd(pkt)

case *lnwire.UpdateFulfillHTLC:
if !l.quiescer.canSendUpdates() {
Member

Hmm, this is one of those "never should happen" scenarios, right? I guess it can happen given a slim concurrency window where we mark state as flushing, but the items are already in the mailbox.

Def need to test this out more, but my understanding is that these are un-ack'd, so they'll sit in the mailbox, to eventually be retransmitted once the connection recycles.

Collaborator Author

Yes, but I believe every subsystem needs to be responsible for ensuring its own consistency, so it is included for completeness. As we tease these systems apart a bit better, I think we can get clearer about the caller's responsibilities.

This code is a living system and I'd rather have some redundant checks than have someone change something else and break a fundamental guarantee in a place they didn't touch.

@@ -2035,6 +2049,20 @@ func (l *channelLink) handleUpstreamMsg(msg lnwire.Message) {
"assigning index: %v", msg.PaymentHash[:], index)

case *lnwire.UpdateFulfillHTLC:
if !l.quiescer.canRecvUpdates() {
l.fail(
Member

Hmm, I think we'd still want them to be able to send settle+fail messages. I might be misunderstanding the current spec draft though, will double check there.

Collaborator Author

From a game theoretic POV, maybe. Implementing the spec as-written, though, requires that I treat this scenario as a protocol violation.


coderabbitai bot commented Jan 24, 2024

Review skipped: auto reviews are limited to the llm-review label.

Walkthrough

The implementation of the Quiescence (stfu) protocol introduces a new mechanism for initiating quiescence on a link via RPC, along with improved channel synchronization and error handling. It includes a state machine for managing the quiescence protocol, new RPC methods for testing purposes, and extensions to existing structures to support these features. This change is pivotal for the advancement of Dynamic Commitments within the Lightning Network.

Changes

  • htlcswitch/...: Introduced InitStfu() method, quiescence protocol logic, and related error handling.
  • itest/...: Added test cases for validating the quiescence protocol.
  • lnrpc/...: New RPC method Quiesce and related entities for quiescence protocol handling.
  • lnwire/...: Added MsgStfu message type and quiescence-related feature bits.
  • peer/...: Handling of lnwire.Stfu messages.

Assessment against linked issues

  • Implement Quiescence (stfu) as a prerequisite for Dynamic Commitments (#8260): addressed.
  • Track state of Dynamic Commitments and upgrade channels to Taproot Channels (#7878): not addressed; the PR focuses on the Quiescence protocol, not directly on Dynamic Commitments or Taproot Channels.
  • Address retransmission of shutdown message upon reconnection (#8397): not addressed; this PR does not address the retransmission issues described.
  • Improve handling of dust HTLCs in channel closure (#7969): not addressed; the changes are unrelated to dust HTLC handling.


🐇✨
In the land of code and wire,
A rabbit hopped, with dreams so dire.
"Let's quiesce," it said, with glee,
For quieter channels, we all agree.
With stfu in place, and tests to run,
Our Lightning paths are second to none.
🌩️🐰💻




In this commit we defer processRemoteAdds using a new mechanism on
the quiescer where we capture a closure that needs to be run. We
do this because we need to avoid the scenario where we send back
immediate resolutions to the newly added HTLCs when quiescent as
it is a protocol violation. It is not enough for us to simply defer
sending the messages since the purpose of quiescence itself is to
have well-defined and agreed upon channel state. If, for whatever
reason, the node (or connection) is restarted between when these
hooks are captured and when they are ultimately run, they will
be resolved by the resolveFwdPkgs logic when the link comes back
up.

In a future commit we will explicitly call the quiescer's resume
method when it is OK for htlc traffic to commence.
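
A minimal sketch of that capture-and-resume mechanism, assuming a simple mutex-guarded hook list (the actual quiescer in the diff may be structured differently):

// Minimal sketch of the capture-and-resume idea described above; assumes
// the standard "sync" package is imported. The real quiescer may differ.
type hookQueue struct {
	mu        sync.Mutex
	quiescent bool
	hooks     []func()
}

// deferOrRun runs f immediately when the channel is not quiescent, and
// otherwise captures it to be replayed on resume. Hooks are deliberately
// not persisted: if the node restarts before resume, the deferred work is
// redone by the normal forwarding-package resolution when the link comes
// back up.
func (h *hookQueue) deferOrRun(f func()) {
	h.mu.Lock()
	defer h.mu.Unlock()

	if !h.quiescent {
		f()
		return
	}
	h.hooks = append(h.hooks, f)
}

// resume clears the quiescent flag and runs every captured hook in order.
func (h *hookQueue) resume() {
	h.mu.Lock()
	hooks := h.hooks
	h.hooks = nil
	h.quiescent = false
	h.mu.Unlock()

	for _, f := range hooks {
		f()
	}
}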