Support third-party watchtowers in persistence pipeline #2337

Merged

Conversation


@alecchendev (Contributor) commented Jun 5, 2023

Closes #1776. This PR aims to make it easier to form a justice/penalty transaction within the channel monitor persistence pipeline, which would help with building third-party watchtowers to support LDK-based nodes.

An internal test implementation of Persist is provided to show that we can form a justice tx after these changes, as an example of how I'd imagine a user implementing watchtower-like functionality.

Channel monitor persistence seems like the best place to add watchtower support, since it gives us control over pausing/resuming normal channel operation depending on whether persistence at the watchtower level succeeds or fails, so that's mainly where this PR focuses.

Note: when I refer to "watchtowers" here, I'm mainly referring to third-party watchtowers, where a node would send a pre-signed, encrypted justice transaction to an externally run watchtower, e.g. teos. This API is meant to make it easier to build the client-side support for these kinds of watchtowers.
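
As a rough sketch of how I'd imagine a user wiring this up (illustrative only; `TowerPersister` and the commented steps are hypothetical, and the `Persist` signatures shown are simply the trait's current two methods):

use lightning::chain::ChannelMonitorUpdateStatus;
use lightning::chain::chainmonitor::{MonitorUpdateId, Persist};
use lightning::chain::channelmonitor::{ChannelMonitor, ChannelMonitorUpdate};
use lightning::chain::transaction::OutPoint;
use lightning::sign::InMemorySigner;

// Hypothetical wrapper around a "real" persister; the tower-specific steps are
// only sketched in comments.
struct TowerPersister<P> {
    inner: P,
    // ...plus whatever per-channel state the tower client needs to track pending
    // counterparty commitments until their revocation secrets arrive.
}

impl<P: Persist<InMemorySigner>> Persist<InMemorySigner> for TowerPersister<P> {
    fn persist_new_channel(&self, funding_txo: OutPoint, data: &ChannelMonitor<InMemorySigner>,
        update_id: MonitorUpdateId) -> ChannelMonitorUpdateStatus
    {
        // Nothing tower-specific yet; see the to-do below about the very first
        // counterparty commitment not flowing through an update.
        self.inner.persist_new_channel(funding_txo, data, update_id)
    }

    fn update_persisted_channel(&self, funding_txo: OutPoint, update: Option<&ChannelMonitorUpdate>,
        data: &ChannelMonitor<InMemorySigner>, update_id: MonitorUpdateId) -> ChannelMonitorUpdateStatus
    {
        if update.is_some() {
            // 1. Record any new counterparty commitment data exposed by this update.
            // 2. Once a later update reveals the matching revocation secret, build and
            //    sign the justice tx, encrypt it, and upload it to the tower.
            // 3. If the upload must finish before the channel resumes, return
            //    `ChannelMonitorUpdateStatus::InProgress` and complete it asynchronously.
        }
        self.inner.update_persisted_channel(funding_txo, update, data, update_id)
    }
}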

To do

  • Accommodate the first monitor update not being sent through persistence
    • Explore if refactoring this would be viable
    • OR add some state to ChannelMonitor and make sure it gets cleaned up after the initial state

@alecchendev (Contributor, Author)

Rewrote a bunch of this with a new approach which I think is better now. Still figuring out exactly the right API. I tried to elaborate in the commit messages on my thought process so far.

Also I still need to figure out how to best accommodate the very first monitor update with the counterparty's commitment, whether that be storing some state in the ChannelMonitor that gets cleaned up, or possibly trying to refactor to push the update through persistence, or something else.

@codecov-commenter commented Jun 16, 2023

Codecov Report

Patch coverage: 96.82% and project coverage change: +0.03% 🎉

Comparison is base (0c25046) 90.53% compared to head (b20b1db) 90.56%.
Report is 14 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2337      +/-   ##
==========================================
+ Coverage   90.53%   90.56%   +0.03%     
==========================================
  Files         107      107              
  Lines       56923    57171     +248     
==========================================
+ Hits        51533    51776     +243     
- Misses       5390     5395       +5     
Files Changed Coverage Δ
lightning/src/chain/chainmonitor.rs 95.03% <ø> (ø)
lightning/src/chain/channelmonitor.rs 94.63% <92.68%> (-0.10%) ⬇️
lightning/src/ln/chan_utils.rs 95.04% <97.16%> (+0.42%) ⬆️
lightning/src/util/test_utils.rs 73.61% <98.18%> (+2.14%) ⬆️
lightning/src/ln/channel.rs 89.82% <100.00%> (+0.04%) ⬆️
lightning/src/ln/functional_test_utils.rs 88.86% <100.00%> (-0.04%) ⬇️
lightning/src/ln/functional_tests.rs 98.17% <100.00%> (+0.01%) ⬆️

... and 3 files with indirect coverage changes

@ariard left a comment

Still WIP, figuring out what is the best API for this. I currently have an implementation of Persist that can form a justice tx after exposing a few things on ChannelMonitor as an example of how I'd imagine a user being able to implement some sort of watchtower support. Open to feedback about how this should work.

I think it would benefit the review of this PR to have a more detailed description of how you assume the third-party watchtower flow to work.

For example, whether we assume “local” watchtowers or “outsourced” ones (i.e. run by an external entity, where the pre-signed justice transactions are potentially encrypted thanks to BOLT3’s obscured commitment number), and whether the flow should work for a basic watchtower (justice transactions only) or a monitor replica (justice txns plus time-sensitive ones like HTLC claims); see our GLOSSARY here.

There is a historical BOLT13 draft on how watchtowers might work: https://github.com/sr-gi/bolt13/blob/master/13-watchtowers.md. However, to the best of my knowledge there is no documentation on watchtower architecture (which would be helpful not only for cross-implementation compatibility, but also to guarantee compatibility between versions of LDK).

let their_per_commitment_point = PublicKey::from_secret_key(&self.onchain_tx_handler.secp_ctx, &per_commitment_key);
let revokeable_redeemscript = self.revokeable_redeemscript(&their_per_commitment_point);

let sig = self.onchain_tx_handler.signer.sign_justice_revoked_output(&justice_tx, input_idx, value, &per_commitment_key, &self.onchain_tx_handler.secp_ctx)?;
@ariard

Between the two approaches mentioned, “Get counterparty commitment txs from monitor update” and “Get revoked output data from update, build/sign separately”, I think one of the determining factors in picking the most advantageous one is the flexibility preserved for the watchtower.

Namely, there are a few characteristics that a watchtower signing policy can enforce:

  • the amount of fees paid by the justice_tx
  • the destination scriptPubkey
  • the claim aggregation of one or more RevokeableOutput

In terms of a potential watchtower signing policy, from my understanding the two approaches are equivalent:

  • with approach 1, as we’re calling into OnchainTxHandler’s signer, a signing policy can be implemented behind EcdsaChannelSigner::sign_justice_revoked_output
  • with approach 2, build_justice_tx is a pub method, so the caller can run its checks on the Transaction and then give it back to sign_justice_tx.

As we have already refactored our signing interface to let custom signing policies be implemented there, it is unclear to me what the other trade-offs between the two approaches are. Do we want the transaction broadcast to be managed by the watchtower's own transaction-relay module? Or rather extend OnchainTxHandler to have the justice_tx rescheduled by the watchtower?
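
For illustration, a tower-client-side policy check under approach 2 might look roughly like this (`justice_tx_passes_policy` is a hypothetical helper, not LDK API); it enforces the destination script and caps the fee before the transaction is handed back for signing:

use bitcoin::{Script, Transaction};

// Hypothetical helper, not LDK API: enforce a simple watchtower signing policy on
// an unsigned justice tx before handing it back to the signer (approach 2 above).
fn justice_tx_passes_policy(
    justice_tx: &Transaction, revoked_output_value_sat: u64,
    expected_destination: &Script, max_fee_sat: u64,
) -> bool {
    // Expect a single output sweeping the revoked funds to a script we control.
    if justice_tx.output.len() != 1 || justice_tx.output[0].script_pubkey != *expected_destination {
        return false;
    }
    // The fee is whatever portion of the revoked output is not swept.
    revoked_output_value_sat.saturating_sub(justice_tx.output[0].value) <= max_fee_sat
}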

@alecchendev (Contributor, Author)

Thanks for this. I've mainly been thinking of this API in terms of what will be easiest/most intuitive for a user building a watchtower without exposing too many unnecessary details, while still allowing for some flexibility in terms of transaction creation (in case in the future a watchtower module wanted to do anything beyond the most basic justice tx, e.g. add an output to the justice tx like mentioned here or claiming revoked HTLC outputs). This is where I was thinking about whether we should expose the full commitment transaction data in approach 1--where they'd have more data needed to possibly claim other outputs, at the expense of possibly offering up excess data--versus just exposing the counterparty's to_local revokable output info in approach 2 (or some combination of both).

Do we want the transaction broadcast to be managed by the watchtower's own transaction-relay module? Or rather extend OnchainTxHandler to have the justice_tx rescheduled by the watchtower?

As I've had the basic "outsourced" watchtower in mind for this, I imagined that this would expose the necessary information to build, sign, and encrypt the justice tx, and send it to the watchtower server which would take care of broadcasting the tx separately.

@ariard

(in case in the future a watchtower module wanted to do anything beyond the most basic justice tx, e.g. add an output to the justice tx like mentioned here or claiming revoked HTLC outputs).

Yes, adding an output as a “service bounty” for the outsourced watchtower makes a lot of sense (or for fee-bumping purposes if you wanna do CPFP). Note, I would favor exposing the full commitment transaction data; that way the watchtower can build claims on the revoked HTLC outputs without waiting for counterparty confirmation of the second-stage HTLC transactions (which might never happen!). The anchor output could be swept by the watchtower too, as recommended by BOLT3 after the CSV of 16: https://github.com/lightning/bolts/blob/master/03-transactions.md#to_local_anchor-and-to_remote_anchor-output-option_anchors (not punishment per se, though, as you have the infrastructure laid out).

As I've had the basic "outsourced" watchtower in mind for this, I imagined that this would expose the necessary information to build, sign, and encrypt the justice tx, and send it to the watchtower server which would take care of broadcasting the transaction separately.

“Outsourced” watchtower sounds good. There is one design option moving forward: making OnchainTxHandler an interface to which transaction issuers (watchtower, dual-funding RBFing) can subscribe for transaction broadcast. A robust transaction rebroadcast and fee-bumping module is easier said than done :) (especially with all the transaction relay policy maintenance).

@alecchendev (Contributor, Author) left a comment

I think it would benefit the review of this PR to have a more detailed description of how you assume the third-party watchtower flow to work.

Ah thanks, I'll add that to the PR description. I've been mainly designing this with the "basic" (justice tx-only) "outsourced" (pre-signed, encrypted justice tx sent to an external entity) watchtower in mind.

sr-gi added a commit to sr-gi/ldk-sample that referenced this pull request Jul 7, 2023
@TheBlueMatt (Collaborator) left a comment

Sorry for the delay, this is looking pretty good to me in general, I don't think we need to worry about any other watchtower models here - the only other realistic one is "copies of my ChannelMonitor", which is already doable :).

sr-gi added commits to sr-gi/ldk-sample that referenced this pull request Jul 10, 2023

@sr-gi (Contributor) commented Jul 10, 2023

I've tried this downstream on ldk-sample. The workflow, from an API standpoint, is what I'd expect.

If I've got this right, for a client to implement the functionality, something like WatchtowerPersister would need to be implemented so the state can be backed up to the remote tower. In my test, I have just logged the methods on the persister and tried this on regtest against a CLN node. In a prod setup, it would make more sense to have a queue that the client process can subscribe to, rather than a map, but I guess that's up to the implementer of the persister to pick whatever they find more useful.

Here's the patched version of ldk-sample building on top of @tnull's 0.0.116 update WIP and pulling from a custom rust-lightning with this patch: https://github.com/sr-gi/ldk-sample/tree/ldk-towers
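
For what it's worth, the queue could be as simple as an mpsc channel that the persister pushes finalized justice transactions into, with a background task draining it and uploading to the tower. A minimal sketch (`upload_to_tower` is a stand-in for the client's transport, not an LDK or teos API):

use std::sync::mpsc;
use std::thread;

use bitcoin::Transaction;

// Stand-in for the client's transport to the tower; not an LDK API.
fn upload_to_tower(_justice_tx: &Transaction) { /* encrypt + send to the tower */ }

// The persister keeps the `Sender` and pushes each signed justice tx into it;
// the spawned thread drains the queue and uploads.
fn spawn_tower_uploader() -> mpsc::Sender<Transaction> {
    let (queue_tx, queue_rx) = mpsc::channel::<Transaction>();
    thread::spawn(move || {
        while let Ok(justice_tx) = queue_rx.recv() {
            upload_to_tower(&justice_tx);
        }
    });
    queue_tx
}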

@alecchendev (Contributor, Author)

Thanks for testing it out @sr-gi!

If I've got this right, for a client to implement the functionality something like WatchtowerPersister would need to be implemented so the state can be backed up to the remote tower.

Yep, your patch looks like how I was imagining it to be used

In a prod setup, it would make more sense to have a queue that the client process can subscribe to, rather than a map, but I guess that's up to the implementer of the persister to pick whatever they find more useful.

Makes sense, yea I just used a map off the top of my head for testing :)

@ariard commented Jul 11, 2023

Concept ACK. I'd favor approach 1 over approach 2, as exposing the full commitment data allows more flexibility for the outsourced tower service (e.g. a fee-bumping output).

@alecchendev alecchendev force-pushed the 2023-06-watchtower-support branch 2 times, most recently from 444db49 to 8bfbe68 Compare July 12, 2023 21:18
@alecchendev (Contributor, Author)

Rebased, cleaned things up, added docs, and reworked some things according to all the feedback so far.

@alecchendev alecchendev marked this pull request as ready for review July 12, 2023 21:35
@alecchendev alecchendev force-pushed the 2023-06-watchtower-support branch 2 times, most recently from 7e3dd9a to c5f5f2b Compare July 31, 2023 16:13
@alecchendev (Contributor, Author)

Just fixed a docs build error

@alecchendev alecchendev force-pushed the 2023-06-watchtower-support branch 2 times, most recently from 0e58741 to e25593d Compare July 31, 2023 19:46
@alecchendev (Contributor, Author)

Thanks for all the feedback. CI is gonna fail because of certain fixups spanning multiple commits - let me know when I should squash since there are quite a few :)

@TheBlueMatt (Collaborator)

Feel free to squash imo, will take another look after lunch but I think this is good.

@alecchendev (Contributor, Author)

Squashed without further changes

@alecchendev (Contributor, Author)

Just simplified filtering the dust outputs and removed the commit adding counterparty_dust_limit_satoshis, squashed immediately

Comment on lines +1458 to +1463
/// This method will only succeed if this monitor has received the revocation secret for the
/// provided `commitment_number`. If a commitment number is provided that does not correspond
/// to the commitment transaction being revoked, this will return a signed transaction, but
/// the signature will not be valid.
Contributor

I'd imagine this would be pretty hard to mess up, but if we also require they pass in the prevout, we could check the re-derived witness script corresponds to it and return an error.
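
For illustration, such a check could be as simple as the following hypothetical helper, comparing the P2WSH of the re-derived witness script against the prevout being spent:

use bitcoin::{Script, TxOut};

// Hypothetical helper: verify the re-derived revokeable witness script actually
// corresponds to the prevout the justice tx is spending.
fn witness_script_matches_prevout(witness_script: &Script, prevout: &TxOut) -> bool {
    Script::new_v0_p2wsh(&witness_script.wscript_hash()) == prevout.script_pubkey
}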

@alecchendev (Contributor, Author)

yea hmm, I feel like they'd have to really go out of their way to spend the wrong output/commitment number since they should just be using our helper(s). I don't feel that strongly either way, do you feel like this will meaningfully catch user errors?

@TheBlueMatt (Collaborator) left a comment

LGTM, some tiny nits that you can address while squashing, IMO, plus we should probably have some util to do the tracking for users who want it.

) -> chain::ChannelMonitorUpdateStatus {
let res = self.persister.update_persisted_channel(funding_txo, update, data, update_id);

if let Some(update) = update {
@TheBlueMatt (Collaborator)

As a followup, can we expose the code to track unsigned_justice_tx_data and generate the ultimate signed transactions?

@alecchendev (Contributor, Author)

Sure, will try and get to this over the next couple of days

@alecchendev (Contributor, Author)

Sorry for the delay here, getting to this now. Were you thinking to expose basically a cleaned-up version of this test util? I'm wondering who this would be for, since I imagine someone using this interface would want to return a different ChannelMonitorUpdateStatus depending on however they're using these signed justice txs (e.g. sending to a remote watchtower/other server).

Hmm although... I could extend this util to take some user-implemented interface that takes a signed justice tx, does something, and returns a ChannelMonitorUpdateStatus, so our util handles the tracking and the user can focus on handling what to do with the tx? Would that make sense?

@TheBlueMatt (Collaborator)

Yea basically. There's quite a bit of code required here to track transactions and their value and then eventually get them signed. I don't think the interface needs to be the same, however - we don't need to actually do the returning of the status/impl Persist, but rather we can just have a method that says "I got an update, please, state object, handle that update and give me any finalized, broadcastable transactions that I should hand to the watchtower."

Sadly, this state object is going to need to be able to persist itself in some way or another - if we get a commitment transaction then restart, then see the revocation data for it, we'll need to be able to get that revocation data to the watchtower combined with the commitment tx.
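
A rough sketch of the shape such a state object might take (names here are illustrative, not what was ultimately merged):

use bitcoin::Transaction;
use lightning::chain::channelmonitor::{ChannelMonitor, ChannelMonitorUpdate};
use lightning::sign::InMemorySigner;

// Illustrative interface only, not LDK API.
trait JusticeTxTracker {
    // Feed in each monitor update (plus the monitor, for signing). Returns any
    // justice transactions that became fully signed and broadcastable as a result,
    // e.g. because this update revealed the revocation secret for an earlier
    // counterparty commitment.
    fn handle_update(
        &mut self, monitor: &ChannelMonitor<InMemorySigner>, update: &ChannelMonitorUpdate,
    ) -> Vec<Transaction>;

    // Serialize pending state (commitments seen but not yet revoked) so it
    // survives restarts, per the persistence concern above.
    fn encode_pending_state(&self) -> Vec<u8>;
}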

@alecchendev (Contributor, Author)

Ah ok that makes sense!

This adds the feerate and local and remote output values to this channel
monitor update step so that a monitor can reconstruct the counterparty's
commitment transaction from an update. These commitment transactions
will be exposed to users in the following commits to support third-party
watchtowers in the persistence pipeline.

With only the HTLC outputs currently available in the monitor update, we
can tell how much of the channel balance is in-flight and towards which
side; however, it doesn't tell us the amount that resides on either side.
Because of dust, we can't reliably derive the remote value from the
local value and vice versa (trimmed HTLC amounts are folded into the
commitment fee), so these seem to be the minimum fields that need to be
added.

@ariard left a comment

Code Review ACK a21f242, haven't looked deeply into the tests.

///
/// It is expected that a watchtower client may use this method to retrieve the latest counterparty
/// commitment transaction(s), and then hold the necessary data until a later update in which
/// the monitor has been updated with the corresponding revocation data, at which point the
@ariard

nit: s/revocation data/per_commitment_secret/g

@TheBlueMatt (Collaborator)

Please no, let's keep our docs assuming users don't know all the inner details of the BOLTs, including variable names like this.

@ariard

More confusing to have loose naming imho.

/// The built transaction will allow fee bumping with RBF, and this method takes
/// `feerate_per_kw` as an input such that multiple copies of a justice transaction at different
/// fee rates may be built.
pub fn build_to_local_justice_tx(&self, feerate_per_kw: u64, destination_script: Script)
@ariard

This might sound obvious, but the destination_script and all descendant scripts should be controlled only by the local lightning node, to avoid all kinds of boring pinning. In case you'd like to add more docs.

@alecchendev (Contributor, Author)

Ah, I think I'll just leave it how it is; it shouldn't be too big of a concern given this is expected to just help with a fairly specific use case.

@alecchendev (Contributor, Author)

Oops forgot to say I rebased because of conflicts

@TheBlueMatt (Collaborator)

Oops, acked before you pushed fixups, feel free to just go ahead and squash I think.

Upon creating a channel monitor, it is provided with the initial
counterparty commitment transaction info directly before the very first
time it is persisted. Because of this, the very first counterparty
commitment is not seen as an update in the persistence pipeline, and so
our previous changes to the monitor and updates cannot be used to
reconstruct this commitment.

To be able to expose the counterparty's transaction for the very first
commitment, we add a thin wrapper around
`provide_latest_counterparty_commitment_tx`, which stores the data
needed to reconstruct the initial commitment transaction in the
monitor.

For watchtowers to be able to build justice transactions for our
counterparty's revoked commitments, they need to be able to find the
revokeable output to sweep. Here we cache `to_self_delay` in
`CommitmentTransaction` to allow finding this output on the struct
directly. We also add a simple helper method to aid in building the
initial spending transaction.

This also adds a unit test for both of these helpers, and
refactors a bit of a previous `CommitmentTransaction` unit test to make
adding these easier.

Here we implement `WatchtowerPersister`, which provides a test-only
sample implementation of `Persist`, similar to how we might imagine a
user building watchtower-like functionality in the persistence pipeline.

We test that the `WatchtowerPersister` is able to successfully build and
sign a valid justice transaction that sweeps a counterparty's funds if
they broadcast an old commitment.
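
For reference, the client-side flow this test exercises looks roughly like the following sketch. The helper names `counterparty_commitment_txs_from_update` and `sign_to_local_justice_tx` are assumed here from this PR's additions (`build_to_local_justice_tx` appears in the diff above); treat exact signatures as approximate:

use bitcoin::{Script, Transaction};
use lightning::chain::channelmonitor::{ChannelMonitor, ChannelMonitorUpdate};
use lightning::sign::InMemorySigner;

// Build signed justice txs for any counterparty commitments revealed by `update`.
// Signing only succeeds once the monitor holds the revocation secret for the
// corresponding commitment number, so commitments from earlier updates may need
// to be retried when a later update arrives.
fn justice_txs_for_update(
    monitor: &ChannelMonitor<InMemorySigner>, update: &ChannelMonitorUpdate,
    destination_script: &Script, feerate_per_kw: u64,
) -> Vec<Transaction> {
    monitor.counterparty_commitment_txs_from_update(update).into_iter().filter_map(|commitment_tx| {
        let trusted_tx = commitment_tx.trust();
        // Build an unsigned sweep of the revokeable `to_local` output.
        let justice_tx = trusted_tx
            .build_to_local_justice_tx(feerate_per_kw, destination_script.clone()).ok()?;
        // The justice tx spends the revokeable output with its single input; look up
        // that output's value in the built commitment transaction.
        let vout = justice_tx.input[0].previous_output.vout as usize;
        let value = trusted_tx.built_transaction().transaction.output[vout].value;
        monitor.sign_to_local_justice_tx(justice_tx, 0, value, trusted_tx.commitment_number()).ok()
    }).collect()
}
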
@alecchendev (Contributor, Author)

Squashed with no further changes

@TheBlueMatt merged commit 32d6e91 into lightningdevkit:main on Aug 23, 2023
14 checks passed

Successfully merging this pull request may close these issues.

Add a commitment revocation hook ⚡🗼
6 participants