
Make cltv_expiry_delta configurable and reduce the min/default some #849

Merged

Conversation

TheBlueMatt
Collaborator

This PR is just the top commit but is based on #848 because it uses the intra-doc-linking and this is what made me go write 848.

We allow users to configure the to_self_delay, which is analogous to
the cltv_expiry_delta in terms of its security context, so we should
allow users to specify both.

We similarly bound it on the lower end, but reduce that bound
somewhat now that it is configurable.
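As a rough illustration of the bounding described above (hypothetical helper name; this is a sketch, not LDK's actual validation code), the configured value can simply be floored at the enforced minimum so users can ask for more security, never less:

```rust
/// Lower bound enforced on the configurable delta (36 blocks, per this PR).
/// The constant name mirrors the one in the diff; the helper is hypothetical.
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 6;

/// Effective delta: the user's configured value, never below the minimum,
/// so configuration can only increase security, not reduce it.
pub fn effective_cltv_expiry_delta(configured: u16) -> u16 {
    configured.max(MIN_CLTV_EXPIRY_DELTA)
}
```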

@codecov

codecov bot commented Mar 17, 2021

Codecov Report

Merging #849 (e640b93) into main (fba204b) will increase coverage by 0.05%.
The diff coverage is 83.33%.


@@            Coverage Diff             @@
##             main     #849      +/-   ##
==========================================
+ Coverage   90.61%   90.67%   +0.05%     
==========================================
  Files          51       51              
  Lines       27135    27140       +5     
==========================================
+ Hits        24589    24608      +19     
+ Misses       2546     2532      -14     
Impacted Files Coverage Δ
lightning/src/util/config.rs 47.61% <0.00%> (-1.17%) ⬇️
lightning/src/ln/channel.rs 87.13% <100.00%> (+0.01%) ⬆️
lightning/src/ln/channelmanager.rs 83.30% <100.00%> (+<0.01%) ⬆️
lightning/src/ln/functional_tests.rs 97.12% <0.00%> (+0.25%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@TheBlueMatt
Collaborator Author

Rebased after merge of 848.

/// commitment transaction at least once per this many blocks.
///
/// Default value: [`MIN_CLTV_EXPIRY_DELTA`] (currently 36), which we enforce as a minimum so
/// you can change the config to ask for more security, not less.
Contributor

Lol, I think we should've just stolen these docs for this discussion

@TheBlueMatt
Collaborator Author

Also pushed a commit which should further reduce the "big red Xs" we regularly see on codecov - in general we should only put up an X if we really intend to block merge on it, and we never do for coverage, though we could debate blocking merge if a patch has, e.g., <50% coverage (assuming codecov was smart about ignoring non-code bits, which I don't think it always is).

@TheBlueMatt
Collaborator Author

Ah, notably, codecov is completely confused by annotations, see channelmanager at https://github.com/rust-bitcoin/rust-lightning/pull/851/files, so I think we should ignore patch diff entirely as the new commit does.

Contributor

@jkczyz jkczyz left a comment

Overall looks good.

lightning/src/ln/channelmanager.rs (resolved)
lightning/src/util/config.rs (outdated, resolved)
// i.e. the node we forwarded the payment on to should always have enough room to reliably time out
// the HTLC via a full update_fail_htlc/commitment_signed dance before we hit the
// CLTV_CLAIM_BUFFER point (we static assert that it's at least 3 blocks more).
pub const MIN_CLTV_EXPIRY_DELTA: u16 = 6 * 6;

I don't think the comment matches exactly this timelock purpose.

IIRC, "CLTV_EXPIRY_DELTA serves to give the forwarding node (us/this_local_node) a block delay between an onchain HTLC timeout on the forward channel and expiration of an inbound HTLC on the backward channel. Differing settlement result (success on the forward channel, failure on the backward channel) is a loss of fund for the forwarding node."

I think 36 is quite low (consider the case of a few hours' power outage or node upgrade) and I'm more comfortable with the previous 72 blocks. IIRC, Eclair's default is 144, C-Lightning's is 20, and lnd's is 40. Also consider that this timelock won't bother mobile clients, as we can assume they won't forward HTLCs.
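The forwarding-safety property being discussed can be sketched as a simple check (hypothetical helper, not LDK's actual code): a node should only relay an HTLC if the incoming expiry exceeds the outgoing expiry by at least its cltv_expiry_delta, leaving room to settle on-chain on the forward channel before the backward HTLC expires.

```rust
/// Illustrative forwarding check (hypothetical; not LDK's actual code).
/// The incoming HTLC must expire at least `cltv_expiry_delta` blocks after
/// the outgoing one, so an on-chain timeout on the forward channel still
/// leaves time to fail the HTLC backward.
fn can_forward(incoming_cltv_expiry: u32, outgoing_cltv_expiry: u32, cltv_expiry_delta: u16) -> bool {
    incoming_cltv_expiry >= outgoing_cltv_expiry + cltv_expiry_delta as u32
}
```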

Collaborator Author

Hmm, I feel like 36 is probably a reasonable lower-bound to enforce, but maybe we can change the default to a higher value and allow the user to reduce it?

Collaborator Author

As to your comment about the purpose - are you referring to the comment or the documentation here? The comment, indeed, is just noting one additional requirement, not the purpose, but it's not part of the public documentation.


> Hmm, I feel like 36 is probably a reasonable lower-bound to enforce, but maybe we can change the default to a higher value and allow the user to reduce it?

Yes good idea to split between lower-bound/default value.

Reading the comment, I don't think the main requirement is about giving your counterparty a time buffer to relay the HTLC forward. Otherwise a single-digit number of blocks would be enough. It's about protecting the forwarding node against multiple risks, like lengthy onchain confirmation or an extended offline period.

Collaborator Author

I split the lower-bound/default some. Discussion on IRC (I think) concluded the wording is acceptable.

/// This is analogous to [`ChannelHandshakeConfig::our_to_self_delay`] but applies to in-flight
/// HTLC balance when a channel appears on-chain whereas
/// [`ChannelHandshakeConfig::our_to_self_delay`] applies to the remaining
/// (non-HTLC-encumbered) balance.

The analogy isn't straightforward.

The CSV delay protects against the risk of a revoked commitment state and offers the offended counterparty a delay to punish and claim back funds.

The CLTV delta protects against the risk of differing settlement results on your backward/forward channels and offers a routing node a delay to confirm the commitment and corresponding HTLC transaction before the incoming HTLC can be cancelled by the backward counterparty.

What purpose does it serve to document the CSV delay around the CLTV delta? It might confuse the reader more than anything else; you can set the two up quite independently. Though ideally both of them should account for how much you "trust" your counterparty...

Collaborator Author

Hmm, I'm open to better wording. The intent of the connection here is that both have similar "you must be online at least once per X" requirements, which I think is a useful connection for users. That said, when we handle #850 it becomes more complicated - "you must be online at least once per X, unless you have limited the htlc_in_flight amount to something you don't care about, in which case, we don't care about this".


I think your online-time requirement can be reduced to the smaller of the soonest HTLC expiration and the CSV delay. You should care about the smaller one.

I don't think the CLTV delta is the same as the "soonest HTLC expiration". What you should look at is the soonest-to-expire HTLC. If you're at block 100 and expiration is 120, your "must-be-back-every-X" dynamically decreases until reaching X - CLTV_CLAIM_BUFFER, at which point you must go online.

I think a lightweight online-time requirement is achievable for a non-forwarding node, but in that case you don't care about CLTV_DELTA.
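The dynamic deadline described here can be sketched as follows (hypothetical helper and illustrative buffer value; not LDK code): the node must return online before the soonest in-flight HTLC expiry, less a claim buffer that leaves time to broadcast and confirm the claim transaction.

```rust
/// Illustrative claim buffer (hypothetical value for this sketch).
const CLTV_CLAIM_BUFFER: u32 = 6;

/// Latest block height by which the node must be back online, or `None`
/// if there are no in-flight HTLCs imposing a deadline.
fn must_be_online_by(htlc_expiries: &[u32]) -> Option<u32> {
    htlc_expiries
        .iter()
        .min()
        .map(|soonest| soonest.saturating_sub(CLTV_CLAIM_BUFFER))
}
```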

Collaborator Author

Discussed this in a long conversation on IRC, but I think we ultimately agreed that it's OK as-is, plus mentioning RBF time.

IRC Convo Pasted Below
<ariard> BlueMatt: i dunno about your "must-be-back-online-every-X" reasoning around https://github.com/rust-bitcoin/rust-lightning/pull/849#discussion_r597995251
<BlueMatt> ariard: correct, but we don't expose the cltv time of all currently-in-flight htlcs to the user
<ariard> or you might allow to be back online every X but at some point you should be back lower than X
<ariard> otherwise you won't be on time for CLTV_CLAIM_BUFFER
<BlueMatt> and even if we did, it'd be race-y - you have to (a) disable forwarding, then (b) check the cltvs, then (c) shutdown, scheduling when to go back online
<ariard> like if it's block 100 and expiration=120, back every X=7 doesn't work
<BlueMatt> on the other hand, I think my statement is correct in that as long as you *are* online once every CLTV_DELTA blocks you're (mostly) ok
<BlueMatt> modulo desire to rbf, of course
<ariard> yeah modulo rbf
<BlueMatt> I could expland the comment and note that you should be online a ways more often to get rbf time, but I dont think trying to explain the full nuance of htlc timeouts is worth it
<ariard> hmmmm i think it depends when you start to go offline
<ariard> because if you go offline CLTV_DELTA + 10
<ariard> your model work if you assume you go offline as soon as you committed the htlc
<ariard> in practice you might be around for few blocks then disconnect
<BlueMatt> i think the solution to that is to have a forwarding_disabled flag
<BlueMatt> which we currently dont have (though you could set infinite fee, I think)
<BlueMatt> but, again, thats a lot of nuance that I dont think is worth communicating
<BlueMatt> if users really dont want to be online more often than once a week, they should set cltv_delta and csv_delay to 1 week and move on
<ariard> that's a lot of nuances for now
<BlueMatt> then they dont have to think about it
<ariard> they should set cltv_delta to a week for sure
<BlueMatt> if we try to explain how you have to track all your in-flight htlcs and think about when they time-out, then you have a lot of code for....nothing
<BlueMatt> when they should just turn up the delta and move on
<ariard> i think it's more about mobile-vs-routing-node
<ariard> like won't make sense to check your in-flight HTLCs for a mobile and decide your onliness requirement dynamically
<ariard> awful ux
<BlueMatt> somewhat, but if you're mobile, you can just jack up the cltv_delta
<BlueMatt> routing, the cltv_delta docs are correct now, cause you always have htlcs in flight
<BlueMatt> mobile, you dont care what it is cause you dont want to forward
<ariard> yeah that's my point
<BlueMatt> right, but I think in *either* case, the response you have to the docs is correct
<ariard> let's go forward with your reasoning
<BlueMatt> a mobile user reading it would increase the cltv_delta to something big (fine, they dont want to forward, fine outcome) and a routing node would read it and think about it the same as the csv_delay (which is correct)
<ariard> i'm still thinking if you're a big node, better to be always online and spot weird behaviors as soon as you can
<ariard> not exactly, depend risk of realization of each
<BlueMatt> yea, I mean I can also expand it to talk about rbf if you want
<ariard> account for timevalue
<ariard> fine for me as long as you mention the CLTV_CLAIM_BUFFER/rbf
<BlueMatt> ok, I'll note rbf
<BlueMatt> does the current wording not capture CLTV_CLAIM_BUFFER correctly?
<BlueMatt> ariard: ^?
<ariard> just add a Big Comment on #845
<ariard> let me see
<ariard> BlueMatt: "plus some margin to allow us enough time to broadcast and confirm a transaction" minus some margin ?
<BlueMatt> yea
<BlueMatt> well, yes, sorry
<ariard> otherwise looks good to mw
<BlueMatt> cool, thanks
<BlueMatt> ariard: can you ack 849?
<BlueMatt> I split the default/min and tweaked the wording

///
/// Thus, for HTLC-encumbered balances to be enforced on-chain, when a channel is force-closed,
/// we (or one of our watchtowers) MUST be online to check for broadcast of the current
/// commitment transaction at least once per this many blocks.

Shouldn't it be at least once per this many blocks *minus* CLTV_CLAIM_BUFFER? You might have to fee-bump and rebroadcast a few times to confirm the commitment and its HTLCs.

Collaborator Author

I added a note that you need a buffer here and to our_to_self_delay.

@TheBlueMatt TheBlueMatt force-pushed the 2021-03-config-cltv-delta branch 2 times, most recently from 3b09a43 to b81aaf4 Compare March 19, 2021 23:36
@ariard ariard left a comment

Code Review ACK b81aaf4

lightning/src/util/config.rs (outdated, resolved)
@TheBlueMatt TheBlueMatt force-pushed the 2021-03-config-cltv-delta branch 2 times, most recently from fbfeca6 to 67088eb Compare March 20, 2021 00:14
@TheBlueMatt
Collaborator Author

TheBlueMatt commented Mar 20, 2021

Squashed, only diff is:

$ git diff-tree -U1 b81aaf4 e640b935f5bdf94a63e9a1103df9b5c9a5e8f4ed
diff --git a/lightning/src/util/config.rs b/lightning/src/util/config.rs
index a00b469c..4a9f3006 100644
--- a/lightning/src/util/config.rs
+++ b/lightning/src/util/config.rs
@@ -31,3 +31,3 @@ pub struct ChannelHandshakeConfig {
 	/// be online to check for revoked transactions on-chain at least once every our_to_self_delay
-	/// blocks (plus some margin to allow us enough time to broadcast and confirm a transaction,
+	/// blocks (minus some margin to allow us enough time to broadcast and confirm a transaction,
 	/// possibly with time in between to RBF the spending transaction).
@@ -176,3 +176,3 @@ pub struct ChannelConfig {
 	/// we (or one of our watchtowers) MUST be online to check for broadcast of the current
-	/// commitment transaction at least once per this many blocks (plus some margin to allow us
+	/// commitment transaction at least once per this many blocks (minus some margin to allow us
 	/// enough time to broadcast and confirm a transaction, possibly with time in between to RBF
@@ -184,3 +184,3 @@ pub struct ChannelConfig {
 	///
-	/// [`MIN_CLTV_EXPIRY_DELTA`]: ln::channelmanager::MIN_CLTV_EXPIRY_DELTA
+	/// [`MIN_CLTV_EXPIRY_DELTA`]: crate::ln::channelmanager::MIN_CLTV_EXPIRY_DELTA
 	pub cltv_expiry_delta: u16,
$

We allow users to configure the to_self_delay, which is analogous to
the cltv_expiry_delta in terms of its security context, so we should
allow users to specify both.

We similarly bound it on the lower end, but reduce that bound
somewhat now that it is configurable.
In some PRs, codecov gets mad that the coverage of the patch itself
is lower than the base. In most cases, we largely don't want a Big
Red X, at least as long as the total coverage has not gone down
substantially.
@jkczyz
Contributor

jkczyz commented Mar 20, 2021

ACK e640b93

@TheBlueMatt TheBlueMatt merged commit 80ee1da into lightningdevkit:main Mar 20, 2021