
htlcswitch: make max cltv expiry configurable and lower default #3348

Merged: 2 commits merged into lightningnetwork:master on Aug 2, 2019

Conversation

@wpaulino (Collaborator) commented Jul 27, 2019

The current value was based on the previous default CLTV delta of 144 blocks. This has been lowered to 40 since lnd v0.6.0-beta, making the current value of 5000 blocks a bit high. Lowering it to one week (1008 blocks) should be more than enough to account for the other major lightning implementations. Eclair currently has a default CLTV delta of 144, while c-lightning's is 14.

The value can be configured through the --max-cltv-expiry flag.

Fixes #3286.
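The acceptance check being made configurable here can be sketched roughly as follows. This is a minimal illustration, not lnd's actual htlcswitch code; the constant and function names are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// maxOutgoingCltvExpiry mirrors the default discussed in this PR: one week
// of blocks at roughly 10 minutes per block (6 * 24 * 7 = 1008).
const maxOutgoingCltvExpiry = 1008

// checkCltvExpiry is a hypothetical sketch of the acceptance check: an HTLC
// whose absolute expiry lies too far beyond the current block height is
// rejected rather than forwarded.
func checkCltvExpiry(htlcExpiry, currentHeight, maxExpiry uint32) error {
	if htlcExpiry > currentHeight+maxExpiry {
		return errors.New("htlc expiry exceeds max allowed expiry")
	}
	return nil
}

func main() {
	height := uint32(600000)
	// Within the one-week window: accepted.
	fmt.Println(checkCltvExpiry(height+500, height, maxOutgoingCltvExpiry))
	// Beyond it: rejected.
	fmt.Println(checkCltvExpiry(height+5000, height, maxOutgoingCltvExpiry))
}
```

In lnd itself the limit would come from the `--max-cltv-expiry` flag rather than a compile-time constant.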

wpaulino added 2 commits Jul 27, 2019
The current value was based on the previous default CLTV delta of 144
blocks. This has been lowered to 40 since lnd v0.6.0-beta, making the
current value of 5000 blocks a bit high. Lowering it to one week should
be more than enough to account for the other major lightning
implementations. Eclair currently has a default CLTV delta of 144, while
c-lightning's is 14.
@wpaulino wpaulino requested review from cfromknecht and Roasbeef as code owners Jul 27, 2019
@wpaulino wpaulino added this to the 0.8.0 milestone Jul 29, 2019
Member left a comment:

LGTM 🔑

// DefaultMaxOutgoingCltvExpiry is based on the maximum number of hops
// (20), the default cltv delta (40), and some extra margin to account
// for the other lightning implementations.
DefaultMaxOutgoingCltvExpiry = 1008

@cfromknecht (Collaborator) commented Jul 30, 2019
IMO cutting this value by 80% is far too drastic; I'd prefer to slowly reduce this value over time to prevent any unforeseen interactions with pathfinding and other implementations.

FWIW, after the variable-size onion payload is deployed, the max path length will be closer to 26. The extra margin included in the original value was 5000 - 144 * 20 = 2120 blocks. Adding that to, say, 40 * 26 would give a value of 3160. Somewhere around 3k is, I think, a more reasonable reduction. We can drop this again in the future if we don't see any regressions.
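The margin arithmetic above can be restated directly (my own restatement of the comment's numbers, not code from the PR):

```go
package main

import "fmt"

func main() {
	// Extra margin baked into the original 5000-block limit:
	// 5000 minus 20 hops at a 144-block delta.
	margin := 5000 - 144*20
	fmt.Println(margin) // 2120

	// A 26-hop path at the 40-block default delta, plus that same margin.
	proposed := 40*26 + margin
	fmt.Println(proposed) // 3160
}
```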

@Roasbeef (Member) commented Jul 30, 2019
3k would still result in a 20-day timelock for HTLCs that will be auto-accepted. That's even longer than our max CSV delay (where most of the funds in a channel ideally are). Those values are based on the max path length, which, AFAIK, based on the current graph diameter, is nowhere close to being hit. If there are nodes today that are routinely extending paths of length 20, then that likely indicates they're attempting to launch nuisance attacks on the network.

@cfromknecht (Collaborator) commented Jul 30, 2019
All of that makes sense. I'm only expressing concern over how dramatic the drop is, not over whether we should aim to end at this lower value.

@wpaulino (Author, Collaborator) commented Aug 1, 2019
One could also argue that the previous value didn't need to be so high.

@Roasbeef Roasbeef merged commit 9eb7237 into lightningnetwork:master Aug 2, 2019
1 of 2 checks passed
coverage/coveralls: Coverage decreased (-0.07%) to 60.722%
continuous-integration/travis-ci/pr: The Travis CI build passed
@wpaulino wpaulino deleted the wpaulino:max-cltv-expiry-config branch Aug 2, 2019
@joostjager (Collaborator) commented Oct 10, 2019

@wpaulino, when we receive a FailExpiryTooSoon in mission control, we blacklist all of the channels of that node. By lowering the default limit, your node is at risk of this happening. Is this intended?

A nicer, though much more involved, solution would be to have nodes announce their max CLTV (at the node or channel level), so we can take it into account during pathfinding.

@joostjager (Collaborator) commented Oct 10, 2019

The problem is also that you can get banned if your predecessors in the route have too high a CLTV delta. Maybe we should also apply a max CLTV limit during pathfinding. The restriction is already there; we just need to set a default.
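A pathfinding-side limit of the kind described here could look roughly like the following. This is a sketch with hypothetical types and names; lnd's actual router code differs:

```go
package main

import (
	"errors"
	"fmt"
)

// hop is a hypothetical per-hop policy; TimeLockDelta would come from each
// hop's channel update.
type hop struct {
	TimeLockDelta uint32
}

// totalCltv accumulates the hops' deltas on top of the final CLTV delta,
// the way a pathfinder builds up the absolute expiry of a candidate route.
func totalCltv(finalCltvDelta uint32, hops []hop) uint32 {
	total := finalCltvDelta
	for _, h := range hops {
		total += h.TimeLockDelta
	}
	return total
}

// checkRouteCltv rejects a candidate route whose accumulated time lock
// exceeds the configured limit, before any HTLC is ever sent.
func checkRouteCltv(total, limit uint32) error {
	if total > limit {
		return errors.New("route exceeds cltv limit")
	}
	return nil
}

func main() {
	final := uint32(40)
	route := []hop{{144}, {144}, {40}, {14}}
	total := totalCltv(final, route)
	fmt.Println(total)                       // 382
	fmt.Println(checkRouteCltv(total, 1008)) // <nil>
}
```

Filtering routes this way would keep a sender from ever constructing a route that a downstream node's max CLTV check would reject.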

@joostjager (Collaborator) commented Oct 10, 2019

Current distribution of time_lock_delta on mainnet:

time_lock_delta   occurrences
144               28612
40                16650
30                 7004
14                 2985
4                  2033
64                  607
20                  551
432                 297
1000                295
10                  253
72                  180
39                  156
300                 141
34                  137
6                   135
7                   131
142                 104
140                  69
100                  59
12                   49
43                   44
24                   31
60                   19
288                  14
420                  14
48                   13
80                   10
50                    9
143                   7
600                   6
576                   5
101                   4
120                   4
2016                  2
32                    2
36                    2
1440                  1
145                   1
16                    1
21                    1
250                   1
44                    1
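As a rough sanity check on these numbers (my own arithmetic, not from the thread), a maximum-length route built entirely from the most common delta accumulates a substantial total:

```go
package main

import "fmt"

func main() {
	// Most common time_lock_delta on mainnet per the table above,
	// multiplied by the 20-hop maximum path length.
	const commonDelta = 144
	const maxHops = 20
	fmt.Println(commonDelta * maxHops) // 2880
}
```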
@wpaulino (Author, Collaborator) commented Oct 10, 2019

> @wpaulino, when we receive a FailExpiryTooSoon in mission control, we blacklist all of the channels of that node. By lowering the default limit, your node is at risk of this happening. Is this intended?

I wasn't aware we had this sort of interpretation of a FailExpiryTooSoon error.

> A nicer, though much more involved, solution would be to have nodes announce their max CLTV (at the node or channel level), so we can take it into account during pathfinding.

I don't think there's an existing way to do this in the protocol?

> Current distribution of time_lock_delta on mainnet:

Looking at this, bumping the default to 2016 should be more than enough.

@joostjager (Collaborator) commented Oct 10, 2019

> Looking at this, bumping the default to 2016 should be more than enough.

Won't we keep running into the error, even with 2016 and long routes? Why do you think it is more than enough?

There is already a max CLTV limit restriction. We could also just change the default from no limit to 1000 or so.
