[ChannelRelay] Prioritize lowest capacity channels #1539

Merged
t-bast merged 3 commits into master from channel-relay-lowest-capacity on Oct 2, 2020

Conversation

t-bast
Member

@t-bast t-bast commented Sep 24, 2020

This makes it more difficult for attackers to "squat" high-capacity channels by sending HTLCs and then hodling them (filling up the 483 available pending HTLC slots).

It results in less locked liquidity during this kind of attack because we preserve the highest-capacity channels.

Note: I also changed the way we test channel updates in the first commit because I didn't like the dependency on watching for logs, but if you don't like it I can revert that.
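
To illustrate the relay selection change, here's a minimal Scala sketch (not eclair's actual code; `RelayCandidate` and `selectOutgoingChannel` are hypothetical names used purely for illustration): among the outgoing channels that can carry the HTLC, pick the lowest-capacity one first.

```scala
// Minimal sketch, not eclair's actual implementation.
object ChannelSelectionSketch {
  case class RelayCandidate(channelId: String, capacitySat: Long, availableBalanceMsat: Long)

  // Among the channels that can carry the HTLC, prefer the lowest-capacity one,
  // so that the highest-capacity channels are preserved if an attacker squats HTLC slots.
  def selectOutgoingChannel(candidates: Seq[RelayCandidate], amountMsat: Long): Option[RelayCandidate] =
    candidates
      .filter(_.availableBalanceMsat >= amountMsat)         // only channels that can actually relay the amount
      .sortBy(c => (c.capacitySat, c.availableBalanceMsat)) // lowest capacity first, then lowest balance
      .headOption
}
```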

Exposing the private wrapped messages to tests allows removing the dependency on capturing logs, which felt very brittle.
This makes it more difficult for attackers to "squat" high-capacity channels
by sending HTLCs and then hodling them.

It results in less locked liquidity during this kind of attack.
@t-bast t-bast requested a review from pm47 September 24, 2020 09:57
@pm47
Member

pm47 commented Sep 24, 2020

Re: dcf7d34, yes log parsing was very brittle, but it should be fixed with #1541.

Another idea would be to use GetOutgoingChannels for synchronization.

I agree that just making the objects more visible makes things much easier, but it feels like cheating.

@joostjager

Potentially interesting comment: lightningnetwork/lnd#4646 (comment)

@t-bast
Member Author

t-bast commented Sep 28, 2020

Potentially interesting comment

I agree that this heuristic is not a clear-cut solution. But Conner's comment doesn't consider inbound bandwidth in his example, and inbound bandwidth is also important to preserve (otherwise you may have the outbound bandwidth to relay, but no one will send you HTLCs because you don't have any inbound bandwidth). Your proposal of ordering by capacity ensures that you'll preserve either inbound or outbound bandwidth.

I thought about choosing at random, but I think it's worse in practice (given the number of requests LN has today). A real (beginning of a) solution is multi-layer, and I think ordering by (capacity, outbound bandwidth) + having diverse htlc_minimum_msat configs (with channels dedicated to "big" payments) can be a good starting point.

We'll see how it behaves in the wild, and will re-visit that decision if it doesn't work out.

@cfromknecht

conner's comment doesn't consider the inbound bandwidth

Inbound bandwidth is preserved if the other peer uses the same strategy and tries to protect their outgoing bandwidth. Whether or not the other node chooses to optimize for their outbound is totally outside of the local node’s control, but it does seem like there exists some sort of Schelling point. Each is incentivized to protect their own outgoing, which in turn protects the counterparty’s inbound.

your proposal of ordering by capacity ensure that you'll preserve either inbound or outbound bandwidth

I disagree here. The limits are directional so my forwarding decisions have no effect on whether the inbound is actually preserved. It’s entirely dependent on the counterparty’s strategy.

@t-bast
Member Author

t-bast commented Sep 29, 2020

Inbound bandwidth is preserved if the other peer uses the same strategy and tries to protect their outgoing bandwidth.

Not exactly: you can always preserve inbound bandwidth by refusing to relay incoming HTLCs that would block the channel (because of the 483-HTLC limit) and are low value. It's better if your peer does it right from the start, but you're not powerless if they don't. This is also why I don't agree with your last statement that it's entirely dependent on the counterparty's strategy. Let me know if I'm missing something.

A potentially interesting solution here would be to refuse to relay low-value HTLCs when your channels are almost full. But as you discussed in the lnd PR, that's the kind of heuristic that would be best in a plugin, as the effectiveness will depend a lot on real-world conditions and the specificities of each node...
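
For what it's worth, here is a rough sketch of what such a heuristic could look like (hypothetical names and thresholds, not an eclair API): refuse to relay a low-value HTLC once most of the channel's pending-HTLC slots are in use.

```scala
// Hedged sketch only; the object, function name and 90% threshold are illustrative.
object RelayAcceptanceSketch {
  val MaxAcceptedHtlcs = 483 // protocol limit on pending HTLCs

  def shouldRelay(amountMsat: Long, pendingHtlcs: Int, minValuableAmountMsat: Long): Boolean = {
    val slotsAlmostFull = pendingHtlcs > MaxAcceptedHtlcs * 9 / 10 // more than ~90% of slots used
    !slotsAlmostFull || amountMsat >= minValuableAmountMsat        // keep remaining slots for valuable HTLCs
  }
}
```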

@cfromknecht

Not exactly, you can always preserve inbound bandwidth by refusing to relay incoming HTLCs that would block the channel (because of the 483 HTLCs limit) and are low value.

This is true, but how is refusal-to-forward related to the outgoing channel tie-breaker? Seems to me like a slightly different question: should I even try to forward this HTLC?

In my other comments I'm assuming the answer to this question is "we have decided to forward this, now let's pick our best channel". At that point the HTLC is already locked in on the incoming channel and you only have control over the outgoing channel. I also think part of the confusion is that I was referring to inbound and outbound bandwidth on the outgoing channel.

FWIW it could be that we're thinking about things slightly differently due to differing implementation details.

A potentially interesting solution here would be to refuse to relay low-value HTLCs when your channels are almost full.

Definitely! Something along these lines and/or an adaptive min htlc value could be an interesting research topic.

@t-bast
Member Author

t-bast commented Oct 1, 2020

At that point the HTLC is already locked in on the incoming channel and you only have control over the outgoing channel.

Gotcha, that makes sense, I'm understanding your answers more clearly now.
My current thinking is that in order to properly mitigate this kind of issue (if possible at all), we need a more end-to-end, global view of the flow (and it's a complex system with a lot of moving parts!).

I think that an acceptable (beginning of a) solution is to combine:

  • Heuristics for acceptance of incoming HTLCs (as these afterwards impact outgoing non-strict forwarding): we currently accept all incoming HTLCs that can potentially be relayed; maybe we should be stricter than that
  • Vary your channel sizes and htlc_minimum_msat
  • Instead of a fixed value for htlc_minimum_msat, use adaptive thresholds: if you have many pending HTLCs in your channel but it still has a lot of available balance (in either direction), only accept "big" HTLCs (much bigger than your static htlc_minimum_msat); see the sketch after this list
  • Smarter heuristics for choosing the outgoing channel

Those will span multiple PRs and take time to get right (and should probably be offered as configurable plugins or something like it), but it's worth starting the discussion and exploring ideas!
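
For the adaptive-threshold bullet, a rough sketch of what that could look like (hypothetical names and constants, not an eclair feature): scale the effective minimum HTLC amount up as pending-HTLC slots fill, so a channel with plenty of balance but few free slots only accepts larger payments.

```scala
// Illustrative sketch only; the quadratic scaling and the 100x cap factor are arbitrary choices.
object AdaptiveHtlcMinimumSketch {
  def adaptiveHtlcMinimumMsat(staticMinMsat: Long, pendingHtlcs: Int, maxHtlcs: Int = 483): Long = {
    val slotUsage = pendingHtlcs.toDouble / maxHtlcs // fraction of pending-HTLC slots in use
    val multiplier = 1 + slotUsage * slotUsage * 100 // grows quickly as slots fill up
    (staticMinMsat * multiplier).toLong
  }
}
```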

@t-bast t-bast requested a review from pm47 October 2, 2020 07:32
@t-bast t-bast merged commit 382868d into master Oct 2, 2020
@t-bast t-bast deleted the channel-relay-lowest-capacity branch October 2, 2020 08:09