[ChannelRelay] Prioritize lowest capacity channels #1539
Conversation
Exposing the private wrapped messages to tests allows removing the dependency on capturing logs, which felt very brittle.
This makes it more difficult for attackers to "squat" high-capacity channels by sending HTLCs and then hodling them. It results in less locked liquidity during this kind of attack.
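The heuristic discussed in this PR can be sketched as follows. This is a hypothetical illustration, not eclair's actual `ChannelRelay` code: the `Channel` case class and `selectOutgoingChannel` helper are made up for the example. Among the outgoing channels that can carry the HTLC, we pick the one with the lowest capacity, keeping high-capacity channels free for larger payments and harder to squat.

```scala
// Hypothetical sketch of the selection heuristic: among candidate outgoing
// channels that have enough balance to carry the HTLC, prefer the one with
// the LOWEST capacity, so high-capacity channels stay available.
case class Channel(id: String, capacitySat: Long, availableBalanceSat: Long)

def selectOutgoingChannel(candidates: Seq[Channel], amountSat: Long): Option[Channel] =
  candidates
    .filter(_.availableBalanceSat >= amountSat) // must be able to relay this amount
    .sortBy(_.capacitySat)                      // lowest capacity first
    .headOption
```

For example, with channels of capacity 1M, 500k and 200k sats, a small HTLC is routed over the 200k channel if it has the balance, and the 1M channel is only used when nothing smaller fits.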
eclair-core/src/main/scala/fr/acinq/eclair/payment/relay/ChannelRelay.scala
Potentially interesting comment: lightningnetwork/lnd#4646 (comment)
I agree that this heuristic is not a clear-cut solution. But conner's comment doesn't consider the inbound bandwidth in his example, and inbound bandwidth is also important to preserve (otherwise you may have the outbound bandwidth to relay, but no one will send you HTLCs because you don't have any inbound bandwidth). Your proposal of ordering by capacity ensures that you'll preserve either inbound or outbound bandwidth. I thought about choosing at random, but I think it's worse in practice (given the number of requests LN has today). A real (beginning of a) solution is multi-layer, and I think ordering by capacity is a reasonable first step. We'll see how it behaves in the wild, and will revisit that decision if it doesn't work out.
Inbound bandwidth is preserved if the other peer uses the same strategy and tries to protect their outgoing bandwidth. Whether or not the other node chooses to optimize for their outbound is totally outside of the local node’s control, but it does seem like there exists some sort of Schelling point. Each is incentivized to protect their own outgoing, which in turn protects the counterparty’s inbound.
I disagree here. The limits are directional, so my forwarding decisions have no effect on whether the inbound is actually preserved; it's entirely dependent on the counterparty's strategy.
Not exactly: you can always preserve inbound bandwidth by refusing to relay incoming HTLCs that are low value and would block the channel (because of the 483-HTLC limit). It's better if your peer does it right from the start, but you're not powerless if they don't. This is also why I don't agree with your last statement that it's entirely dependent on the counterparty's strategy. Let me know if I'm missing something.

A potentially interesting solution here would be to refuse to relay low-value HTLCs when your channels are almost full. But as you discussed in the lnd PR, that's the kind of heuristic that would be best in a plugin, as its effectiveness will depend a lot on real-world conditions and the specificities of each node...
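The refusal heuristic mentioned above could look something like this. It is only a sketch: `shouldRelay` and its thresholds are hypothetical and would need tuning against real-world conditions (or, as discussed, live in a plugin). The 483 pending-HTLC limit is the protocol constant; the other numbers are illustrative.

```scala
// Hypothetical sketch: when a channel is close to the protocol limit of 483
// pending HTLCs, stop relaying low-value HTLCs so the remaining slots are
// preserved for payments that are worth locking a slot for.
val MaxAcceptedHtlcs = 483 // BOLT-defined upper bound on pending HTLCs

def shouldRelay(pendingHtlcs: Int,
                amountSat: Long,
                slotThreshold: Int = 400,          // illustrative, not eclair config
                minAmountWhenAlmostFullSat: Long = 10000L): Boolean =
  pendingHtlcs < slotThreshold || amountSat >= minAmountWhenAlmostFullSat
```

With these placeholder values, a 500 sat HTLC is refused once 450 slots are in use, while a 20,000 sat HTLC is still relayed.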
This is true, but how is refusal-to-forward related to the outgoing channel tie-breaker? Seems to me like a slightly different question: should I even try to forward this HTLC? In my other comments I'm assuming the answer to this question is "we have decided to forward this, now let's pick our best channel". At that point the HTLC is already locked in on the incoming channel and you only have control over the outgoing channel. I also think part of the confusion is that I was referring to inbound and outbound bandwidth on the outgoing channel. FWIW it could be that we're thinking about things slightly differently due to differing implementation details.
Definitely! Something along these lines and/or an adaptive minimum HTLC value could be an interesting research topic.
Gotcha, that makes sense; I'm understanding your answers more clearly now. I think an acceptable (beginning of a) solution is to combine:
Those will span multiple PRs and take time to get right (and should probably be offered as configurable plugins or something like it), but it's worth starting the discussion and exploring ideas!
This makes it more difficult for attackers to "squat" high-capacity channels by sending HTLCs and then hodling them (filling up the 483 available pending HTLC slots).
It results in less locked liquidity during this kind of attack because we preserve the highest-capacity channels.
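A quick way to see why this preserves liquidity (hypothetical numbers, not a measurement): suppose an attacker squats one channel at a time by exhausting its 483 pending-HTLC slots with hodl-HTLCs. If the relay routes over lowest-capacity channels first, the channels the attacker manages to fill are the small ones, so most of the node's capacity stays usable. The helper below is made up for this illustration.

```scala
// Illustrative comparison: total capacity left usable after an attacker
// squats the first `channelsSquatted` channels in relay order.
def usableCapacityAfterSquat(capacities: Seq[Long],
                             channelsSquatted: Int,
                             lowestFirst: Boolean): Long = {
  val relayOrder = if (lowestFirst) capacities.sorted else capacities.sorted.reverse
  relayOrder.drop(channelsSquatted).sum // capacity of channels not yet squatted
}
```

With channels of 100k, 500k and 1M sats and one channel squatted, lowest-first leaves 1.5M sats usable, while highest-first leaves only 600k.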
Note: I also changed the way we test channel updates in the first commit because I didn't like the dependency on watching for logs, but if you don't like it I can revert that.