Forward over substitute channels #1578


ViktorTigerstrom (Contributor)

Closes #1278.

This PR is based on and blocked by #1507, so only the last commits are part of this PR. As #1507 needs to be rebased, CI will currently fail.

Enables forwarding of HTLCs over alternative channels to the same peer in case forwarding over the channel specified in the onion payload fails.
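In rough terms, the change makes the forwarding path treat the onion-specified channel as the first candidate rather than the only one. A minimal illustrative sketch of that idea (all identifiers here are invented; this is not the PR's actual `ChannelManager` code):

```rust
// Simplified sketch of the substitution idea; identifiers are illustrative,
// not the actual ChannelManager changes in this PR.
fn forward_with_substitution(
    onion_chan_id: u32,
    peer_chan_ids: &[u32],
    mut try_send: impl FnMut(u32) -> Result<(), ()>,
) -> bool {
    // Try the channel specified in the onion payload first, then every other
    // channel we have with the same peer.
    let candidates = core::iter::once(onion_chan_id)
        .chain(peer_chan_ids.iter().copied().filter(|&id| id != onion_chan_id));
    for chan_id in candidates {
        if try_send(chan_id).is_ok() {
            return true;
        }
    }
    // All channels to the peer failed; the HTLC is failed back as before.
    false
}

fn main() {
    // In this toy example the HTLC only "fits" on channel 3.
    let forwarded = forward_with_substitution(1, &[1, 2, 3], |chan_id| {
        if chan_id == 3 { Ok(()) } else { Err(()) }
    });
    assert!(forwarded);
}
```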

Pushing this as a draft for now, as there are a few things I'd like some guidance on. I'll add some review comments with the questions I have :).

- As the map values are no longer only `channel_id`s, but also `counterparty_node_id`s, the map is renamed to better correspond to what's actually stored in it.
- Now that `channels` are stored in the `per_peer_state`, some logic can be simplified and extra accesses of the `per_peer_state` can be removed.
- Attempt to forward HTLCs over alternative channels to the same peer in case forwarding over the channel specified in the onion payload fails.
- As we may now forward HTLCs over channels other than the ones specified in `channel_state.forward_htlcs`, we move the creation of the `CommitmentUpdate` outside of the loop over `channel_state.forward_htlcs`, to ensure that we don't create multiple `CommitmentUpdate` structs for the same channel.
match peer_state.channel_by_id.get_mut(&chan_id).unwrap().send_htlc(amt_to_forward, payment_hash, outgoing_cltv_value, htlc_source.clone(), onion_packet.clone(), &self.logger) {
	Err(e) => {
		if let ChannelError::Ignore(msg) = e {
			log_trace!(
ViktorTigerstrom (Contributor, Author):

I'm a bit unsure of our logging guidelines. Is this line, along with the `log_info` and `log_trace` calls I've added below, how we would like to handle logging?

// Attempt to forward the HTLC over all available channels
// to the peer, but attempt to forward the HTLC over the
// channel specified in the onion payload first.
let mut counterparty_channel_ids = peer_state.channel_by_id.keys()
ViktorTigerstrom (Contributor, Author):

If you can think of a more efficient way of doing this, please let me know :).

Another contributor:

I think using `iter::once()` and `chain()` should allow you to get rid of the `insert`. E.g.:

let mut counterparty_channel_ids = core::iter::once(forward_chan_id)
    .chain(peer_state.channel_by_id.keys()
                .filter(|&chan_id| *chan_id != forward_chan_id)
                .map(|&chan_id| chan_id))
    .collect::<Vec<_>>();
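For reference, a small self-contained sketch of the same `iter::once`/`chain` pattern, with made-up channel IDs and a plain `HashMap` standing in for `channel_by_id`:

```rust
use std::collections::HashMap;

fn main() {
    // Hypothetical channel map; u32 keys stand in for real channel IDs.
    let mut channel_by_id: HashMap<u32, &str> = HashMap::new();
    channel_by_id.insert(1, "chan_a");
    channel_by_id.insert(2, "chan_b");
    channel_by_id.insert(3, "chan_c");

    let forward_chan_id = 2u32;

    // Yield the onion-specified channel first, then every other channel to the
    // peer, without the Vec::insert(0, ..) shuffle.
    let ordered: Vec<u32> = core::iter::once(forward_chan_id)
        .chain(channel_by_id.keys()
            .filter(|&&chan_id| chan_id != forward_chan_id)
            .copied())
        .collect();

    assert_eq!(ordered[0], forward_chan_id);
    println!("{:?}", ordered);
}
```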

ViktorTigerstrom (Contributor, Author):

Nice, thanks! 13dbfe3

if let hash_map::Entry::Occupied(mut entry) = htlcs_msgs_by_id.entry($channel_id) {
	let msgs_entry = entry.get_mut();
	msgs_entry.1.push($fail_htlc_msg);
	if let Some(msg) = (&$htlc_msg as &Any).downcast_ref::<msgs::UpdateAddHTLC>() {
ViktorTigerstrom (Contributor, Author):

I'm definitely not happy with this downcast solution, but I couldn't get a match statement to work. If you can think of a better way of doing this, or if you'd prefer to avoid this DRY-up, please let me know!
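One possible alternative to the `Any` downcast, sketched here purely as an illustration (the message structs are simplified stand-ins for `msgs::UpdateAddHTLC` / `msgs::UpdateFailHTLC`, and this is not what the PR currently does), would be to wrap the two message kinds in a small enum and `match` on it:

```rust
// Simplified stand-ins for the real lightning message structs.
struct UpdateAddHTLC { htlc_id: u64 }
struct UpdateFailHTLC { htlc_id: u64 }

// Wrapping both message kinds in one enum lets the caller match instead of downcasting.
enum HTLCForwardMsg {
    Add(UpdateAddHTLC),
    Fail(UpdateFailHTLC),
}

fn queue_msg(add_msgs: &mut Vec<UpdateAddHTLC>, fail_msgs: &mut Vec<UpdateFailHTLC>, msg: HTLCForwardMsg) {
    match msg {
        HTLCForwardMsg::Add(add) => add_msgs.push(add),
        HTLCForwardMsg::Fail(fail) => fail_msgs.push(fail),
    }
}

fn main() {
    let mut adds = Vec::new();
    let mut fails = Vec::new();
    queue_msg(&mut adds, &mut fails, HTLCForwardMsg::Add(UpdateAddHTLC { htlc_id: 0 }));
    queue_msg(&mut adds, &mut fails, HTLCForwardMsg::Fail(UpdateFailHTLC { htlc_id: 1 }));
    assert_eq!(adds.len(), 1);
    assert_eq!(fails.len(), 1);
}
```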

@@ -3438,6 +3395,53 @@ impl<Signer: Sign, M: Deref, T: Deref, K: Deref, F: Deref, L: Deref> ChannelMana
}
}
}
for (chan_id, (add_htlc_msgs, fail_htlc_msgs, counterparty_node_id)) in htlcs_msgs_by_id.into_iter() {
ViktorTigerstrom (Contributor, Author) commented Jun 28, 2022:

Am I correct that we'd like to move this out of the `channel_state.forward_htlcs` loop to avoid creating multiple `CommitmentUpdate` instances for the same channel, even though we'll then need to drop and reacquire the `PeerState` mutex for that peer?

This will happen in cases where `channel_state.forward_htlcs` contains one HTLC that is successfully forwarded over its specified channel (a), and another HTLC that is forwarded over that same channel (a) through substitution.
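To make the batching concern concrete, here is a rough sketch, with invented types and toy channel IDs rather than the PR's actual code, of collecting the per-channel HTLC messages in a map during the forwarding loop and only building one update per channel afterwards:

```rust
use std::collections::HashMap;

// Invented, simplified stand-ins for the real lightning message/update types.
struct UpdateAddHTLC { htlc_id: u64 }
struct CommitmentUpdate { update_add_htlcs: Vec<UpdateAddHTLC> }

fn main() {
    // Pretend these HTLCs were resolved to outgoing channels while walking forward_htlcs;
    // two of them ended up on channel 7 (one of them via substitution).
    let forwards: Vec<(u32, UpdateAddHTLC)> = vec![
        (7, UpdateAddHTLC { htlc_id: 0 }),
        (9, UpdateAddHTLC { htlc_id: 1 }),
        (7, UpdateAddHTLC { htlc_id: 2 }),
    ];

    // First batch the messages per outgoing channel...
    let mut htlc_msgs_by_chan: HashMap<u32, Vec<UpdateAddHTLC>> = HashMap::new();
    for (chan_id, msg) in forwards {
        htlc_msgs_by_chan.entry(chan_id).or_default().push(msg);
    }

    // ...then build exactly one CommitmentUpdate per channel, outside the forwarding loop.
    for (chan_id, update_add_htlcs) in htlc_msgs_by_chan {
        let update = CommitmentUpdate { update_add_htlcs };
        println!("channel {}: one update with {} HTLC(s)", chan_id, update.update_add_htlcs.len());
    }
}
```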

counterparty_channel_ids.insert(0, forward_chan_id);
let mut send_succeeded = false;
for chan_id in counterparty_channel_ids {
	match peer_state.channel_by_id.get_mut(&chan_id).unwrap().send_htlc(amt_to_forward, payment_hash, outgoing_cltv_value, htlc_source.clone(), onion_packet.clone(), &self.logger) {
ViktorTigerstrom (Contributor, Author):

This PR assumes that we want to attempt to forward the HTLC over another channel if forwarding over the channel specified in the onion fails for any reason, as I figured it's probably useful to attempt forwarding over substitute channels for more reasons than just the insufficient-liquidity case specified in the issue. Is that correct?

If so, would we like to avoid attempting to forward over substitute channels for some failure reasons, such as, for example, the peer being disconnected?
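If we did decide to skip substitution for certain failure reasons, the decision could be as simple as a per-reason predicate. A hypothetical sketch (the failure enum is invented for illustration and is not rust-lightning's actual error type, and the policy shown is just the example from the question above):

```rust
// Hypothetical failure reasons for a forwarding attempt; illustrative only,
// not rust-lightning's actual error variants.
enum ForwardFailure {
    InsufficientLiquidity,
    TemporaryChannelFailure,
    PeerDisconnected,
}

// Decide whether a failure on one channel should trigger trying a substitute
// channel to the same peer. Skipping substitution when the peer is disconnected
// is the example from the review question, not a decided policy.
fn should_try_substitute(failure: &ForwardFailure) -> bool {
    match failure {
        ForwardFailure::PeerDisconnected => false,
        ForwardFailure::InsufficientLiquidity | ForwardFailure::TemporaryChannelFailure => true,
    }
}

fn main() {
    assert!(should_try_substitute(&ForwardFailure::InsufficientLiquidity));
    assert!(!should_try_substitute(&ForwardFailure::PeerDisconnected));
}
```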

@tnull (Contributor) commented Jun 28, 2022:

Thank you for having a look at this!

One conceptual question I want to raise is whether we'd want to always substitute the chosen channel based on some criteria. E.g., this could be used to implement a basic form of balancing strategy by always selecting the most out-of-balance channel. Or we could go in the opposite direction of always preferring the channel with remaining balance closest to the routed amount, i.e., always aiming to deplete outgoing channel capacity before using another channel. The latter approach would have the potential benefit of keeping larger chunks of capacity untouched by smaller amounts, which could allow us to route larger payments for longer?
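To illustrate the two strategies above, here is a rough sketch with an invented minimal channel view (the `outbound_msat` field is not LDK's real channel API); `most_outbound` reads "most out-of-balance" as "most outbound capacity to push", while `closest_fit` prefers the channel whose balance is closest to the routed amount:

```rust
// Invented, minimal channel view for illustration; not LDK's real channel type.
struct ChannelView { chan_id: u32, outbound_msat: u64 }

// Strategy 1: prefer the channel with the most outbound balance, i.e. push the
// most out-of-balance channel back toward the middle.
fn most_outbound(channels: &[ChannelView], amt_msat: u64) -> Option<u32> {
    channels.iter()
        .filter(|c| c.outbound_msat >= amt_msat)
        .max_by_key(|c| c.outbound_msat)
        .map(|c| c.chan_id)
}

// Strategy 2: prefer the channel whose remaining balance is closest to the routed
// amount, i.e. deplete small channels first and keep large chunks of capacity untouched.
fn closest_fit(channels: &[ChannelView], amt_msat: u64) -> Option<u32> {
    channels.iter()
        .filter(|c| c.outbound_msat >= amt_msat)
        .min_by_key(|c| c.outbound_msat - amt_msat)
        .map(|c| c.chan_id)
}

fn main() {
    let channels = vec![
        ChannelView { chan_id: 1, outbound_msat: 50_000 },
        ChannelView { chan_id: 2, outbound_msat: 900_000 },
        ChannelView { chan_id: 3, outbound_msat: 120_000 },
    ];
    assert_eq!(most_outbound(&channels, 100_000), Some(2));
    assert_eq!(closest_fit(&channels, 100_000), Some(3));
}
```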

@ViktorTigerstrom (Contributor, Author) commented Jun 28, 2022:

Thanks @tnull! Hmm, yeah, those are both good ideas, and either of those alternatives makes sense for different use cases. Perhaps we'd like to make the channel forwarding selection algorithm configurable by the user?

Would it make sense to start with the approach this PR implements (to avoid unnecessary forwarding failures), and continue the discussion about the other alternatives and implement them in follow-ups?
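If the selection ever did become user-configurable, the knob might be shaped roughly like the following. This is entirely hypothetical; no such option exists in LDK:

```rust
/// Hypothetical knob for how a substitute outgoing channel is picked when the
/// onion-specified channel can't be used. Purely illustrative; not an LDK option.
#[allow(dead_code)]
pub enum SubstituteChannelSelection {
    /// Only try the channel named in the onion payload (the pre-PR behaviour).
    OnionSpecifiedOnly,
    /// Try the onion-specified channel first, then any other channel to the same peer
    /// (roughly what this PR implements).
    AnyToSamePeer,
    /// Prefer the channel with the most outbound balance.
    MostOutbound,
    /// Prefer the channel whose outbound balance is closest to the forwarded amount.
    ClosestFit,
}

fn main() {
    // A node operator (or a sane default) would pick one policy, e.g.:
    let _policy = SubstituteChannelSelection::AnyToSamePeer;
}
```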

@tnull (Contributor) commented Jun 29, 2022:

> Thanks @tnull! Hmm, yeah, those are both good ideas, and either of those alternatives makes sense for different use cases. Perhaps we'd like to make the channel forwarding selection algorithm configurable by the user?

I'm not too sure we should make this user-configurable. I mean, we may, but mainly we should think about it and provide a sane default forwarding policy.

> Would it make sense to start with the approach this PR implements (to avoid unnecessary forwarding failures), and continue the discussion about the other alternatives and implement them in follow-ups?

Yes, I think retrying over a substitute channel on failure is something we'd want to do either way, so it's a good place to start. However, even in this case the question of which substitute channel to choose is still there.

@ViktorTigerstrom (Contributor, Author):

This is no longer blocked on #1507 🚀! Though this will probably require quite a bit of updating now.
I'll pick this one up again once the follow-ups to #1507 are done :).

@TheBlueMatt (Collaborator):

Any interest in picking this back up?

@ViktorTigerstrom (Contributor, Author):

Hey @TheBlueMatt! I'm intending to finish both this and #1771 as soon as I get the chance. I've also discussed this with Steve offline, if you'd like to get some more context :)!

Though if you need this feature in the near future, or are working on something related, feel very much free to take it on instead, as it'd definitely be a useful feature!

I'm hoping to pick this up again and finish it before the next release though, unless the timeframe for the next release is planned to be a short one.

@TheBlueMatt (Collaborator):

Nope, I don't have any particular rush here; I just figured I'd ping and see if you were still interested in working on it or if you had your hands full. If you aren't gonna get around to it, that's fine too, but then it's best to close it. If you plan to get to it in the next month or two, then I'll leave it as-is.

@TheBlueMatt (Collaborator):

Closing as abandoned.

TheBlueMatt closed this on Jun 4, 2024.