HtlcInterceptor event gets replayed after channel counterparty reconnects #5115

Closed
hsjoberg opened this issue Mar 17, 2021 · 3 comments · Fixed by #5280
Labels
concurrency htlcswitch rpc Related to the RPC interface

Comments

hsjoberg (Contributor) commented Mar 17, 2021

Background

Greetings, I have stumbled upon an issue that is a bit tricky to explain, but it is causing significant problems.

I am using the HtlcInterceptor API to intercept and settle incoming forward HTLCs.
This works as intended and it settles the incoming HTLC as it should.

However, when the counterparty node of the channel that the incoming HTLC was received on goes offline and then comes back online (re-establishing contact), the HtlcInterceptor event gets replayed, even though the HTLC was already settled. This happens every time the connection with the channel counterparty is re-established.

Your environment

  • Some of the nodes on lnd latest master (b4aa661). Will try with all on master soon

Steps to reproduce

I have created a simple NodeJS application that demonstrates this issue: https://github.com/hsjoberg/htlcintercept-bug

Either create three nodes manually somehow or use Polar to manage it through docker containers.

Here's a video of it happening:

HtlcInterceptor bug video

Expected behaviour

No old HTLC event should ever be replayed.

Actual behaviour

Old HtlcInterceptor HTLC events get replayed when the connection to the channel counterparty node is re-established.

N.b.: attempting to settle the HTLC again by writing back to the HtlcInterceptor stream does nothing, as no HTLC was actually sent (AFAICT).

I'm a bit unsure exactly what information would be helpful here so just ask if there's anything in particular. In the video I tail the logfile, but I can provide it in text format as well if wanted.
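Until this is fixed in lnd, a client can defend itself by remembering which intercepted HTLCs it has already resolved and ignoring replays. The sketch below is a hypothetical client-side filter, not lnd's API: the `circuitKey` type and field names are illustrative stand-ins for the incoming channel ID and HTLC index that the interceptor request carries.

```go
package main

import "fmt"

// circuitKey uniquely identifies an intercepted HTLC: the incoming
// channel ID plus the HTLC index on that channel. These names are
// illustrative, not lnd's actual types.
type circuitKey struct {
	chanID uint64
	htlcID uint64
}

// deduper remembers circuit keys that were already resolved so that
// replayed interceptor events can be ignored.
type deduper struct {
	seen map[circuitKey]bool
}

func newDeduper() *deduper {
	return &deduper{seen: make(map[circuitKey]bool)}
}

// isNew reports whether this is the first time the key is observed,
// and marks it as seen.
func (d *deduper) isNew(k circuitKey) bool {
	if d.seen[k] {
		return false
	}
	d.seen[k] = true
	return true
}

func main() {
	d := newDeduper()
	k := circuitKey{chanID: 42, htlcID: 0}
	fmt.Println(d.isNew(k)) // true: first event, settle it
	// Counterparty reconnects and the same event is replayed:
	fmt.Println(d.isNew(k)) // false: ignore the replay
}
```

Note this map grows unbounded in a long-running process; a real client would want to expire entries once the HTLC is fully resolved.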

Cheers

@hsjoberg hsjoberg changed the title HtlcIntercept event gets replayed after channel counterparty reconnects HtlcInterceptor event gets replayed after channel counterparty reconnects Mar 17, 2021
Roasbeef (Member) commented:

IIUC, this is to be expected: upon reconnection, the set of HTLCs is at times essentially re-forwarded internally if they don't get locked into an outgoing channel.

champo (Contributor) commented Mar 20, 2021

@Roasbeef is the HTLC committed in the incoming channel? In other words, can the incoming HTLC disappear if it's not locked into an outgoing channel?

Roasbeef (Member) commented:

> is the HTLC committed in the incoming channel? In other words, can the incoming HTLC disappear if it's not locked into an outgoing channel?

Not 100% sure what you're asking, but in the "happy case", after things are fully locked in, they will internally be retransmitted again, but ignored by the switch since the circuit is already active. Looks like what we need here is a similar check: the interceptor just ignores the replay if it already has an active client and has delivered the message to them.

The incoming HTLC can only be removed if things get cancelled back, or if we need to go on-chain for some reason. If, upon initial forwarding, no outgoing link exists (unknown next peer), then things will be cancelled back immediately.

@Roasbeef Roasbeef added concurrency rpc Related to the RPC interface htlcswitch labels Mar 22, 2021
champo added a commit to champo/lnd that referenced this issue May 7, 2021
Having it set to nil caused lightningnetwork#5115

The problem was several layers removed from the fix. The link decides to
clean up a `fwdPkg` only if it's completed, otherwise it renotifies the
HTLCs. A package is only set to complete if it's `addAck` and
`settleFail` filters are full. For forwarded HTLCs, the `addAck` was
never being set so it would never be considered complete under this
criteria.

`addAck` is set for an HTLC when signing the next commitment TX in the
`LightningChannel`. The path for this is:
* `LightningChannel#SettleHtlc` adds the HTLC to `localUpdates`
* `LightningChannel#SignNextCommitment` builds the `ackAddRef` for all
updates with `SourceRef != nil`.
* `LightningChannel#SignNextCommitment` then passes the list of
`ackAddRef` to `OpenChannel#AppendRemoteCommitChain` to persist the new
acks in the filter

Since `SourceRef` was nil for interceptor packages, `SignNextCommitment`
ignored it and the ack was never persisted.
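The completion criterion the commit message describes can be sketched as follows. This is a simplified, hypothetical model (the types and function names are stand-ins, not lnd's actual `fwdPkg` code): a package is complete only when both filters are full, and the buggy path drops the ack whenever `SourceRef` is nil, so interceptor packages stay incomplete and replay on every reconnect.

```go
package main

import "fmt"

// fwdPkg is a simplified stand-in for the forwarding package described
// above: the link only cleans it up once it's complete; otherwise its
// HTLCs are re-notified (replayed) on reconnect.
type fwdPkg struct {
	addAck     []bool // one slot per Add in the package
	settleFail []bool // one slot per Settle/Fail in the package
}

func full(filter []bool) bool {
	for _, acked := range filter {
		if !acked {
			return false
		}
	}
	return true
}

// complete mirrors the criterion from the commit message: a package is
// complete only when both the addAck and settleFail filters are full.
func (p *fwdPkg) complete() bool {
	return full(p.addAck) && full(p.settleFail)
}

// signNextCommitment sketches the buggy behavior: an ack is only
// persisted for updates that carry a SourceRef, so interceptor
// packages (SourceRef == nil) never get their ack recorded.
func signNextCommitment(p *fwdPkg, idx int, sourceRef *struct{}) {
	if sourceRef == nil {
		return // interceptor path: ack silently dropped
	}
	p.addAck[idx] = true
}

func main() {
	p := &fwdPkg{addAck: make([]bool, 1), settleFail: []bool{true}}

	signNextCommitment(p, 0, nil) // nil SourceRef: ack dropped
	fmt.Println(p.complete())     // false: pkg replays on reconnect

	signNextCommitment(p, 0, &struct{}{}) // with SourceRef set
	fmt.Println(p.complete())             // true: pkg can be cleaned up
}
```

The fix referenced by #5280 amounts to making sure interceptor packages carry the reference needed for the ack to be persisted, so the package eventually reads as complete.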
matheusd pushed a commit to matheusd/dcrlnd that referenced this issue Nov 14, 2023
matheusd pushed a commit to decred/dcrlnd that referenced this issue Nov 20, 2023