Anchor Output Support Design & Caveats #989

Closed
ariard opened this issue Jul 8, 2021 · 5 comments

ariard commented Jul 8, 2021

This issue is a follow-up to the 21/06/21 meeting concerning our post-anchor fee-bumping strategy
and the UtxoPool interface design.

The Problem

Lightning commitment transactions are pre-signed with a feerate agreed on by the channel counterparties,
hopefully reflecting a compelling feerate for fast block inclusion at signature time. However, the
inclusion efficiency of this feerate might degrade severely in case of mempool congestion,
thus lowering the confirmation odds of the transaction.

This is a safety matter if a time-sensitive second-stage HTLC-transaction must be settled
on-chain according to the Lightning security model. Further, even if the first-stage commitment
transaction has no in-flight HTLC outputs to arbitrate, its feerate might still be unattractive to a
node operator whose liquidity strategy calls for a fast unilateral close.

The anchor output upgrade allows a LN node to unilaterally attach a fee-bumping CPFP to solve this issue.
This new flexibility comes with a new burden though: the LN node MUST maintain a fee-bumping
reserve from which to feed the potential CPFPs.

How and when this fee-bumping reserve should be funded, what the reserve ratio and composition should be,
how to spend it in case of multiple in-flight channel closings, and what the aggregation strategy should be
are all open questions.

Let's start with how and when this fee-bumping reserve should be funded.

Fee-Bumping Reserve Funding

As soon as HTLCs start to flow in the channel, there is a risk of an emergency channel closing to
settle time-sensitive outputs [0]. In consequence, the fee-bumping reserve should be live as soon
as the channel reaches funding_locked.

(a) If the channel is single-funded and the LDK node is the funder, we can block the channel creation
(create_channel()) if the utxo pool is empty, too fragmented, or its aggregated amount is too low
to guarantee efficient fee-bumping. The library user should then call a new add_bumping_reserve() to
fulfill the requirement.

W.r.t this add_bumping_reserve(), we can either pass a private key through our interface, directed
to our KeysInterface, or ask a UtxoPoolImpl to return a reserve address to the user. With the
first option, we could also encipher the private key between the user wallet and our KeysInterface,
though I don't believe such a standard exists yet across the ecosystem.
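
As a purely hypothetical sketch of what such an interface could look like (none of these names exist in LDK today; it only illustrates the two funding options above side by side, plus the check create_channel() could run):

// Hypothetical sketch only, not an existing LDK interface.
use bitcoin::{Address, OutPoint, TxOut};

pub enum ReserveError {
    /// The pool is empty, too fragmented, or its aggregate amount is too low
    /// to guarantee efficient fee-bumping.
    InsufficientReserve,
}

pub trait UtxoPool {
    /// Option 1: the user wallet hands us a spendable UTXO (its private key
    /// being routed to our KeysInterface) to seed the fee-bumping reserve.
    fn add_bumping_reserve(&mut self, utxo: (OutPoint, TxOut)) -> Result<(), ReserveError>;

    /// Option 2: the pool returns a reserve address the user wallet funds
    /// on its own schedule.
    fn reserve_address(&self) -> Address;

    /// Called from create_channel()/new_from_req() to decide whether the
    /// reserve is live enough to safely open or accept a new channel.
    fn can_afford_channel(&self, worst_case_reserve_sat: u64) -> bool;
}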

If the channel is single-funded and the LDK node is the fundee, we can reject the channel opening
proposal (new_from_req()) if the node is fresh and the reserve isn't live yet, or ask the funder
to add a change output on the funding transaction (with the dual-funded spec upgrade) and pay a
compensation fee out-of-band (or a pre-negotiated one if the opening counterparty is an LSP and you
can assume recurring interactions).

Another option could be to have the fundee call add_bumping_reserve() during the confirmation delay
and increase our announced minimum_depth accordingly.

Overall, I think this is a major UX change, as previously the LN protocol didn't assume that a
channel fundee already owns a UTXO.

What should our UTXO pool funding API be in both of the asymmetric funder/fundee cases?

[0] I guess we can introduce a waiver for sending payments with a non_forwarding_node config
setting: if there is no HTLC to claim backward, you don't have the responsibility to close to settle
an offered HTLC forward.

UTXO Pool reserve ratio and composition

The fee-bumping reserve should be enough to cover the worst-case scenario of a commitment transaction
inflated with both the counterparty's max_accepted_htlcs and the holder's max_accepted_htlcs outputs.

Though how do we define the worst-case scenario? One heuristic could be the maximum commitment transaction
size as defined by the channel parameters, multiplied by the median feerate of X mempool spikes over
the past 2 years.

This feerate multiplicand could be manually provided by the library user if they're able to come
up with better mempool statistics.
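
As a back-of-the-envelope illustration of that heuristic (the weight constants are the BOLT 3 anchor-commitment figures; the HTLC counts and spike feerate in main() are illustrative assumptions, not a recommendation):

// Rough sketch of the "max commitment size x historical spike feerate" heuristic.
const COMMITMENT_TX_BASE_ANCHOR_WEIGHT: u64 = 1_124; // weight units (BOLT 3, option_anchors)
const COMMITMENT_TX_WEIGHT_PER_HTLC: u64 = 172; // weight units per HTLC output

/// Reserve (in sats) to bump a commitment inflated with
/// `holder_max_htlcs + counterparty_max_htlcs` outputs at `spike_feerate_sat_per_kwu`
/// (sats per 1000 weight units).
fn worst_case_reserve_sat(
    holder_max_htlcs: u64,
    counterparty_max_htlcs: u64,
    spike_feerate_sat_per_kwu: u64,
) -> u64 {
    let weight = COMMITMENT_TX_BASE_ANCHOR_WEIGHT
        + COMMITMENT_TX_WEIGHT_PER_HTLC * (holder_max_htlcs + counterparty_max_htlcs);
    // A fuller model would also budget the second-stage HTLC-Success/Timeout
    // transactions and the CPFP transaction itself.
    weight * spike_feerate_sat_per_kwu / 1_000
}

fn main() {
    // 483 HTLCs per side at a 12_500 sat/kWU (50 sat/vbyte) spike feerate
    // gives a reserve of roughly 2.09M sats.
    println!("{} sat", worst_case_reserve_sat(483, 483, 12_500));
}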

Another heuristic could be to take the max_htlc_value_in_flight_msat value as the maximum
loss and be ready to burn as much in a scorched-earth approach. Though, IMO, this 100% reserve
model might be too costly for node operators in the long term.

W.r.t the composition, we might start with one UTXO and, as we spend it for unilateral closes,
keep it lazily fragmented, as that might provide more ready-to-spend feerate groups.

Though should we preemptively fragment the reserve into many UTXOs if we have too many channels open?
In case of concurrent closings, we would like to avoid the risk of too-long chains of transactions in
network mempools.

UTXO Pool Spending Strategies

Assuming concurrent, unilateral closings of many of our channels, we might have 2 strategies, starting
from one UTXO.

This section assumes 2-tx package-relay support.

Solution 1: Chained CPFPs

We chain CPFPs sequentially starting from the bumping utxo, ordering the transactions by when we
received the PackageTemplate requests. If we want to bump the first component of the chain,
we have to rebroadcast the subsequent components and bear the replacement relay fee.


		commitment_a	     commitment_c
			 \			\
	    		  V			 V
	reserve_utxo ---> cpfp_a --> cpfp_b --> cpfp_c --> cpfp_d -----> reserve_tx
				      ^		  		^
                                     /			       /
			 commitment_b		   commitment_d

AFAICT, this solution is insecure in a concurrent setting (not even an adversarial one!) if the counterparty
spends our commitment transaction's remote_anchor, thus blocking the chain extension, or, even worse,
evicts an early CPFP, forcing a drop-out of the subsequent chain.

Solution 2: Domino Bumping

We start with an initial package at the reception of onchain request A. If we receive another request
for B, we replace A's CPFP with a newer one, augmented with an output destined to feed B's CPFP.
We wait for package A to confirm before broadcasting package B. If we receive a request C
and CPFP A isn't confirmed, we upgrade it again with another output.

                             commitment_c	
				    \	
	      commitment_a           V
			\	 > cpfp_c
			 V      /
	reserve_utxo --> cpfp_a 
				\
                                 > cpfp_b
				   /				
				  V
			commitment_b
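
A very rough sketch of the bookkeeping this strategy implies (all type and field names are made up for illustration, and signing/broadcast logic is elided):

// Hypothetical domino bookkeeping, nothing more.
struct PendingPackage {
    request_id: u64,        // the onchain PackageTemplate request this CPFP serves
    earmarked_value_sat: u64, // value reserved on the active CPFP to feed it
}

struct DominoBumper {
    /// The CPFP currently anchored on reserve_utxo (cpfp_a in the diagram).
    active: Option<PendingPackage>,
    /// Requests fed by outputs added to the active CPFP, waiting for it to
    /// confirm before their own CPFPs are broadcast.
    queued: Vec<PendingPackage>,
}

impl DominoBumper {
    /// On a new onchain request: either it becomes the active package, or the
    /// active CPFP is rebuilt (replaced via RBF) with one more output feeding
    /// the newcomer's future CPFP.
    fn on_request(&mut self, pkg: PendingPackage) {
        match self.active {
            None => self.active = Some(pkg),
            Some(_) => self.queued.push(pkg), // triggers a replacement of cpfp_a
        }
    }

    /// Once the active CPFP confirms, the queued packages can be broadcast,
    /// each spending the output earmarked for it.
    fn on_active_confirmed(&mut self) -> Vec<PendingPackage> {
        self.active = None;
        std::mem::take(&mut self.queued)
    }
}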

A slight modification could be to swallow the safety risk and not wait on cpfp_a's confirmation
to broadcast the subsequent packages; that would avoid a slow-to-confirm cpfp_a encroaching on
the subsequent packages' CLTV_CLAIM_BUFFER.

This could be improved by increasing the fee-bumping frequency of cpfp_a in case of a loaded
pipeline.

Also, one reserve utxo per channel doesn't present this queued claiming issue and should prevent a pinning of funding_a from contaminating other channels' claims?

Thoughts?

ariard commented Jul 12, 2021

Discussion on pinning interactions/Core's mempool support https://lightningdevkit.slack.com/archives/CTBLT3CAU/p1625786787422100

@TheBlueMatt

I spent some time chatting with @sdaftuar the other day about package relay/replacement and am no longer convinced the complexity here is worth it. Specifically, I think you and I agree that some kind of minimal package relay/replacement is required for safety on an individual channel level. After chatting with @sdaftuar, I'm also thinking that it's sufficient to provide safety with multiple anchor spends in one package, and I'm not really worried about our ability to figure out how to avoid cross-commitment security contamination.

I am worried, however, about the complexity of tracking all the outputs here.

@jkczyz jkczyz self-assigned this Apr 18, 2022
@ariard ariard changed the title Anchor Fee-Bumping Reserve Design Questions Anchor Output Support Design & Caveats Apr 22, 2022

ariard commented Apr 22, 2022

Now that this is public: https://lists.linuxfoundation.org/pipermail/lightning-dev/2022-April/003561.html, we SHOULD NOT forget to modify our 2nd-stage transaction parser in consequence. Post-anchor, our implementation would be blind to claiming aggregated revoked HTLC-transactions, as described in the issue:

if tx.input.len() != 1 || tx.output.len() != 1 || tx.input[0].witness.len() != 5 {
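
For illustration only (the function names and exact heuristic below are my assumptions, not the proposed fix), a relaxed detection would have to inspect every input rather than assume a 1-input/1-output transaction:

// Post-anchor, second-stage HTLC transactions can aggregate several HTLC
// inputs (SIGHASH_SINGLE | ANYONECANPAY), so the 1-input/1-output/5-witness-items
// test above misses them. Names here are hypothetical.
fn input_looks_like_htlc_spend(input: &bitcoin::TxIn) -> bool {
    // HTLC-Success and HTLC-Timeout witnesses both carry five items: the
    // empty CHECKMULTISIG dummy, two signatures, a preimage (or an empty
    // vector for timeouts), and the HTLC witness script.
    input.witness.len() == 5
}

fn may_resolve_htlc_outputs(tx: &bitcoin::Transaction) -> bool {
    tx.input.iter().any(input_looks_like_htlc_spend)
}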

cc @TheBlueMatt @jkczyz

@TheBlueMatt

I think we can close this now.


ariard commented Jul 26, 2023

Sure, we can close it. Though I think we can still point it out to people if we start to have LSP-side fee-bumping of the anchor outputs, e.g. in the ldk-lsp thing, or implement smarter CPFP strategies in the future; some caveats will sadly stay.

@ariard ariard closed this as completed Jul 26, 2023