Anchor outputs #688

Open · wants to merge 1 commit into base: master

Conversation


joostjager commented Oct 28, 2019

This PR is a continuation of #513.

Summary of changes:

  • Refer to pushme outputs as anchor outputs, to prevent confusion with push_msat on the open_channel message.

  • Add max_to_self_delay OP_CHECKSEQUENCEVERIFY OP_DROP to the to_remote output. max_to_self_delay is the maximum of the CSV delay values that the two parties proposed in the channel opening sequence. This ensures that the carve-out bitcoin/bitcoin#15681 works as intended and also removes the incentive to game the other party into force-closing. It does require knowledge of the value of max_to_self_delay when recovering from data loss. (Script sketches for this and the following output changes are given after this list.)

    Note: there is still a reason left why you'd want to game the other party into closing, which is the asymmetry of the htlc spend. If there's a large htlc, it is better if the remote party force closes. This could be addressed by always (also to remote) using the presigned second level transaction, but is it worth the change?

  • Add 1 OP_CHECKSEQUENCEVERIFY OP_DROP to the non-revocation clause of the HTLC outputs. Reason: to make the carve-out work.

  • Anchor output type: locked to the (untweaked) funding pubkey and spendable by anyone after the commit tx confirms to prevent utxo set pollution.

  • Within each version of the commitment transaction, both anchors always have equal values and are paid for by the initiator.

  • The value of the anchors is the dust limit that was negotiated in the open_channel or accept_channel message of the party that publishes the transaction. This means that the definitive balance of an endpoint depends on which version of the commitment transaction confirms. That, however, is nothing new: in the current commitment format there are always two or three valid versions of the commitment transaction (local, remote, and sometimes the not-yet-revoked previous remote tx), which can have slightly different balances. For the initiator it is important to validate the other party's dust limit, because the initiator pays for the anchors and doesn't want to give away more free money than necessary.

  • Leave update_fee mechanism in place. Initially the anchor outputs are mainly a safety mechanism to get the commitment transaction confirmed. Later the network can start operating at lower negotiated fees and rely more heavily on cpfp.

  • For co-op close, the commitment tx fee is a floor and parties can negotiate upward if desired.

  • HTLC timeout/success transactions are signed with SIGHASH_SINGLE|SIGHASH_ANYONECANPAY to allow attachment of an additional input to increase fee.
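
As a rough illustration of the to_remote change above, the modified output script could look something like the following. This is only a sketch of one plausible layout; the authoritative script is whatever the PR's 03-transactions.md diff specifies.

```
# to_remote output (sketch): spendable by the remote node's key, but only
# max_to_self_delay blocks after the commitment transaction confirms, so its
# spend cannot be chained onto an unconfirmed commitment transaction
<max_to_self_delay>
OP_CHECKSEQUENCEVERIFY
OP_DROP
<remote_pubkey>
OP_CHECKSIG
```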
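
Similarly, the HTLC change only appends a one-block relative delay to the non-revocation branch of the existing BOLT #3 HTLC scripts. A sketch, with the unchanged middle elided:

```
OP_DUP OP_HASH160 <RIPEMD160(SHA256(revocationpubkey))> OP_EQUAL
OP_IF
    # revocation path: unchanged, still spendable while the commitment tx is unconfirmed
    OP_CHECKSIG
OP_ELSE
    # ... existing HTLC success/timeout clauses, unchanged ...
    # appended: a 1-block CSV so neither party can chain unconfirmed children
    # off the HTLC outputs and exhaust the descendant/carve-out limits
    1 OP_CHECKSEQUENCEVERIFY OP_DROP
OP_ENDIF
```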
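
Finally, a sketch of an anchor output script matching the description above: spendable by the holder of the (untweaked) funding pubkey at any time, or by anyone after a delay once the commitment transaction confirms. The OP_IFDUP construction and the `<anchor_delay>` placeholder are illustrative assumptions, not the PR text.

```
<funding_pubkey> OP_CHECKSIG OP_IFDUP
OP_NOTIF
    # no funding-key signature was provided (an empty signature leaves 0 on the
    # stack), so fall through to the anyone-can-spend path, gated by a relative
    # delay that lets abandoned anchors be swept after confirmation
    <anchor_delay> OP_CHECKSEQUENCEVERIFY
OP_ENDIF
```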

@joostjager joostjager changed the title Anchor outputs Anchor outputs [draft] Oct 28, 2019
@joostjager joostjager force-pushed the joostjager:anchor-outputs branch 3 times, most recently from 27217d8 to c963d54 Oct 28, 2019
@joostjager joostjager changed the title Anchor outputs [draft] Anchor outputs Oct 28, 2019

ariard left a comment

On update_fee, I lean toward removing it. One hurdle to using CPFP is that you now have to operate a pool of ready-to-use UTXOs to feed your CPFP, its size scaling with the number of open channels (worst case, all channels close at the same time and you can't share a UTXO, at least with the current version of package relay). Even if we are going to get better at CPFP management over time, you need a basic version of it for anchor outputs to be useful today. And if that works, you no longer need update_fee, which is also a safety mechanism for getting the unilateral commitment tx confirmed (mutual closing is covered by its own negotiation phase) but a less reliable one, since it's a source of unexpected unilateral closes due to fee negotiation disagreements. It could be argued that it has a smaller on-chain footprint than CPFP, but in the case of a unilateral close you will very likely need another wave of transactions to clean up pending HTLCs anyway.

On anchor outputs and updating all the scripts with OP_CSV, I lean toward having only one output that is spendable by both parties and avoiding OP_CSV spreading everywhere. IMO, from the mempool's viewpoint there is no such thing as an attacker "sticking" low-feerate children: if they got in, their feerate was above the required rollingMinimumFeeRate at insertion time, and they can't be distinguished from a savvy honest user reusing the CPFP output for an honest chain of transactions (like a tree of CPFPs to bump multiple commitment txs). Worst case you can still use the carve-out and replace the branch of transactions; yes, you will have to pay to cover their bandwidth, but that may be something you have to do anyway if your first, honest CPFP doesn't work as expected. I think it's worth digging more into mempool policy before updating most of the LN scripts.


A node which broadcasts an HTLC-success or HTLC-timeout transaction for a commitment transaction for which `option_anchor_outputs` applies:
- MUST contribute sufficient fee to ensure timely inclusion in a block.
- MAY combine it with other transactions.

ariard (Contributor), Nov 4, 2019

nit: you could make this more precise, e.g. "MAY combine it with other non-HTLC-timeout/HTLC-success transactions", as all of them are going to pre-sign the same index value

joostjager (Author), Nov 4, 2019

Not sure what you mean by this. Every htlc has its own output index on the commitment transaction?

ariard (Contributor), Nov 4, 2019

You can't combine multiple HTLC-success or HTLC-timeout txs, as they are all going to sign input index and output index 0. If a transaction is dual- or multi-party signed, you can't aggregate it freely without new interactivity.

joostjager (Author), Nov 5, 2019

I didn't realize that. That means that an additional fee input needs to be added to every HTLC tx, assuming there is not much else to batch it with? That needs many UTXOs if there are 400-something HTLCs outstanding :(

halseth (Contributor), Nov 6, 2019

I am no expert here, but I took a look at the sighash algorithm, and it seems like this should work with segwit, since it no longer commits to the index:

SINGLE does not commit to the input index. When ANYONECANPAY is not set, the semantics are unchanged since hashPrevouts and outpoint together implictly commit to the input index. When SINGLE is used with ANYONECANPAY, omission of the index commitment allows permutation of the input-output pairs, as long as each pair is located at an equivalent index.

https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki#specification

ariard (Contributor), Nov 6, 2019

Ah, you're right, thanks. It wouldn't have been possible before SegWit and the new transaction digest algorithm, but that's cool and should save us a lot of UTXOs!

Here are the lines of concern: https://github.com/bitcoin/bitcoin/blob/e65b4160e99fa86d6d840dce75cae29334afd1f2/src/script/interpreter.cpp#L1257

So we can aggregate txs, but I think we should be careful in the case of an RBF bump, e.g. not adding a new unconfirmed input to avoid choking on BIP 125 rule 2 (though I think we can drop an input-output pair and broadcast it on its own if its timelock expires soon?)

@@ -36,6 +36,7 @@ The following `globalfeatures` bits are currently assigned by this specification
| Bits | Name | Description | Link |
|------|-------------------|--------------------------------------------------------------------|---------------------------------------|
| 8/9 | `var_onion_optin` | This node requires/supports variable-length routing onion payloads | [Routing Onion Specification][bolt04] |
| 14/15| `option_anchor_outputs` | Anchor outputs | [BOLT #3](03-transactions.md) |

ariard (Contributor), Nov 4, 2019

nit: the name could be option_bring_your_fees or option_dynamic_fee_adjustement, to encompass the changes to the HTLC txs and underscore the use of CPFP.

joostjager (Author), Nov 4, 2019

Yes, something to think about. There are still fees attached to the txs, though possibly only minimal ones. The real changes are the anchor outputs plus the HTLC sighash change. Dynamic fee adjustment sounds like something we already have with update_fee.

ariard (Contributor), Nov 4, 2019

option_just_in_time_fees? Naming is hard...

joostjager (Author), Nov 5, 2019

Yes, it is hard. The fundamental decision is probably whether we describe what the option does (option_anchor_outputs), what it enables (option_bring_your_own_fees) or why you want it (option_always_confirm).

We keep the brainstorm open...

TheBlueMatt (Collaborator) commented Nov 4, 2019 (comment minimized, content not shown)

@joostjager joostjager force-pushed the joostjager:anchor-outputs branch from c963d54 to 573d8e2 Nov 4, 2019

joostjager commented Nov 4, 2019

> To further Antoine's comments a bit, update_fee needs to go (preferably now, though maybe there's an argument to make it separate?). It's not possible to implement it in any way sensibly. Not only are you speculating on future fee rates, but you're speculating on future fee rates at the time you need them. Even worse, you're somehow expecting to negotiate an impossible-to-speculate value with a counterparty you don't trust.

How can we remove update_fee now if there is no package relay yet? We'd then rely on the feerate_kw negotiated during channel open to be sufficient for the lifetime of the channel. How much safety buffer would it take? My thought is that with update_fee we at least have an option to update the fee rate later if required for relay.


joostjager commented Nov 4, 2019

> Worst case you can still use the carve-out and replace the branch of transactions; yes, you will have to pay to cover their bandwidth, but that may be something you have to do anyway if your first, honest CPFP doesn't work as expected. I think it's worth digging more into mempool policy before updating most of the LN scripts.

So you're saying that replacing the branch is not bad enough to justify always creating two anchor outputs and adding the CSV timelocks to everything?

What is your view on this @TheBlueMatt, as the author of bitcoin/bitcoin#15681?

With just a single output that is initially spendable only by both parties, there is still the question of how an outsider learns the spending script to sweep up abandoned UTXOs. On the ML, @rustyrussell proposed in https://lists.linuxfoundation.org/pipermail/lightning-dev/2019-October/002270.html to have one anchor reveal the key for the other anchor. In the case of just a single anchor, something else is required.

Or don't worry about the abandoned utxos and see it as a motivation to fix (or not fix) this issue on the base layer?


TheBlueMatt commented Nov 4, 2019

> How can we remove update_fee now if there is no package relay yet?

Right, I think this is totally a valid point, but I'm not sure it's really worth it. If you open a channel today, the best a node can do is predict future fees on the basis of current fees, implying an update_fee of close to 1 sat/byte (maybe with some headroom). This is obviously bogus, and has the same issues as a static value (namely, that fees may spike and then you're screwed), with all the complication of figuring out how to negotiate fees.

The only reason I see for keeping update_fee is if we make it effectively static but use it as a cheap upgrade mechanism - current nodes can use a static update_fee of roughly the mempool minfee max that we've seen at any point, and then future nodes can, via an update, switch to minimum relay fee plus a delta once we have package relay on the network. Still, I'm not sure exactly what the semantics of that are, so it may make sense to just skip it.


ariard commented Nov 4, 2019

> So you're saying that replacing the branch is not bad enough to justify always creating two anchor outputs and adding the CSV timelocks to everything?

If the mempool works as expected when RBFing the subpackage (though it would be best to write a test against the worst-case scenario to assert it), the only advantage of adding the CSV timelock is saving on bandwidth replacement fees (the replacement just has to be higher, with no minimum delta required). Adding OP_CSV to every script is a new cost too, and that one is borne by everyone, every time, not just in the worst case of someone building a low-feerate branch on the anchor output.

Can the output be anyone-can-spend? If a third party wants to pay my fees, that's nice, and if it's too low I can still overbid on it. I think this third party could be a watchtower, without introducing a new key-management burden. I don't see how we can fix it at the base layer if outputs can't be garbage-collected by anyone.

> How can we remove update_fee now if there is no package relay yet?

Yes, even if it's half-broken, that's the only way to have a coarse but good-enough feerate to get into the mempool and then trigger a carve-out to un-stick things. My hope was to get package relay into 0.20 before this proposal gets specced out and deployed. At the end of the day, are people fine with needing an option_deprecated_update_fee in a year?


harding commented Nov 4, 2019

> If the mempool works as expected when RBFing the subpackage (though it would be best to write a test against the worst-case scenario to assert it), the only advantage of adding the CSV timelock is saving on bandwidth replacement fees

I don't think that's correct. If there are at least two HTLCs that the closing party can spend immediately, then he can spend from the first HTLC to create a max-sized child transaction and spend from the second HTLC to create a carve-out. Then a third output that's anyone-can-spend is useless because it can't be relayed until the parent transaction confirms (and because it can't be relayed, it can't be RBF'd either).

I think you need the "1 CSV" delays on all non-anchor outputs.


ariard commented Nov 4, 2019

> If there are at least two HTLCs that the closing party can spend immediately, then he can spend from the first HTLC to create a max-sized child transaction and spend from the second HTLC to create a carve-out.

Ah yes, I think you're right. And there is no way to force SIGHASH_SINGLE|SIGHASH_ANYONECANPAY on these HTLC-spending transactions so that we could RBF them against the attacker's will?

So we're stuck between:

  • adding a 1 CSV delay to all non-anchor outputs and a single anyone-can-spend output where the parties can CPFP, bidding against each other?
  • modifying the mempool rules to allow a per-output carve-out (but too DoSy, as you described on the mailing list?)

Going further, @harding, can't we tag the carve-out output to be sure it's the anyone-can-spend one, RBFable at will? Like restricting the carve-out to only the first output in the transaction; since commitment txs are dual-party-signed, it can't be tweaked.

TheBlueMatt (Collaborator) commented Nov 4, 2019 (comment minimized, content not shown)


harding commented Nov 4, 2019

> Going further, @harding, can't we tag the carve-out output to be sure it's the anyone-can-spend one, RBFable at will? Like restricting the carve-out to only the first output in the transaction; since commitment txs are dual-party-signed, it can't be tweaked.

There might be a better way to do carve-out or something else that hasn't been discussed yet, but it took carve-out over a year to be discussed, PR'd, reviewed, merged, and deployed (assuming nothing goes horribly wrong and it doesn't get yanked last minute from Bitcoin Core 0.19), so I think it's probably best to focus this PR on figuring out how to best use carve-out as-is. After that, my personal preference in LN-supporting work on full node relay policy would be getting package relay deployed so that bring-your-own-fees can work pretty much as intended (package relay would support many other things too).

@joostjager joostjager force-pushed the joostjager:anchor-outputs branch 2 times, most recently from 62a7a28 to 24be2a2 Nov 5, 2019

halseth commented Nov 5, 2019

> Going further, @harding, can't we tag the carve-out output to be sure it's the anyone-can-spend one, RBFable at will? Like restricting the carve-out to only the first output in the transaction; since commitment txs are dual-party-signed, it can't be tweaked.

> There might be a better way to do carve-out or something else that hasn't been discussed yet, but it took carve-out over a year to be discussed, PR'd, reviewed, merged, and deployed (assuming nothing goes horribly wrong and it doesn't get yanked last minute from Bitcoin Core 0.19), so I think it's probably best to focus this PR on figuring out how to best use carve-out as-is. After that, my personal preference in LN-supporting work on full node relay policy would be getting package relay deployed so that bring-your-own-fees can work pretty much as intended (package relay would support many other things too).

Would it be safe to add the restriction "if output is OP_TRUE then only allow one (small) unconfirmed descendant"?

I cannot imagine such outputs being useful in any situation where you want to chain a large number of transactions. But as you say these changes take a lot of time to get through, so I think we should stick with what we have for now.


halseth commented Nov 5, 2019

I think the discussion so far can be summarized as:

  • keep update_fee as is. We don't really have a way of statically determining a commitment fee that ensures proper propagation, so it will need to be dynamic somehow. We can explore removing it if package relay becomes a reality, at which point a zero commitment fee probably would make the most sense.

  • Add 1 CSV to all non-anchor outputs (to_remote gets a real delay value). This ensures the attacker cannot use the HTLC outputs to reach the max descendant limit, including using the carve-out exception.

For the anchor construction we have two options that stick out:

  1. Add one OP_TRUE anchor that anyone can spend from. This is simple, since we don't need any new key negotiation, and it can be spent by anyone watching the mempool. It will also be cheap to spend, since it requires no signature, maybe making it more likely that the economic incentives for cleaning it up will align. Note that the attack vector where an attacker can make us pay a high fee to get the commitment confirmed still remains, by attaching a large, low-feerate descendant whose absolute fee we must pay to replace it.
  2. Add two anchors that can be spent unconfirmed by each of the channel participants, or by anyone after confirmation. As suggested by @harding, reusing the funding keys here seems like a clever solution to allow anyone to compute the scripts after confirmation. These will be a bit more expensive to spend, but we will never need to replace a large, low-feerate descendant of an attacker to bump the effective commitment feerate. Note that this also requires the carve-out exception in bitcoind 0.19 to be effective, otherwise the attacker could always attach max_descendants to its anchor, making it impossible for us to add our own spend.

I lean towards 1), mostly because it is simpler, and this is most likely not the last time we will change the commitment format.


harding commented Nov 5, 2019

@halseth

> Would it be safe to add the restriction "if output is OP_TRUE then only allow one (small) unconfirmed descendant"?

I don't see any fundamental safety problems in a few minutes of thinking about it[1], but if you do that, I think you pretty much guarantee that every CPFP of the commitment transaction will pay at least 10,000 sat in fees:

  • I assume the "(small) unconfirmed descendant" will be up to 10,000 vbytes in size, like current carve-out.
  • Griefer Mallory is a third party who hates LN. As soon as her node sees an OP_TRUE anchor output enter the mempool in a commitment transaction, she spends it to a 10,000 vbyte transaction at the minimum relay fee (1 sat/vbyte = 10,000 sats).
  • Alice is a party to the channel who needs the commitment tx to confirm soon, so she RBFs the OP_TRUE spend. She has to pay a small amount over the original 10,000 sats to cover her additional relay cost (BIP125 rule 4) even if she creates a much smaller child transaction. If Alice was the channel initiator, that's on top of whatever minimum fees she paid for the commitment transaction itself.
  • Mallory is now paying nothing despite having tried this attack.

If the rolling minimum feerates increase, Mallory can always ensure that the honest participants have to pay at least small_tx_size * current_min_feerate to use the OP_TRUE output. If Mallory is clever and you don't use the 1 OP_CSV construction, she can inspect any spends from the commit tx, find out when the earliest HTLC expires, and create her large-sized spend with a feerate that's as high as possible but that has only a small chance of confirming before that expiration; this could inflate the fees Alice needs to pay well beyond 10,000 sat.

At the low feerates we've seen for the past 18 months, I suspect the proposed approach with carve-out outputs that can only be spent by the channel participants while the commitment tx is in the mempool is probably cheaper than a minimum of 10,000 sats plus the cost of the commit tx (but I reckon that y'all probably have better data and intuition about that than I do).

Note that, for your anchor construction (2) described above, you'd probably be using P2WSH(OP_TRUE) if you wanted to deploy it today. That has only the current package size limit of 100,000 vbytes, so Mallory could force Alice to pay a minimum of about 100,000 sats per commitment. That seems pretty high to me compared to the small fixed costs of the currently-proposed carve-out construction.

[1] I think the following conditions would need to be explicit: (1) the OP_TRUE would have to be in the scriptPubKey (i.e. no P2SH(OP_TRUE) or P2WSH(OP_TRUE)); what we might call "bare OP_TRUE". The reason is that Bitcoin Core currently relays the P2SH/P2WSH variants, so changing how it handles those cases could disrupt someone's existing use of them, so we'd at least have to be much more careful. Bitcoin Core doesn't currently relay bare OP_TRUE (AFAIK), so allowing it to be relayed at least in some cases is a loosening of the relay rules and shouldn't affect anyone's existing operations. (2) If the transaction has multiple OP_TRUE outputs, only one should be allowed to be spent as an exception to other mempool rules (like carve-out), otherwise you allow a huge amount of mempool spam; I think you were implying this, but I wanted to make it explicit.
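
To make the footnote's distinction concrete, these are the two encodings being compared (a sketch, not text from the PR):

```
# "bare" OP_TRUE: the scriptPubKey is the single opcode; per the footnote,
# Bitcoin Core does not currently relay this template
OP_TRUE

# P2WSH(OP_TRUE): relayed today; the scriptPubKey commits to the one-byte
# OP_TRUE witness script, which is revealed in the witness when spending
OP_0 <SHA256(OP_TRUE)>
```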


TheBlueMatt commented Nov 5, 2019

> I think the discussion so far can be summarized as: keep update_fee as is.

Huh? I'm very confused. I don't see anyone in this thread suggesting we keep update_fee? I pointed out, above, that we don't have a way of dynamically determining a commitment transaction fee that will propagate either, so I think that argument is largely bogus.


halseth commented Nov 6, 2019

> I think the discussion so far can be summarized as: keep update_fee as is.

> Huh? I'm very confused. I don't see anyone in this thread suggesting we keep update_fee? I pointed out, above, that we don't have a way of dynamically determining a commitment transaction fee that will propagate either, so I think that argument is largely bogus.

The argument is that since we cannot pick a static value that ensures propagation, we need a way to change the commitment fee. The path of least resistance seems to be sticking with the status quo. It is not making the problem worse, and negotiating a fee that propagates sounds like a strictly easier problem than negotiating one that readily confirms.

Removing update_fee altogether cannot be done without package relay, so it is better to revisit this when that's a reality.


halseth commented Nov 6, 2019

> Note that, for your anchor construction (2) described above, you'd probably be using P2WSH(OP_TRUE) if you wanted to deploy it today. That has only the current package size limit of 100,000 vbytes, so Mallory could force Alice to pay a minimum of about 100,000 sats per commitment. That seems pretty high to me compared to the small fixed costs of the currently-proposed carve-out construction.

(I'm assuming you meant (1)): Good observation. My assumption here was that it is unlikely that somebody would perform this attack, and in that unlikely case we would just pay the higher fee. But as you point out, the attacker is not restricted to the channel counterparties; any node watching the mempool can do it, so someone could trivially make life sad for every LN user.

(Now I'm leaning towards construction (2) from above)


TheBlueMatt commented Nov 6, 2019

> The argument is that since we cannot pick a static value that ensures propagation, we need a way to change the commitment fee.

Hmm, can you respond to my above point? Specifically, I don't believe that we can pick a value to ensure propagation dynamically, either, so the argument is somewhat moot.


ariard commented Nov 6, 2019

> I don't see any fundamental safety problems in a few minutes of thinking about it[1], but if you do that, I think you pretty much guarantee that every CPFP of the commitment transaction will pay at least 10,000 sat in fees

It seems the only safety measure we have to prevent a channel party or a third party from inflating any CPFP at no cost is to circuit-break each party's ability to spend into different outputs. We could still avoid the OP_CSV cost by restricting the carve-out to some pattern matching, like an OP_TRUE or a tagged output (implemented here to see how it looks), but that would still be a hack. A long-term solution for the N-party case would be to force any tx chaining on the tagged carve-out output to pay a competitive feerate...

So two anchor outputs, anyone-can-spend after a delay so they can be swept, seems to be the best solution we can engineer for now.

On update_fee, it seems to be an insurance mechanism against a random event (B's disappearance or maliciousness) based on some non-determinism (mempool feerate fluctuation). Combining anchor outputs and update_fee would seem a benefit in case you get your bet right enough to get into the mempool but not into the confirmation tier (though if you get your bet right enough to get into the mempool, you should also be in the confirmation tier, assuming your fee estimator isn't broken). Its reliability is fuzzy and doesn't provide the safety level we would have with package relay, so I'm not sure it's a good idea to keep it and let people think that, thanks to the new commitment format, we are now secure...


halseth commented Nov 7, 2019

> The argument is that since we cannot pick a static value that ensures propagation, we need a way to change the commitment fee.

> Hmm, can you respond to my above point? Specifically, I don't believe that we can pick a value to ensure propagation dynamically, either, so the argument is somewhat moot.

Yep, we agree that any value (being dynamic or static) cannot give any guarantee about future propagation. And since we don't have a solution, I argue there's nothing we can do at this point. (hence no change to update_fee)

@joostjager joostjager force-pushed the joostjager:anchor-outputs branch from 24be2a2 to 0b21232 Nov 7, 2019

joostjager commented Nov 7, 2019

PR updated with:

  • Both parties will get the same to_self_delay, which is the max of the values that they proposed in the open_channel and accept_channel messages.
  • The anchor will be locked to the funding pubkey and become anyone-can-spend after 10 blocks (sketched below).

Any opinions on a good lock time value? It seems we could also make it much longer. That would give channel parties more time to sweep their anchor and reclaim those funds. Whether that is economical depends on their dust limit and the fee market, of course.
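
Assuming the anchor keeps the shape sketched earlier in the thread (a funding-key branch plus an anyone-can-spend branch behind a CSV), the two spend paths would look roughly as follows; the witness layouts are illustrative, not the PR text.

```
# spend by the channel party that holds the funding key (any time):
<funding_signature> <anchor_witness_script>

# spend by anyone, at least 10 blocks after the commitment tx confirms
# (the spending input's nSequence must be set to 10 or more):
<> <anchor_witness_script>
```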

TheBlueMatt (Collaborator) commented Nov 7, 2019 (comment minimized, content not shown)

Co-authored-by: Rusty Russell <rusty@rustcorp.com.au>
@joostjager joostjager force-pushed the joostjager:anchor-outputs branch from 0b21232 to 393ba61 Nov 7, 2019

ariard commented Nov 7, 2019

> Any opinions on a good lock time value? It seems we could also make it much longer. That would give channel parties more time to sweep their anchor and reclaim those funds.

What about a far bigger value, like a week? That would give you more room to aggregate your outputs, or even to use some kind of service that aggregates the outputs and pays you back the anchor value over LN. Garbage collection isn't a matter of a few days anyway.


halseth commented Nov 8, 2019

> > Yep, we agree that any value (being dynamic or static) cannot give any guarantee about future propagation. And since we don't have a solution, I argue there's nothing we can do at this point.
>
> Right, so we agree we should drop update_fee, since it largely does nothing? I seem to be missing part of your argument here.
>
> On Nov 7, 2019, at 06:38, Johan T. Halseth @.***> wrote: Yep, we agree that any value (being dynamic or static) cannot give any guarantee about future propagation. And since we don't have a solution, I argue there's nothing we can do at this point.

Ah, you probably didn't see the edited version of my comment, where I added "(hence no change to update_fee)"

TheBlueMatt (Collaborator) commented Nov 8, 2019 (comment minimized, content not shown)
