
Make P2SH redeem script "IF .. PUSH <x> ELSE ... PUSH <y> ENDIF CHECKMULTISIG .. " standard #26348

Closed
wants to merge 12 commits

Conversation

SergioDemianLerner
Contributor

To classify a transaction as standard or non-standard, Bitcoin Core estimates the number of signature checks a particular script can potentially request. A maximum of 15 signature checks per P2SH input is allowed. The function GetSigOpCount()
counts the maximum number of signature checks performed by a script.
For each CHECKSIG or CHECKSIGVERIFY found in a script, one is added to the signature-check counter. For multisigs, the maximum number of checks depends on the number of signatories. When CHECKMULTISIG is executed, this number is at the top of the stack. However, to avoid executing the script to obtain the top-of-stack value, Bitcoin Core performs an estimation based on static analysis of the script, which returns only an upper bound. This is what GetSigOpCount(true) does. The static analysis is very rudimentary, however, and it overestimates the signature checks for certain common scripts.

How GetSigOpCount() works

GetSigOpCount() assumes that the number of signers is pushed onto the stack immediately before the CHECKMULTISIG opcode. If the previous opcode is not a push opcode, then this function assumes the worst case for each CHECKMULTISIG found, which is 20 signatures (defined by MAX_PUBKEYS_PER_MULTISIG). The problem is that this constant is greater than the maximum number of checks allowed per P2SH redeem script, which means that any standard P2SH script that uses CHECKMULTISIG must specify the number of signers with a push opcode immediately before it.
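As an illustration, here is a rough Python model of the static analysis described above. This is a hypothetical, simplified sketch: the opcode byte values follow Bitcoin's script encoding, but the real GetSigOpCount() also walks pushdata payloads, which this model omits.

```python
# Simplified, hypothetical model of Bitcoin Core's GetSigOpCount(fAccurate=true).
# Opcode byte values follow Bitcoin's script encoding; pushdata payloads are omitted.
OP_1, OP_16 = 0x51, 0x60
OP_CHECKSIG, OP_CHECKSIGVERIFY = 0xac, 0xad
OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY = 0xae, 0xaf
OP_ENDIF = 0x68
MAX_PUBKEYS_PER_MULTISIG = 20

def get_sig_op_count(opcodes, accurate=True):
    """Upper-bound the sigops in a script given as a list of opcode byte values."""
    count = 0
    last = None
    for op in opcodes:
        if op in (OP_CHECKSIG, OP_CHECKSIGVERIFY):
            count += 1
        elif op in (OP_CHECKMULTISIG, OP_CHECKMULTISIGVERIFY):
            if accurate and last is not None and OP_1 <= last <= OP_16:
                count += last - (OP_1 - 1)  # preceding OP_1..OP_16 push gives the key count
            else:
                count += MAX_PUBKEYS_PER_MULTISIG  # worst case: 20
        last = op
    return count

# "OP_3 ... OP_5 CHECKMULTISIG": the preceding OP_5 push yields an estimate of 5
assert get_sig_op_count([0x53, 0x55, OP_CHECKMULTISIG]) == 5
# ENDIF immediately before CHECKMULTISIG is not a push, so the worst case (20) is charged
assert get_sig_op_count([0x55, OP_ENDIF, OP_CHECKMULTISIG]) == 20
```

The second assertion is exactly the situation of the RSK script below: the opcode preceding CHECKMULTISIG is ENDIF, so the estimate jumps to 20 and exceeds the 15-sigop standardness limit.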

The bug in the RSK sidechain

To add a time-locked emergency multisig to RSK powpeg, the RSK community, with the help of RSK core developers, migrated the multisig script that holds the funds of the peg from a traditional P2SH multisig to the following transaction redeem script:

OP_NOTIF 
	OP_PUSHNUM_M
	OP_PUSHBYTES_33 pubkey1
	...
	OP_PUSHBYTES_33 pubkeyN
	OP_PUSHNUM_N 
OP_ELSE 
	OP_PUSHBYTES_3 <time-lock-value>
	OP_CSV 
	OP_DROP 
	OP_PUSHNUM_3 
	OP_PUSHBYTES_33 emergencyPubkey1
	…
	OP_PUSHBYTES_33 emergencyPubkey4
	OP_PUSHNUM_4 
OP_ENDIF 
OP_CHECKMULTISIG

We see that there are two script paths and, to reduce the script size, a single CHECKMULTISIG is used for both paths, separating the signer count from the CHECKMULTISIG opcode. This script worked on testnet, because testnet lacks the standardness checks performed on mainnet.
The Liquid sidechain uses a very similar script, but with segwit addresses.

We can see here that GetSigOpCount() will find that ENDIF is not a push opcode and will assume the number of signature checks is MAX_PUBKEYS_PER_MULTISIG.

Motivation for Changing Bitcoin's standard rules

Since having two execution paths for two different multisigs, one of them usually time-locked, is a very common use case, Bitcoin users can benefit from relaxing the standardness rule and making two-path multisigs standard.
The alternative way of having two-path multisigs is to add a CHECKMULTISIG at the end of every path. However, this reduces the maximum number of functionaries of the first (non-time-locked) multisig, since the sum of both must not exceed 15.

Another benefit is that this change separates the new GetStandardSigOpCount() function used for standardness rules from the old GetSigOpCount() method used in consensus code (both with fAccurate==true and fAccurate==false). This is very important because consensus rules should not be mixed with non-consensus rules in a dangerous way, as they are today.

An additional benefit for the RSK community is that if Bitcoin Core implements this improvement, the RSK community avoids preparing and activating a hard-fork to unblock the funds in the peg.

I include additional unit tests to show that:

  • Only a single level of nesting is supported. "IF ELSE ENDIF CHECKMULTISIG" is supported but "IF IF ENDIF ENDIF CHECKMULTISIG" is not supported.
  • There must be only two paths ("IF ELSE ELSE ELSE ENDIF CHECKMULTISIG" is not supported).
  • The change does not affect any other script standard rules, apart from the two-paths multisig.
  • The change does not modify consensus rules.
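The two-path counting idea can be sketched as follows. This is a hypothetical illustration of the approach, not the PR's actual code; consistent with the bullets above, it handles only the single-level, two-path IF/ELSE/ENDIF CHECKMULTISIG shape.

```python
# Hypothetical sketch of a two-path-aware standard sigop count: for the pattern
# IF/NOTIF <branch-a> ELSE <branch-b> ENDIF CHECKMULTISIG, charge the larger of
# the two branches' key counts instead of the worst case of 20.
OP_IF, OP_NOTIF, OP_ELSE, OP_ENDIF = 0x63, 0x64, 0x67, 0x68
OP_1, OP_16, OP_CHECKMULTISIG = 0x51, 0x60, 0xae
MAX_PUBKEYS_PER_MULTISIG = 20

def branch_key_count(branch):
    """Key count a branch leaves on top of the stack: its final OP_1..OP_16 push."""
    last = branch[-1] if branch else None
    if last is not None and OP_1 <= last <= OP_16:
        return last - (OP_1 - 1)
    return MAX_PUBKEYS_PER_MULTISIG  # fall back to the worst case

def two_path_multisig_sigops(opcodes):
    """Sigops for IF/NOTIF .. ELSE .. ENDIF CHECKMULTISIG; worst case otherwise."""
    try:
        i = next(k for k, op in enumerate(opcodes) if op in (OP_IF, OP_NOTIF))
        e = opcodes.index(OP_ELSE, i)
        f = opcodes.index(OP_ENDIF, e)
    except (StopIteration, ValueError):
        return MAX_PUBKEYS_PER_MULTISIG
    if f + 1 >= len(opcodes) or opcodes[f + 1] != OP_CHECKMULTISIG:
        return MAX_PUBKEYS_PER_MULTISIG
    return max(branch_key_count(opcodes[i + 1:e]), branch_key_count(opcodes[e + 1:f]))

# NOTIF <...keys... OP_9> ELSE <...keys... OP_4> ENDIF CHECKMULTISIG -> charged 9, not 20
assert two_path_multisig_sigops([OP_NOTIF, 0x59, OP_ELSE, 0x54, OP_ENDIF, OP_CHECKMULTISIG]) == 9
```

Because neither branch is executed at this point, the maximum of the two branch counts is the safe upper bound for the single shared CHECKMULTISIG.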

@luke-jr
Member

luke-jr commented Oct 20, 2022

Isn't this already policy-accepted for P2SH, Segwit, and Taproot? Is there a benefit to having it raw?

@darosior
Member

Interesting Script edge case!

You must have balanced the cost of proposing this standardness rule change, and making a PR touching consensus code, with switching to using Segwit scripts (either v0 or v1) which don't have this limitation (and have other advantages). Is there a particular reason why you can't migrate to Segwit?

Motivation for Changing Bitcoin's standard rules
Since having two execution paths for two different multisigs, one of them usually time-locked, is a very common use case, Bitcoin users can benefit from relaxing the standardness rule and making two-path multisigs standard.
The alternative way of having two-path multisigs is to add a CHECKMULTISIG at the end of every path. However, this reduces the maximum number of functionaries of the first (non-time-locked) multisig, since the sum of both must not exceed 15.

Bitcoin users would probably just do that under P2WSH or Taproot nowadays. And that would be standard. But even under P2SH context it's far from being the only alternative. You could also just count public keys, for instance (untested):

IF
  <pk_1> CHECKSIG SWAP
  <pk_2> CHECKSIG ADD SWAP
  ...
  <pk_n> CHECKSIG ADD
  <m> EQUAL
ELSE
  <timelock> CSV DROP
  <pk_1'> CHECKSIG SWAP
  <pk_2'> CHECKSIG ADD SWAP
  <pk_3'> CHECKSIG ADD SWAP
  <pk_4'> CHECKSIG ADD
  <3> EQUAL
ENDIF

For a small number of pubkeys this is standard under P2SH, and the next limit you'd hit would probably be MAX_SCRIPT_ELEMENT_SIZE, which limits the size of the redeem Script.

Writing Scripts by hand is a huge footgun, and we now have tooling to do that safely. The implementation of Miniscript in Bitcoin Core does not support P2SH context. But other implementations do if you really can't migrate to at least Segwit v0.

@DrahtBot
Contributor

DrahtBot commented Oct 20, 2022

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Reviews

See the guideline for information on the review process.

Type Reviewers
Concept ACK JeremyRubin

If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #13525 (policy: Report reason inputs are nonstandard from AreInputsStandard by Empact)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@SergioDemianLerner
Contributor Author

SergioDemianLerner commented Oct 20, 2022

You must have balanced the cost of proposing this standardness rule change, and making a PR touching consensus code, with switching to using Segwit scripts (either v0 or v1) which don't have this limitation (and have other advantages)

The cost for the RSK community to modify RSK to switch to segwit is very high, as it is a hard-fork. All the code that produces and interprets Bitcoin transactions is consensus code in RSK.
Our current highest priority is to unblock the funds in the peg (currently one or more mining pools will temporarily help RSK to deliver the non-standard transactions into blocks, but this is far from ideal, as it affects RSK decentralization).
We cannot migrate to segwit as an emergency hard-fork. It would need months of development, months of testing, more months for activation on testnet, etc. We have already created a patch to migrate to the following redeem script:

OP_NOTIF 
	...
	OP_PUSHNUM_N 
	OP_CHECKMULTISIG
OP_ELSE 
	…
	OP_PUSHNUM_4 
	OP_CHECKMULTISIG
OP_ENDIF 

But this is still a hard-fork, and the full upgrade can take several weeks, considering the public review stage and all further stages.

The RSK community mid-term plan is to migrate to segwit.

Regarding touching the consensus code, I firmly believe that GetSigOpCount() should be split into two functions: one called only by consensus code and one called only by AreInputsStandard(). This is obvious to me, since the standardness rules can change over time but the consensus rules cannot, and there is a real risk that a tweak of the standardness rules ends up hiding an unintended consensus change. I was surprised to discover that GetSigOpCount() is called by consensus code with both fAccurate==true and fAccurate==false, so basically that function should be set in stone.

Therefore, even if you are in doubt whether the addition of two-path multisigs is a benefit, I hope you still see the benefit of splitting GetSigOpCount() into two initially mirrored functions with different names, and we can do this in a separate PR.

@SergioDemianLerner
Contributor Author

Isn't this already policy-accepted for P2SH, Segwit, and Taproot? Is there a benefit to having it raw?

It's not accepted in P2SH. That's the problem.

@darosior
Member

I understand the urgency for the RSK community. To be clear i'm not opposing relaxing this standardness rule on its own, i'm just questioning whether it's worth it.

You mention the mid-term plan is for RSK to migrate to Segwit. But proposing a standardness rule change does not seem like a short-term plan to me: even in the best case scenario that this PR is getting enough interest, review (i haven't looked at the code yet but it's currently failing CI) and gets eventually merged before the next release branch-off, then that a release gets shipped with the new standardness rule and that at least part of the nodes and hashrate upgrade to this new rule. That's what, 1 year at least before your transactions get relayed? Is it reasonable to rely on direct communication with mining pools for that long? And won't the switch to Segwit happen before that?

@SergioDemianLerner
Contributor Author

I've thought more about @darosior comment.
Honestly, a review of this patch by Bitcoin Core developers is currently important to the Bitcoin miners that will process (and help) the Rootstock community by mining the non-standard transactions required to migrate the funds to an updated script, so that there is zero chance they create an invalid block.

Therefore, even if the patch is not merged into Bitcoin Core, by reviewing the patch and giving your thumbs up that the code is sound, you're also helping the Rootstock community.

@SergioDemianLerner
Contributor Author

Regarding the CI failure, I fixed it. But this brings up an interesting topic to discuss (separate from this patch):

Invalid scripts are obviously detected at some later moment, and transactions containing invalid scripts are never accepted into the mempool.
However, in case GetStandardSigOpCount() detects an invalid script (i.e. one starting with ENDIF), it's difficult to decide what GetStandardSigOpCount() should return. If it returns 0, you let a script containing too many checksigs be executed by the node. This is bad, but it gives the node the opportunity to ban the peer that sent the invalid transaction. If it returns a high value, you avoid heavy computations, but at the same time you do not get the chance to penalize the peer that sent the bad transaction.

To summarize, on an invalid script detected:

option 1) return 20 -> avoid heavy computations, do not penalize
option 2) return 0 -> let heavy computations be performed, penalize
option 3) continue but revert to how the old GetSigOpCount() worked

My code currently does 3, from the point the script is detected as invalid.

@maflcko
Member

maflcko commented Nov 2, 2022

Tend to agree with darosior, see also the comment by achow101 in #20178 (comment) .

So if it is not a goal to get this patch merged, it might be easier to comment out the return false in the policy check and send that patch to a miner along with your transaction.

@SergioDemianLerner
Contributor Author

But by changing our script to IF ... <n> CHECKMULTISIG ELSE ... <q> CHECKMULTISIG ENDIF we have to reduce the number of signatories from n=15 to n=11 (because the emergency multisig consumes 4 (q=4)).

This patch enables RSK to go back to having 15 signatories in one year.

@JeremyRubin
Contributor

Tend to agree with darosior, see also the comment by achow101 in #20178 (comment) .

So if it is not a goal to get this patch merged, it might be easier to comment out the return false in the policy check and send that patch to a miner along with your transaction.

you misunderstand: if core does not merge this patch, a sufficient number of miners will change to a proprietary ruleset to support rootstock. it would be a mistake, IMO, for core to not try to match the policy miners will adopt in a case like this.

@JeremyRubin
Contributor

strong concept ACK, will need to review the particulars of the patch.

@instagibbs
Member

I think @darosior has already said what I would say. The 15-sigop limit seems to be the limit due to legacy p2sh (unless you put all keys behind hashes, haven't worked all that out), and future tooling subsumes this discussion post-segwit. So we're left with the immediate issue at hand.

Another alternative would be to ask RSK miners to mine these transactions directly?

@instagibbs
Member

Only a single level of nesting is supported. "IF ELSE ENDIF CHECKMULTISIG" is supported but "IF IF ENDIF ENDIF CHECKMULTISIG" is not supported.
There must be only two paths ("IF ELSE ELSE ELSE ENDIF CHECKMULTISIG" is not supported).

Was this chosen for a particular reason? Most of the patch complexity appears to be this restriction.

@luke-jr
Member

luke-jr commented Nov 3, 2022

if core does not merge this patch, a sufficient number of miners will change to a proprietary ruleset to support rootstock. it would be a mistake, IMO, for core to not try to match the policy miners will adopt in a case like this.

Strongly disagree. Miners should aim to match nodes, not vice-versa.

@JeremyRubin
Contributor

i don't think that's particularly true to how the mempool is used today. for example, fee estimation uses transactions that miners likely have to estimate how much fee to pay to be in the percentile desired by the user. if the user nodes do not have transactions that miners have, then the users fee estimation will be off, which is not incentive aligned for those users. so the users will want to -- especially in "benign" cases like this -- want to match miner policy.

@SergioDemianLerner
Contributor Author

Only a single level of nesting is supported. "IF ELSE ENDIF CHECKMULTISIG" is supported but "IF IF ENDIF ENDIF CHECKMULTISIG" is not supported.
There must be only two paths ("IF ELSE ELSE ELSE ENDIF CHECKMULTISIG" is not supported).

Was this chosen for a particular reason? Most of the patch complexity appears to be this restriction.

On the contrary, making it work for arbitrary IF-nesting requires using an additional stack, which brings the same complexities as normal script execution. Also, AFAIK the policy regarding standard transaction rules has always been to enable the minimum subset of standard scripts to cover existing use cases, as an additional security barrier. Right?

@SergioDemianLerner
Contributor Author

if core does not merge this patch, a sufficient number of miners will change to a proprietary ruleset to support rootstock. it would be a mistake, IMO, for core to not try to match the policy miners will adopt in a case like this.

Strongly disagree. Miners should aim to match nodes, not vice-versa.

I appreciate the interesting discussion regarding who establishes the mempool policy, and I think this is a hot topic right now, but in this particular case that political discussion will just obfuscate the simple fact that enabling a two-path multisig is a simple, concrete, rational, logical and necessary step to support the federated-networks use case. The existence of the Rootstock sidechain is proof that the two-path multisig script is useful.

Temporarily some miners will apply this patch, and then the Rootstock community will switch to an alternate script (that supports fewer signatories) so that miners can later revert the patch.
But the expectation in the Rootstock community is that Rootstock can go back to the previous script in ~1 year if most Bitcoin nodes and miners consider this script standard. Switching to segwit IS an alternative, but the Rootstock core devs are extremely cautious about upgrading this core functionality of Rootstock, as the risks are high.

@instagibbs
Member

Apologies, I actually missed the crux of the solution here. I now know why the complexity is here.

Also, AFAIK the policy regarding standard transaction rules has always been to enable the minimum subset of standard scripts to cover existing use cases, as an additional security barrier. Right?

There is a counter-balance to making sure the new check isn't too complex, as there is a maintenance trade-off, as well as a cost if we get the correctness wrong and it shuts off other scripts' standardness. If something were extremely simple and relaxed things a bit further, it would be worth consideration.

e.g. we could relax MAX_P2SH_SIGOPS to exactly MAX_PUBKEYS_PER_MULTISIG. I'll leave it to you as the bitcoin script expert to tell me how bad that could be for legacy performance in the adversarial case :)

@SergioDemianLerner
Contributor Author

SergioDemianLerner commented Nov 14, 2022

What is the current standard limit on the number of sigops in a P2WSH redeem script?
It seems there is no per-script limit, but only a per-transaction limit.

@instagibbs
Member

static constexpr unsigned int MAX_STANDARD_TX_SIGOPS_COST{MAX_BLOCK_SIGOPS_COST/5};

@SergioDemianLerner
Contributor Author

Theoretically, if every opcode in the script is a checksig, you can do 10k checks per script in P2WSH vs just 15 in P2SH. It seems that the only reason for such a gap in cost is quadratic-hashing attacks. Segwit checksigs do not force script re-hashing, and that's why the limit is much higher.

Therefore we can assume that MAX_P2SH_SIGOPS being low is a protection against quadratic-hashing attacks. When this limit was introduced (2014, see commit here and rationale here), the problem was already known (I reported it in 2013).

Therefore, changing MAX_P2SH_SIGOPS to match exactly MAX_PUBKEYS_PER_MULTISIG, even if trivial, slightly degrades Bitcoin's DoS security.

The patch I propose is a little more complex but does not reduce the DoS security of Bitcoin in any way.

@ajtowns
Contributor

ajtowns commented Dec 10, 2022

Therefore we can assume that MAX_P2SH_SIGOPS being low is a protection against quadratic-hashing attacks.

I think MAX_P2SH_SIGOPS is just floor(MAX_SCRIPT_ELEMENT_SIZE/CPubKey::COMPRESSED_SIZE) = floor(520/33) = floor(15.75..) -- so the only way you could have more sigops than that is if you're doing multiple signatures by the same key (via OP_DUP) or if you're letting the signer provide keys as part of the p2sh stack.

As a result, I don't see how you get more than 10 normal keys in the script you've presented in the first place: the NOTIF .. ELSE <h> CSV DROP 3 <e1> <e2> <e3> <e4> 4 ENDIF CHECKMULTISIG part of your script will consume 148 bytes, leaving only 372 bytes for <m> <key1> .. <keyn> <n> which for n=10 consumes 342 bytes, and for n=11 would already overflow at 376 bytes. I believe that would cause a SCRIPT_ERR_PUSH_SIZE on the first run through of EvalScript, prior to the SCRIPT_VERIFY_P2SH run.
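The byte accounting in the paragraph above can be checked mechanically, under the assumptions stated there: 148 bytes for the fixed NOTIF/ELSE/ENDIF/CHECKMULTISIG part, 34 bytes per compressed-key push (1 length byte plus 33 key bytes), and 1 byte each for the <m> and <n> small-integer pushes.

```python
# Reproducing the redeem-script size arithmetic under the stated assumptions.
MAX_SCRIPT_ELEMENT_SIZE = 520   # consensus limit on a stack element (and thus a P2SH redeemScript)
COMPRESSED_PUBKEY_SIZE = 33
FIXED_PART = 148                # NOTIF .. ELSE <h> CSV DROP 3 <e1>..<e4> 4 ENDIF CHECKMULTISIG

def multisig_part_size(n):
    """Bytes consumed by <m> <key1> .. <keyn> <n>: two 1-byte pushes plus n key pushes."""
    return 2 + n * (1 + COMPRESSED_PUBKEY_SIZE)

# MAX_P2SH_SIGOPS is just floor(520 / 33) = 15
assert MAX_SCRIPT_ELEMENT_SIZE // COMPRESSED_PUBKEY_SIZE == 15
# n = 10 fits (148 + 342 = 490 bytes); n = 11 already overflows (148 + 376 = 524 bytes)
assert multisig_part_size(10) == 342 and FIXED_PART + multisig_part_size(10) <= 520
assert multisig_part_size(11) == 376 and FIXED_PART + multisig_part_size(11) > 520
```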

Playing with miniscript suggests a solution:

IF
  DUP HASH160 <hash(K1)> EQUALVERIFY CHECKSIG TOALT
  DUP HASH160 <hash(K2)> EQUALVERIFY CHECKSIG FROMALT ADD TOALT
  ..
  DUP HASH160 <hash(Kn)> EQUALVERIFY CHECKSIG FROMALT ADD
  <m>
ELSE
  DUP HASH160 <hash(E1)> EQUALVERIFY CHECKSIG TOALT
  DUP HASH160 <hash(E2)> EQUALVERIFY CHECKSIG FROMALT ADD TOALT
  DUP HASH160 <hash(E3)> EQUALVERIFY CHECKSIG FROMALT ADD TOALT
  DUP HASH160 <hash(E4)> EQUALVERIFY CHECKSIG FROMALT ADD
  <d> CSV DROP
  3
ENDIF
EQUAL

which would let you have n=14 via a 508 byte script (n=15 gives a 536 byte script). (If your emergency keys are also valid for regular usage, you could increase that up to n=18).

In general, the pkh approach would allow up to 20 distinct keys in a p2sh script (eg, if you wanted a 20-of-20 multisig), absent the MAX_P2SH_SIGOPS restriction. That makes me lean towards thinking it could make sense to increase MAX_P2SH_SIGOPS to 20.

Therefore, changing MAX_P2SH_SIGOPS to match exactly MAX_PUBKEYS_PER_MULTISIG, even if trivial, slightly degrades Bitcoin's DoS security.

I don't think this is meaningfully true? You get O(N**2) hashing due to N being the number of signatures across the entire transaction (and also proportional to the overall size of the transaction), not just for a single input; so, assuming you're constrained to spending p2sh inputs, a 100kB tx would go from perhaps 1455 signatures (144MB of hashed data) to 1480 signatures (146MB of hashed data). (If you're able to construct bare multisig inputs, then you can do 1560 sigs in a standard tx (155MB of hashed data), though the setup for your attack will then be visible).

I'm assuming a p2sh script of <n> <pubkey> {DUP}*n <n> CHECKMULTISIG for n equals 15 or 20 and that the signatures set R=G/2, to generate those numbers; I haven't checked how legacy hashing works in detail to be sure that actually generates n hashes over the entire tx rather than having some things cached. Presuming you can't reuse signatures, and each signature is at least 57 bytes then a 100kB tx can't contain more than 1754 signatures, providing an upper bound of 175MB of hashing, presuming the interpreter is clever enough to only do one hash per signature.
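The upper bound at the end of the paragraph above is simple arithmetic, under the same assumptions stated there: signatures of at least 57 bytes that cannot be reused, and one sighash over the whole transaction per signature.

```python
# Upper bound on legacy sighash work for a standard-size transaction.
TX_SIZE = 100_000      # bytes; standard transaction size limit
MIN_SIG_SIZE = 57      # assumed minimum size of a distinct signature

max_sigs = TX_SIZE // MIN_SIG_SIZE   # at most this many distinct signatures fit
hashed_bytes = max_sigs * TX_SIZE    # one whole-tx hash per signature

assert max_sigs == 1754
assert hashed_bytes == 175_400_000   # ~175 MB of hashed data
```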

In the worst conceivable case, where you somehow achieve 15 sigops with a 107 byte script (34 bytes to push a pubkey, 58 bytes to push a tiny exploitable signature, and 15 opcodes to generate 15 sigops from that), that would result in a 100kB tx requiring 1GB of hashing, while having 20 opcodes generate 20 sigops would result in 1.3GB of hashing (ie a 29% increase, vs the 33% increase of 20 vs 15).

If you're a miner, you're not constrained to 100kB txs, but in that case you're also not constrained by MAX_P2SH_SIGOPS either.

So in conclusion:

  • I think that the IF .. n CHECKMULTISIG ELSE .. 4 CHECKMULTISIG ENDIF approach is strictly superior to the original proposal, as neither is able to support n>11.
  • For a short-term workaround, if I were a miner, I think the lower complexity of just increasing MAX_P2SH_SIGOPS from 15 to 20 is more appealing than running this patch.
  • I think RSK switching to segwit or taproot makes more sense than spending much development effort here
  • If RSK or others really want to do n>11 via p2sh and not p2wsh/p2tr, then increasing MAX_P2SH_SIGOPS from 15 to 20 in general seems plausible to me, but it's not at all clear it's worth the effort of either being sure that it doesn't make quadratic hashing more problematic, or improving the interpreter's caching to prevent that from being a problem.

@maflcko
Member

maflcko commented Dec 10, 2022

Related: #2256

@JeremyRubin
Contributor

I think RSK switching to segwit or taproot makes more sense than spending much development effort here

issue seems to be that there are existing coins stuck because of this.

@ajtowns
Contributor

ajtowns commented Dec 11, 2022

I think RSK switching to segwit or taproot makes more sense than spending much development effort here

issue seems to be that there are existing coins stuck because of this.

The RSK mainnet address on bitcoin is https://mempool.space/address/3DsneJha6CY6X9gU2M9uEc4nSdbYECB4Gh which has no spends so does seem stuck, but I haven't been able to reconstruct the redeemScript that corresponds to that address. The commit_federation tx that updated to that address on the RSK chain seems to be https://explorer.rsk.co/tx/0x83708fd332cbe0f9614dd61d0c95c729aded0b7a94d793f22412493d6f867b43?__ctab=Logs but that just has the address hardcoded in the data as hex, and as far as I've tried, I can't make the newFederationBtcPublicKeys listed there and any of the script templates and emergency keys match up with the 3Dsne address; there's a tool at https://github.com/rsksmart/powpeg-details/ that seems like it should be able to provide the redeem script, but also doesn't seem to produce a match (it doesn't help that there seems to have been a bug where the CSV value was pushed with the wrong endianness in some versions).

That stops me from being able to confirm that increasing MAX_P2SH_SIGOPS actually unsticks these transactions.

If increasing MAX_P2SH_SIGOPS does fix the problem, I think that is the immediate fix, and switching to IF .. CHECKMULTISIG ELSE .. CHECKMULTISIG ENDIF (see RSKIP353) is the short/medium-term solution (hopefully prior to switching to segwit/taproot over the medium/long term).

Some RSK addresses on bitcoin testnet are:

Those at least seem like they match the script template in the way I'd expect.

@ajtowns
Contributor

ajtowns commented Dec 20, 2022

and as far as I've tried, I can't make the newFederationBtcPublicKeys listed there and any of the script templates and emergency keys match up with the 3Dsne address;

Apparently I failed to sort the emergency keys correctly. The redeemScript is 645521020ace50bab1230f8002a0bfe619482af74b338cc9e4c956add228df47e6adae1c210231a395e332dde8688800a0025cccc5771ea1aa874a633b8ab6e5c89d300c7c3621025093f439fb8006fd29ab56605ffec9cdc840d16d2361004e1337a2f86d8bd2db21026b472f7d59d201ff1f540f111b6eb329e071c30a9d23e3d2bcd128fe73dc254c2103250c11be0561b1d7ae168b1f59e39cbc1fd1ba3cf4d2140c1a365b2723a2bf93210357f7ed4c118e581f49cd3b4d9dd1edb4295f4def49d6dcf2faaaaac87a1a0a422103ae72827d25030818c4947a800187b1fbcc33ae751e248ae60094cc989fb880f62103e05bf6002b62651378b1954820539c36ca405cbb778c225395dd9ebff67802992103ecd8af1e93c57a1b8c7f917bd9980af798adeb0205e9687865673353eb041e8d59670350cd00b275532102370a9838e4d15708ad14a104ee5606b36caaaaf739d833e67770ce9fd9b3ec80210257c293086c4d4fe8943deda5f890a37d11bebd140e220faa76258a41d077b4d42103c2660a46aa73078ee6016dee953488566426cf55fc8011edd0085634d75395f92103cd3e383ec6e12719a6c69515e5559bcbe037d0aa24c187e1e26ce932e22ad7b35468ae

So I think that confirms that changing MAX_P2SH_SIGOPS is a sufficient fix, and I've made a patchset which is available at https://github.com/ajtowns/bitcoin/tree/202212-rsk-p2sh-sigops . That only touches policy and test code so is hopefully fairly easy to verify for correctness, and should be a trivial merge on top of either 24.0.1 or master.

I'm not sure if it would make sense to merge in general though, as the risk of expanding the attack surface for quadratic hashing does exist, and the benefits of being able to do weird multisig things in p2sh rather than switching to segwit or taproot seem very small.

Even if it's not merged in core, I think it would be reasonable for miners to run that patch temporarily in order to unstick RSK peg-out transactions, without requiring constant manual intervention.

If a miner really wanted to make sure that they don't introduce additional quadratic hashing risk, then I think you could set things up as follows:

  • public node 1, with MAX_P2SH_SIGOPS=20 and -limitancestorsize=86
  • public node 2, default settings, with MAX_P2SH_SIGOPS unchanged at 15 -- presumably advertised somehow so that RSK folks can manually connect to it and send txs.
  • private node, producing block templates for mining, connected to public node 1 and 2, with MAX_P2SH_SIGOPS=20 and the default -limitancestorsize

Setting limitancestorsize would mean that even if the increase in MAX_P2SH_SIGOPS allows you to do an additional 33% of sighash operations per byte, you're reducing both the bytes that you're hashing and the number of sighash operations to 86%, and 86%*86%*133% < 100%. Past RSK txs seem to be capped at either 50 inputs or 50kB, either of which should work fine with that limitation.
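The inequality above is easy to verify: the 86% factor applies to both the bytes hashed and the number of sighash operations, which together more than offset the ~33% sigops-per-byte increase.

```python
# The two 86% reductions multiply, more than offsetting the 33% sigop increase.
ancestor_size_factor = 0.86   # -limitancestorsize=86 (kB) vs an assumed default of 101
sigop_increase = 20 / 15      # MAX_P2SH_SIGOPS raised from 15 to 20 (~1.33x)

assert ancestor_size_factor * ancestor_size_factor * sigop_increase < 1.0
```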

@ajtowns
Contributor

ajtowns commented Jan 15, 2023

Spends seem to be going through on occasion now, eg: https://mempool.space/tx/b3ade8b5d893da6a76f1977ea27352ab1cac679eb5d950793161607088f01c14 albeit at relatively high fee rates, and the hard fork to a p2sh script that is spendable with current standardness rules is apparently due to activate ~tomorrow https://blog.rsk.co/noticia/rsk-hop-4-0-1-and-4-1-1-are-here-mainnet-versions/

@glozow
Member

glozow commented Jan 16, 2023

Setting limitancestorsize would mean that even if the increase in MAX_P2SH_SIGOPS allows you to do an additional 33% of sighash operations per byte, you're reducing both the bytes that you're hashing and the number of sighash operations to 86%, and 86%*86%*133% < 100%. Past RSK txs seem to be capped at either 50 inputs or 50kB, either of which should work fine with that limitation.

This seems like a safe way to preserve the cap on sighashing while permitting these transactions.

the hard fork to a p2sh script that is spendable with current standardness rules is apparently due to activate ~tomorrow https://blog.rsk.co/noticia/rsk-hop-4-0-1-and-4-1-1-are-here-mainnet-versions/

Am I correct in understanding that, in addition to changing future redeemScripts, the hard fork (namely RSKIP357) increases the window within which RSK will try to get nonstandard migration transactions mined, i.e. RSK still needs some miner to include the nonstandard txns? Hoping to clarify whether the hard fork by itself automatically unblocks all stuck funds, my understanding is it doesn't (?).

@ajtowns
Contributor

ajtowns commented Jan 16, 2023

Am I correct in understanding that, in addition to changing future redeemScripts, the hard fork (namely RSKIP357) increases the window within which RSK will try to get nonstandard migration transactions mined, i.e. RSK still needs some miner to include the nonstandard txns? Hoping to clarify whether the hard fork by itself automatically unblocks all stuck funds, my understanding is it doesn't (?).

My understanding is the "stuck" funds are in 3DsneJha6CY6X9gU2M9uEc4nSdbYECB4Gh (2616 BTC currently [EDIT: not sure how I got this wrong, but it should have been 3486 BTC?]) and need to be moved to whatever the new address is, once the new address is agreed upon. The timeline for that happening once the new address has been determined is increased by ~16x, from ~3.5 days to ~60 days. Once the new address is chosen and the 60 days are up, there shouldn't be any funds left in the 3Dsne address, so no further non-standard transactions should be needed. As I understand it, anyway.

@ajtowns
Contributor

ajtowns commented Feb 2, 2023

The stuck funds have been moved to 35iEoWHfDfEXRQ5ZWM5F6eMsY2Uxrc64YK which has successfully been spent from in tx 67e950218f85cf1c35ffcb0aad2436822b1de96c9d68a17ac8d9a14c4a02a85c (and that tx appears to have at least made it into my mempool when broadcast, prior to being mined). There's still about 0.0134 BTC in the old stuck address; meanwhile the new address now holds about 3476 BTC (funds were unstuck in block 773957, mined by marapool).

So I think this PR can be closed?

@fanquake
Member

fanquake commented Feb 2, 2023

Going to close this for now.

@fanquake fanquake closed this Feb 2, 2023
fanquake added a commit that referenced this pull request Aug 29, 2023
13eb8aa doc: Release notes for testnet defaulting to -acceptnonstdtxn=0 (Anthony Towns)
e1dc15d config: default acceptnonstdtxn=0 on all chains (Anthony Towns)

Pull request description:

  Changes `-acceptnonstxtxn` to default to 0 on testnet, matching the other chains. Allowing non-standard txs on testnet by default contributed to the difficulties RSK described in #26348: "We see that there are two script paths and, to reduce the script size, a single CHECKMULTISIG is used for the two paths, separating the signer count from the CHECKMULTISIG opcode. This script worked on testnet, because it lacks the standard checks performed in Mainnet."

ACKs for top commit:
  MarcoFalke:
    lgtm ACK 13eb8aa
  sipa:
    utACK 13eb8aa
  instagibbs:
    utACK 13eb8aa
  theStack:
    Code-review ACK 13eb8aa

Tree-SHA512: eff7a3f9fc9b94003a730beb96e6f3399bc8b8e93fde4b15f20a11eda61d9a3e076f4423989f98b794b32681abecbc3756a54cd0d37b136e2fb2ffbb47ee7774
@bitcoin bitcoin locked and limited conversation to collaborators Feb 2, 2024