Unless I'm missing something, SigChecks are only protective in preventing abuse of `OP_CODESEPARATOR`. With consensus enforcement of the `NULLFAIL` rule (since Nov. 2017), any signature check that triggers validation will be cacheable, and therefore every new signature check requires at least 65 bytes of signature data. The signing serialization components that make up the preimage – `hashPrevouts`, `hashUtxos`, `hashSequence`, `hashOutputs` – are also cacheable across input evaluations within the same transaction, so only the `coveredBytecode` is a meaningful determiner of relevant preimage size.
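For anyone reviewing this, here's a minimal sketch of that caching behavior (TypeScript; the `Transaction` shape, helper names, and field order are illustrative simplifications – length prefixes and sighash-dependent fields are elided, and this isn't any particular implementation's API):

```ts
// Sketch: the hashed signing-serialization components are computed once per
// transaction; only `coveredBytecode` (plus small per-input fields) varies
// across signature checks, so it alone drives marginal preimage cost.
import { createHash } from 'crypto';

const sha256 = (data: Buffer): Buffer =>
  createHash('sha256').update(data).digest();
const hash256 = (data: Buffer): Buffer => sha256(sha256(data));

interface Input {
  outpoint: Buffer; // 36-byte outpoint (txid + index)
  sequence: Buffer; // 4-byte sequence number
  utxo: Buffer;     // serialized UTXO being spent
}

interface Transaction {
  version: Buffer;  // 4 bytes
  inputs: Input[];
  outputs: Buffer;  // serialized outputs
  locktime: Buffer; // 4 bytes
}

// Computed once, reused for every input evaluation in the transaction:
const computeTransactionCache = (tx: Transaction) => ({
  hashPrevouts: hash256(Buffer.concat(tx.inputs.map((i) => i.outpoint))),
  hashUtxos: hash256(Buffer.concat(tx.inputs.map((i) => i.utxo))),
  hashSequence: hash256(Buffer.concat(tx.inputs.map((i) => i.sequence))),
  hashOutputs: hash256(tx.outputs),
});

// Per signature check, only `coveredBytecode` is unbounded in size:
const buildPreimage = (
  tx: Transaction,
  cache: ReturnType<typeof computeTransactionCache>,
  inputIndex: number,
  coveredBytecode: Buffer,
  value: Buffer,       // 8-byte satoshi value
  sighashType: Buffer, // 4-byte sighash flags
): Buffer =>
  Buffer.concat([
    tx.version,
    cache.hashPrevouts,
    cache.hashUtxos,
    cache.hashSequence,
    tx.inputs[inputIndex].outpoint,
    coveredBytecode, // the only meaningfully-variable component per check
    value,
    tx.inputs[inputIndex].sequence,
    cache.hashOutputs,
    tx.locktime,
    sighashType,
  ]);
```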
So: only `OP_CODESEPARATOR` can create excessive resource usage by requiring validators to create new preimages for each signature in the evaluation. This is a potentially useful feature in smart contracts, as it allows a single public key to commit to the location within a contract at which different signatures can be included. (I probably wouldn't have included `OP_CODESEPARATOR` as a feature of the VM if we were starting from scratch, but I can't deny that because it already exists, it's worth preserving – it remains the most efficient way of accomplishing that commitment behavior, and would remain valuable in covenant and non-interactive contract use cases even if we e.g. added MAST to BCH.)
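To make the commitment behavior concrete, a simplified sketch (real evaluation tracks the last *executed* separator and parses push data; this just slices bytes):

```ts
// Sketch: `coveredBytecode` begins after the most recently executed
// OP_CODESEPARATOR, so each newly-executed separator forces validators to
// hash a fresh preimage – and a signature produced for one location cannot
// be replayed at another.
const OP_CODESEPARATOR = 0xab;

const coveredBytecodeAfterSeparator = (
  lockingBytecode: Uint8Array,
  lastExecutedSeparatorIndex: number, // byte index of last executed 0xab; -1 if none
): Uint8Array => lockingBytecode.subarray(lastExecutedSeparatorIndex + 1);

// Two separator positions yield two different preimages for the same key:
const bytecode = Uint8Array.from([
  0x51, OP_CODESEPARATOR, // OP_1, then first separator
  0x52, OP_CODESEPARATOR, // OP_2, then second separator
  0x53,                   // OP_3
]);
const coveredAtFirst = coveredBytecodeAfterSeparator(bytecode, 1);  // [0x52, 0xab, 0x53]
const coveredAtSecond = coveredBytecodeAfterSeparator(bytecode, 3); // [0x53]
```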
Following this discussion, @zander suggests just limiting `OP_CODESEPARATOR` directly, which I'm starting to think is the best approach.
I need to do a bit more review to identify reasonable limits. Simply carrying over equivalent limits from `SigChecks` is probably best, with a minor increase to remedy the 2020 issue mentioned in #8.
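One hypothetical shape for such a limit, mirroring the SigChecks-style per-input density rule (the constants here are placeholders – picking the real numbers is exactly the review still to be done):

```ts
// Hypothetical sketch: cap executed OP_CODESEPARATORs per input as a function
// of unlocking bytecode length, analogous to the SigChecks density limit.
// ALLOWANCE and BYTES_PER_USE are placeholder values, not proposed constants.
const ALLOWANCE = 60;     // placeholder: free allowance per input, in bytes
const BYTES_PER_USE = 43; // placeholder: unlocking bytes required per use

const codeSeparatorLimit = (unlockingBytecodeLength: number): number =>
  Math.floor((unlockingBytecodeLength + ALLOWANCE) / BYTES_PER_USE);

const withinLimit = (
  executedCodeSeparators: number,
  unlockingBytecodeLength: number,
): boolean =>
  executedCodeSeparators <= codeSeparatorLimit(unlockingBytecodeLength);
```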