Segwit commitment structure is not midstate-compatible #8307

Open
maaku opened this Issue Jul 6, 2016 · 11 comments

@maaku
Contributor

maaku commented Jul 6, 2016

The issue is on this line of code:

if (block.vtx[0].vout[o].scriptPubKey.size() >= 38 &&
    block.vtx[0].vout[o].scriptPubKey[0] == OP_RETURN &&
    block.vtx[0].vout[o].scriptPubKey[1] == 0x24 &&
    block.vtx[0].vout[o].scriptPubKey[2] == 0xaa &&
    block.vtx[0].vout[o].scriptPubKey[3] == 0x21 &&
    block.vtx[0].vout[o].scriptPubKey[4] == 0xa9 &&
    block.vtx[0].vout[o].scriptPubKey[5] == 0xed) {

Instead of checking for the commitment at the start of an output, it should be placed at the end of the very last output. Then midstate compression can be used to reduce the size of the coinbase tx for proofs that require access to the witness commitment.
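To make the current rule concrete, here is a hedged Python sketch of the check quoted above (an illustration only, not Bitcoin Core code; `find_commitment`, the list-of-scripts input, and the simplified serialization are all assumptions for this example):

```python
# Hypothetical illustration, not Bitcoin Core code: mirrors the
# start-of-output check in the C++ above, in simplified form.
OP_RETURN = 0x6a
COMMITMENT_HEADER = bytes.fromhex("aa21a9ed")

def find_commitment(outputs):
    """Return the 32-byte commitment from the last output whose
    scriptPubKey starts with OP_RETURN, a 0x24 push, and the header."""
    commitment = None
    for script in outputs:  # each element: a scriptPubKey as raw bytes
        if (len(script) >= 38
                and script[0] == OP_RETURN
                and script[1] == 0x24
                and script[2:6] == COMMITMENT_HEADER):
            commitment = script[6:38]  # later matches override earlier ones
    return commitment
```

Note that the marker is matched at the *start* of an output's scriptPubKey, which is the property the rest of this thread turns on.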

@luke-jr
Member

luke-jr commented Jul 6, 2016

Miners can put it in the very last output as the only data. BIP 145 requires this for GBT. Surely that is good enough?

@gmaxwell
Member

gmaxwell commented Jul 6, 2016

It cannot be required to be in the last output without making some existing hardware unable to mine.

@maaku
Contributor

maaku commented Jul 6, 2016

@luke-jr No, the consensus rule needs to be that it is the very last data of the last output; otherwise there's no way to know (without seeing the full transaction) that one is not inside a pushdata that merely looks like an output. This means a miner could make the last output [realCommitment ... padding ... fakeCommitment], and then after midstate compression all the validator of the proof sees is [padding ... fakeCommitment].
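The attack described above can be sketched in a few lines of Python (a hypothetical illustration; the byte layout is simplified and does not follow real transaction serialization, and `suffix_scan` is an assumed stand-in for what a naive midstate-proof verifier might do):

```python
# Hypothetical illustration of the ambiguity maaku describes.
HEADER = bytes.fromhex("aa21a9ed")

real_commitment = b"\x11" * 32
fake_commitment = b"\x22" * 32

# Last output: the real commitment starts the scriptPubKey, followed by
# a pushdata whose payload contains a lookalike header and commitment.
last_output = (
    bytes([0x6a, 0x24]) + HEADER + real_commitment    # genuine, starts the output
    + bytes([0x4c, 0x24]) + HEADER + fake_commitment  # OP_PUSHDATA1 payload
)

def suffix_scan(suffix):
    """What a naive midstate-proof verifier might do: take the last
    marker it can see and read the 32 bytes after it."""
    i = suffix.rfind(HEADER)
    return suffix[i + 4 : i + 36] if i != -1 else None

# The full script shows the genuine commitment first...
assert last_output.find(HEADER) == 2
# ...but a midstate proof that omits the prefix reveals only the fake one.
assert suffix_scan(last_output[-38:]) == fake_commitment
```

The verifier with only the midstate and trailing bytes has no way to tell whether the marker it sees begins an output or sits inside a push.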

To make it concrete and real, suppose Namecoin switched to using the segwit nonce to carry its aux header for merged mining. A single miner could commit to two different blocks this way.

EDIT: The reason the end-of-last-output trick works is that the validator then checks that all that follows is the 4-byte locktime, which is information it does have.
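Under the proposed end-of-last-output rule, that check might look like the following sketch (hedged: `verify_tail` and the flat byte layout are assumptions for illustration; real coinbase serialization has more structure):

```python
# Hypothetical sketch of the proposed rule; serialization simplified.
HEADER = bytes.fromhex("aa21a9ed")

def verify_tail(tail, expected_commitment, locktime):
    """Verify that the serialization ends with header || commitment ||
    4-byte locktime. A midstate-proof verifier knows the locktime, so it
    can confirm that nothing else follows the commitment."""
    if len(tail) < 40:
        return False
    return (tail[-40:-36] == HEADER
            and tail[-36:-4] == expected_commitment
            and tail[-4:] == locktime.to_bytes(4, "little"))
```

Because the commitment is pinned against the known 4-byte locktime at the very end, there is no room left for a trailing lookalike.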

@sipa
Member

sipa commented Jul 6, 2016

@maaku
Contributor

maaku commented Jul 6, 2016

Pick any other example, focusing on namecoin is a red herring.

TXO commitments. You could commit to two different TXO structures.

@sipa
Member

sipa commented Jul 6, 2016

@petertodd
Contributor

petertodd commented Jul 6, 2016

In addition to @gmaxwell's point, p2pool also makes use of midstate-compressed witness commitments, so we'd have to change p2pool if we also made use of them.

Given that SPV can't make use of witness data right now, I have no qualms about the current design; at worst proving a witness commitment requires 1MB worth of additional data, which isn't a disaster, and average case would be just a few KB; full nodes of course already have the relevant data as @sipa mentions. In the future we'll probably eventually have a hard-fork that moves the commitment right next to the block header anyway.

edit: fixed typo

@maaku
Contributor

maaku commented Jul 7, 2016

@sipa The last commitment which starts an output. In a midstate-utilizing proof one cannot be certain whether the proof given is at the start of an output or inside a push statement later in that output. Full validators need the whole coinbase, yes, but proof validators may not. They might only be interested in the commitment itself, in which case it is a huge drag to include the whole coinbase and its kilobytes of coinbase payouts, the data for which might be larger than the rest of the proof. To illustrate, it would also be acceptable to make it a consensus rule that after the actual commitment (the last one to start an output), the serialization contains no further instance of the byte sequence that indicates a commitment. Then the proof checker just does a linear scan of the remaining bytes for the segwit commitment marker and fails if it sees two. But this is layer-violating in the extreme and a pain to enforce.
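The linear-scan fallback described above could be sketched like this (hedged: `scan_suffix` is a hypothetical name, and the input is assumed to be the raw bytes following the midstate):

```python
# Hypothetical sketch of the alternative rule maaku outlines: reject a
# proof if the marker bytes appear more than once after the midstate.
MARKER = bytes.fromhex("aa21a9ed")

def scan_suffix(suffix):
    """Linear scan of the visible bytes; None means the proof fails,
    either because no marker is present or because two are."""
    first = suffix.find(MARKER)
    if first == -1:
        return None
    if suffix.find(MARKER, first + 1) != -1:
        return None  # ambiguous: a second marker in what we can see
    return suffix[first + 4 : first + 36]
```

This is enforceable, but as the comment says, it leaks serialization details into every layer that handles proofs.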

@gmaxwell It is important to consider which hardware is broken. The only hardware I can think of that is affected by this change bakes coinbase-construction logic into firmware so as to force the owner of the hardware to pay a share to the manufacturer, even though the manufacturer recouped its costs on sale and isn't paying any portion of the electricity. That is firmware-enforced rent collection, and I'd be very surprised if bricking hardware with that 'feature' is something we'd feel badly about. Are there other devices that would be broken by this change?

@petertodd I have lost track of how many times (8? 9?) p2pool has hardforked its share chain. That's really not a roadblock.

@petertodd
Contributor

petertodd commented Jul 7, 2016

Actually, it's worth noting that we aren't breaking any hardware if we continue to allow non-segwit-containing blocks to leave off the commitment... however, that leads to subtle issues around peers withholding commitments. Probably not an issue with the current way we use the commitment, but it could be in the future.

Anyway, what specific use cases do you think a 1 MB worst-case (and few-KB average-case) proof size holds back in the next year or two?

@sipa
Member

sipa commented Jul 7, 2016

It is also possible to add an additional commitment location for the same data in a future softfork that is more proof-compatible, if wanted.

@petertodd
Contributor

petertodd commented Jul 7, 2016

@sipa Quite true, and we need to eventually fix the design of the commitments anyway to fix the upgrade data propagation problem properly.
