
Consider "max weight" consensus rule for block headers (not just full blocks) #3368

Closed
antiochp opened this issue Jun 27, 2020 · 0 comments · Fixed by #3395

@antiochp (Member)

We have a "max block weight" consensus rule on full blocks.
Weight is a function of the number of inputs, outputs and kernels in the block.
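For reference, a minimal sketch of the existing full-block weight rule. The constants are assumed to mirror grin's core/src/consensus.rs at the time of writing.

```rust
// Sketch of the existing "max block weight" rule. Constants are assumed
// to match grin's core/src/consensus.rs at the time of this issue.
const BLOCK_INPUT_WEIGHT: u64 = 1;
const BLOCK_OUTPUT_WEIGHT: u64 = 21;
const BLOCK_KERNEL_WEIGHT: u64 = 3;
const MAX_BLOCK_WEIGHT: u64 = 40_000;

// Weight is a linear function of input, output and kernel counts.
fn block_weight(num_inputs: u64, num_outputs: u64, num_kernels: u64) -> u64 {
    num_inputs * BLOCK_INPUT_WEIGHT
        + num_outputs * BLOCK_OUTPUT_WEIGHT
        + num_kernels * BLOCK_KERNEL_WEIGHT
}
```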

The proposal is to add a "max weight" consensus rule on block headers as well.
This would allow nodes to identify "too heavy" blocks earlier, without necessarily having to receive the full blocks.
This would help prevent spam and other undesirable behavior during block propagation.

We commit to the output_mmr_size and kernel_mmr_size in block headers, and from these we can determine the number of outputs and kernels in each block. Unfortunately, we cannot determine the number of inputs in a block given just the headers.
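A sketch of deriving those counts from the committed MMR sizes. The leaf-count logic below is a standalone reimplementation of what grin exposes as pmmr::n_leaves; it peels off the largest perfect binary subtree until the size is exhausted.

```rust
// Number of leaves in an MMR of the given size: repeatedly remove the
// largest perfect subtree (2^(h-1) leaves, 2^h - 1 nodes) that fits.
fn n_leaves(mut mmr_size: u64) -> u64 {
    let mut leaves = 0;
    while mmr_size > 0 {
        let mut h = 1;
        while (1u64 << (h + 1)) - 1 <= mmr_size {
            h += 1;
        }
        mmr_size -= (1u64 << h) - 1; // nodes in a subtree with 2^(h-1) leaves
        leaves += 1u64 << (h - 1);
    }
    leaves
}
```

Given two consecutive headers, the block's output count is then n_leaves(header.output_mmr_size) - n_leaves(prev.output_mmr_size), and likewise for kernels.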

One option would be to implement a similar but not identical "max weight" rule for headers, taking only outputs and kernels into consideration. This would still allow headers for "too heavy" blocks to be accepted, but would restrict the composition of "too heavy" blocks. A block containing an excessive number of kernels or outputs would not have a valid header due to the "max weight" rule.
This can be implemented today without any changes to the header structure.
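A minimal sketch of this first option, reusing n_leaves and the weight constants from the sketches above. Treating the unknown input count as zero is the most permissive assumption, so no valid block's header is ever rejected; the function signature and error type are illustrative, not grin's API.

```rust
// Option 1 sketch: header-level weight check using only outputs and
// kernels, since the input count is unknown at the header level.
fn check_header_weight(
    prev_output_mmr_size: u64,
    prev_kernel_mmr_size: u64,
    output_mmr_size: u64,
    kernel_mmr_size: u64,
) -> Result<(), String> {
    let num_outputs = n_leaves(output_mmr_size) - n_leaves(prev_output_mmr_size);
    let num_kernels = n_leaves(kernel_mmr_size) - n_leaves(prev_kernel_mmr_size);
    // num_inputs = 0: the most permissive (and only safe) choice here.
    if block_weight(0, num_outputs, num_kernels) > MAX_BLOCK_WEIGHT {
        return Err("header exceeds max block weight".into());
    }
    Ok(())
}
```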

The other option would be to commit to something in the block header that would allow the number of inputs to be determined. This would need to be a count or size related to either the stxo (spent outputs) or utxo (unspent outputs).
This would allow us to use the same "max weight" consensus rule, applied consistently to both headers and full blocks.
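If such a field existed, the input count would follow arithmetically. A sketch assuming a hypothetical cumulative utxo_size header field (not an existing field):

```rust
// Option 2 sketch: with a hypothetical cumulative `utxo_size` in the
// header, inputs spent by a block = outputs created - net utxo growth.
fn derive_num_inputs(prev_utxo_size: u64, utxo_size: u64, num_outputs: u64) -> u64 {
    // The utxo set can shrink (more spent than created), so go via i64.
    let growth = utxo_size as i64 - prev_utxo_size as i64;
    (num_outputs as i64 - growth) as u64
}
```

With num_inputs recoverable, the existing block_weight check could be applied to headers unchanged.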

The following discussion on Keybase covers this:

Keybase discussion involving @DavidBurkett, @tromp, @antiochp.
keybase://chat/grincoin.teams.node_dev#general/5220

We should probably add consensus rules that limit the number of kernels and outputs per header. I know we have the size rule for blocks already, but technically a 1-week reorg could allow a miner to fill the chain with terabytes of data. Checking the output mmr size and kernel mmr size during header sync could prevent that.

yeah that's actually an interesting idea - we don't need to add any new consensus rules, just apply them to headers in addition to full blocks (and use the mmr sizes as you suggest to determine counts for the block). So we can use the existing consensus size rules.

Yep, we can probably just include that code whenever. It's technically a softfork, but it's so rare we wouldn't have to worry about activation or anything. Just let me know if/when you make the change and I'll do the same.

ahh actually we can determine the number of outputs and kernels given just the sizes in the header, but not the number of inputs (which is annoying). still thinking about this
I guess one option would be to just consider outputs and kernels and check the block does not exceed max weight

the other option is to add a number_of_spent_outputs header field

I propose utxo_size as an alternative to number_of_spent_outputs in the header (if we were to seriously consider adding a new header field) - it's a "global" value, not per block, and consistent with the existing xxx_mmr_size and xxx_mmr_root fields etc.
and derive the number of inputs from this
utxo_size would potentially be useful to have - even if just for a quick view into the size of things over time

number_of_spent_outputs is global too (cumulative)

it's the size of the spent txo mmr if we had one

do you remember the argument against adding a spent txo mmr to fix the utxo commitment issue?

it was significantly more complex to implement vs the utxo bitmap approach we took - not sure if there were others
in terms of sync and p2p msgs and other related stuff
it's subtly different as well, right? thinking about it now - currently we care about the utxo set at a point in time (and do not care about the history in there) but the stxo would be strictly ordered

i see. it defines a total order on (spent) outputs, which bitmap does not.

to check for inputs being unspent, we need to check bitmap
and to update as well
which grows linearly with txo set size
but spent txo mmr is cheap to update, you only keep peaks
and you can choose any utxo set representation you want
which will be sublinear in txo set size
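A minimal sketch of that "keep only the peaks" append: each peak is the root of a perfect subtree, and appending a leaf merges equal-height peaks like a binary counter increment, so updates are O(log n). All names and the combiner below are illustrative, not grin's implementation.

```rust
type Hash = [u8; 32];

// Placeholder combiner; a real MMR would hash the two child roots.
fn hash_pair(left: Hash, right: Hash) -> Hash {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = left[i] ^ right[i].rotate_left(1);
    }
    out
}

struct PeakMmr {
    // (height, root) pairs with strictly decreasing height, left to right.
    peaks: Vec<(u32, Hash)>,
}

impl PeakMmr {
    fn push(&mut self, leaf: Hash) {
        let mut node = (0u32, leaf);
        // Merge while the rightmost peak matches the new node's height.
        while self.peaks.last().map(|p| p.0) == Some(node.0) {
            let (height, left) = self.peaks.pop().unwrap();
            node = (height + 1, hash_pair(left, node.1));
        }
        self.peaks.push(node);
    }
}
```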

depends on your utxo set impl. e.g. pick a radix tree or red black tree. it will all be O(log n) checks and updates
it will be decoupled from header commitment fields
which are only on mmrs

yes, the txo mmr and stxo mmr define the commitments to utxo
and you're free to pick a utxo set representation to check membership
you just need to initialize it when starting up your node or syncing it

i guess you could argue that the bitmap mmr need not really be linear in size

you could still have your arbitrary utxo set representation that's linear in utxo set size, as long as you can efficiently compute bitmap mmr merkle root from it...
so i guess we're good

@antiochp antiochp changed the title Add "max weight" consensus rule for block headers (not just full blocks) Consider "max weight" consensus rule for block headers (not just full blocks) Jun 27, 2020