[WIP] [CONSENSUS] Moving Maximum Block Size Consensus Rule #1542

Open · wants to merge 2 commits into base: dev
5 participants
@Greg-Griffith
Contributor

Greg-Griffith commented Dec 31, 2018

This PR is for the moving maximum block size proposal by Imaginaryusername.
The proposal can be found here: bitcoincashorg/bitcoincash.org#149

Todo:

  • implement fork logic for activation
  • discussion is needed on how to treat excessive blocks if this activates
  • unit tests
@imaginaryusername

imaginaryusername commented Jan 1, 2019

Thanks! I've edited the original proposal to include transition periods that accommodate pruned nodes, but if anyone can think of simpler ways it'll be awesome.

EB/AD logic can even stay the same with minimal disruption to how this works: it can always act as an "escape hatch" if the node operator or miner wants to exit the mechanism.

@GitCash send 0.1 BCH to @Greg-Griffith

@GitCash

GitCash commented Jan 1, 2019

Hey Greg-Griffith, user imaginaryusername tipped you 100000 bits in Bitcoin Cash (~$16.20).

Click here to claim it!

You can also add the "thumbs down" reaction to imaginaryusername's comment above to prevent future tips.

```cpp
}
else
{
    uint64_t avg_year = year_total / year_blocks.size();
```

@sickpig

sickpig Jan 2, 2019

Collaborator

@imaginaryusername's original spec proposes to use the median rather than the average. Quoting the relevant part of the spec (under the Rationale section):

Using the moving average over the look back period to calculate the maximum block size consensus rule would allow individual miners to influence this consensus rule in a way that is not proportional to their historical hash rate on the network. In other words, a single miner could have a disproportionately large influence over the block size limit by building very large or very small blocks.

@imaginaryusername

imaginaryusername Jan 2, 2019

Thanks, exactly - the median is much more robust than the average.

@Greg-Griffith

Greg-Griffith Jan 3, 2019

Contributor

Oh, my mistake. I will fix that.
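A minimal sketch of the median-based fix under discussion, not the PR's actual code; the helper name, the `std::vector` container, and the sorting approach are all assumptions for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Median of the recorded block sizes over the look-back window.
// Unlike the average, a single miner's very large or very small
// blocks barely move the result. Takes a copy so sorting does not
// disturb the caller's history.
uint64_t MedianBlockSize(std::vector<uint64_t> sizes)
{
    assert(!sizes.empty());
    std::sort(sizes.begin(), sizes.end());
    const size_t mid = sizes.size() / 2;
    if (sizes.size() % 2 == 1)
        return sizes[mid];
    // Even count: take the mean of the two middle elements.
    return (sizes[mid - 1] + sizes[mid]) / 2;
}
```

For a large window, `std::nth_element` would avoid the full sort, but the simple version keeps the consensus-critical logic easy to audit.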

@sickpig

Collaborator

sickpig commented Jan 2, 2019

So, if I understand correctly: to solve the pruned-node issue we just store the sizes of incoming blocks while the pruned node is online, and wait until enough data has accumulated to compute the correct current max block size.

During the period needed to gather that data, the EB/AD logic lets the pruned node follow the right chain.

Am I correct?

@imaginaryusername

imaginaryusername commented Jan 2, 2019

@sickpig My proposed method is to mandate that pruned nodes collect and store blocksize data during the transition period, then activate once everyone presumably has the data they need to follow consensus.

Non-pruning nodes don't need to worry (they have the historic blocks at hand to calculate block sizes from), and fresh nodes don't need to worry whether pruned or unpruned (they have to download the blocks before pruning anyway, at least until UTXO commitments are implemented), so this is for existing pruned nodes that wish to keep running through activation.

Basically, by flagging the first activation date as "you must start keeping blocksize data after this date", we make sure everyone is on the same page.
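The two-stage flagging scheme described above could be sketched as a pair of checks; the constant names and timestamp values here are illustrative examples, not anything from the spec:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical two-stage activation: after FLAG_TIME every node,
// pruned or not, must start recording incoming block sizes, so the
// full look-back window exists by the time the rule takes effect
// at ACTIVATION_TIME. Both values are placeholder examples.
static const int64_t FLAG_TIME = 1557921600;        // stage 1: start keeping data
static const int64_t ACTIVATION_TIME = 1573819200;  // stage 2: rule takes effect

bool MustRecordBlockSizes(int64_t blockTime)
{
    return blockTime >= FLAG_TIME;
}

bool MovingCapActive(int64_t blockTime)
{
    return blockTime >= ACTIVATION_TIME;
}
```

The gap between the two timestamps would need to be at least as long as the look-back window, so that a pruned node recording from stage 1 has a complete window at stage 2.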

As for EB/AD, I personally think it can stay at 32MB through the transition/"data gathering" period (in your words) to make sure nodes follow consensus; after the transition, the default can snap to one of the following behaviors in relation to the adjustable blockcap:

  1. Limit larger: EB is only considered if the adjustable blockcap exceeds the EB number. In that case the EB can snap to something really large by default unless the operator specifies it (say, 1TB).
  2. Limit smaller: EB is only considered if the adjustable blockcap is smaller than the EB number. EB can stay at 32MB in that case.
  3. Deactivated: EB/AD is deactivated unless the operator turns it back on, in which case it replaces the adjustable cap.

I personally favor 2), as it also fits elegantly into the adjustable blockcap scheme (it basically fulfills the 32MB-floor part of the spec), but other people might have different ideas and I'd love to hear them.
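Option 2 ("limit smaller") reduces to a one-line rule: EB only binds when the adjustable cap is below it, so it acts as a floor. A hedged sketch, with hypothetical names:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Option 2 sketch: EB is considered only when the adjustable
// (moving-median) blockcap is smaller than it, i.e. the effective
// limit is max(adjustableCap, EB). With EB left at its 32 MB
// default, this gives exactly the 32 MB floor from the spec.
static const uint64_t DEFAULT_EB = 32000000;  // 32 MB

uint64_t EffectiveBlockLimit(uint64_t adjustableCap,
                             uint64_t excessiveBlockSize = DEFAULT_EB)
{
    return std::max(adjustableCap, excessiveBlockSize);
}
```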

@sickpig sickpig added the consensus label Jan 3, 2019

@Greg-Griffith

Contributor

Greg-Griffith commented Jan 4, 2019

I did implement the logic for the 3.2MB minimum block size, starting with a history of all-3.2MB blocks, as suggested by @gandrewstone on the spec page and seconded by @imaginaryusername and @sickpig.
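A minimal sketch of that seeding behavior under the assumptions above (the function names, the `std::deque` window, and the clamp-at-floor detail are illustrative, not the PR's actual implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <deque>

// 3.2 MB floor: the window starts out filled with 3.2 MB entries,
// and recorded sizes are clamped up to the floor, so the computed
// cap can never fall below 3.2 MB.
static const uint64_t MIN_TRACKED_BLOCK_SIZE = 3200000;  // 3.2 MB

std::deque<uint64_t> SeedBlockSizeHistory(size_t windowBlocks)
{
    // Begin with a history of all-3.2MB blocks.
    return std::deque<uint64_t>(windowBlocks, MIN_TRACKED_BLOCK_SIZE);
}

void RecordBlockSize(std::deque<uint64_t> &history, uint64_t blockSize)
{
    // Slide the fixed-size window: drop the oldest entry and record
    // the new block, counting anything below the floor as 3.2 MB.
    history.pop_front();
    history.push_back(std::max(blockSize, MIN_TRACKED_BLOCK_SIZE));
}
```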

@imaginaryusername

imaginaryusername commented Jan 4, 2019

Updating the spec tonight.

@AndrewClifford

Member

AndrewClifford commented Jan 6, 2019

Great progress on this.
Much as it pains me to say it (I was as enthusiastic as anyone about EC for permanently removing the maxblocksize constant), I think that trying to shoehorn it into the MMBS logic just adds complexity.
Option 3 to deactivate EB/AD when the MMBS is activated seems compelling. #1542 (comment)
