BIP 102: Increase block size limit to 2MB #6451

Closed · wants to merge 4 commits into master
jgarzik (Contributor) commented Jul 16, 2015

BIP 102 - hard fork increase to 2MB on flag day (currently May 5, 2016)

Specification summary:

  • Increase MAX_BLOCK_SIZE to 2,000,000 bytes at trigger point.
  • Increase max block sigops by similar factor, preserving size/50 formula.
  • Trigger: (1) Block time 00:00:00 on flag day, AND (2) 95% of the last 1,000 blocks have signaled support via IsSuperMajority (ISM).
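A minimal sketch of the two size rules in the summary above. The MaxBlockSize name and the BIP102_FORK_TIME value follow the patch itself; the MaxBlockSigops helper is an assumption based on the size/50 formula described in the summary, not code quoted from the PR:

```cpp
#include <cstdint>

// Flag-day timestamp used by the patch (May 5 2016 00:00:00 UTC).
static const uint64_t BIP102_FORK_TIME = 1462406400;

// 1MB blocks before the flag day, 2MB afterwards.
inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
    return (nBlockTimestamp < BIP102_FORK_TIME) ? 1000000 : 2000000;
}

// Sigops limit preserving the size/50 formula from the 1MB era
// (1,000,000 / 50 = 20,000 sigops; doubles along with the size).
inline unsigned int MaxBlockSigops(uint64_t nBlockTimestamp) {
    return MaxBlockSize(nBlockTimestamp) / 50;
}
```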

src/consensus/consensus.h Outdated
/** The maximum allowed size for a serialized block, in bytes (network rule) */
static const unsigned int MAX_BLOCK_SIZE = 1000000;
inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
// 1MB blocks until 1 March 2016, then 20MB

djpnewton (Contributor), Jul 16, 2015:

comment should be // 1MB blocks until 11 Nov 2015, then 2MB


src/test/block_size_tests.cpp Outdated
//
// Unit test CheckBlock() for conditions around the block size hard fork
//
BOOST_AUTO_TEST_CASE(TwentyMegFork)

djpnewton (Contributor), Jul 16, 2015:

twenty? (also in comments below)

petertodd (Contributor) commented Jul 17, 2015

Note that discussion of this patch should be limited to the code itself on github; discussion of the value of a 2MB hard fork is better suited for the mailing list.

jl2012 (Member) commented Jul 17, 2015

Apart from the difference in size, is this the same as Gavin's original 20MB proposal?

jwilkins commented Jul 17, 2015

@jl2012 no, this is a one-off change.

jl2012 (Member) commented Jul 17, 2015

@jwilkins I'm talking about this: gavinandresen@5f46da2. They look very similar, if not identical.

jgarzik (Contributor) commented Jul 17, 2015

@jl2012 is correct that this patch is adapted from the original static 20MB change from @gavinandresen

That means this has actually received some testing and thought beyond my own personal testing.

jgarzik force-pushed the jgarzik:2015_2mb_blocksize branch on Jul 17, 2015

theuni (Member) commented Jul 17, 2015

Could you please move the block size and tx max size to Consensus::Params, as was done here: gavinandresen@7148527 and gavinandresen@b58d925? That would allow (for example) a different block size or flag day on testnet, so that we could test before the real thing.

Also, I think we could avoid a good number of competing proposals if we pulled in a generic version of this that just parametrizes things as necessary to make the block size dynamic. It'd then just be a matter of debating what MaxBlockSize(uint64_t nBlockTimestamp) does internally.

We could even go as far as making it:
uint64_t MaxBlockSize(uint64_t nBlockTimestamp) { (void)nBlockTimestamp; return 1000000; }
for now, which should be 100% uncontentious and could be merged immediately.

Any real blocksize changes would then be very straightforward in code.
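A sketch of what the parametrized version theuni describes might look like. Consensus::Params is a real struct in the codebase, but the two fields added here are hypothetical names invented for illustration:

```cpp
#include <cstdint>

namespace Consensus {
// Hypothetical additions to the existing Consensus::Params struct,
// so that testnet can use a different size or flag day than mainnet.
struct Params {
    uint64_t nMaxBlockSizeForkTime;  // flag-day timestamp (0 = never fork)
    uint64_t nMaxBlockSizeAfterFork; // e.g. 2000000 on mainnet
};
}

// Chain-aware replacement for the MAX_BLOCK_SIZE constant.
inline uint64_t MaxBlockSize(uint64_t nBlockTimestamp, const Consensus::Params& params) {
    if (params.nMaxBlockSizeForkTime == 0 || nBlockTimestamp < params.nMaxBlockSizeForkTime)
        return 1000000;
    return params.nMaxBlockSizeAfterFork;
}
```

With nMaxBlockSizeForkTime set to 0, this degenerates to the always-1MB stub theuni proposes merging first.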


src/consensus/consensus.h Outdated
@@ -6,10 +6,24 @@
#ifndef BITCOIN_CONSENSUS_CONSENSUS_H
#define BITCOIN_CONSENSUS_CONSENSUS_H

#include <stdint.h>

static const uint64_t MEG_FORK_TIME = 1447200000; // 11 Nov 2015 00:00:00 UTC

jtimon (Member), Jul 17, 2015:

This should be in Consensus::Params. Also, why timestamp instead of height?


src/consensus/consensus.h Outdated
static const unsigned int MAX_BLOCK_SIZE = 1000000;
inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
// 1MB blocks until 11 Nov 2015, then 2MB
return (nBlockTimestamp < MEG_FORK_TIME ? 1000*1000 : 2*1000*1000);

jtimon (Member), Jul 17, 2015:

The old and new sizes should also be in Consensus::Params. MaxBlockSize should therefore take a height and a const Consensus::Params& as parameters. The same goes for MaxBlockSigops().
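A sketch of the height-based signature this review comment suggests (the struct fields are hypothetical names; only the MaxBlockSize/MaxBlockSigops names come from the patch):

```cpp
#include <cstdint>

namespace Consensus {
// Hypothetical parameters for a height-triggered size change.
struct Params {
    int nBlockSizeForkHeight;      // activation height
    unsigned int nMaxBlockSizeOld; // 1000000
    unsigned int nMaxBlockSizeNew; // 2000000
};
}

// Height is known for the next block, so the applicable limit is
// always predictable, unlike a block's own timestamp.
inline unsigned int MaxBlockSize(int nHeight, const Consensus::Params& params) {
    return nHeight < params.nBlockSizeForkHeight ? params.nMaxBlockSizeOld
                                                 : params.nMaxBlockSizeNew;
}

// Same pattern for the sigops limit, preserving the size/50 ratio.
inline unsigned int MaxBlockSigops(int nHeight, const Consensus::Params& params) {
    return MaxBlockSize(nHeight, params) / 50;
}
```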


src/miner.cpp Outdated
@@ -329,7 +330,6 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)

// Fill in header
pblock->hashPrevBlock = pindexPrev->GetBlockHash();
UpdateTime(pblock, Params().GetConsensus(), pindexPrev);

jtimon (Member), Jul 17, 2015:

Why this? Won't this break testnet mining?

jtimon (Member) commented Jul 17, 2015

ACK on building something that can be merged without consensus changes first, without a new size or activation height. Then, to make the consensus changes, we just need to add those two to Consensus::Params and change one line in MaxBlockSize(nHeight, consensusParams).

laanwj added the Consensus label on Jul 18, 2015

jtimon (Member) commented Jul 18, 2015

I'm happy to adapt jtimon@0734eef from #6382 to use the MaxBlockSize/MaxBlockSigops functions as done here, and separate it out as a PR if there's interest.
Then, as said, a rebase of this on top of that new PR should be a 3-line change (plus documentation).
Thoughts?
By "if there's interest" I mean something along the lines of "if it's going to be reviewed and merged/nacked within one month or less". Otherwise I have enough "rebase mouse-wheels" to maintain already.

veqtrus (Contributor) commented Jul 18, 2015

I think it would be a good time to increment PROTOCOL_VERSION.

ondra-novak commented Jul 20, 2015

What about stopping the discussion about "the constant" and inventing rules for how the max block size should be calculated from the "fullness" of blocks in recent history, similar to how difficulty is calculated?

jgarzik (Contributor) commented Jul 20, 2015

@ondra-novak That is outside the scope of this PR. Feel free to mention that on the mailing list.


src/test/block_size_tests.cpp Outdated
uint64_t yearAfter = MEG_FORK_TIME+60*60*24*365;
BOOST_CHECK(TestCheckBlock(*pblock, yearAfter, 1000*1000)); // 1MB : valid
BOOST_CHECK(TestCheckBlock(*pblock, yearAfter, 2*1000*1000)); // 20MB : valid
BOOST_CHECK(!TestCheckBlock(*pblock, yearAfter, 2*1000*1000+1)); // >20MB : invalid

andrewtoth (Contributor), Jul 20, 2015:

These comments should be changed to 2MB.

ondra-novak commented Jul 21, 2015

@jgarzik I can't access the mailing list.

This is the only officially visible discussion on the internet.


petertodd (Contributor) commented Jul 21, 2015

NACK due to lack of miner vote mechanism.

See http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009504.html for details on why this is a very bad idea.

jl2012 (Member) commented Jul 21, 2015

@petertodd Do you mean NACK ONLY due to lack of a miner vote mechanism? I don't think that would be difficult, as we could just borrow the voting code from BIP66. I also think it's good to have a vote, just to make sure the legacy chain goes extinct and the value of the legacy coins approaches zero.

petertodd (Contributor) commented Jul 21, 2015

@jl2012 I'm making no statement other than that the pull-req as proposed is unacceptable for that reason alone; I'll review further versions later. (Though right now, given how close we are to the scalability limits of the system already, simple blocksize increases as a scaling mechanism are going to be either big enough to be dangerous, or too small to be worth it.)

The reason for a vote isn't to ensure the original chain "goes extinct" - it's because without miner support the chain is insecure. Furthermore with this pull-req every block mined on the original Bitcoin chain acts to 51% "attack" the Garzik chain; you can't mine original Bitcoin without participating in that 51% "attack".

Note how the normal nVersion supermajority soft-fork mechanism is explicitly designed so that once 95% support is reached, that overwhelming majority "attacks" the 5% minority, ensuring that users still on the 5% minority only see the secure, 95% majority chain. Equally, the Bitcoin Core software is designed to allow that "attack" to happen, because with a 95% majority we'd prefer to go along. (the definition of a soft-fork!)
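The supermajority mechanism petertodd refers to works roughly like the following sketch, simplified from the IsSuperMajority logic used for BIP 34/66 (the real code walks pindexPrev->pprev; here the recent block versions are condensed into a plain vector for illustration):

```cpp
#include <cstddef>
#include <vector>

// Simplified supermajority test: true if at least nRequired of the
// last nToCheck block versions are >= minVersion.
bool IsSuperMajority(int minVersion, const std::vector<int>& lastVersions,
                     unsigned nRequired, unsigned nToCheck) {
    unsigned nFound = 0;
    for (std::size_t i = 0; i < nToCheck && i < lastVersions.size(); ++i)
        if (lastVersions[i] >= minVersion)
            ++nFound;
    return nFound >= nRequired;
}
```

With nRequired = 950 and nToCheck = 1000, this is the 95%-of-the-last-1,000-blocks trigger from the BIP 102 summary at the top of this PR.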

dcousens (Contributor) commented Jul 30, 2015

@petertodd is it necessary then to make the code soft-fork capable?

jtimon (Member) commented Jul 30, 2015

No, this just can't be a softfork; it must be a hardfork. But if it's an uncontroversial hardfork, you can use the same mechanism that you use in uncontroversial softforks to confirm that miners have upgraded.

vladroberto commented Aug 13, 2015

I too cannot find another visible debate on the block size issue, especially regarding the 8MB and 20MB proposals.

pointbiz commented Aug 19, 2015

Can someone prepare a pull request with miner voting (maybe 75%)? A 2MB approach is low risk and could gain consensus if binaries are available. This BIP 102 can be a consensus response to BIP 101. The community focus is on this issue right now and people are "voting". I would like to vote for Bitcoin Core and its process, while asking that those of you who know better than I do give an extra round of thought to this Garzik proposal and augment Bitcoin Core with a short-term block size increase as a response to recent events.

jtimon (Member) commented Aug 19, 2015

Again, why would we want miners to vote on the rule that limits mining centralization? Let's analyze them all, choose an uncontroversial one, and just deploy that (with or without mining upgrade confirmation; if so, use 95% as with uncontroversial softforks).

jgarzik changed the title from "BIP 102: Increase block size limit to 2MB + 20b/10min" to "BIP 102: Increase block size limit to 2MB" on Dec 18, 2015

tulip0 commented Dec 18, 2015

95% of the hashrate is stronger but still an awful metric; there was >30% (effectively 100%) of the Bitcoin hashpower mining BIP65 blocks before there was even a single release of Bitcoin Core which supported the soft fork.

jameshilliard (Contributor) commented Dec 18, 2015

@tulip0 I think 95% is OK, but it should be measured over a much longer period, maybe 10k blocks, with a large activation delay of another 10k blocks.

tulip0 commented Dec 18, 2015

@btcdrak F2Pool and Bitcoin Affiliate Network were, possibly more.

@jameshilliard That's sub-optimal: if you have >50% (maybe 33%) of the hash power, you effectively have 100%, by virtue of censoring any non-conforming blocks. It really gives nobody any insight into who is going to accept the blocks, other than SPV clients which blindly follow the most work.

jameshilliard (Contributor) commented Dec 18, 2015

@tulip0 What would you suggest for a threshold? I think the majority of miners will want as many other miners to support the fork before they switch.

tulip0 commented Dec 18, 2015

@jameshilliard Talk of miner thresholds is completely irrelevant in a hard fork; miners can mine whatever they want, but anything invalid according to local rules will be rejected by validating nodes. There's no measure of validating nodes which isn't trivial to forge, and it's not clear the number means anything to begin with. Choosing your metric wrong means people accepting Bitcoin as payment end up undetectably on different incompatible networks and viciously double-spent against.

At any rate, a supermajority of miners is unmeasurable.

jtimon (Member) commented Dec 18, 2015

@jtimon #6625 was closed at the time.

Actually, it was closed 10 hours ago, when I realised that nobody I know who is writing a blocksize patch is using that code.

jameshilliard (Contributor) commented Dec 18, 2015

@tulip0 I think a miner threshold is still important to have, since we need majority hashrate to secure the forked chain. Maybe also add a minimum block height as well.

dcousens (Contributor) commented Dec 18, 2015

@jgarzik this PR has changed a few times; it'd probably be good to see if it's worth re-opening afresh?

concept ACK

@@ -6,10 +6,26 @@
#ifndef BITCOIN_CONSENSUS_CONSENSUS_H
#define BITCOIN_CONSENSUS_CONSENSUS_H

#include <stdint.h>

static const uint64_t BIP102_FORK_TIME = 1462406400; // May 5 2016, midnight UTC

jtimon (Member), Dec 20, 2015:

Should be in Consensus::Params.

if (nTime < BIP102_FORK_TIME)
return 1000*1000;

return (2*1000*1000);

jtimon (Member), Dec 20, 2015:

Should also be in Consensus::Params, and the current limit as well. Also why 1000*1000 instead of 1000000 like it was?

MarcoFalke (Member), Dec 20, 2015:

It's easier to see where the zeros are when they're separated.

/** The maximum allowed size for a serialized block, in bytes (network rule) */
static const unsigned int MAX_BLOCK_SIZE = 1000000;
inline unsigned int MaxBlockSize(uint64_t nTime) {

jtimon (Member), Dec 20, 2015:

Should use median time or height so that you can always predict whether the next block needs to follow the new rules or not.
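The median time this comment mentions is the median of the last 11 block timestamps, which is what CBlockIndex::GetMedianTimePast() computes in the codebase. A standalone sketch (the vector-based signature is an illustration; the real function walks the block index):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Median of up to the last 11 block timestamps. Unlike a new block's
// own timestamp, this value is fully determined by the current chain
// tip, so every node can agree in advance on which size limit the
// *next* block must satisfy.
int64_t MedianTimePast(std::vector<int64_t> lastTimestamps) {
    const std::size_t nMedianTimeSpan = 11;
    if (lastTimestamps.size() > nMedianTimeSpan)
        lastTimestamps.erase(lastTimestamps.begin(),
                             lastTimestamps.end() - nMedianTimeSpan);
    std::sort(lastTimestamps.begin(), lastTimestamps.end());
    return lastTimestamps[lastTimestamps.size() / 2];
}
```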

@@ -3055,6 +3063,11 @@ bool ContextualCheckBlockHeader(const CBlockHeader& block, CValidationState& sta
return state.Invalid(error("%s : rejected nVersion=3 block", __func__),
REJECT_OBSOLETE, "bad-version");

// Reject block.nVersion=4 blocks when 95% (75% on testnet) of the network has upgraded:
if (block.nVersion < 5 && IsSuperMajority(5, pindexPrev, consensusParams.nMajorityRejectBlockOutdated, consensusParams))
return state.Invalid(error("%s : rejected nVersion=3 block", __func__),

jtimon (Member), Dec 20, 2015:

Wrong version in the error message.

@@ -2124,7 +2124,7 @@ void CNode::RecordBytesSent(uint64_t bytes)
void CNode::SetMaxOutboundTarget(uint64_t limit)
{
LOCK(cs_totalBytesSent);
uint64_t recommendedMinimum = (nMaxOutboundTimeframe / 600) * MAX_BLOCK_SIZE;
uint64_t recommendedMinimum = (nMaxOutboundTimeframe / 600) * (1*1000*1000);
nMaxOutboundLimit = limit;

jtimon (Member), Dec 20, 2015:

At the very least, create a new constant instead of using "1*1000*1000" here.

@@ -958,7 +958,7 @@ bool AcceptToMemoryPoolWorker(CTxMemPool& pool, CValidationState &state, const C
// Check that the transaction doesn't have an excessive number of
// sigops, making it impossible to mine. Since the coinbase transaction
// itself can contain sigops MAX_STANDARD_TX_SIGOPS is less than
// MAX_BLOCK_SIGOPS; we still consider this an invalid rather than
// MaxBlockSigops; we still consider this an invalid rather than

jtimon (Member), Dec 20, 2015:

Maybe s/MaxBlockSigops/MaxBlockSigops()/ to make it clearer that it is now a function?

@@ -2068,6 +2068,13 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
flags |= SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;
}

bool fBIP102Enforcing = (pindex->GetBlockTime() >= (int64_t)BIP102_FORK_TIME);
if (fBIP102Enforcing && block.nVersion >= 5 && !IsSuperMajority(5, pindex->pprev, chainparams.GetConsensus().nMajorityRejectBlockOutdated, chainparams.GetConsensus()))

jtimon (Member), Dec 20, 2015:

If the version is lower than 5, the result of the IsSuperMajority call doesn't matter?
You don't need an if and to then assign false to the bool: just assign all the conditions to the variable directly. It's simpler, and you would probably not have made the mistake you did if it weren't for the unnecessary if statement.
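A generic illustration of the refactor this comment suggests, not the exact patch logic (the helper and parameter names here are made up): fold every condition into a single initializing expression instead of an if that mutates the flag afterwards.

```cpp
#include <cstdint>

// Illustrative stand-in for the real chain-state query.
static bool IsSuperMajorityV5() { return true; }

// Instead of:
//   bool fEnforcing = (nBlockTime >= nForkTime);
//   if (fEnforcing && nVersion >= 5 && !IsSuperMajorityV5())
//       fEnforcing = false;
// assign all the conditions to the variable directly:
bool BIP102Enforcing(int64_t nBlockTime, int64_t nForkTime, int nVersion) {
    return nBlockTime >= nForkTime && nVersion >= 5 && IsSuperMajorityV5();
}
```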

@@ -2978,7 +2985,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
// because we receive the wrong transactions for it.

// Size limits
if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
unsigned int nMaxSize = MaxBlockSize(block.GetBlockTime());

jtimon (Member), Dec 20, 2015:

This now depends on CBlockIndex (for ISM or BIP 9); the check should be moved to ContextualCheckBlock.

rebroad (Contributor) commented Jan 5, 2016

Agree with @petertodd, i.e. strong NACK without at least a 95% (I'd say 90%) miner vote. I don't think this is the answer; off-chain transactions are a better solution. There have been many papers written on how important it is for the security of the network to keep block sizes small, AFAIK.

ripper234 commented Jan 17, 2016

Does the "consensus" flag mean that this pull request has achieved consensus?
Or does it mean that consensus is required for it, or something else?

wangchun commented Jan 17, 2016

I agree with the 95% rule, but the voting process should come AFTER the merge of this PR. If 95% cannot be met, nothing happens; if we reach 95% in the future, as the code is already there, it is not only running on miners' side but also on countless users and merchants. It saves us lots of time and could absolutely accelerate the fork process. In fact, the switchover of BIP 34/66 was exactly like that.

sipa (Member) commented Jan 17, 2016 (comment minimized)

rebroad (Contributor) commented Jan 17, 2016

I would NACK any BIP that proposes a fixed way forward in terms of block size, i.e. a prediction of future requirements.

Why not create a BIP that allows the miners to vote in each block on whether to increase or decrease the block size, in addition to increasing or decreasing the average delay between blocks? This would require 9 different values, i.e. vote size same/up/down and delay same/up/down. I'd suggest 12.5% increases or decreases. This way the miners can tune as they require, and they are the main people with the incentive to make it sustainable (is this a correct assumption?)

I've reserved bitcoinmv.com (MV stands for Miner Vote) as a potential fork to the bitcoin project to give people another option in addition to Bitcoin Core, Bitcoin XT, etc... I think we need more options than the current two proposals. Hopefully the domain will never get used, as I'm hoping either Bitcoin Core or Bitcoin XT will go forward with this idea of letting the miners decide.
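The per-block vote rebroad describes above could be encoded roughly as follows. All names here are illustrative; no such mechanism exists in this patch. Each block carries one of nine (size, delay) vote combinations, and an adopted size vote scales the limit by 12.5% (one eighth):

```cpp
#include <cstdint>

enum Vote { SAME = 0, UP = 1, DOWN = 2 };

// Nine combinations in total: 3 size votes x 3 delay votes.
struct BlockVote {
    Vote size;
    Vote delay;
};

// Apply an adopted size vote: +/-12.5%, i.e. one eighth of the limit.
uint64_t AdjustLimit(uint64_t limit, Vote v) {
    if (v == UP)   return limit + limit / 8;
    if (v == DOWN) return limit - limit / 8;
    return limit;
}
```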

seweso commented Feb 3, 2016

@wangchun A request from a majority of miners for an activation percentage of 95% doesn't make sense, as that majority would already be able to abstain with enough hashing power to emulate/simulate a 95% activation percentage. Just make sure that about 20% abstains until 75% is met and you are golden.

Or 40% of miners can abstain until 55% is reached, or 30% can abstain until 65% is reached. Take your pick.

A majority of miners is in control over the activation anyway. It is just easier to increase the effective activation percentage than it is to lower it. Because lowering needs a completely new software release, new version numbers, or it needs miners to orphan votes which abstain for nefarious reasons.

Giving a minority some kind of veto is very dangerous and irresponsible. Obviously a minority (which aligns with preventing such a change) isn't going to admit that when that means losing the power to force things upon the majority with this inaction.

You go drive a bus with 100 passengers, and then when 5 people disagree with the direction you stop steering completely. Clearly inaction is always better. Forcing "not steering" upon 95% makes total sense.

For non-contentious, non-time-sensitive issues a 95% activation percentage is fine, as it means the majority of miners don't have to do any coordination amongst themselves to safely activate the soft/hard fork. But if there is any significant cost to not upgrading, or to not upgrading on time, and there is a chance that not everyone is going to upgrade in time (for whatever reason), then a 75% activation percentage gives you much more flexibility. It is a nice number: not too high, not too low.
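The arithmetic behind seweso's examples: if supporters controlling an abstain fraction A of the hashrate withhold their signal, observed signaling only crosses a threshold T once real support reaches T + A, so a lower published threshold plus coordinated abstention behaves like a higher one. A trivial check of the three pairs from the comment (the helper name is made up; percentages kept as integers so the arithmetic is exact):

```cpp
// Effective activation threshold when abstainPct of the hashrate
// deliberately withholds its signal: observed signaling crosses
// thresholdPct only once real support reaches thresholdPct + abstainPct.
int EffectiveThreshold(int thresholdPct, int abstainPct) {
    return thresholdPct + abstainPct;
}
```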

rebroad (Contributor) commented Feb 5, 2016

Two things I don't understand and would appreciate having explained: 1) Why does it suddenly jump to 2MB rather than increasing gradually? 2) Why fix it at a number rather than allowing it to change with demand, i.e. allow the miners to vote per block on whether to increase or decrease the block size limit (or indeed the average time between blocks)? Mining pools would gain subscribers based upon their advertised policy/voting on this.

I see no disadvantages to adding this to the BIP; it makes it far less contentious and far more future-proof, plus it reduces any sudden effects on the economy by allowing the change to occur gradually.

sipa (Member) commented Feb 5, 2016 (comment minimized)