
Hard fork: allow 20MB blocks after 1 March 2016
Allows any block with a timestamp on or after 1 March 2016 00:00:00 UTC to
be up to 20,000,000 bytes big (serialized).

I believe this is the simplest possible set of changes that will work.
gavinandresen committed May 1, 2015
1 parent f026ab6 commit 5f46da2
Showing 10 changed files with 156 additions and 38 deletions.
1 change: 1 addition & 0 deletions src/Makefile.test.include
@@ -40,6 +40,7 @@ BITCOIN_TESTS =\
   test/base58_tests.cpp \
   test/base64_tests.cpp \
   test/bip32_tests.cpp \
+  test/block_size_tests.cpp \
   test/bloom_tests.cpp \
   test/checkblock_tests.cpp \
   test/Checkpoints_tests.cpp \
2 changes: 1 addition & 1 deletion src/bitcoin-tx.cpp
@@ -189,7 +189,7 @@ static void MutateTxAddInput(CMutableTransaction& tx, const string& strInput)
     uint256 txid(uint256S(strTxid));

     static const unsigned int minTxOutSz = 9;
-    static const unsigned int maxVout = MAX_BLOCK_SIZE / minTxOutSz;
+    static const unsigned int maxVout = MaxBlockSize(std::numeric_limits<uint64_t>::max()) / minTxOutSz;

     // extract and validate vout
     string strVout = strInput.substr(pos + 1, string::npos);
16 changes: 14 additions & 2 deletions src/consensus/consensus.h
@@ -6,10 +6,22 @@
 #ifndef BITCOIN_CONSENSUS_CONSENSUS_H
 #define BITCOIN_CONSENSUS_CONSENSUS_H

+static const uint64_t TWENTY_MEG_FORK_TIME = 1456790400; // 1 March 2016 00:00:00 UTC
+
 /** The maximum allowed size for a serialized block, in bytes (network rule) */
-static const unsigned int MAX_BLOCK_SIZE = 1000000;
+inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
+    // 1MB blocks until 1 March 2016, then 20MB
+    return (nBlockTimestamp < TWENTY_MEG_FORK_TIME ? 1000*1000 : 20*1000*1000);
+}
+
+/** The maximum allowed size for a serialized transaction, in bytes */
+static const unsigned int MAX_TRANSACTION_SIZE = 1000*1000;
+
 /** The maximum allowed number of signature check operations in a block (network rule) */
-static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE/50;
+inline unsigned int MaxBlockSigops(uint64_t nBlockTimestamp) {
+    return MaxBlockSize(nBlockTimestamp)/50;
+}
+
 /** Coinbase transaction outputs can only be spent after this number of new blocks (network rule) */
 static const int COINBASE_MATURITY = 100;
 /** Threshold for nLockTime: below this value it is interpreted as block number, otherwise as UNIX timestamp. */
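Since the comparison in MaxBlockSize() is strict, the larger limit applies exactly at the fork timestamp. A minimal standalone sketch of that boundary behavior (an illustration, not part of the commit):

    #include <cassert>
    #include <cstdint>

    static const uint64_t TWENTY_MEG_FORK_TIME = 1456790400; // 1 March 2016 00:00:00 UTC

    inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
        return (nBlockTimestamp < TWENTY_MEG_FORK_TIME ? 1000*1000 : 20*1000*1000);
    }

    int main() {
        assert(MaxBlockSize(TWENTY_MEG_FORK_TIME - 1) == 1000*1000);    // last second under the old rules
        assert(MaxBlockSize(TWENTY_MEG_FORK_TIME)     == 20*1000*1000); // new limit applies at the timestamp itself
        return 0;
    }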
18 changes: 10 additions & 8 deletions src/main.cpp
@@ -787,7 +787,7 @@ bool CheckTransaction(const CTransaction& tx, CValidationState &state)
         return state.DoS(10, error("CheckTransaction(): vout empty"),
                          REJECT_INVALID, "bad-txns-vout-empty");
     // Size limits
-    if (::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
+    if (::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION) > MAX_TRANSACTION_SIZE)
         return state.DoS(100, error("CheckTransaction(): size limits failed"),
                          REJECT_INVALID, "bad-txns-oversize");


@@ -971,7 +971,7 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
     // Check that the transaction doesn't have an excessive number of
     // sigops, making it impossible to mine. Since the coinbase transaction
     // itself can contain sigops MAX_STANDARD_TX_SIGOPS is less than
-    // MAX_BLOCK_SIGOPS; we still consider this an invalid rather than
+    // MaxBlockSigops; we still consider this an invalid rather than
     // merely non-standard transaction.
     unsigned int nSigOps = GetLegacySigOpCount(tx);
     nSigOps += GetP2SHSigOpCount(tx, view);
@@ -1764,7 +1764,7 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin


         nInputs += tx.vin.size();
         nSigOps += GetLegacySigOpCount(tx);
-        if (nSigOps > MAX_BLOCK_SIGOPS)
+        if (nSigOps > MaxBlockSigops(block.GetBlockTime()))
            return state.DoS(100, error("ConnectBlock(): too many sigops"),
                             REJECT_INVALID, "bad-blk-sigops");


@@ -1780,7 +1780,7 @@ bool ConnectBlock(const CBlock& block, CValidationState& state, CBlockIndex* pin
             // this is to prevent a "rogue miner" from creating
             // an incredibly-expensive-to-validate block.
             nSigOps += GetP2SHSigOpCount(tx, view);
-            if (nSigOps > MAX_BLOCK_SIGOPS)
+            if (nSigOps > MaxBlockSigops(block.GetBlockTime()))
                 return state.DoS(100, error("ConnectBlock(): too many sigops"),
                                  REJECT_INVALID, "bad-blk-sigops");
         }
@@ -2569,7 +2569,8 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     // because we receive the wrong transactions for it.

     // Size limits
-    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
+    unsigned int nMaxSize = MaxBlockSize(block.GetBlockTime());
+    if (block.vtx.empty() || block.vtx.size() > nMaxSize || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > nMaxSize)
         return state.DoS(100, error("CheckBlock(): size limits failed"),
                          REJECT_INVALID, "bad-blk-length");


@@ -2592,7 +2593,7 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
     {
         nSigOps += GetLegacySigOpCount(tx);
     }
-    if (nSigOps > MAX_BLOCK_SIGOPS)
+    if (nSigOps > MaxBlockSigops(block.GetBlockTime()))
         return state.DoS(100, error("CheckBlock(): out-of-bounds SigOpCount"),
                          REJECT_INVALID, "bad-blk-sigops", true);


@@ -3309,7 +3310,8 @@ bool LoadExternalBlockFile(FILE* fileIn, CDiskBlockPos *dbp)
     int nLoaded = 0;
     try {
         // This takes over fileIn and calls fclose() on it in the CBufferedFile destructor
-        CBufferedFile blkdat(fileIn, 2*MAX_BLOCK_SIZE, MAX_BLOCK_SIZE+8, SER_DISK, CLIENT_VERSION);
+        unsigned int nAbsoluteMaxBlockSize = MaxBlockSize(std::numeric_limits<uint64_t>::max());
+        CBufferedFile blkdat(fileIn, 2*nAbsoluteMaxBlockSize, nAbsoluteMaxBlockSize+8, SER_DISK, CLIENT_VERSION);
         uint64_t nRewind = blkdat.GetPos();
         while (!blkdat.eof()) {
             boost::this_thread::interruption_point();
@@ -3328,7 +3330,7 @@ bool LoadExternalBlockFile(FILE* fileIn, CDiskBlockPos *dbp)
                 continue;
             // read size
             blkdat >> nSize;
-            if (nSize < 80 || nSize > MAX_BLOCK_SIZE)
+            if (nSize < 80 || nSize > nAbsoluteMaxBlockSize)
                 continue;
         } catch (const std::exception&) {
             // no valid block header found; don't complain
2 changes: 1 addition & 1 deletion src/main.h
@@ -57,7 +57,7 @@ static const unsigned int MAX_STANDARD_TX_SIZE = 100000;
 /** Maximum number of signature check operations in an IsStandard() P2SH script */
 static const unsigned int MAX_P2SH_SIGOPS = 15;
 /** The maximum number of sigops we're willing to relay/mine in a single tx */
-static const unsigned int MAX_STANDARD_TX_SIGOPS = MAX_BLOCK_SIGOPS/5;
+static const unsigned int MAX_STANDARD_TX_SIGOPS = 4000;

@thesoftwarejedi commented May 27, 2015

As I understand it, this is set to 4000 based on the previous math of 1,000,000/50/5, leaving the 4,000 sigops/tx as is going forward. It does make me wonder why it was previously set to a percentage of block size.

We need to keep in mind that the increase in block size allows for an increase in tx volume but not an increase in sigops/tx, which could come into consideration as merchants and payment processors sweep more and more payments at the same time. I would expect that the 4,000 limit is already taken into account in such cases, and transactions are batched.

Are we missing anything that might mean an increase in tx volume would require an increase in sigops/tx?
Why have a maximum at all?
Why not a smaller maximum?
Is this an attack vector we're trying to prevent? DoS via large scripts?
Is a 12,000-sigop tx any different from 3 x 4,000-sigop txs? This is now going to allow 100 x 4,000-sigop txs (disregarding size).
Is it better kept at 20,000/block (or some other fixed amount smaller than 400,000) instead of 4,000/tx?
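For reference, the arithmetic behind those numbers, as a sketch using the constants from this commit:

    // Pre-fork:  MAX_BLOCK_SIGOPS       = 1,000,000 / 50 = 20,000
    //            MAX_STANDARD_TX_SIGOPS = 20,000 / 5     = 4,000
    // Post-fork: MaxBlockSigops         = 20,000,000 / 50 = 400,000,
    //            while the per-tx standard limit stays fixed at 4,000.
    static_assert(1000*1000 / 50 == 20000, "pre-fork block sigop limit");
    static_assert(20000 / 5 == 4000, "pre-fork standard per-tx sigop limit");
    static_assert(20*1000*1000 / 50 == 400000, "post-fork block sigop limit");
    static_assert(400000 / 4000 == 100, "post-fork block fits ~100 max-sigop standard txs");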

 /** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
 static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
 /** The maximum size of a blk?????.dat file (since 0.8) */
2 changes: 1 addition & 1 deletion src/merkleblock.cpp
@@ -153,7 +153,7 @@ uint256 CPartialMerkleTree::ExtractMatches(std::vector<uint256> &vMatch) {
     if (nTransactions == 0)
         return uint256();
     // check for excessively high numbers of transactions
-    if (nTransactions > MAX_BLOCK_SIZE / 60) // 60 is the lower bound for the size of a serialized CTransaction
+    if (nTransactions > MaxBlockSize(std::numeric_limits<uint64_t>::max()) / 60) // 60 is the lower bound for the size of a serialized CTransaction
         return uint256();
     // there can never be more hashes provided than one for every txid
     if (vHash.size() > nTransactions)
44 changes: 23 additions & 21 deletions src/miner.cpp
@@ -114,30 +114,33 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
     pblocktemplate->vTxFees.push_back(-1); // updated at end
     pblocktemplate->vTxSigOps.push_back(-1); // updated at end

-    // Largest block you're willing to create:
-    unsigned int nBlockMaxSize = GetArg("-blockmaxsize", DEFAULT_BLOCK_MAX_SIZE);
-    // Limit to betweeen 1K and MAX_BLOCK_SIZE-1K for sanity:
-    nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(MAX_BLOCK_SIZE-1000), nBlockMaxSize));
-
-    // How much of the block should be dedicated to high-priority transactions,
-    // included regardless of the fees they pay
-    unsigned int nBlockPrioritySize = GetArg("-blockprioritysize", DEFAULT_BLOCK_PRIORITY_SIZE);
-    nBlockPrioritySize = std::min(nBlockMaxSize, nBlockPrioritySize);
-
-    // Minimum block size you want to create; block will be filled with free transactions
-    // until there are no more or the block reaches this size:
-    unsigned int nBlockMinSize = GetArg("-blockminsize", DEFAULT_BLOCK_MIN_SIZE);
-    nBlockMinSize = std::min(nBlockMaxSize, nBlockMinSize);
-
-    // Collect memory pool transactions into the block
-    CAmount nFees = 0;
-
     {
         LOCK2(cs_main, mempool.cs);
         CBlockIndex* pindexPrev = chainActive.Tip();
         const int nHeight = pindexPrev->nHeight + 1;
         CCoinsViewCache view(pcoinsTip);

+        UpdateTime(pblock, Params().GetConsensus(), pindexPrev);
+        uint64_t nBlockTime = pblock->GetBlockTime();
+
+        // Largest block you're willing to create:
+        unsigned int nBlockMaxSize = GetArg("-blockmaxsize", DEFAULT_BLOCK_MAX_SIZE);
+        // Limit to between 1K and max-size-minus-1K for sanity:
+        nBlockMaxSize = std::max((unsigned int)1000, std::min((unsigned int)(MaxBlockSize(nBlockTime)), nBlockMaxSize));
+
+        // How much of the block should be dedicated to high-priority transactions,
+        // included regardless of the fees they pay
+        unsigned int nBlockPrioritySize = GetArg("-blockprioritysize", DEFAULT_BLOCK_PRIORITY_SIZE);
+        nBlockPrioritySize = std::min(nBlockMaxSize, nBlockPrioritySize);
+
+        // Minimum block size you want to create; block will be filled with free transactions
+        // until there are no more or the block reaches this size:
+        unsigned int nBlockMinSize = GetArg("-blockminsize", DEFAULT_BLOCK_MIN_SIZE);
+        nBlockMinSize = std::min(nBlockMaxSize, nBlockMinSize);
+
+        // Collect memory pool transactions into the block
+        CAmount nFees = 0;
+
         // Priority order to process transactions
         list<COrphan> vOrphan; // list memory doesn't move
         map<uint256, vector<COrphan*> > mapDependers;
@@ -243,7 +246,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)


         // Legacy limits on sigOps:
         unsigned int nTxSigOps = GetLegacySigOpCount(tx);
-        if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS)
+        if (nBlockSigOps + nTxSigOps >= MaxBlockSigops(nBlockTime))
             continue;


// Skip free transactions if we're past the minimum block size: // Skip free transactions if we're past the minimum block size:
Expand All @@ -270,7 +273,7 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)
CAmount nTxFees = view.GetValueIn(tx)-tx.GetValueOut(); CAmount nTxFees = view.GetValueIn(tx)-tx.GetValueOut();


nTxSigOps += GetP2SHSigOpCount(tx, view); nTxSigOps += GetP2SHSigOpCount(tx, view);
if (nBlockSigOps + nTxSigOps >= MAX_BLOCK_SIGOPS) if (nBlockSigOps + nTxSigOps >= MaxBlockSigops(nBlockTime))
continue; continue;


// Note that flags: we don't want to set mempool/IsStandard() // Note that flags: we don't want to set mempool/IsStandard()
@@ -327,7 +330,6 @@ CBlockTemplate* CreateNewBlock(const CScript& scriptPubKeyIn)


         // Fill in header
         pblock->hashPrevBlock = pindexPrev->GetBlockHash();
-        UpdateTime(pblock, Params().GetConsensus(), pindexPrev);
         pblock->nBits = GetNextWorkRequired(pindexPrev, pblock, Params().GetConsensus());
         pblock->nNonce = 0;
         pblocktemplate->vTxSigOps[0] = GetLegacySigOpCount(pblock->vtx[0]);
4 changes: 2 additions & 2 deletions src/net.h
@@ -46,8 +46,8 @@ static const int TIMEOUT_INTERVAL = 20 * 60;
 static const unsigned int MAX_INV_SZ = 50000;
 /** The maximum number of new addresses to accumulate before announcing. */
 static const unsigned int MAX_ADDR_TO_SEND = 1000;
-/** Maximum length of incoming protocol messages (no message over 2 MiB is currently acceptable). */
-static const unsigned int MAX_PROTOCOL_MESSAGE_LENGTH = 2 * 1024 * 1024;
+/** Maximum length of incoming protocol messages (no message over 20 MiB is currently acceptable). */
+static const unsigned int MAX_PROTOCOL_MESSAGE_LENGTH = 20 * 1024 * 1024;
 /** -listen default */
 static const bool DEFAULT_LISTEN = true;
 /** -upnp default */
4 changes: 2 additions & 2 deletions src/rpcmining.cpp
@@ -579,8 +579,8 @@ Value getblocktemplate(const Array& params, bool fHelp)
result.push_back(Pair("mintime", (int64_t)pindexPrev->GetMedianTimePast()+1)); result.push_back(Pair("mintime", (int64_t)pindexPrev->GetMedianTimePast()+1));
result.push_back(Pair("mutable", aMutable)); result.push_back(Pair("mutable", aMutable));
result.push_back(Pair("noncerange", "00000000ffffffff")); result.push_back(Pair("noncerange", "00000000ffffffff"));
result.push_back(Pair("sigoplimit", (int64_t)MAX_BLOCK_SIGOPS)); result.push_back(Pair("sigoplimit", (int64_t)MaxBlockSigops(pblock->GetBlockTime())));
result.push_back(Pair("sizelimit", (int64_t)MAX_BLOCK_SIZE)); result.push_back(Pair("sizelimit", (int64_t)MaxBlockSize(pblock->GetBlockTime())));
result.push_back(Pair("curtime", pblock->GetBlockTime())); result.push_back(Pair("curtime", pblock->GetBlockTime()));
result.push_back(Pair("bits", strprintf("%08x", pblock->nBits))); result.push_back(Pair("bits", strprintf("%08x", pblock->nBits)));
result.push_back(Pair("height", (int64_t)(pindexPrev->nHeight+1))); result.push_back(Pair("height", (int64_t)(pindexPrev->nHeight+1)));
101 changes: 101 additions & 0 deletions src/test/block_size_tests.cpp
@@ -0,0 +1,101 @@
// Copyright (c) 2011-2014 The Bitcoin Core developers
// Distributed under the MIT software license, see the accompanying
// file COPYING or http://www.opensource.org/licenses/mit-license.php.

#include "main.h"
#include "miner.h"
#include "pubkey.h"
#include "uint256.h"
#include "util.h"

#include "test/test_bitcoin.h"

#include <boost/test/unit_test.hpp>

BOOST_FIXTURE_TEST_SUITE(block_size_tests, TestingSetup)

// Fill block with dummy transactions until its serialized size is exactly nSize
static void
FillBlock(CBlock& block, unsigned int nSize)
{
    assert(block.vtx.size() > 0); // Start with at least a coinbase

    unsigned int nBlockSize = ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION);
    if (nBlockSize > nSize) {
        block.vtx.resize(1); // passed-in block is too big, start with just coinbase
        nBlockSize = ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION);
    }

    // Create a dummy transaction to pad out the block:
    CMutableTransaction tx;
    tx.vin.resize(1);
    tx.vin[0].scriptSig = CScript() << OP_11;
    tx.vin[0].prevout.hash = block.vtx[0].GetHash(); // passes CheckBlock, would fail if we checked inputs.
    tx.vin[0].prevout.n = 0;
    tx.vout.resize(1);
    tx.vout[0].nValue = 1LL;
    tx.vout[0].scriptPubKey = block.vtx[0].vout[0].scriptPubKey;

    unsigned int nTxSize = ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
    uint256 txhash = tx.GetHash();

    // ... add copies of tx to the block to get close to nSize:
    while (nBlockSize+nTxSize < nSize) {
        block.vtx.push_back(tx);
        nBlockSize += nTxSize;
        tx.vin[0].prevout.hash = txhash; // ... just to make each transaction unique
        txhash = tx.GetHash();
    }
    // Make the last transaction exactly the right size by making the scriptSig bigger.
    block.vtx.pop_back();
    nBlockSize = ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION);
    unsigned int nFill = nSize - nBlockSize - nTxSize;
    for (unsigned int i = 0; i < nFill; i++)
        tx.vin[0].scriptSig << OP_11;
    block.vtx.push_back(tx);
    nBlockSize = ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION);
    assert(nBlockSize == nSize);
}

static bool TestCheckBlock(CBlock& block, uint64_t nTime, unsigned int nSize)
{
    SetMockTime(nTime);
    block.nTime = nTime;
    FillBlock(block, nSize);
    CValidationState validationState;
    bool fResult = CheckBlock(block, validationState, false, false) && validationState.IsValid();
    SetMockTime(0);
    return fResult;
}

//
// Unit test CheckBlock() for conditions around the block size hard fork
//
BOOST_AUTO_TEST_CASE(TwentyMegFork)
{
    CScript scriptPubKey = CScript() << ParseHex("04678afdb0fe5548271967f1a67130b7105cd6a828e03909a67962e0ea1f61deb649f6bc3f4cef38c4f35504e51ec112de5c384df7ba0b8d578a4c702b6bf11d5f") << OP_CHECKSIG;
    CBlockTemplate *pblocktemplate;

    LOCK(cs_main);

    BOOST_CHECK(pblocktemplate = CreateNewBlock(scriptPubKey));
    CBlock *pblock = &pblocktemplate->block;

    // Before fork time...
    BOOST_CHECK(TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME-1LL, 1000*1000));     // 1MB : valid
    BOOST_CHECK(!TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME-1LL, 1000*1000+1));  // >1MB : invalid
    BOOST_CHECK(!TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME-1LL, 20*1000*1000)); // 20MB : invalid

    // Exactly at fork time...
    BOOST_CHECK(TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME, 1000*1000));        // 1MB : valid
    BOOST_CHECK(TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME, 20*1000*1000));     // 20MB : valid
    BOOST_CHECK(!TestCheckBlock(*pblock, TWENTY_MEG_FORK_TIME, 20*1000*1000+1));  // >20MB : invalid

    // A year after fork time:
    uint64_t yearAfter = TWENTY_MEG_FORK_TIME+60*60*24*365;
    BOOST_CHECK(TestCheckBlock(*pblock, yearAfter, 1000*1000));        // 1MB : valid
    BOOST_CHECK(TestCheckBlock(*pblock, yearAfter, 20*1000*1000));     // 20MB : valid
    BOOST_CHECK(!TestCheckBlock(*pblock, yearAfter, 20*1000*1000+1));  // >20MB : invalid
}

BOOST_AUTO_TEST_SUITE_END()

10 comments on commit 5f46da2

@kangasbros

What about a block size that grows linearly? In this snippet it increases every half-year (15,768,000 seconds); IMHO it would make more sense to define it via block count than by time:

static const uint64_t TWENTY_MEG_FORK_TIME = 1456790400;
inline unsigned int MaxBlockSize(uint64_t nBlockTimestamp) {
    // 1MB blocks until 1 March 2016, then 1MB more every half-year
    return (nBlockTimestamp < TWENTY_MEG_FORK_TIME ? 1000*1000 :
            1000*1000 + ((nBlockTimestamp - TWENTY_MEG_FORK_TIME) / 15768000) * 1000*1000);
}

@mikehearn

@schildbach I believe the problem is that the slower/incrementing formula didn't "get consensus" in private discussions. I don't know exactly what that means either, but presumably something about the exact formula was controversial with someone.

I don't personally think verifying the entire block chain on a watch is a reasonable goal we should care about, but it doesn't matter - your Intel Edison will still be able to process the block chain, so don't worry. You can either set it to prune (delete old blocks) which means it won't service SPV clients any more or serve the block chain to other full nodes, but it still verifies everything and so can act as your trusted node. Or you can just assume that miners won't actually build 20mb blocks right away due to lack of demand/excessively large blocks triggering unacceptable orphan rates. Whichever the outcome is, it's not something to worry about.

@gavinandresen Change looks good to me. I will accept it into Bitcoin XT if it does not get accepted into Bitcoin Core. Comments:

DEFAULT_BLOCK_MAX_SIZE is unchanged in this patch. We know from experience that quite a few miners don't modify the default size, presumably because they expect that It Just Works(tm) out of the box. If leaving it at 750kb is deliberate, that deserves discussion in the code+release notes IMHO. If it's not deliberate, then I suggest we either:

  1. Set it to be 20mb as well, i.e. force miners to pick an appropriate value if their blocks are too big and getting orphaned.
  2. Remove the default entirely and refuse to service work/block template requests if no soft limit has been set. Again, force miners to pick.
  3. Same as above but provide some kind of "reasonable" suggestion in the error text, where by reasonable what I mean is some figure that sounds plausible based on your propagation experiments.

The fork point is defined in terms of timestamp rather than height. Is that OK? My gut sense is that it works fine - we might get a >1mb block with a timestamp after the switchover time, and then another block afterwards with a timestamp lower than the switchover time (old rules), but I don't think that will cause any issues: the second block will be checked according to the old rules. However, it'd be nice to see a discussion in the code about this case.
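A minimal sketch of that case (SizeRuleOk is a hypothetical stand-in for the size check in CheckBlock): rule selection is a pure function of the block's own header timestamp, so arrival order across the fork boundary doesn't matter.

    // Each block is judged against its own nTime:
    bool SizeRuleOk(uint64_t nBlockTime, unsigned int nSerializedSize) {
        return nSerializedSize <= MaxBlockSize(nBlockTime);
    }
    // A 20MB block stamped at/after the fork time is valid even if a later block
    // carries a pre-fork timestamp; that later block is held to the 1MB rule:
    //   SizeRuleOk(TWENTY_MEG_FORK_TIME,     20*1000*1000) == true
    //   SizeRuleOk(TWENTY_MEG_FORK_TIME - 1,  2*1000*1000) == false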

@gavinandresen (Owner, Author)

I deleted any comments that are not about this particular code. Please argue about the costs/benefits of 20MB versus 1MB blocks in the usual places: reddit, bitcoin-development mailing list, blog posts (I'll be writing several), etc.

@mikehearn : yes, I deliberately am not changing DEFAULT_BLOCK_MAX_SIZE. Policy on how to handle that is a whole 'nother discussion for a whole 'nother pull request, in my humble opinion.

RE: fork in terms of timestamp: I will write a couple of rpc-tests that use setmocktime to simulate mining through the switchover, and will test to be absolutely certain there will be no issues (but yes, it seems obvious that if the max block size is a pure function of block.nTime there will be no issues).

@gavinandresen (Owner, Author)

@kangasbros : I'll write more about why just a 20MB increase and no increase-over-time in a blog post. In short, it is impossible to predict the future, and the fear is that increases in network bandwidth to the home and/or in CPU power may plateau sometime in the next couple of years.

@tinspin commented on 5f46da2 May 5, 2015

But why a timestamp instead of a block number? How secure are timestamps? Is there no risk at all that some clients have another opinion of the time? What happens if they do?

@mikehearn

It's hard to predict when the chain will reach a certain height - you can do it in a fuzzy manner but it's not very precise. If the crossover were set as a block height, it might occur weeks earlier or later than planned. Probably not a big deal, but I guess that's the reason.

The timestamps are allowed to drift by a couple of hours. They're cross checked by miners.

@gavinandresen (Owner, Author)

@tinspin : code up a switch-by-height implementation, and you'll see why by-timestamp is much better.

There is no danger of confusion, the block timestamp is in the block header and is part of what is hashed. Either it is before the switchover time or after, and if you're checking the block you have the timestamp to check against right there in the block's data. Your node's notion of local time is irrelevant (if your time is too far off the network's idea of the current time you'll reject the block, but that is true today and independent of the max block size check).

@jl2012 commented on 5f46da2 May 6, 2015

Would there be any problem if MAX_PROTOCOL_MESSAGE_LENGTH is too close to MaxBlockSize? Now it is 2,097,152 vs 1,000,000, and you propose 20,971,520 vs 20,000,000.

@gavinandresen (Owner, Author)

@jl2012 : no, the extra ~1MB is actually far more than necessary; the overhead for a block message is on the order of 10 bytes.
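A quick check of the headroom, as a sketch using the constants from this commit:

    // MAX_PROTOCOL_MESSAGE_LENGTH = 20 * 1024 * 1024 = 20,971,520 bytes
    // post-fork MaxBlockSize      = 20 * 1000 * 1000 = 20,000,000 bytes
    static_assert(20*1024*1024 - 20*1000*1000 == 971520,
                  "~971KB of headroom for ~10 bytes of message framing");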

@rebroad

This is not the answer, IMHO, unless 1) a significant majority of miners agree to this change, and 2) a significant majority of full-node operators agree to this change. The latter, I suspect, will not agree, and will want miners who also do not.

The change doesn't need to force people to choose if it can be merge-mined with the current 1MB-limit bitcoin. That way the mining power will not be split, and wallet providers can simply offer a config option choosing which bitcoin to default to.

The market will decide the price of each bitcoin, and we'll probably find that the combined market cap of the two bitcoins stays about the same, or perhaps increases with the increased confidence. Forcing a change to 20MB blocks will further CENTRALIZE bitcoin, with only a small number of organisations running full nodes.

With the merge-mining approach that can still happen with the 20MB bitcoin, but at least we can continue to see what the 1MB bitcoin would have evolved into, rather than aborting it while it is still a relative fetus.
