
Big blocks #191

Closed
wants to merge 5 commits into from

Conversation

@zander commented Oct 25, 2016

This reverts BIP109. Classic will no longer vote for BIP109.

This reuses the code and starts voting for a new bit which is a protocol
upgrade and the only change in that one is that the maximum block size is
removed.

This new fork is activated using bit 26, while requiring signalling for
BIP9, making the block version 0x28000000. When 75% of 1000 blocks have this
bit set, we lock in this change and have a 4-week wait period.
After that period blocks no longer have a size limit.

Miners can continue to use the -blockmaxsize argument or bitcoin.conf
setting to limit the size of blocks they generate! All nodes can also use
the -blocksizeacceptlimit option to limit the size of the blocks accepted
from other miners.

This change simplifies some of the designs that were introduced for BIP9: the overloading of the IsSuperMajority method is gone, and the txdb getter/setter is now substantially simpler too. Overdesign is good to remove.

Last, I moved the fork-bit to be namespaced inside CBlockHeader, being a tiny little bit more structured instead of everything in the global namespace.

This option allows a user to configure the behaviour of the
block-acceptance algorithm. A node will reject incoming blocks that are
larger than the block size accept limit, and as such a miner is able to
decide, based on size, which blocks it will build on top of.

The blocksizeacceptlimit is a number in 100KB increments.
The default is 20, which makes the default accepted block 2MB.
This reverts BIP109. Classic will no longer vote for BIP109.

This reuses the code and starts voting for a new bit which is a protocol
upgrade and the only change in that one is that the maximum block size is
removed.

This new fork is activated using bit 26, while requiring signalling for
BIP9, making the block version 0x28000000. When 85% of 1000 blocks have this
bit set, we lock in this change and have a 4-week wait period.
After that period blocks no longer have a size limit.

Miners can continue to use the -blockmaxsize argument or bitcoin.conf
setting to limit the size of blocks they generate!  All nodes can also use
the `-blocksizeacceptlimit` option to limit the size of the blocks accepted
from other miners.
@dskloet commented Oct 25, 2016

Why increase the trigger from 750 blocks to 850 blocks?

@@ -489,6 +489,8 @@ std::string HelpMessage(HelpMessageMode mode)
strUsage += HelpMessageOpt("-bytespersigop", strprintf(_("Minimum bytes per sigop in transactions we relay and mine (default: %u)"), DEFAULT_BYTES_PER_SIGOP));
strUsage += HelpMessageOpt("-datacarrier", strprintf(_("Relay and mine data carrier transactions (default: %u)"), DEFAULT_ACCEPT_DATACARRIER));
strUsage += HelpMessageOpt("-datacarriersize", strprintf(_("Maximum size of data in data carrier transactions we relay and mine (default: %u)"), MAX_OP_RETURN_RELAY));
// called excessiveblocksize in BU

Why abbreviate Bitcoin Unlimited? It might be less clear in some future.

@zander (Author) commented Oct 25, 2016

@dskloet
The numbers in this pull request are not final; they are suggestions, and this PR is one way to go out of many possible ones. The number one complaint we got about BIP109 was the low acceptance threshold, so let's try something larger.

@@ -489,6 +489,8 @@ std::string HelpMessage(HelpMessageMode mode)
strUsage += HelpMessageOpt("-bytespersigop", strprintf(_("Minimum bytes per sigop in transactions we relay and mine (default: %u)"), DEFAULT_BYTES_PER_SIGOP));
strUsage += HelpMessageOpt("-datacarrier", strprintf(_("Relay and mine data carrier transactions (default: %u)"), DEFAULT_ACCEPT_DATACARRIER));
strUsage += HelpMessageOpt("-datacarriersize", strprintf(_("Maximum size of data in data carrier transactions we relay and mine (default: %u)"), MAX_OP_RETURN_RELAY));
// called excessiveblocksize in BU
strUsage += HelpMessageOpt("-blocksizeacceptlimit=<n>", strprintf("This node will not accept blocks of this size or larger. Unit is multiples of 100KB (default: %u)", DEFAULT_BLOCK_ACCEPT_SIZE));

Looking at the code, blocks of this size are accepted, but not larger, which is what you'd expect from a "limit".
For the doc string?

@zander (Author):

I'll change it to:

This node will not accept blocks larger than this limit.

@@ -2046,18 +2040,26 @@ static int64_t nTimeTotal = 0;

static bool DidBlockTriggerSizeFork(const CBlock &block, const CBlockIndex *pindex, const CChainParams &chainparams)

Rename to DidBlockTriggerSizeRemovalFork?

for (int i = 0; i < params.nMajorityWindow && current; ++i)
{
if ((current->nVersion & CBlockHeader::SIZE_FORK_BIT) == CBlockHeader::SIZE_FORK_BIT) {
if (++found > params.nActivateSizeForkMajority)

Use && instead of nested if?


I thought the same at first but that ++found is a little tricky. Does the C++ standard say anything about how sequence points apply to pre-increment operators? Does the short circuit ensure ++ does not happen?


Yes, from the standard:

Unlike &, && guarantees left-to-right
evaluation: the second operand is not evaluated if the first operand is false.

@zander (Author):

C specifies the order of evaluation, and if the first operand is 'false' then the rest will not be evaluated; see man 7 operator for precedence.

In this case I could have joined them. I guess I should ;)

uint32_t nBlockMaxSize = std::max<uint32_t>(1000, GetArg("-blockmaxsize", DEFAULT_BLOCK_MAX_SIZE));
if (pblock->nTime < sizeForkTime.load()) { // before the protocol upgrade, limit size.
nBlockMaxSize = std::min<uint32_t>(nBlockMaxSize, MAX_BLOCK_SIZE);
}

I prefer curly brackets but for consistency should this if have them removed like all the others?

@@ -27,6 +27,9 @@ static const unsigned int MAX_P2SH_SIGOPS = 15;
static const unsigned int MAX_STANDARD_TX_SIGOPS = MAX_BLOCK_SIGOPS/5;
/** Default for -maxmempool, maximum megabytes of mempool memory usage */
static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 300;
/** Default for -blocksizeacceptlimit */

Comment that it's multiples of 100kB. Actually, why is it multiples of 100kB?

@zander (Author):

Thanks, will fix.
It's multiples of 100kB because it's much easier to write 20 than 2000000. I do think people want to allow things like 1.5MB, which would make the finest-grained increment 100kB.


It does seem odd to specify 2MB as 20.
How about making it multiples of 1MB but allowing floating point numbers so 1.5MB can be specified as 1.5?

@zander (Author):

The argument processing in Bitcoin doesn't support that, so I'd have to do some more work ;) May be worth it, I don't know.


Would you guys be open to using User-Defined literals? This is a post C++11 feature though so not sure if it's a problem building on platforms.


This is a sensitive option: if you set it below 10 then you will create a permanent chain split (a tightening of the rules is a soft fork; with minor hash power you can extend the minority chain, and with major hash power you orphan others).

@zander (Author):

@greatwolf I'm not sure if user-defined literals can help us parsing user-provided strings. Please note that Classic already requires C++11.

@dskloet commented Oct 25, 2016

Can you say something about how this interacts with BU?

@dskloet commented Oct 25, 2016

My impression of the 75% complainers was that they prefer 95%, which tells me they just prefer it doesn't trigger at all. Have people asked for 85% or is this the kind of compromise that makes nobody happy?

@zander (Author) commented Oct 25, 2016

Can you say something about how this interacts with BU?

The change is compatible with BU and a discussion starter on safe upgrade paths.
I explicitly added support for the commandline option from BU for the limits.

My impression of the 75% complainers was that they prefer 95%, which tells me they just prefer it doesn't trigger at all.

You may have a point. I think the 95% stems from a lack of understanding of how Bitcoin works. We may have to work on that and just keep the voting at 75%.

@dskloet commented Oct 25, 2016

The change is compatible with BU

What does that mean? If 850/1000 blocks are mined by BU will Classic trigger? Will/does Classic signal acceptance of larger blocks? Before the trigger? After the trigger?

@greatwolf commented Oct 25, 2016

So can we assume this initial PR is for making inroads toward bringing Classic more in line with BU's concept of emergent consensus? For example, are there plans to add other parameters from BU like acceptance depth? What about modifying Classic's user-agent to include ADx:EBx like BU, to communicate what's being accepted by nodes running Classic?

@zander (Author) commented Oct 25, 2016

@dskloet
The phrase "The change is compatible with BU" means that this change would make BU and Classic behave the same after a hard fork. There is no activation concept in BU currently, so naturally the details are not compatible. Yet. The entire point is to find a way to do the forking safely and in a coordinated manner.

This change makes Classic have only configurable limits on both blocks it creates and receives. This is basically according to our roadmap and planned for Q3. So it was overdue ;)

@dskloet commented Oct 25, 2016

My questions still stand :)

If 850/1000 blocks are mined by BU will Classic trigger? Will/does Classic signal acceptance of larger blocks? Before the trigger? After the trigger?

I didn't ask if BU would activate. I asked if BU signaling could cause Classic to activate.

@zander (Author) commented Oct 26, 2016

@greatwolf I'm warming up to the idea of using the ADx:EBx convention. I won't have time to do it this month, so I would welcome a pull request on that matter.

The biggest difference is that the parameter 'limit' is now a floating
point number parsed in megabytes. So you could set the limit to "1.6".
@zander (Author) commented Oct 26, 2016

I pushed an update that addresses all comments.

}
if (nSizeLimit <= 0)
nSizeLimit = DEFAULT_BLOCK_ACCEPT_SIZE * 1E5;

1E6?

@zander (Author):

There is more wrong with that line, thanks for pointing it out.


Why wasn't this caught by a test?

@dgenr8 commented Oct 26, 2016

It would be very helpful if you were to write a precise text description of the parameters, their defaults, and their interaction with the proposed version bit. Without this, it's difficult to know which behavior is intentional.

I suggest not using any floating-point for block size calculations.

std::int32_t nSizeLimit = GetArg("-excessiveblocksize", -1) * 1E6;
auto userlimit = mapArgs.find("-blocksizeacceptlimit");
if (userlimit != mapArgs.end()) {
double limitInMB = atof(userlimit->second.c_str());


How about std::stod(userlimit->second); instead?

return true;
}
for (int i = 0; i < params.nMajorityWindow && current; ++i) {
if ((current->nVersion & CBlockHeader::SIZE_FORK_BIT) == CBlockHeader::SIZE_FORK_BIT && ++found >= params.nActivateSizeForkMajority)


isn't == CBlockHeader::SIZE_FORK_BIT kind of redundant?

@@ -3136,7 +3136,7 @@ bool CheckBlock(const CBlock& block, CValidationState& state, bool fCheckPOW, bo
}
}
if (nSizeLimit <= 0)
nSizeLimit = DEFAULT_BLOCK_ACCEPT_SIZE * 1E5;
nSizeLimit = static_cast<int32_t>(DEFAULT_BLOCK_ACCEPT_SIZE * 10) * 1E5;


Should probably leave a comment to indicate that the decimal truncation is intentional. The commit message could be confusing because you're still doing a float operation at the end -- 1E5 is a float literal.

For consistency maybe std::int32_t or change std::uint32_t to just uint32_t.

@zander (Author) commented Nov 21, 2016

Closing as #195 supersedes this proposal.

@zander zander closed this Nov 21, 2016
@zander zander deleted the bigBlocks branch January 13, 2017 12:40
jamoes pushed a commit to jamoes/bitcoinclassic that referenced this pull request Feb 3, 2017