
Implement BIP 101 and accurate sigop/sighash counting #22

Closed
wants to merge 19 commits

Conversation

gavinandresen
Contributor

Patches against the final Core 0.11 release that implement BIP 101 and accurate sigop/sighash counting (as described in a draft BIP that has not yet been assigned a number).

luke-jr and others added 13 commits July 3, 2015 06:41
stephenhutchings commented 3 Jul 2015, 6:35 GMT:
> Hi Luke, happy for these to be distributed under the terms of the MIT licence.
> Let me know if you need anything further from me.
dae0a89 assets-attribution: Update typicons to MIT license (Luke Dashjr)
9a2469e release notes for fee estimation changes (Alex Morcos)
5460b24 Fix typo in release notes. (spin)
Encapsulate mapNodeState in a smart-pointer class with its own lock.

Why? So Misbehaving() can be called from the sanity-check-a-message code without holding cs_main.

And to get better exception safety (the smart-pointer approach gives RAII semantics).

And because protecting fewer things with cs_main is a good idea.

Tested by compiling with -DDEBUG_LOCKORDER, running all of the qa/rpc-tests, and running a node on the main network overnight.
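For illustration only, the accessor pattern this commit describes might look like the following minimal sketch. LockedAccessor and the use of std::mutex are placeholders, not the names or types from the patch itself:

#include <mutex>

// A pointer-like object that holds a lock for exactly as long as it is
// alive. The destructor releases the lock (RAII), so the protected object
// cannot be touched without the lock, and an exception cannot leak it.
template <typename T>
class LockedAccessor {
public:
    LockedAccessor(std::mutex& mtx, T& objIn) : lock(mtx), obj(objIn) {}
    T* operator->() { return &obj; }
    T& operator*() { return obj; }
private:
    std::unique_lock<std::mutex> lock; // unlocked automatically in destructor
    T& obj;
};

Because the only way to reach the protected map is through an accessor, a caller such as Misbehaving() cannot forget to take the lock, and cs_main is no longer involved.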
I need this to write some unit tests for the
CNode::ReceiveMsgBytes() function.
New, undocumented-on-purpose -mocktime=timestamp command-line
argument to start up with mock time set. Needed because
time-related blockchain sanity checks are done on startup, before a
test has a chance to make a setmocktime RPC call.

And changed the setmocktime RPC call so calling it will not result in
currently connected peers being disconnected due to inactivity timeouts.
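As a rough sketch of the mechanism (simplified here; this is not the exact code from Bitcoin's util layer):

#include <atomic>
#include <cstdint>
#include <ctime>

// If a mock timestamp has been set -- e.g. from a -mocktime=<timestamp>
// argument parsed at startup -- GetTime() returns it instead of the system
// clock, so startup-time blockchain sanity checks see the mocked clock.
static std::atomic<int64_t> nMockTime(0);

void SetMockTime(int64_t nMockTimeIn) { nMockTime = nMockTimeIn; }

int64_t GetTime() {
    int64_t mocktime = nMockTime.load();
    if (mocktime != 0) return mocktime;
    return static_cast<int64_t>(std::time(nullptr));
}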
Blocks generated in -regtest mode would pay to a key that was left
in the keypool, which can cause subtle bugs. In particular, I expected
every block generated to be unique, but got duplicate blocks between
calls to setgenerate.

Fixes Bitcoin Core issue 6268.
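A hypothetical stand-in for the keypool interaction, just to show the failure mode (the real class is the wallet's CReserveKey; this toy version uses integers as "keys"):

#include <cstdint>
#include <deque>

// Toy reserve-key: reserving peeks at the front of the pool; KeepKey()
// permanently consumes the key. If KeepKey() is never called, the key
// stays in the pool, so the next reservation hands out the same key --
// and two identical coinbase payouts can produce identical blocks.
class ToyReserveKey {
public:
    explicit ToyReserveKey(std::deque<uint64_t>& poolIn) : pool(poolIn) {}
    uint64_t GetReservedKey() {
        if (!fReserved) { key = pool.front(); fReserved = true; }
        return key;
    }
    void KeepKey() { if (fReserved) { pool.pop_front(); fReserved = false; } }
private:
    std::deque<uint64_t>& pool;
    uint64_t key = 0;
    bool fReserved = false;
};

Per the commit description, the fix is to make sure the key used by a -regtest generated block is removed from the keypool rather than left there for reuse.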
@mikehearn
Contributor

Did any of the earlier patches change too, or can I assume my old reviews are still valid for them?

@gavinandresen
Contributor Author

Earlier patches are exactly the same, just rebased against the final Core 0.11 release.

@gavinandresen
Contributor Author

Updated last commit with better comments in response to @mikehearn review.

uint64_t MaxBlockLegacySigops(uint64_t nBlockTimestamp, uint64_t nSizeForkActivationTime) const {
    if (nBlockTimestamp < nEarliestSizeForkTime || nBlockTimestamp < nSizeForkActivationTime)
        return MaxBlockSize(nBlockTimestamp, nSizeForkActivationTime) / 50;
    return std::numeric_limits<uint64_t>::max(); // Post-fork uses accurate method
}
Member

This and further std:: references in this file do not compile for me without adding

#include <limits>

I wonder why that's not necessary for you.

Contributor

Not for the first time, something doesn't build for you when it does for others. I wonder if the build system is non-deterministic in some way.

Contributor Author

Hmm. It should include <limits>; it works for me because uint256.h includes <string>, which indirectly, eventually, includes <limits>.

Contributor

Tom, what platform / STL are you building with? Perhaps on some STLs <string> does not include <limits>. We should probably spell it out explicitly rather than rely on implementation details of the STL headers.

@mikehearn
Contributor

My thinking around the check limit was, what if someone mines an invalid block with an overlimit transaction. In this case there will be a check against the limits every 8 megabytes of data serialized and hashed. If we end up hashing 8MB too much, then it seems like no big deal. As block size goes up the "slack" by which we might verify too much data goes up though, and then the limit can't be made really hard without another consensus rule change. We could end up hashing 16MB too much data (or more, if done in parallel).

However, I guess a lot of the overhead is actually not the hashing but the serialization. So serializing, then checking the size of the serialized data against the limits, and only then hashing it might not save enough time to be worthwhile.

I'll merge this into the only-bigblocks branch and master, and kick off some builds.
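For concreteness, the serialize-then-check ordering discussed above could be sketched like this (the function and parameter names here are hypothetical, not from the patch):

#include <cstdint>

// Check the would-be hashed bytes against the running total *before*
// hashing. Because the serialized size is known exactly at this point,
// the limit is enforced exactly, with no 8MB-checkpoint slack -- at the
// cost of buffering the serialized data instead of hashing it as it streams.
bool CheckAndCountHashedBytes(uint64_t nSerializedSize,
                              uint64_t& nBytesHashed,
                              uint64_t nMaxBytesHashed) {
    if (nBytesHashed + nSerializedSize > nMaxBytesHashed)
        return false;              // reject before doing any hashing work
    nBytesHashed += nSerializedSize;
    return true;                   // caller proceeds to hash the buffer
}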

gavinandresen and others added 4 commits August 4, 2015 10:25
Unit test and code for a bigger-block hard fork.
Parameters are:
  8MB cap
  ... doubling every two years (so 16MB in 2018)
  ... for twenty years
  ... earliest possible chain fork: 11 Jan 2016
  ... after miner supermajority
  ... and grace period once miner supermajority achieved

qa/rpc-tests/bigblocks.py mines a 50% upgraded -regtest chain,
generates 4MB of transactions, and then tests the upgrade and
mining code to make sure larger blocks are only created when
the fork conditions are met.

The activation state of the fork is stored in the block tree database;
it is written when the threshold is met (and unwritten if the
threshold block is re-orged out of the best chain), and read at startup.

Code review and bug fixes by Mike Hearn.
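A back-of-the-envelope sketch of the size schedule in this commit message. The constants are taken from the message itself; the actual implementation also handles activation state, and BIP 101 specifies the growth curve precisely, so treat this step-doubling version as an approximation:

#include <cstdint>

static const uint64_t nEarliestForkTime = 1452470400;             // 11 Jan 2016 00:00 UTC
static const uint64_t nInitialMaxSize   = 8000000;                // 8MB cap at fork
static const uint64_t nDoublingPeriod   = 2 * 365 * 24 * 60 * 60; // two years, in seconds
static const uint64_t nMaxDoublings     = 10;                     // twenty years of doubling

// Step-doubling approximation: 8MB at the fork, 16MB two years later, etc.
uint64_t SketchMaxBlockSize(uint64_t nBlockTimestamp) {
    if (nBlockTimestamp < nEarliestForkTime)
        return 1000000; // pre-fork 1MB limit
    uint64_t nDoublings = (nBlockTimestamp - nEarliestForkTime) / nDoublingPeriod;
    if (nDoublings > nMaxDoublings)
        nDoublings = nMaxDoublings;
    return nInitialMaxSize << nDoublings;
}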
…fault.

This avoids the situation that has happened several times where the network starts running out of capacity and miners must be manually contacted in a rush to tell them about the command line flag.
…m this point on all changes will be triggered by a bit in nVersion rather than an incrementing integer.
Adds a BlockValidationResourceTracker class that is passed to
CheckInputs() / CScriptCheck() to keep track of the exact number
of signature operations required to validate a block, and the
exact number of bytes hashed to compute signature hashes.

Also extends CHashWriter to keep track of number of bytes hashed.

Intended to be the starting point for consensus rule changes
to replace the incorrect, messy sigop counting and size-limiting
rules currently in place.
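The CHashWriter extension can be pictured as a writer that counts bytes on the way into the hash function. This is a sketch only; the real class wraps a SHA256 context, and the names below are illustrative:

#include <cstddef>
#include <cstdint>

class CountingHashWriter {
public:
    // Every byte written toward a signature hash is tallied here, so the
    // resource tracker can report exact bytes-hashed per block.
    void write(const char* pch, size_t nSize) {
        nBytesHashed += nSize;
        // ... feed pch[0..nSize) into the underlying SHA256 state ...
        (void)pch;
    }
    uint64_t GetNumBytesHashed() const { return nBytesHashed; }
private:
    uint64_t nBytesHashed = 0;
};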
Replace the inaccurate / somewhat ineffective consensus rule for
number of signature operations per block with new consensus rules
that accurately count the number of ECDSA signature operations needed
to validate a block, and the number of bytes of data needed to compute
signature hashes (to mitigate the attack described in CVE-2013-2292).

BIP number for this to be determined. Constants were chosen such that
any 'non-attack' transaction valid under the old rules is also valid
under the new rules, while maximum possible block validation times
are well-bounded, though tied to block size increases.

Summary of old rules / new rules:

Old rules: 20,000 inaccurately-counted-sigops for a 1MB block
New: 80,000 accurately-counted sigops for an 8MB block

A scan of the last 100,000 blocks for high-sigop blocks gets
a maximum of 7,350 sigops in block 364,773 (in a single, huge,
~1MB transaction).

For reference, Pieter Wuille's libsecp256k1 validation code
validates about 10,000 signatures per second on a single
2.7GHz CPU core.

Old rules: no limit for number of bytes hashed to generate
signature hashes

New rule: 1.3 gigabytes hashed per 8MB block to generate
signature hashes

Block 364,422 contains a single ~1MB transaction that requires
1.2GB of data hashed to generate signature hashes.
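Scaled to the maximum block size, the limits quoted in this commit message work out to roughly the following check. The constants are derived from the message (80,000 sigops and ~1.3GB hashed per 8MB block); the real code may structure this differently:

#include <cstdint>

// 8,000,000 / 100 = 80,000 sigops; 8,000,000 * 160 = 1,280,000,000 bytes
// (~1.3GB) of signature-hash data -- matching the per-8MB figures above.
bool CheckValidationResources(uint64_t nMaxBlockSize,
                              uint64_t nSigOps, uint64_t nBytesHashed) {
    uint64_t nMaxSigOps      = nMaxBlockSize / 100;
    uint64_t nMaxBytesHashed = nMaxBlockSize * 160;
    return nSigOps <= nMaxSigOps && nBytesHashed <= nMaxBytesHashed;
}

At the quoted libsecp256k1 rate of roughly 10,000 verifications per second, the 80,000-sigop worst case is about eight seconds of ECDSA work on a single core.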
@mikehearn
Contributor

This was merged and launched already.

@mikehearn closed this Aug 23, 2015