Implement BIP 101 and accurate sigop/sighash counting #22
Conversation
stephenhutchings commented 3 Jul 2015, 6:35 GMT:

> Hi Luke, happy for these to be distributed under the terms of the MIT licence.
> Let me know if you need anything further from me.
dae0a89 assets-attribution: Update typicons to MIT license (Luke Dashjr)
9a2469e release notes for fee estimation changes (Alex Morcos)
5460b24 Fix typo in release notes. (spin)
Encapsulate mapNodeState in a smart-pointer class with its own lock. Why? So Misbehaving() can be called from the sanity-check-a-message code without holding cs_main. And to get better exception safety (the smart-pointer approach gives RAII semantics). And because protecting fewer things with cs_main is a good idea. Tested by compiling with -DDEBUG_LOCKORDER, running all of the qa/rpc-tests, and running a node on the main network overnight.
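The pattern described above can be sketched in a few lines: a map guarded by its own lock, accessed only through RAII-style scoped acquisition. This Python analogue (class and method names are illustrative, not Bitcoin Core's actual C++ API) uses a context manager where the C++ code uses a smart pointer:

```python
import threading

class LockedNodeStateMap:
    """Sketch of a node-state map guarded by its own lock rather than
    cs_main, with RAII-style (context-manager) access. Names are
    hypothetical, not Bitcoin Core's actual interface."""

    def __init__(self):
        self._lock = threading.Lock()
        self._map = {}

    def __enter__(self):
        # Acquire on entry, release on exit: the same exception safety
        # the smart-pointer approach provides in C++.
        self._lock.acquire()
        return self._map

    def __exit__(self, *exc):
        self._lock.release()
        return False

node_states = LockedNodeStateMap()
with node_states as states:   # lock is held only inside this block
    states["peer-1"] = {"misbehavior": 0}
```

Because the lock travels with the map, callers like a hypothetical `Misbehaving()` never need to hold the global lock just to touch per-node state.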
I need this to write some unit tests for the CNode::ReceiveMsgBytes() function.
New, undocumented-on-purpose -mocktime=timestamp command-line argument to startup with mocktime set. Needed because time-related blockchain sanity checks are done on startup, before a test has a chance to make a setmocktime RPC call. And changed the setmocktime RPC call so calling it will not result in currently connected peers being disconnected due to inactivity timeouts.
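A minimal sketch of the mock-time semantics described above (the real implementation lives in Bitcoin's time utilities; these names are illustrative only): a sentinel value of 0 means "use the real clock", and setting a nonzero value freezes time for all callers:

```python
import time

_mock_time = 0  # 0 means "use the real clock" (assumption, mirroring SetMockTime)

def set_mock_time(t):
    """Analogue of the setmocktime RPC / -mocktime startup argument."""
    global _mock_time
    _mock_time = t

def get_time():
    """What time-related sanity checks would consult instead of time()."""
    return _mock_time if _mock_time else int(time.time())
```

Passing the value at startup (`-mocktime=timestamp`) matters because, as noted above, some sanity checks run before any RPC call can reach the node.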
Blocks generated in -regtest mode would pay to a key that was left in the keypool, which can cause subtle bugs. In particular, I expected every block generated to be unique, but got duplicate blocks between calls to setgenerate. Fixes Bitcoin Core issue 6268.
Did any of the earlier patches change too, or can I assume my old reviews are still valid for them?

Earlier patches are exactly the same, just rebased against the final Core 0.11 release.
Force-pushed from 4eec022 to e3cc5da
Updated last commit with better comments in response to @mikehearn's review.
```cpp
uint64_t MaxBlockLegacySigops(uint64_t nBlockTimestamp, uint64_t nSizeForkActivationTime) const {
    if (nBlockTimestamp < nEarliestSizeForkTime || nBlockTimestamp < nSizeForkActivationTime)
        return MaxBlockSize(nBlockTimestamp, nSizeForkActivationTime)/50;
    return std::numeric_limits<uint64_t>::max(); // Post-fork uses accurate method
}
```
This and further std:: references in this file do not compile for me without adding
#include <limits>
I wonder why that's not necessary for you.
Not for the first time, something doesn't build for you when it does for others. I wonder if the build system is non-deterministic in some way.
Hmm. It should include `<limits>`; it works for me because uint256.h includes `<string>`, which indirectly, eventually, includes `<limits>`.
Tom, what platform / STL are you building with? Perhaps on some STLs `<string>` does not include `<limits>`. We should probably spell it out specifically rather than rely on implementation details of the STL headers.
My thinking around the check limit was: what if someone mines an invalid block with an over-limit transaction? In that case there will be a check against the limits every 8 megabytes of data serialized and hashed. If we end up hashing 8MB too much, it seems like no big deal. As block size goes up, though, the "slack" by which we might verify too much data goes up too, and then the limit can't be made really hard without another consensus rule change. We could end up hashing 16MB too much data (or more, if done in parallel).

However, I guess a lot of the overhead is actually not the hashing but rather the serialization. So serializing, then checking the size of the serialised data against the limits, then hashing it, maybe doesn't save enough time to be worthwhile.

I'll merge this into the only-bigblocks branch and master, and kick off some builds.
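The trade-off discussed above can be sketched as follows: the limit is only checked every `CHECK_INTERVAL` bytes, so validation of an over-limit block may overshoot the limit by at most one interval before aborting. Function and constant names here are hypothetical, purely for illustration:

```python
CHECK_INTERVAL = 8 * 1000 * 1000  # check limits every 8MB serialized/hashed

def hash_with_periodic_check(chunks, limit):
    """Hash-byte accounting with a limit check every CHECK_INTERVAL bytes.
    May hash up to ~CHECK_INTERVAL bytes past the limit before aborting."""
    bytes_hashed = 0
    since_check = 0
    for chunk in chunks:
        bytes_hashed += len(chunk)
        since_check += len(chunk)
        if since_check >= CHECK_INTERVAL:
            since_check = 0
            if bytes_hashed > limit:
                return False  # over-limit block detected, possibly ~8MB late
    return bytes_hashed <= limit
```

Checking after every single chunk instead would catch violations sooner, at the cost of one comparison per chunk; as the comment above notes, serialization cost may dominate either way.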
Force-pushed from e3cc5da to 93ac0f3
Force-pushed from 5a0a1c8 to 33a97ef
Unit test and code for a bigger-block hard fork. Parameters are:

- 8MB cap, doubling every two years (so 16MB in 2018), for twenty years
- earliest possible chain fork: 11 Jan 2016
- activates after miner supermajority, plus a grace period once miner supermajority is achieved

qa/rpc-tests/bigblocks.py mines a 50% upgraded -regtest chain, generates 4MB of transactions, and then tests the upgrade and mining code to make sure larger blocks are only created when the fork conditions are met.

The activation state of the fork is stored in the block tree database; it is written when the threshold is met (and unwritten if the threshold block is re-orged out of the best chain), and read at startup.

Code review and bug fixes by Mike Hearn.
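The size schedule above can be sketched as a step function (BIP 101 proper interpolates linearly between doubling points; that detail is omitted here, and the activation timestamp below is an assumption for illustration only):

```python
FORK_TIME = 1452470400             # ~11 Jan 2016 (illustrative timestamp)
INITIAL_MAX = 8 * 1000 * 1000      # 8MB starting cap
TWO_YEARS = 2 * 365 * 24 * 60 * 60
MAX_DOUBLINGS = 10                 # doubling stops after twenty years

def max_block_size(block_timestamp):
    """Simplified step-function sketch of the bigger-block size schedule."""
    if block_timestamp < FORK_TIME:
        return 1000 * 1000         # pre-fork 1MB limit
    doublings = min((block_timestamp - FORK_TIME) // TWO_YEARS, MAX_DOUBLINGS)
    return INITIAL_MAX * 2 ** doublings
```

Two years after activation this yields 16MB, matching the "16MB in 2018" figure in the commit message.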
…fault. This avoids the situation that has happened several times where the network starts running out of capacity and miners must be manually contacted in a rush to tell them about the command line flag.
…m this point on all changes will be triggered by a bit in nVersion rather than an incrementing integer.
Adds a BlockValidationResourceTracker class that is passed to CheckInputs() / CScriptCheck() to keep track of the exact number of signature operations required to validate a block, and the exact number of bytes hashed to compute signature hashes. Also extends CHashWriter to keep track of number of bytes hashed. Intended to be the starting point for consensus rule changes to replace the incorrect, messy sigop counting and size-limiting rules currently in place.
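The CHashWriter extension described above amounts to counting bytes as they are fed into the hasher. A Python analogue (not the actual C++ interface; names are illustrative):

```python
import hashlib

class CountingHashWriter:
    """Sketch of a hash writer that tracks bytes hashed, as the
    CHashWriter change does; a resource tracker could read bytes_hashed
    to enforce a per-block sighash-bytes limit."""

    def __init__(self):
        self._h = hashlib.sha256()
        self.bytes_hashed = 0   # running total of bytes fed to the hasher

    def write(self, data: bytes):
        self._h.update(data)
        self.bytes_hashed += len(data)

    def digest(self) -> bytes:
        # Bitcoin hashes are double-SHA256
        return hashlib.sha256(self._h.digest()).digest()
```

Because every signature hash flows through the writer anyway, the count is exact and costs only an integer addition per write.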
Replace the inaccurate / somewhat ineffective consensus rule for the number of signature operations per block with new consensus rules that accurately count the number of ECDSA signature operations needed to validate a block, and the number of bytes of data needed to compute signature hashes (to mitigate the attack described in CVE-2013-2292). BIP number for this to be determined.

Constants were chosen such that any 'non-attack' transaction valid under the old rules is also valid under the new rules, while maximum possible block validation times remain well-bounded, though tied to block size increases.

Summary of old rules / new rules:

- Old rule: 20,000 inaccurately-counted sigops for a 1MB block. New rule: 80,000 accurately-counted sigops for an 8MB block. A scan of the last 100,000 blocks for high-sigop blocks finds a maximum of 7,350 sigops, in block 364,773 (in a single, huge, ~1MB transaction). For reference, Pieter Wuille's libsecp256k1 validation code validates about 10,000 signatures per second on a single 2.7GHz CPU core.
- Old rule: no limit on the number of bytes hashed to generate signature hashes. New rule: 1.3 gigabytes hashed per 8MB block to generate signature hashes. Block 364,422 contains a single ~1MB transaction that requires 1.2GB of data to be hashed to generate signature hashes.
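The limits quoted above can be sketched as functions of the maximum block size (assuming linear scaling with block size, which is an illustration choice, not a claim about the exact consensus code):

```python
BASE_BLOCK_SIZE = 8 * 1000 * 1000   # limits below are quoted per 8MB block

def max_block_sigops(max_block_size):
    """New rule: 80,000 accurately-counted sigops per 8MB block."""
    return 80000 * max_block_size // BASE_BLOCK_SIZE

def max_sighash_bytes(max_block_size):
    """New rule: ~1.3GB of sighash bytes per 8MB block."""
    return 1300 * 1000 * 1000 * max_block_size // BASE_BLOCK_SIZE

def legacy_max_sigops(max_block_size):
    """Old rule: 20,000 inaccurately-counted sigops per 1MB block,
    i.e. the MaxBlockSize/50 seen in the legacy-sigops snippet above."""
    return max_block_size // 50
```

Note that the new sigop budget per megabyte (10,000) is half the old one (20,000), which works because accurate counting no longer wildly overestimates typical transactions.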
Force-pushed from 17adac5 to 2030a5a
Force-pushed from d05f99f to 2b918be
This was merged and launched already.
Patches against the final 0.11 Core release that implement BIP 101 and accurate sigop/sighash counting (as described in a draft BIP that hasn't yet been assigned a number).