Implement BIP 101 and accurate sigop/sighash counting #22

Status: Closed. Wants to merge 19 commits.
dae0a89  assets-attribution: Update typicons to MIT license (luke-jr, Jul 3, 2015)
7bf37e1  Merge pull request #6369 (laanwj, Jul 3, 2015)
9a2469e  release notes for fee estimation changes (morcos, Jul 6, 2015)
ebad618  Merge pull request #6383 (laanwj, Jul 6, 2015)
5460b24  Fix typo in release notes. (spinza, Jul 8, 2015)
757ceaa  Merge pull request #6397 (laanwj, Jul 8, 2015)
d26f951  doc: add important information about tx flood to release notes (laanwj, Jul 10, 2015)
1e67b90  Refactor: protect mapNodeState with its own lock (gavinandresen, Jul 13, 2015)
59fd6fc  Refactor, new CNode::FinalizeHeader method (gavinandresen, Jun 5, 2015)
c0dbdd5  Unit test for CNode::ReceiveMsgBytes (gavinandresen, Jun 5, 2015)
4004cda  Allow per-message sanity checking when reading from wire (gavinandresen, Jun 5, 2015)
b4f2d1d  Testing infrastructure: mocktime fixes (gavinandresen, Jun 19, 2015)
4d03e8d  Testing: remove coinbase payment key from keypool (gavinandresen, Jun 19, 2015)
bd82dd5  Implement hard fork to allow bigger blocks (gavinandresen, Jun 16, 2015)
2264ba1  Remove bipdersig.py as it duplicates bipdersig-p2p.py but isn't finis… (mikehearn, Jul 7, 2015)
ae1cc25  Set default block size for miners to be equal to the hard limit by de… (mikehearn, Jul 7, 2015)
13d2024  Make fork warning use version bit masking, on the assumption that fro… (mikehearn, Jul 7, 2015)
b1e5f5b  Allow precise tracking of validation sigops / bytes hashed (gavinandresen, Jul 22, 2015)
2030a5a  Implement accurate sigop / sighashbytes counting block consensus rules (gavinandresen, Jul 24, 2015)
4 changes: 2 additions & 2 deletions doc/assets-attribution.md
@@ -6,7 +6,7 @@ The following is a list of assets used in the bitcoin source and their proper at
### Info
* Icon Pack: Typicons (http://typicons.com)
* Designer: Stephen Hutchings (and more)
- * License: CC BY-SA
+ * License: MIT
* Site: [https://github.com/stephenhutchings/typicons.font](https://github.com/stephenhutchings/typicons.font)

### Assets Used
@@ -30,7 +30,7 @@ Jonas Schnelli
### Info
* Designer: Jonas Schnelli
* Bitcoin Icon: (based on the original bitcoin logo from Bitboy)
- * Some icons are based on Stephan Hutchings Typicons (these are under CC BY-SA license)
+ * Some icons are based on Stephan Hutchings Typicons
* License: MIT

### Assets Used
43 changes: 42 additions & 1 deletion doc/release-notes.md
@@ -44,6 +44,36 @@ supported and may break as soon as the older version attempts to reindex.
This does not affect wallet forward or backward compatibility. There are no
known problems when downgrading from 0.11.x to 0.10.x.

Important information
======================

Transaction flooding
---------------------

At the time of this release, the P2P network is being flooded with low-fee
transactions. This causes a ballooning of the mempool size.

If this growth of the mempool causes problematic memory use on your node, it is
possible to change a few configuration options to work around this. The growth
of the mempool can be monitored with the RPC command `getmempoolinfo`.
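For instance, the `getmempoolinfo` result can be turned into a simple gauge for
deciding when to tighten these options. This is only a sketch: `mempool_pressure`
and the 300 MB budget are hypothetical, not part of bitcoind.

```python
def mempool_pressure(info, budget_bytes=300*1000*1000):
    """Turn a `getmempoolinfo`-style result (its 'bytes' field is the
    serialized size of all mempool transactions) into a fraction of an
    operator-chosen memory budget. `budget_bytes` is a made-up local
    threshold, not a bitcoind option."""
    return info['bytes'] / float(budget_bytes)

# With a made-up RPC result:
info = {'size': 25000, 'bytes': 150 * 1000 * 1000}
print(mempool_pressure(info))  # 0.5: halfway to the budget
```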

One is to increase the minimum transaction relay fee, `minrelaytxfee`, which
defaults to 0.00001. This causes transactions with a lower BTC/kB fee to be
rejected, so fewer transactions enter the mempool.

The other is to restrict the relaying of free transactions with
`limitfreerelay`. This option sets the number of kB/minute at which
free transactions (with enough priority) will be accepted. It defaults to 15.
Reducing this number reduces the speed at which the mempool can grow due
to free transactions.

For example, add the following to `bitcoin.conf`:

minrelaytxfee=0.00005
limitfreerelay=5

More robust solutions are being worked on for a follow-up release.

Notable changes
===============

@@ -125,12 +155,23 @@ There have been many changes in this release to reduce the default memory usage
of a node, among which:

- Accurate UTXO cache size accounting (#6102); this makes the option `-dbcache`
-  precise, where is did a gross underestimation of memory usage before
+  precise where this grossly underestimated memory usage before
- Reduce size of per-peer data structure (#6064 and others); this increases the
number of connections that can be supported with the same amount of memory
- Reduce the number of threads (#5964, #5679); lowers the amount of (esp.
virtual) memory needed

Fee estimation changes
----------------------

This release improves the algorithm used for fee estimation. Previously, -1
was returned when there was insufficient data to give an estimate. Now, -1
will also be returned when there is no fee or priority high enough for the
desired confirmation target. In those cases, it can help to ask for an estimate
for a higher target number of blocks. It is not uncommon for there to be no
fee or priority high enough to be reliably (85%) included in the next block,
and for this reason the default for `-txconfirmtarget=n` has changed from 1 to 2.
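In code, the suggested fallback ("ask for an estimate for a higher target")
could look like the following sketch. `estimate_with_fallback` is a
hypothetical helper, and `estimate_fee` merely stands in for the `estimatefee`
RPC:

```python
def estimate_with_fallback(estimate_fee, target, max_target=25):
    """Walk confirmation targets upward until the estimator has an answer.
    `estimate_fee` mimics the `estimatefee` RPC: it returns a fee rate in
    BTC/kB, or -1 when it cannot give an estimate for that target."""
    for n in range(target, max_target + 1):
        fee = estimate_fee(n)
        if fee >= 0:
            return n, fee
    return None, -1

# A fake estimator with no answer below a 3-block target, as can happen
# when no observed fee is reliably confirmed faster than that:
fake = lambda n: -1 if n < 3 else 0.0005
print(estimate_with_fallback(fake, 1))  # (3, 0.0005)
```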

Privacy: Disable wallet transaction broadcast
----------------------------------------------

2 changes: 1 addition & 1 deletion qa/pull-tester/rpc-tests.sh
@@ -34,8 +34,8 @@ testScripts=(
'walletbackup.py'
);
testScriptsExt=(
+    'bigblocks.py'
     'bipdersig-p2p.py'
-    'bipdersig.py'
'getblocktemplate_longpoll.py'
'getblocktemplate_proposals.py'
'pruning.py'
279 changes: 279 additions & 0 deletions qa/rpc-tests/bigblocks.py
@@ -0,0 +1,279 @@
#!/usr/bin/env python2
# Copyright (c) 2014 The Bitcoin Core developers
# Distributed under the MIT software license, see the accompanying
# file COPYING or http://www.opensource.org/licenses/mit-license.php.

#
# Test mining and broadcast of larger-than-1MB-blocks
#
from test_framework.test_framework import BitcoinTestFramework
from test_framework.util import *

from decimal import Decimal

CACHE_DIR = "cache_bigblock"

# regression test / testnet fork params:
FORK_TIME = 1438387200
FORK_BLOCK_VERSION = 0x20000007
FORK_GRACE_PERIOD = 60*60*24

class BigBlockTest(BitcoinTestFramework):

    def setup_chain(self):
        print("Initializing test directory "+self.options.tmpdir)
        print("Be patient, this test can take 5 or more minutes to run.")

        if not os.path.isdir(os.path.join(CACHE_DIR, "node0")):
            print("Creating initial chain")

            for i in range(4):
                initialize_datadir(CACHE_DIR, i) # Overwrite port/rpcport in bitcoin.conf

            first_block_time = FORK_TIME - 200 * 10*60

            # Node 0 tries to create as-big-as-possible blocks.
            # Node 1 creates really small, old-version blocks
            # Node 2 creates empty up-version blocks
            # Node 3 creates empty, old-version blocks
            self.nodes = []
            # Use node0 to mine blocks for input splitting
            self.nodes.append(start_node(0, CACHE_DIR, ["-blockmaxsize=8000000", "-debug=net",
                                                        "-mocktime=%d"%(first_block_time,),
                                                        "-blockversion=%d"%(FORK_BLOCK_VERSION,)]))
            self.nodes.append(start_node(1, CACHE_DIR, ["-blockmaxsize=50000", "-debug=net",
                                                        "-mocktime=%d"%(first_block_time,),
                                                        "-blockversion=3"]))
            self.nodes.append(start_node(2, CACHE_DIR, ["-blockmaxsize=1000",
                                                        "-mocktime=%d"%(first_block_time,),
                                                        "-blockversion=%d"%(FORK_BLOCK_VERSION,)]))
            self.nodes.append(start_node(3, CACHE_DIR, ["-blockmaxsize=1000",
                                                        "-mocktime=%d"%(first_block_time,),
                                                        "-blockversion=3"]))

            set_node_times(self.nodes, first_block_time)

            connect_nodes_bi(self.nodes, 0, 1)
            connect_nodes_bi(self.nodes, 1, 2)
            connect_nodes_bi(self.nodes, 2, 3)
            connect_nodes_bi(self.nodes, 3, 0)

            self.is_network_split = False
            self.sync_all()

            # Have node0 and node1 alternate finding blocks
            # before the fork time, so the vote is 50% / 50%
            block_time = first_block_time
            for i in range(0,200):
                miner = i%2
                set_node_times(self.nodes, block_time)
                self.nodes[miner].generate(1)
                assert(self.sync_blocks(self.nodes[0:2]))
                block_time = block_time + 10*60

            # Generate 1200 addresses
            addresses = [ self.nodes[3].getnewaddress() for i in range(0,1200) ]

            amount = Decimal("0.00125")

            send_to = { }
            for address in addresses:
                send_to[address] = amount

            tx_file = open(os.path.join(CACHE_DIR, "txdata"), "w")

            # Create four megabytes worth of transactions ready to be
            # mined:
            print("Creating 100 40K transactions (4MB)")
            for node in range(0,2):
                for i in range(0,50):
                    txid = self.nodes[node].sendmany("", send_to, 1)
                    txdata = self.nodes[node].getrawtransaction(txid)
                    tx_file.write(txdata+"\n")
            tx_file.close()

            stop_nodes(self.nodes)
            wait_bitcoinds()
            self.nodes = []
            for i in range(4):
                os.remove(log_filename(CACHE_DIR, i, "debug.log"))
                os.remove(log_filename(CACHE_DIR, i, "db.log"))
                os.remove(log_filename(CACHE_DIR, i, "peers.dat"))
                os.remove(log_filename(CACHE_DIR, i, "fee_estimates.dat"))

        for i in range(4):
            from_dir = os.path.join(CACHE_DIR, "node"+str(i))
            to_dir = os.path.join(self.options.tmpdir, "node"+str(i))
            shutil.copytree(from_dir, to_dir)
            initialize_datadir(self.options.tmpdir, i) # Overwrite port/rpcport in bitcoin.conf

    def sync_blocks(self, rpc_connections, wait=1, max_wait=30):
        """
        Wait until everybody has the same block count
        """
        for i in range(0,max_wait):
            if i > 0: time.sleep(wait)
            counts = [ x.getblockcount() for x in rpc_connections ]
            if counts == [ counts[0] ]*len(counts):
                return True
        return False

    def setup_network(self):
        self.nodes = []
        last_block_time = FORK_TIME - 10*60

        self.nodes.append(start_node(0, self.options.tmpdir, ["-blockmaxsize=8000000", "-debug=net",
                                                              "-mocktime=%d"%(last_block_time,),
                                                              "-blockversion=%d"%(FORK_BLOCK_VERSION,)]))
        self.nodes.append(start_node(1, self.options.tmpdir, ["-blockmaxsize=50000", "-debug=net",
                                                              "-mocktime=%d"%(last_block_time,),
                                                              "-blockversion=3"]))
        self.nodes.append(start_node(2, self.options.tmpdir, ["-blockmaxsize=1000",
                                                              "-mocktime=%d"%(last_block_time,),
                                                              "-blockversion=%d"%(FORK_BLOCK_VERSION,)]))
        self.nodes.append(start_node(3, self.options.tmpdir, ["-blockmaxsize=1000",
                                                              "-mocktime=%d"%(last_block_time,),
                                                              "-blockversion=3"]))
        connect_nodes_bi(self.nodes, 0, 1)
        connect_nodes_bi(self.nodes, 1, 2)
        connect_nodes_bi(self.nodes, 2, 3)
        connect_nodes_bi(self.nodes, 3, 0)

        # Populate node0's mempool with cached pre-created transactions:
        with open(os.path.join(CACHE_DIR, "txdata"), "r") as f:
            for line in f:
                self.nodes[0].sendrawtransaction(line.rstrip())

    def copy_mempool(self, from_node, to_node):
        txids = from_node.getrawmempool()
        for txid in txids:
            txdata = from_node.getrawtransaction(txid)
            to_node.sendrawtransaction(txdata)

    def TestMineBig(self, expect_big):
        # Test if node0 will mine big blocks.
        b1hash = self.nodes[0].generate(1)[0]
        b1 = self.nodes[0].getblock(b1hash, True)
        assert(self.sync_blocks(self.nodes))

        if expect_big:
            assert(b1['size'] > 1000*1000)

            # Have node1 mine on top of the block,
            # to make sure it goes along with the fork
            b2hash = self.nodes[1].generate(1)[0]
            b2 = self.nodes[1].getblock(b2hash, True)
            assert(b2['previousblockhash'] == b1hash)
            assert(self.sync_blocks(self.nodes))

        else:
            assert(b1['size'] < 1000*1000)

        # Reset chain to before b1hash:
        for node in self.nodes:
            node.invalidateblock(b1hash)
        assert(self.sync_blocks(self.nodes))

    def run_test(self):
        # nodes 0 and 1 have 50 mature 50-BTC coinbase transactions.
        # Spend them with 50 transactions, each of which has
        # 1,200 outputs (so each is about 41K big).

        print("Testing fork conditions")

        # Fork is controlled by block timestamp and miner super-majority;
        # large blocks may only be created after a supermajority of miners
        # produce up-version blocks, plus a grace period, AND after a
        # hard-coded earliest-possible date.

        # At this point the chain is 200 blocks long,
        # alternating between version=3 and version=FORK_BLOCK_VERSION
        # blocks.

        # NOTE: the order of these tests is important!
        # set_node_times must advance time. Local time moving
        # backwards causes problems.

        # Time starts a little before the earliest fork time
        set_node_times(self.nodes, FORK_TIME - 100)

        # No supermajority, and before earliest fork time:
        self.TestMineBig(False)

        # node2 creates empty up-version blocks; creating
        # 50 in a row makes 75 of the previous 100 up-version
        # (which is the -regtest activation condition)
        t_delta = FORK_GRACE_PERIOD/50
        blocks = []
        for i in range(50):
            set_node_times(self.nodes, FORK_TIME + t_delta*i - 1)
            blocks.append(self.nodes[2].generate(1)[0])
        assert(self.sync_blocks(self.nodes))

        # Earliest time for a big block is the timestamp of the
        # supermajority block plus grace period:
        lastblock = self.nodes[0].getblock(blocks[-1], True)
        t_fork = lastblock["time"] + FORK_GRACE_PERIOD

        self.TestMineBig(False) # Supermajority... but before grace period end

        # Test right around the switchover time.
        set_node_times(self.nodes, t_fork-1)
        self.TestMineBig(False)

        # Note that the nodes' local times are irrelevant; block timestamps
        # are all that count. node0 will mine a big block with a timestamp in
        # the future from the perspective of the other nodes, but as long as
        # its timestamp is not too far in the future (2 hours) it will be
        # accepted.
        self.nodes[0].setmocktime(t_fork)
        self.TestMineBig(True)

        # Shutdown then restart node[0]; it should
        # remember supermajority state and produce a big block.
        stop_node(self.nodes[0], 0)
        self.nodes[0] = start_node(0, self.options.tmpdir, ["-blockmaxsize=8000000", "-debug=net",
                                                            "-mocktime=%d"%(t_fork,),
                                                            "-blockversion=%d"%(FORK_BLOCK_VERSION,)])
        self.copy_mempool(self.nodes[1], self.nodes[0])
        connect_nodes_bi(self.nodes, 0, 1)
        connect_nodes_bi(self.nodes, 0, 3)
        self.TestMineBig(True)

        # Test re-orgs past the activation block (blocks[-1])
        #
        # Shutdown node[0] again:
        stop_node(self.nodes[0], 0)

        # Mine a longer chain with two version=3 blocks:
        self.nodes[3].invalidateblock(blocks[-1])
        v3blocks = self.nodes[3].generate(2)
        assert(self.sync_blocks(self.nodes[1:]))

        # Restart node0; it should re-org onto the longer chain, reset the
        # activation time, and refuse to mine a big block:
        self.nodes[0] = start_node(0, self.options.tmpdir, ["-blockmaxsize=8000000", "-debug=net",
                                                            "-mocktime=%d"%(t_fork,),
                                                            "-blockversion=%d"%(FORK_BLOCK_VERSION,)])
        self.copy_mempool(self.nodes[1], self.nodes[0])
        connect_nodes_bi(self.nodes, 0, 1)
        connect_nodes_bi(self.nodes, 0, 3)
        assert(self.sync_blocks(self.nodes))
        self.TestMineBig(False)

        # Mine 4 FORK_BLOCK_VERSION blocks and set the time past the
        # grace period: bigger block OK:
        self.nodes[2].generate(4)
        assert(self.sync_blocks(self.nodes))
        set_node_times(self.nodes, t_fork + FORK_GRACE_PERIOD)
        self.TestMineBig(True)

        print("Cached test chain and transactions left in %s"%(CACHE_DIR))
        print("  (remove that directory if you will not run this test again)")


if __name__ == '__main__':
    BigBlockTest().main()
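The fork conditions the test exercises can be sketched in isolation. The sketch
below assumes the -regtest parameters from the test's comments (75 of the last
100 blocks up-version, plus a grace period counted from the block that
completes the supermajority); `is_up_version` and `big_blocks_allowed` are
hypothetical helpers, not functions from this codebase.

```python
FORK_BLOCK_VERSION = 0x20000007
FORK_GRACE_PERIOD = 60*60*24   # one day, as in the test above

def is_up_version(version):
    # In the test, up-version blocks use FORK_BLOCK_VERSION and
    # old-version blocks use plain version 3.
    return version == FORK_BLOCK_VERSION

def big_blocks_allowed(versions, times, now, window=100, threshold=75):
    """versions/times: per-block version and timestamp, oldest first.
    Big blocks are allowed once `threshold` of the last `window` blocks
    are up-version AND `now` is at least FORK_GRACE_PERIOD past the
    timestamp of the block that completed the supermajority."""
    count = 0
    for i, v in enumerate(versions):
        if is_up_version(v):
            count += 1
        if i >= window and is_up_version(versions[i - window]):
            count -= 1   # the oldest block slides out of the window
        if count >= threshold:
            return now >= times[i] + FORK_GRACE_PERIOD
    return False

# A chain shaped like the test's: 200 alternating blocks (a 50/50 vote),
# then 50 consecutive up-version blocks, 10 minutes apart:
F = FORK_BLOCK_VERSION
versions = [3, F] * 100 + [F] * 50
times = [i * 600 for i in range(len(versions))]
# Not allowed right at the chain tip; allowed once the grace period passes:
assert not big_blocks_allowed(versions, times, now=times[-1])
assert big_blocks_allowed(versions, times, now=times[-1] + 2 * FORK_GRACE_PERIOD)
```

The sliding-window count mirrors what the test sets up by hand: 50 up-version
blocks in a row on top of an alternating history are exactly enough to reach
75 of the last 100.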