
Non-forking extra nonce added to Bitcoin header #5102

Closed

Conversation

@timohanke

Block version set to 3.
Unit test included.
Re-define 15 unused bits of the version field as an extra nonce inside the block header.
Accompanied and explained by a BIP; see the wiki (https://github.com/BlockheaderNonce2/bitcoin/wiki).
Not forking.
Backwards compatible with GBT clients that are not aware of this.

Timo Hanke & Sergio Lerner
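For readers who do not follow the BIP link: a minimal sketch of the proposed split of the 32-bit version field, assuming the layout described in the BIP (low 16 bits keep the version, bits 16-30 carry the extra nonce, bit 31 stays 0 because nVersion is de facto signed). Masks and helper names are illustrative, not the patch's actual identifiers.

    #include <cstdint>

    static const uint32_t VERSION_MASK = 0x0000FFFF; // low 16 bits: block version
    static const uint32_t NONCE2_MASK  = 0x7FFF;     // 15 usable extra-nonce bits
    static const int      NONCE2_SHIFT = 16;         // nonce2 lives in bits 16..30

    uint32_t PackHeaderVersion(uint32_t version, uint32_t nNonce2)
    {
        // Bit 31 is left clear so legacy signed comparisons keep working.
        return (version & VERSION_MASK) | ((nNonce2 & NONCE2_MASK) << NONCE2_SHIFT);
    }

    uint32_t GetNonce2(uint32_t nVersion)  { return (nVersion >> NONCE2_SHIFT) & NONCE2_MASK; }
    uint32_t GetVersion(uint32_t nVersion) { return nVersion & VERSION_MASK; }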

@@ -50,7 +50,7 @@ BITCOIN_TESTS =\
   test/hash_tests.cpp \
   test/key_tests.cpp \
   test/main_tests.cpp \
-  test/miner_tests.cpp \
+  test/blockchain_tests.cpp \

Nit: Alphabetical ordering.

@timohanke (Author)

Ok, fixed.

@maaku (Contributor) commented Oct 18, 2014

I'm not convinced of the necessity of this idea in general, but if it were to be implemented I would recommend serializing the nVersion field as a VarInt (Pieter Wuille's multi-byte serialization format) and using the remaining space of the 4 bytes as your extra nonce.

That would allow serialization of numbers up to 0x1020407f (slightly over 28 bits) before the 4-byte field is exhausted. For version numbers less than 0x204080 there will be at least one byte of padding space left over for extra-nonce usage (two bytes if less than 0x4080, three bytes if less than 0x80). For version values up to 127, the format is exactly identical when the padding bytes are zero.
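For reference, a self-contained sketch of that multi-byte format, mirroring the WriteVarInt encoder in Bitcoin Core's serialize.h (EncodeVarInt here is an illustrative standalone helper, not the library function): 7 data bits per byte, the high bit as a continuation flag, and a -1 on each continuation so every value has exactly one encoding.

    #include <cstdint>
    #include <vector>

    std::vector<unsigned char> EncodeVarInt(uint64_t n)
    {
        std::vector<unsigned char> tmp; // collected least-significant group first
        while (true) {
            // Every byte except the least-significant group gets the 0x80 flag.
            tmp.push_back((n & 0x7F) | (tmp.empty() ? 0x00 : 0x80));
            if (n <= 0x7F) break;
            n = (n >> 7) - 1; // the -1 removes redundant encodings
        }
        return std::vector<unsigned char>(tmp.rbegin(), tmp.rend()); // MSB first
    }

    // 0x7F fits in 1 byte, 0x407F in 2, 0x20407F in 3, and 0x1020407F in 4 --
    // matching the thresholds quoted above.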

@timohanke (Author)

@maaku If someone relies on the availability of the nonce2 space, then they would not want to see its range change, for example from 3 bytes while the version is <0x80 to 2 bytes after that. For it to be usable it has to be the same size forever.

I have heard ideas thrown around of no longer treating the version field as a counter but using it as a bit vector (flags for features) sometime in the future. If that happens, there is no point switching to a varint now.

@sipa (Member) commented Oct 20, 2014

If there were no downsides to this, it would be a clear improvement to need less frequent merkle tree updates sent to hashing hardware. But still, if you use ntime rolling in hardware, you just need 128 variations of the merkle tree roots known to the hardware per TiH/s to last indefinitely (until you want to update the block contents), so the benefit of this seems mostly convenience and not really a fundamental improvement.

Also, why 15 bits? It seems like a strange number, and at some point the same problems will inevitably appear again - if one update per 4 GiH is an inconvenience now, I'm sure that one update per 128 TiH will be an inconvenience too someday.

@sipa (Member) commented Oct 20, 2014

Sorry, I completely missed your link to the BIP and the explanation of why 15 bits in it.

So to comment: the nVersion signedness is a problem, but in both directions: the reference client will start complaining if the version in headers exceeds what the software knows about - and similarly, BIP34 is triggered by nVersion >= 2, as you say. So you're really stuck in both directions, and you pretty much first need to introduce new software that 1) turns the signed nVersion into an unsigned one (which we should have done long ago) and 2) ignores the upper 16 bits. Once that is sufficiently rolled out, you could start using it as nonce2 (the full 16 bits of it). You also wouldn't actually need a new block version.

Anyway, still, if you use ntime rolling you only need one merkle root variation per 4 GiH/s to drive hardware until you want the transaction set updated. I'm not up to date enough to know whether this is currently used in practice, but it seems pretty simple to do - and needs much less time to deploy than this nVersion reinterpretation.

@timohanke (Author)

@sipa I'm not sure I understand correctly what you want to say about the signedness of nVersion, so excuse me if I am just repeating what you said: we should have turned off the signedness of nVersion a long time ago, before BIP34. It would not have been a fork at that time. Now, after BIP34, turning the signedness off is a hardfork. If you wanted to do a hardfork, then it doesn't matter whether you do it before or after this BIP. You can either turn signedness off first (hardfork) and then split 16 bits off for nonce2 (no fork), or you can split 15 bits off for nonce2 first (no fork) and then release the 16th bit for use in nonce2 as well (hardfork).

[EDIT] When you say "introduce new software that ignores the upper 16 bits", that is a hardfork. [/EDIT]

Your other argument is valid: 15 or 16 bits achieves something but not everything. As I said in the BIP: it reduces (not eliminates) the incentive to mess with the timestamp etc. by a factor of ~2^16. The choice of 16 bits is a compromise. Having 24 bits for nonce2 would be better, but until it is decided how exactly we want to use nVersion going forward, I think nobody would want to prematurely shrink nVersion further. Technically, version numbers could be recycled, in which case a few bits of nVersion would be enough. Should that turn out to be the way nVersion is used, then we can still assign more bits to nNonce2 at a later point in time (with a non-forking change).

Regarding "real-time rolling of ntime": that is irrelevant, because what dictates the design of a miner is the burst of work after a new-block event. You don't want to spend a whole second refilling everything. So if your system can handle a new-block event without too much latency, then it can also keep up with the work demand afterwards. Having created work that is good "indefinitely" is not relevant. For all practical purposes we can assume there is a new block (or a block update that we want to follow) every second.

@sipa (Member) commented Oct 20, 2014

(written before you edited) @bcpki you're very right, and it's even worse than you say. Due to BIP34, getting rid of the sign in block headers' nVersion is a hard fork, as a minority of old nodes will reject blocks that set the high bit.

The suggestion of rolling version numbers is interesting, and it can be parallelized as well (allowing multiple softforking changes to be rolled out simultaneously, which needs less coordination): each feature chooses a not-yet-defined bit, uses it for BIP34-style voting, and when the feature becomes mandatory, the bit switches back to 0 and becomes available again. To combine with the actual nVersion >= 2 requirement, this really just means that bits 0x80000000 and 0x2 are unavailable - so we have 30 bits left.
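A minimal sketch of that voting scheme, under stated assumptions: the 750-of-1000 threshold is borrowed from the BIP34 rollout, and all names here are illustrative, not an actual implementation.

    #include <cstdint>
    #include <vector>

    // Bits that can never signal: the sign bit must stay 0 (nVersion is treated
    // as signed) and 0x2 is consumed by the existing nVersion >= 2 rule.
    static const uint32_t UNAVAILABLE_BITS = 0x80000000u | 0x2u; // 30 bits remain

    bool MinerSignals(uint32_t nVersion, uint32_t featureBit)
    {
        return (featureBit & UNAVAILABLE_BITS) == 0 && (nVersion & featureBit) != 0;
    }

    // BIP34-style supermajority over a trailing 1000-block window; once the
    // feature is mandatory, miners clear the bit, freeing it for the next one.
    bool FeatureLockedIn(const std::vector<uint32_t>& last1000Versions, uint32_t featureBit)
    {
        if (last1000Versions.size() < 1000) return false;
        int votes = 0;
        for (uint32_t v : last1000Versions)
            if (MinerSignals(v, featureBit)) ++votes;
        return votes >= 750; // illustrative threshold
    }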

Regarding the improvement itself - right, I understand your focus is latency and not throughput, and thus having big work units that allow indefinite hashing is not advantageous. Still, I'm not very convinced by the "let's improve it a bit, for now" argument.

Some numbers: to update the headers for 1 PH/s to mine on, you need 233k merkle roots per second (10^15 / 2^32 ≈ 233,000). With 4096-transaction blocks, that means 30.3M double-SHA256 operations. With 1 GH/s of hardware dedicated to this, that takes 30ms. Is that unreasonable, or am I missing how the hardware controller systems work?

@jgarzik (Contributor) commented Oct 20, 2014

It still sounds like ntime rolling can be employed, with no need for a hard fork.

ntime can go backwards, as well as forwards.
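A minimal sketch of what ntime rolling amounts to in a controller, assuming the standard consensus bounds (nTime must exceed the median time of the last 11 blocks and be at most 2 hours ahead of network time); field and function names are hypothetical:

    #include <cstdint>

    // Each distinct nTime value reopens the full 2^32 nonce space for the same
    // merkle root; the offset may be negative (backwards) or positive (forwards).
    bool RollNTime(uint32_t nTimeIssued, int32_t offset,
                   uint32_t medianTimePast, uint32_t networkTime,
                   uint32_t& nTimeOut)
    {
        uint32_t candidate = nTimeIssued + offset;
        if (candidate <= medianTimePast) return false;            // too far back
        if (candidate > networkTime + 2 * 60 * 60) return false;  // > 2h ahead
        nTimeOut = candidate;
        return true;
    }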

@timohanke (Author)

@sipa 1 GH/s on a CPU is quite a lot...

@jgarzik ntime rolling can be employed and is the norm. The question is whether that is desirable and how much is tolerable. If there is demand for 16 bits of rolling, that translates to 18 hours of ntime (2^16 seconds ≈ 18.2 h).

@sipa (Member) commented Oct 21, 2014

@bcpki Why do you use a CPU for updating work, if you're in the business of building hardware that does exactly SHA256 hashing...?

@luke-jr (Member) commented Oct 21, 2014

@sipa We don't really want the hardware doing the work updates. Existing attempts to do just that, besides severely limiting what we can do in a hardfork, can also limit scalability if they cannot hash the generation transaction or merkle links... (although an FPGA could help, it won't get to 1 GH/s...)

@timohanke (Author)

@sipa I tried to explain that in the BIP. There is a difference between the SHA256 of the block header, which is highly specialized (1. it starts from a midstate, 2. it covers exactly 80 bytes, 3. you are not interested in all results, just the matching ones), and the SHA256 in the merkle tree, which is general-purpose. Nobody is going to build special-purpose hardware for the merkle tree hashing (which I called pre-hashing).

I was also trying to explain the economic incentives created by the cost of pre-hashing and how they will unfold going forward. If the cost of pre-hashing reaches a certain percentage of the cost of the block-header hashing, then there is suddenly an economic incentive to mess with anything malleable in the block header itself, the timestamp being the first target. This is what I predict will happen.
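To make the asymmetry concrete, a sketch of the 80-byte header layout and where the midstate optimization applies (the offsets follow the standard header serialization; the comments are illustrative):

    #include <cstdint>

    struct BlockHeader80 {           // 80 bytes when serialized
        uint32_t nVersion;           // bytes  0..3  (nNonce2 would live here)
        uint8_t  hashPrevBlock[32];  // bytes  4..35
        uint8_t  hashMerkleRoot[32]; // bytes 36..67
        uint32_t nTime;              // bytes 68..71
        uint32_t nBits;              // bytes 72..75
        uint32_t nNonce;             // bytes 76..79
    };
    // SHA256 works on 64-byte chunks, so bytes 0..63 form a first chunk whose
    // compression result (the "midstate") is computed once per work unit;
    // rolling nNonce only re-runs the second chunk, and only below-target
    // results are kept. Rolling nNonce2 inside nVersion costs one fresh
    // midstate compression -- still far cheaper than re-hashing a merkle
    // branch -- whereas merkle-tree hashing is variable-length, needs every
    // digest in full, and has no reusable midstate: general-purpose work.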

@sipa (Member) commented Oct 21, 2014

Thanks for the clarification, I understand, and sorry that I didn't get that from the BIP proposal right away.

It still seems to me that at some point prehashing will hit scalability problems, and will require more customized setups/hardware - at which point this will not really affect things much anymore.

@rebroad (Contributor) commented Oct 21, 2014

What are "4096-transaction" blocks?

@@ -318,9 +318,10 @@ uint256 CBlock::CheckMerkleBranch(uint256 hash, const std::vector<uint256>& vMer
 std::string CBlock::ToString() const
 {
     std::stringstream s;
-    s << strprintf("CBlock(hash=%s, ver=%d, hashPrevBlock=%s, hashMerkleRoot=%s, nTime=%u, nBits=%08x, nNonce=%u, vtx=%u)\n",
+    s << strprintf("CBlock(hash=%s, ver=%d, nNonce2=%u, hashPrevBlock=%s, hashMerkleRoot=%s, nTime=%u, nBits=%08x, nNonce=%u, vtx=%u)\n",

Nit: Even if Tinyformat catches such things, you could update this to be ver=%u also :), as version is now unsigned.

@timohanke (Author)

Ok

@timohanke (Author)

@rebroad A block with 4096 transactions in it, making the merkle tree 12 levels deep (2^12 = 4096). An example of what would currently be considered a "large" block.

@timohanke (Author)

Does anybody know what needs to change to make it pass the Travis CI build?

@luke-jr (Member) commented Oct 27, 2014

@bcpki Why are you trying to #include the test .cpp files like that? That's probably related to the compile failures, but I'm not sure how. I'd just add the nonce2_tests to the Makefile and skip combining the two with #includes.

@sipa (Member) commented Oct 27, 2014

test/blockchain_tests.cpp:13:32: fatal error: test/miner_tests.cpp: No such file or directory

miner_tests.cpp is not listed in any Makefile anymore, so it's not included in any source package, I assume.

@timohanke (Author)

@luke-jr The reason for the two #includes in one file is to define the order of execution. If I just add the two tests to the Makefile, then their order of execution is not well-defined. But miner_tests has to execute first because it needs to start with version-1 blocks.

@luke-jr (Member) commented Oct 27, 2014

.cpp files don't have an order of execution, they just define things... There is no guarantee AFAIK that the tests will run in the order defined in the files.

@timohanke (Author)

Re: "There is no guarantee AFAIK that the tests will run in the order defined IN the files."
Do you mean the order in which test suites are defined inside one and the same .cpp file does not guarantee the order of execution? I would think it does.

@luke-jr (Member) commented Oct 28, 2014

That is what I mean, correct. I would not think it does, since C++ does not usually have a defined order for initialisation (in practice, it's usually done pseudo-randomly).

@timohanke (Author)

https://groups.google.com/forum/#!topic/boost-list/MtcvrVP0uXg has the following quote:

    In what order are executed automatically registered test suites and test cases?
    For the test units registered in different test files there is no order.
    For the test units within the same test file the order will be "natural", from top to bottom.
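To make that concrete, a minimal sketch of a combined test file that relies on this within-file, top-to-bottom registration order (suite and test names here are illustrative, not the PR's actual tests):

    // blockchain_tests.cpp -- both suites in one translation unit so that
    // Boost.Test's registration order runs miner_tests first.
    #define BOOST_TEST_MODULE Nonce2OrderingSketch
    #include <boost/test/included/unit_test.hpp>

    BOOST_AUTO_TEST_SUITE(miner_tests)   // runs first: builds the version-1 chain
    BOOST_AUTO_TEST_CASE(creates_v1_chain)
    {
        BOOST_CHECK(true); // placeholder for the real miner test body
    }
    BOOST_AUTO_TEST_SUITE_END()

    BOOST_AUTO_TEST_SUITE(nonce2_tests)  // runs second, on the chain built above
    BOOST_AUTO_TEST_CASE(rolls_extra_nonce)
    {
        BOOST_CHECK(true); // placeholder
    }
    BOOST_AUTO_TEST_SUITE_END()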

@laanwj (Member) commented Jan 26, 2015

In general, the unit tests shouldn't depend on their order of execution (they should not have side effects). If they do, that's a bug.

@laanwj (Member) commented Feb 4, 2015

Just for info: block version 3 has now been used by BIP66, so this needs a bump.

I'm going to close this pull for now, as it is not clear to me how and whether to move forward on it; there appears to be no consensus to do this.

laanwj closed this Feb 4, 2015
bitcoin locked as resolved and limited conversation to collaborators Sep 8, 2021