DCP: 0011
Title: Change PoW to BLAKE3 and ASERT
Author: Dave Collins <davec@decred.org>
Status: Active
Created: 2023-04-13
License: CC0-1.0
License-Code: ISC
Requires: DCP0001, DCP0005, DCP0006, DCP0010


Abstract

This specifies modifications to the Decred Proof of Work (PoW) requirements as follows:

  • Separate the block hash from the PoW hash
  • Switch to the BLAKE3 hashing algorithm for PoW
  • Reset the required target difficulty for mining upon activation
  • Change to a more responsive difficulty adjustment algorithm named ASERT

Motivation

This proposal is the result of continued analysis after the previous subsidy split change made by DCP0010 which shows that the majority of the PoW hash power continues to be highly centralized and controlled by a mining cartel.

The proposed change to the BLAKE3 hashing algorithm is intended to break up the aforementioned mining cartel and further improve decentralization of the issuance process by making the specialized mining hardware (ASICs) that enables the centralization obsolete.

Block Hash vs Proof of Work Hash

The block hash currently serves multiple functions. Some important functions that are relevant to this proposal are:

  • It acts as a unique identifier for each block
  • It commits to all of the information in the block to ensure immutability
  • It ties each block to the previous one thereby forming the blockchain
  • It serves as the hash that is used to impose the PoW requirements

Changing to a new hashing algorithm for PoW necessarily means a different hash value will be generated as compared to the one it replaces.

In light of that, using a distinct hash to impose the PoW requirements instead of the block hash allows the hashing algorithm to be changed independently without affecting any of the other functions that rely on the block hash as generated by BLAKE-256 using 14 rounds.

Required Target Difficulty Reset

The current required target difficulty is based on the existence of the aforementioned specialized hardware and would therefore be significantly too difficult for non-specialized hardware under the new hashing algorithm. Thus, the required target difficulty also needs to be reset to a much lower initial value when moving to the new hashing algorithm.

Difficulty Adjustment Algorithm

An important aspect of PoW is maintaining the target average block time as the amount of mining hash power varies. This is accomplished by dynamically varying the required difficulty by means of a difficulty adjustment algorithm (DAA).

The algorithm that Decred has used since launch is based on an exponential moving average (EMA) and does a fairly good job of maintaining the target block time. Concretely, in over 7 years of operation as of the time of this writing, the average block time for the main network matches the expected 5 minutes, and the network is only about 8.5 hours behind the perfectly ideal schedule.

Nevertheless, the DAA prior to this proposal only updates the target difficulty periodically (every 144 blocks on the main network) and thus is marginally susceptible to some undesirable behavior when the network is predominantly mined by GPUs as opposed to ASICs.

One of the most notable cases is related to "hit and run" mining. In short, it involves miners taking advantage of weaknesses in the various DAAs implemented by different blockchains by constantly switching the hash power between the affected blockchains in order to achieve higher profitability than steady mining would provide. Since it is often effective, a lot of popular modern mining software automatically attempts to take advantage of this technique in order to maximize profitability. This behavior, in turn, can lead to extremely large swings in the hash power of the affected blockchains.

The aforementioned behavior is not desirable as it can lead to consequences that are harmful to the overall network such as:

  • Extended transaction confirmation times
  • Reduced profitability for the remaining steady miners who are keeping the network running
  • A snowball effect whereby remaining steady miners are either forced to follow suit or stop mining altogether, further exacerbating the situation

The proposed ASERT DAA hardens the network further against such undesirable behaviors by improving responsiveness and avoiding some other minor edge conditions, such as error accumulation and clipping, that miners might try to use to gain an unfair advantage. The proposed algorithm also has the benefits of being simpler to implement and more efficient to calculate than the EMA DAA.

Finally, it is worth noting that Decred's EMA DAA is an approximation of the relative version of the proposed ASERT algorithm. In other words, the proposed algorithm is a slightly improved version of the algorithm with a proven track record that it replaces.

Specification

Integer Math

In order to facilitate better compatibility across implementations and languages, the various formulas defined by this specification make use of integer math instead of floating point math as denoted by the use of the truncated integer part (Int) and floor[1] functions. This is highly desirable for consensus code since floating point math can be problematic across languages due to issues such as rounding errors and uncertainty in their respective libraries.

Proof of Work Hash

As of this specification, a new PoW hash that is distinct from the block hash MUST be introduced and the block hash MUST remain the same as it is prior to this specification. That is, the block hash MUST continue to commit to all fields of the block header with BLAKE-256 using 14 rounds.

The new PoW hash MUST also commit to all fields of the block header with BLAKE3 by using the exact same serialization of the block header data as the block hash.

Additionally, all consensus rules that involve the block hash, with the exception of the modifications explicitly detailed in the PoW Enforcement section, MUST be retained.

The serialized data for both hashes is as follows:

Zero-based Buffer Offset Number of Bytes Name Data Type Notes
0 4 Block Version int32 Little endian.
4 32 Previous Block Header Hash [32]byte Internal byte order. The BLAKE-256 hash of the previous block header. This is NOT the BLAKE3 PoW hash this specification introduces.
36 32 Merkle Root Hash [32]byte Internal byte order. At the time of this specification, this is the combined transaction tree merkle root as defined in DCP0005.
68 32 Commitment Root Hash [32]byte Internal byte order. At the time of this specification, this is the version 1 block header commitment root hash as defined in DCP0005.
100 2 Vote Bits uint16 Little endian.
102 6 Final State [6]byte Internal byte order.
108 2 Num Voters uint16 Little endian.
110 1 Fresh Stake uint8
111 1 Num Revocations uint8
112 4 Pool Size uint32 Little endian.
116 4 Difficulty Bits uint32 Little endian. Difficulty Bits Format.
120 8 Stake Difficulty int64 Little endian. At the time of this specification, this is the result of the stake difficulty formulas as defined in DCP0001.
128 4 Block Height uint32 Little endian.
132 4 Block Size uint32 Little endian.
136 4 Block Timestamp uint32 Little endian.
140 36 Extra Data [36]byte Internal byte order. Implementations MUST treat this field as a binary blob and serialize it unaltered.
176 4 Stake Version uint32 Little endian.

Note that it is common for implementations to define separate fields within their internal block header data structures that split the Extra Data space up in order to facilitate the mining process.

For example, for legacy reasons, the first 4 bytes are often a separate Nonce field treated as a little-endian uint32 in the data structures with the remaining 32 bytes denominated as the ExtraData field.

Miners and pools may allocate this space in whatever manner is the most efficient for them. However, it is HIGHLY recommended to use the first 8 bytes as the nonce for each individual miner, the next 4 bytes for the pool nonce, and set the remaining bytes to all zero.

This recommendation allows pools to support up to 2^32 devices, each capable of hash rates up to around 18.4 EH/s (18.4 × 10^18 hashes/sec).
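
The following minimal Go sketch illustrates the recommended allocation. The helper name and the byte order used within the two nonce fields are illustrative assumptions rather than requirements, since consensus treats the entire field as an opaque blob:

import "encoding/binary"

// makeExtraData is a hypothetical helper that applies the recommended
// allocation of the 36-byte extra data space: an 8-byte per-miner nonce
// followed by a 4-byte pool nonce, with the remaining 24 bytes set to zero.
func makeExtraData(minerNonce uint64, poolNonce uint32) [36]byte {
	var extraData [36]byte
	binary.LittleEndian.PutUint64(extraData[0:8], minerNonce)
	binary.LittleEndian.PutUint32(extraData[8:12], poolNonce)
	// The remaining bytes are already all zero as recommended.
	return extraData
}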

Difficulty Bits Format

The target difficulty threshold MUST retain the existing difficulty bits format when specified in the difficulty bits field of each header.

The difficulty bits format represents the most significant bits of a target difficulty as an unsigned 32-bit integer. The representation is similar to IEEE754 floating point[2].

Like IEEE754 floating point, there are three basic components: the sign, the exponent, and the mantissa (also called the significand). They are broken out per the following diagram and specification:

Bits Description
0-22 Mantissa (Significand). The most significant bits. A bitmask of 0x007fffff may be applied to extract this value.
23 Sign bit. The encoded value MUST be interpreted as negative when this bit is set.
24-31 Exponent. The base 256 exponent plus 3.
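
In terms of a nonzero target difficulty T with sign s, the exponent x and mantissa m these components encode are given by the following formulas, and the target difficulty a given encoding represents is (-1)^s × m × 256^(x − 3):

x = floor(log_256(2 × |T|)) + 1

m = floor(|T| / 256^(x − 3))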

Implementation Warning

The provided formulas for encoding specify the required calculations precisely; however, most programming languages will not produce the correct results for all values when naively implementing the formulas directly due to rounding errors and precision constraints in libraries that implement logarithms [Details].

Therefore, it is HIGHLY RECOMMENDED to use the algorithms outlined in the remainder of this section to perform the encoding and decoding versus attempting to implement the formulas directly. The provided algorithms implement the formulas with a combination of integer math, modular arithmetic, bit shifts, and bitmasks, all of which produce exact results and are highly portable.

Precision Considerations

The following algorithms rely on arbitrary precision integers in order to support decoding the full range of possible values that can be encoded.

However, in practice, all target difficulties in Decred are enforced to be unsigned 256-bit integers.

Due to that, implementations MAY alternatively use unsigned 256-bit integers so long as they properly detect and reject encoded values that are negative or are greater than 2^256 - 1, the maximum value representable by an unsigned 256-bit integer, when decoding.

Decoding Algorithm

The following algorithm will decode a target difficulty T from difficulty bits c using intermediate variables m for the mantissa and x for the exponent:

  1. Set m = c (mod 2^23) so that it is the least significant 23 bits of c
    Implementations MAY optimize this by using a bitmask: m = c & 0x007fffff
  2. Set x = floor(c / 2^24) so that it is the most significant 8 bits of c
    Implementations MAY optimize this by using a right bit shift: x = c >> 24
  3. If x ≤ 3 then set T = floor(m / 256^(3 - x)) otherwise set T = m * 256^(x - 3)
    Implementations MAY optimize the x ≤ 3 case by using a right bit shift: T = m >> (8 * (3-x)) and the x > 3 case by using a left bit shift: T = m << (8 * (x-3))
    NOTE: Ensure T is capable of handling big integers up to the maximum supported decodable values (±8,388,607 × 2^2016 when supporting the entire range or 2^256 - 1 if only supporting unsigned 256-bit integers)
  4. If bit 23 of c is set then return -T otherwise return T
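
For example, decoding c = 0x1d00ffff: m = 0x1d00ffff & 0x007fffff = 0xffff, x = 0x1d00ffff >> 24 = 29, and since x > 3 and bit 23 is not set, T = 0xffff × 256^26 = 0xffff × 2^208.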

Encoding Algorithm

The following algorithm will encode a target difficulty T to difficulty bits c using intermediate variables m for the mantissa and x for the exponent:

  1. OPTIONAL: Assert T ≥ 0 and T ≤ 2^256 - 1
    Implementations MAY optionally assert the input value is non-negative and is less than or equal to the max value representable by an unsigned 256-bit integer since target difficulties outside of that range are invalid in Decred
  2. If T = 0 then return 0
  3. Set b = bitlen(T) where bitlen is a function that returns the number of bits necessary to represent the value in binary
  4. Set x = floor((b + 7) / 8)
  5. If x ≤ 3 then set m = abs(T) * 256^(3 - x) otherwise set m = floor(abs(T) / 256^(x - 3))
    Implementations MAY omit taking the absolute value if the optional first step to assert the input value is non-negative is implemented
    Implementations MAY optimize the x ≤ 3 case by using a left bit shift: m = abs(T) << (8 * (3-x)) and the x > 3 case by using a right bit shift: m = abs(T) >> (8 * (x-3))
  6. If bit 23 of m is set then set m = floor(m / 256) and set x = x + 1
    Implementations MAY optimize this by using a right bit shift: m = m >> 8
  7. Set c = (x * 2^24) + m
    Implementations MAY optimize this by using a left bit shift and a bitwise OR: c = (x << 24) | m
  8. If T < 0 then set c = c + 2^23 in order to set the sign bit
    Implementations MAY omit this step if the optional first step to assert the input value is non-negative is implemented instead
    Implementations MAY optimize this by using a bitwise OR with a bitmask: c = c | 0x00800000
  9. Return c
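
For example, encoding T = 2^224 - 1 (the main network Proof of Work limit): b = 224, so x = 28 and m = floor(T / 256^25) = 0xffffff. Since bit 23 of m is set, m becomes 0xffff and x becomes 29, yielding c = (29 × 2^24) + 0xffff = 0x1d00ffff. Note that the encoding truncates all but the most significant bits: decoding 0x1d00ffff produces 0xffff × 2^208 rather than the original 2^224 - 1, which is why the Proof of Work Enforcement section requires comparing encoded difficulty bits rather than decoded target difficulties.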

ASERT Difficulty Adjustment Algorithm

This specification proposes a new difficulty adjustment algorithm (DAA) named ASERT (Absolutely Scheduled Exponentially Rising Targets).

ASERT Difficulty Calculation Interval

The required target difficulty MUST be calculated every block using the new ASERT algorithm.

Note that the per-block calculation requirement is different than the periodic calculation requirement enforced by the DAA prior to this specification.

ASERT Anchor Block

All target difficulty calculations MUST be performed relative to an anchor block determined as follows:

  • For the main network and version 3 of the test network:
    • The anchor block MUST be set to one block prior to the block at which the new rules take effect
  • For other networks that use the proposed algorithm from the start:
    • The anchor block MUST be set to block 1 once it is mined
    • The target difficulty for block 1 MUST be set to the initial starting difficulty for the network

UPDATE:

Now that the agenda votes for this proposal on the main network and version 3 of the test network have passed and the new rules are active, their concrete anchor blocks have been determined as follows:

Network Block Hash Block Height
mainnet 0000000000000000c293d8c67409d05e960447ea25cdaf770e864d995c764ef0 794367
testnet version 3 000000b396bfeaa6ae6fa9e3cee441d7215191630bdaa9b979a872985caed727 1170047

Implementations MAY hard code these values for simplicity.

ASERT Difficulty Calculation Formula

Explanation of terms:

T_N = Target difficulty at a given height N
T_s = Initial starting difficulty
Δt = Time delta in seconds between the most recent block and the anchor block
Δh = Height delta between the most recent block and the anchor block
I_b = Target block interval in seconds
T_ub = Upper bound for the final computed target difficulty (Proof of Work limit)
𝜏 = Half life in seconds

The following per-network values MUST be used for the calculations:

Parameter Main Network Value Test Network (version 3) Value
T_s 42,406 × 2^192 65,535 × 2^216
I_b 300 120
T_ub 2^224 - 1 2^232 - 1
𝜏 43,200 720
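
The required target difficulty T_N MUST be calculated as follows, where the min and max terms clamp the final result to the valid range [1, T_ub]:

x = Int(((Δt − Δh × I_b) × 2^16) / 𝜏)

n = floor(x / 2^16)

f = x mod 2^16

F = 2^16 + floor((195766423245049 × f + 971821376 × f^2 + 5127 × f^3 + 2^47) / 2^48)

T_N = min(max(floor(T_s × F × 2^(n − 16)), 1), T_ub)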

Important implementation considerations:

  • Negative height deltas (Δh) are NOT possible in a correct implementation due to the constraints imposed by the anchor block selection
    • Implementations MAY add an assertion for this precondition
  • The Int function used in the calculation of the exponent (x) denotes the integer part of the value which implies truncation towards 0
    • Note that this differs from the floor function which truncates towards negative infinity for negative values
    • Implementations MUST ensure truncation towards 0 for this calculation
  • The fractional portion (f) of the exponent (x) is expected to be normalized to the range [0, 2^16) which is the standard behavior for the modulo operator
    • Implementations MAY optimize the calculation by performing a bitwise AND with the mask 0xffff
  • The integer portion (n) of the exponent (x) is a floored division which truncates towards negative infinity for negative values
    • Implementations MAY optimize the calculation by using an arithmetic[3] right shift by 16
      • WARNING: A logical[4] right shift for this calculation will NOT produce correct results for all inputs
  • For the calculation of the fractional factor F:
    • The numerator inside the floor is guaranteed to be positive due to the range of f
    • The numerator inside the floor requires a full 64 bits to avoid overflow
    • Implementations MAY optimize the floored division by 2^48 by using a right shift by 48 instead
    • The overall result is guaranteed to be a maximum of 17 bits
  • Implementations MAY optimize the multiplication by 2^(n-16) in the final floored target difficulty calculation by either using a left shift by n-16 when n ≥ 16 or a right shift by 16-n when n < 16
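
As a quick sanity check on the formula, consider a block that is exactly on the ideal schedule, meaning Δt = Δh × I_b: the exponent is x = 0, so n = 0 and f = 0, the fractional factor is F = 2^16 + floor(2^47 / 2^48) = 2^16, and the final calculation yields floor(T_s × 2^16 × 2^(-16)) = T_s, the starting difficulty, as expected. Similarly, a block that is exactly one half life behind schedule (Δt − Δh × I_b = 𝜏) gives x = 2^16, n = 1, and f = 0, so the computed target doubles, which halves the difficulty as expected.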

Proof of Work Enforcement

This section uses the terms target difficulty to refer to a concrete numeric value and difficulty bits to refer to the encoded compact form of the target difficulty.

The following difficulty rules MUST be enforced:

  • The target difficulty obtained by decoding the difficulty bits in the block header MUST be larger than zero
  • The target difficulty obtained by decoding the difficulty bits in the block header MUST be less than or equal to the Proof of Work limit for the active network:
    • mainnet: 2^224 - 1
    • testnet version 3: 2^232 - 1
  • The difficulty bits in the block header MUST match the expected difficulty bits obtained by encoding the target difficulty calculated by the ASERT Difficulty Calculation Formula to the compact form
    • CAUTION: It is important to compare the encoded difficulty bits as opposed to the decoded target difficulty values because the difficulty bits encoding format truncates all but the most significant bits of target difficulties

The following rule MUST be transitioned to use the new Proof of Work hash introduced by this specification instead of the block hash:

  • The PoW hash MUST be treated as a little endian unsigned 256-bit integer and be less than or equal to the target difficulty obtained by decoding the difficulty bits in the block header
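
For illustration, the following minimal sketch shows one way to perform this check, assuming the target difficulty has already been decoded from the difficulty bits; the function name is illustrative rather than part of this specification:

import "math/big"

// hashMeetsTarget interprets the provided PoW hash as a little endian
// unsigned 256-bit integer and reports whether it is less than or equal to
// the given decoded target difficulty.
func hashMeetsTarget(powHash [32]byte, target *big.Int) bool {
	// big.Int.SetBytes interprets the bytes as big endian, so reverse the
	// hash to treat it as little endian.
	var beHash [32]byte
	for i, b := range powHash {
		beHash[31-i] = b
	}
	hashNum := new(big.Int).SetBytes(beHash[:])
	return hashNum.Cmp(target) <= 0
}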

Rationale

Difficulty Bits Format Retention

The difficulty bits format was inherited from legacy code and is less than ideal in various ways. For example, since it is only ever necessary to encode target difficulties that are unsigned 256-bit integers, the sign bit really is not required and its existence only serves to complicate encoding and decoding in practice.

Changing the format was strongly considered for this proposal; however, the inherited format is retained, despite its shortcomings, in order to reduce the scope of changes that software in the ecosystem is required to handle.

Typical Logarithm Precision Limitations

Many languages implement logarithms with 64-bit floating point values which only provide 53 bits of precision for the significand. This means that it is not possible to fully represent any values that have more than 15 to 17 significant decimal digits depending on the exponent.

For example, consider the value 2^55 - 1 = 36,028,797,018,963,967. Converting it to a 64-bit floating point and back to an unsigned 64-bit integer yields 36,028,797,018,963,968 instead of the original value.

This precision loss can lead to inaccurate results when directly implementing formulas that involve logarithms.

Concretely, consider the formula for calculating the exponent provided in the Difficulty Bits Format section which involves a base 256 logarithm. The following Python code that calculates the exponent via the formula directly for the aforementioned value of 2^55 - 1 produces an incorrect exponent of 8 instead of the expected correct value of 7:

import math

def calcExponent(n):
    # WARNING: Incorrect implementation!
    return math.floor(math.log(2 * n) / math.log(256)) + 1

print(calcExponent(2**55 - 1))  # WARNING: Incorrect result!

For this reason, implementations of consensus code are STRONGLY encouraged to avoid implementing formulas that involve logarithms directly unless they can fully guarantee correct results for all possible values in the input domain.

ASERT Algorithm Derivation

The ASERT algorithm specified in this proposal is an approximation of the ideal exponential equation:
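
T_N = T_s × e^((Δt − Δh × I_b) / 𝜏)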

See the specification section for an explanation of the terms.

However, since e[5] is a transcendental number, any calculations used in consensus code necessarily have to choose a well-defined approximation in order to achieve reproducible results. Ideally, the approximation should be an integer to avoid floating point math. While 3 is the closest integer approximation, computers are particularly efficient when calculating with powers of 2, so the derivation starts by approximating e as 2, which yields the goal equation:
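
T_N = min(max(T_s × 2^((Δt − Δh × I_b) / 𝜏), 1), T_ub)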

Note that the goal equation involves a fractional exponent which similarly requires choosing a well-defined approximation for the same reasons. Simply rounding or truncating the exponent would produce an approximation that introduces more error than is desired. Instead, the next steps in the derivation recast the formula to an approximation that uses a combination of fixed-point integer arithmetic[6] and a cubic polynomial approximation for the 2^x term.

The scaling factor for the fixed-point integer arithmetic is chosen to be 2^16 (x.16 fixed-point). The primary motivation for this choice is that it provides enough precision to avoid introducing any additional error over and above the 2^(-15) error the difficulty bits format incurs and it is also a power of 2 making it efficient to calculate via bit shifts. Applying fixed-point modifications with that scaling factor yields:
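
T_N = min(max(floor((T_s × 2^16 × 2^((Δt − Δh × I_b) / 𝜏)) / 2^16), 1), T_ub)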

Turning to the calculation of the 2^x term, a good goal approximation to 2^x over the interval 0 ≤ x < 1 is the cubic polynomial:
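
2^x ≈ 1 + 0.695502049712533x + 0.2262697964x^2 + 0.0782318x^3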

The final form of this approximation derived below provides an absolute error margin < 0.013% over the aforementioned interval of [0,1) which is well under the 0.1% error margin needed for good results.

Keeping in mind that the overall calculation has been scaled by 2^16 for the fixed-point integer arithmetic, the scaled form of the goal approximation to 2^x is:
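
2^16 × 2^x ≈ 2^16 + 2^16 × (0.695502049712533x + 0.2262697964x^2 + 0.0782318x^3)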

However, there are still a few things that need to be handled in order to use the goal approximation in consensus-critical code:

  1. The input domain is not constrained to the valid interval [0,1)
  2. The exponent is not an integer and can be negative
  3. The goal approximation formula involves non-integer coefficients

In order to achieve the correct input domain, note that by exponent rules 2^(n + f) = 2^n × 2^f. Using this fact, the exponent is decomposed into an integer part, n, and a fractional part, f, such that f is in the desired range of [0,1). Then, the result is calculated by applying the scaled cubic polynomial approximation to the fractional part and multiplying by 2^n.

To make the exponent an integer, it is also scaled by 2^16 and calculated via fixed-point integer arithmetic. Note this implies the integer part, n, and the fractional part, f, in the aforementioned decomposition are also scaled by 2^16.

Further, since the exponent can be negative and involves a division, a choice needs to be made to either round towards 0 (truncation) or round towards negative infinity (floor). This implementation chooses truncation for convenience since the division is by a term that is not a power of two and most programming languages implement truncated division with their built-in operators. On the other hand, floor is preferred for division that involves powers of two since it can be computed efficiently via arithmetic shifts which behave the same as the floor function.
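
For example, with -5 / 2: truncated division yields -2, whereas floor(-5 / 2) = -3, which matches the arithmetic right shift -5 >> 1 = -3.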

Taking everything into account, the resulting approximated exponent and decomposition is:
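
x = Int(((Δt − Δh × I_b) × 2^16) / 𝜏)

n = floor(x / 2^16)

f = x mod 2^16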

Substituting the scaled fractional part into the scaled goal approximation for 2^x and dividing out the additional scaling factors introduced yields:
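
2^16 × 2^(f / 2^16) ≈ 2^16 + (0.695502049712533 × 2^48 × f + 0.2262697964 × 2^32 × f^2 + 0.0782318 × 2^16 × f^3) / 2^48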

Next, the coefficients are converted to integers by multiplying by the remaining scaling factors and taking the ceiling. Also, an additional 2^47 term is added inside the floor to round up. The result of these modifications is:
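
F = 2^16 + floor((195766423245049 × f + 971821376 × f^2 + 5127 × f^3 + 2^47) / 2^48)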

Finally, substituting the integer and fractional components for the 2^x approximation back into the original fixed-point formulation of the goal equation for calculating the target difficulty, while also observing that by exponent rules 2^n / 2^16 = 2^(n − 16), yields the final form used in the specification:
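
T_N = min(max(floor(T_s × F × 2^(n − 16)), 1), T_ub)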

ASERT Relative Cubic Polynomial Approximation Error

The following graph shows the relative approximation error vs 2^x across the range of [0,1) for the fixed-point approximation used in this specification:

Simulation Graphs

The following graphs provide a visualization of the results of simulating three mining scenarios using the new algorithm with main network parameters. Namely, each of these scenarios involves random solve times that follow a Poisson distribution for a target average block time so that they simulate realistic mining behavior.

Two graphs are provided for each scenario:

  • A per-block difficulty ratio in order to illustrate the behavior and responsiveness of the difficulty algorithm
  • A histogram of the solve times overlaid with the ideal Poisson distribution to illustrate the simulation follows a realistic Poisson distribution as desired

These scenarios, along with several others, are also included in the ASERT Reference Test Vectors.

Scenario: Stable Hashrate

This scenario simulates a stable hashrate by targeting an average solve time of 300 seconds which matches the actual target average block time for the main network. The difficulty remains fairly stable with minor adjustments each block in order to maintain the desired block production rate.

Scenario: Rapid Hashrate Increase

This scenario simulates a rapid increase in hashrate by targeting an average solve time of 15 seconds which is much faster than the actual target average block time for the main network. The difficulty rapidly increases in order to decrease the block production rate back to the desired target.

Scenario: Rapid Hashrate Decrease

This scenario simulates a rapid decrease in hashrate by targeting an average solve time of 1000 seconds which is much slower than the actual target average block time for the main network. The difficulty rapidly decreases in order to increase the block production rate back to the desired target.

Deployment

Voting Agenda Parameters

This proposal will be deployed to mainnet using the standard Decred on-chain voting infrastructure as follows:

Name Setting
Deployment Version 10
Agenda ID blake3pow
Agenda Description Change proof of work hashing algorithm to BLAKE3 as defined in DCP0011
Start Time 1682294400 (Apr 24th, 2023 00:00:00 +0000 UTC)
Expire Time 1745452800 (Apr 24th, 2025 00:00:00 +0000 UTC)
Mask 0x0006 (Bits 1 and 2)
Choices
Choice English Description Bits
abstain abstain voting for change 0x00
no keep the existing consensus rules 0x0002 (Bit 1)
yes change to the new consensus rules 0x0004 (Bit 2)

Voting Results

This proposal was approved by the stakeholder voting process and is now active.

Status Block Hash Block Height
Voting Started 0000000000000000d608dc4ec7fcae5c650353db68a77f2954df74c25462f337 778240
Locked In 0000000000000000896042b2b0536a4046e56ef505f20f7301ca7e042a5c218e 786304
Active 071683030010299ab13f139df59dc98d637957b766e47f8da6dd5ac762f1e8c7 794368

Compatibility

This is a hard-forking change to the Decred consensus. This means that once the agenda is voted in and becomes locked in, anybody running code that fully validates blocks must upgrade before the activation time or they will reject all new blocks since their proof of work is invalid under the old rules.

Other software that performs full validation will need to modify their consensus enforcement rules accordingly and any software that performs header validation, such as lightweight wallet and other SPV clients, will need to be updated to handle the changes specified herein.

Reference Implementation

The following implementations are simplified versions intended to clearly illustrate the exact semantics of this specification with self-contained functions. See the pull requests section for links to the full implementations.

Proof of Work Hash Computation

import (
	"encoding/binary"
	"lukechampine.com/blake3"
)

// putUint8 writes the passed uint8 to the provided slice and returns 1 to
// signify the number of bytes written.  The target byte slice must be at least
// large enough to handle the write or it will panic.
func putUint8(buf []byte, val uint8) int {
	buf[0] = val
	return 1
}

// putUint16LE writes the provided uint16 as little endian to the provided slice
// and returns 2 to signify the number of bytes written.  The target byte slice
// must be at least large enough to handle the write or it will panic.
func putUint16LE(buf []byte, val uint16) int {
	binary.LittleEndian.PutUint16(buf, val)
	return 2
}

// putUint32LE writes the provided uint32 as little endian to the provided slice
// and returns 4 to signify the number of bytes written.  The target byte slice
// must be at least large enough to handle the write or it will panic.
func putUint32LE(buf []byte, val uint32) int {
	binary.LittleEndian.PutUint32(buf, val)
	return 4
}

// putUint64LE writes the provided uint64 as little endian to the provided slice
// and returns 8 to signify the number of bytes written.  The target byte slice
// must be at least large enough to handle the write or it will panic.
func putUint64LE(buf []byte, val uint64) int {
	binary.LittleEndian.PutUint64(buf, val)
	return 8
}

// blake3PowHash computes the BLAKE3 proof of work hash for the given block
// header fields.
func blake3PowHash(version int32, prevBlockHash, merkleRoot,
	commitmentRoot [32]byte, voteBits uint16, finalState [6]byte,
	numVoters uint16, freshStake, revocations uint8, poolSize,
	diffBits uint32, stakeDiff int64, blockHeight, blockSize uint32,
	timestamp uint32, extraData [36]byte, stakeVersion uint32) [32]byte {

	var target [180]byte
	offset := putUint32LE(target[0:], uint32(version))
	offset += copy(target[offset:], prevBlockHash[:])
	offset += copy(target[offset:], merkleRoot[:])
	offset += copy(target[offset:], commitmentRoot[:])
	offset += putUint16LE(target[offset:], voteBits)
	offset += copy(target[offset:], finalState[:])
	offset += putUint16LE(target[offset:], numVoters)
	offset += putUint8(target[offset:], freshStake)
	offset += putUint8(target[offset:], revocations)
	offset += putUint32LE(target[offset:], poolSize)
	offset += putUint32LE(target[offset:], diffBits)
	offset += putUint64LE(target[offset:], uint64(stakeDiff))
	offset += putUint32LE(target[offset:], blockHeight)
	offset += putUint32LE(target[offset:], blockSize)
	offset += putUint32LE(target[offset:], timestamp)
	offset += copy(target[offset:], extraData[:])
	putUint32LE(target[offset:], stakeVersion)
	return blake3.Sum256(target[:])
}

Difficulty Bits Decoding

import (
	"math/big"
)

// diffBitsToBig converts the compact representation used to encode difficulty
// targets to a big integer.  The representation is similar to IEEE754 floating
// point numbers.
//
// Like IEEE754 floating point, there are three basic components: the sign,
// the exponent, and the mantissa.  They are broken out as follows:
//
//  1. the most significant 8 bits represent the unsigned base 256 exponent
//  2. zero-based bit 23 (the 24th bit) represents the sign bit
//  3. the least significant 23 bits represent the mantissa
//
// Diagram:
//
//	-------------------------------------------------
//	|   Exponent     |    Sign    |    Mantissa     |
//	|-----------------------------------------------|
//	| 8 bits [31-24] | 1 bit [23] | 23 bits [22-00] |
//	-------------------------------------------------
//
// The formula to calculate the encoded difficulty target is:
//
//	T = (-1)^sign * floor(mantissa * 256^(exponent-3))
func diffBitsToBig(bits uint32) *big.Int {
	// Extract the mantissa, sign bit, and exponent.
	mantissa := bits & 0x007fffff
	isNegative := bits&0x00800000 != 0
	exponent := uint(bits >> 24)

	// Since the base for the exponent is 256, the exponent can be treated as
	// the number of bytes to represent the full 256-bit number.  So, treat the
	// exponent as the number of bytes and shift the mantissa right or left
	// accordingly.  This is equivalent to:
	// T = mantissa * 256^(exponent-3)
	var bn *big.Int
	if exponent <= 3 {
		mantissa >>= 8 * (3 - exponent)
		bn = big.NewInt(int64(mantissa))
	} else {
		bn = big.NewInt(int64(mantissa))
		bn.Lsh(bn, 8*(exponent-3))
	}

	// Make it negative if the sign bit is set.
	if isNegative {
		bn = bn.Neg(bn)
	}

	return bn
}

Difficulty Bits Encoding

import (
	"math/big"
)

// bigToDiffBits converts a big integer to the compact representation used to
// encode difficulty targets as an unsigned 32-bit integer.  The compact
// representation only provides 23 bits of precision, so values larger than
// (2^23 - 1) only encode the most significant bits of the number.
func bigToDiffBits(t *big.Int) uint32 {
	// No need to do any work if it's zero.
	if t.Sign() == 0 {
		return 0
	}

	// Since the base for the exponent is 256, the exponent can be treated as
	// the number of bytes.  So, shift the number right or left accordingly.
	// This is equivalent to:
	// mantissa = mantissa / 256^(exponent-3)
	var mantissa uint32
	exponent := uint(len(t.Bytes()))
	if exponent <= 3 {
		mantissa = uint32(t.Bits()[0])
		mantissa <<= 8 * (3 - exponent)
	} else {
		// Use a copy to avoid modifying the caller's original number.
		tt := new(big.Int).Set(t)
		tt.Abs(tt)
		mantissa = uint32(tt.Rsh(tt, 8*(exponent-3)).Bits()[0])
	}

	// When the mantissa already has the sign bit set, the number is too large
	// to fit into the available 23-bits, so divide the number by 256 and
	// increment the exponent accordingly.
	if mantissa&0x00800000 != 0 {
		mantissa >>= 8
		exponent++
	}

	// Pack the exponent, sign bit, and mantissa into an unsigned 32-bit int and
	// return it.
	bits := uint32(exponent<<24) | mantissa
	if t.Sign() < 0 {
		bits |= 0x00800000
	}
	return bits
}
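
As an illustrative sanity check, the following hypothetical helper round trips a value that is exactly representable in the compact form through the two functions above:

// exampleRoundTrip demonstrates that a value which is exactly representable
// in the compact form survives an encode/decode round trip.
func exampleRoundTrip() {
	// 0xffff × 2^208 decodes from and encodes to difficulty bits 0x1d00ffff.
	target := new(big.Int).Lsh(big.NewInt(0xffff), 208)
	bits := bigToDiffBits(target)
	decoded := diffBitsToBig(bits)
	fmt.Printf("%08x %v\n", bits, decoded.Cmp(target) == 0) // 1d00ffff true
}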

ASERT Difficulty Calculation

import (
	"fmt"
	"math/big"
)

// calcASERTDiff calculates a target difficulty for the given set of parameters
// using the ASERT algorithm and returns that target encoded as difficulty bits.
func calcASERTDiff(startDiffBits uint32, powLimit *big.Int, targetSecsPerBlock,
	timeDelta, heightDelta, halfLife int64) uint32 {

	// Ensure parameter assumptions are not violated.
	//
	// 1. The starting target difficulty must be in the range [1, powLimit]
	// 2. The height to calculate the difficulty for must come after the height
	//    of the reference block
	startDiff := diffBitsToBig(startDiffBits)
	if startDiff.Sign() <= 0 || startDiff.Cmp(powLimit) > 0 {
		panic(fmt.Sprintf("starting difficulty %064x is not in the valid "+
			"range [1, %064x]", startDiff, powLimit))
	}
	if heightDelta < 0 {
		panic(fmt.Sprintf("provided height delta %d is negative", heightDelta))
	}

	// Calculate the target difficulty by multiplying the provided starting
	// target difficulty by an exponential scaling factor that is determined
	// based on how far ahead or behind the ideal schedule the given time delta
	// is along with a half life that acts as a smoothing factor.
	//
	// Per DCP0011, the goal equation is:
	//
	//   nextDiff = min(max(startDiff * 2^((Δt - Δh*Ib)/halfLife), 1), powLimit)
	//
	// However, in order to avoid the need to perform floating point math which
	// is problematic across languages due to uncertainty in floating point math
	// libs, the formula is implemented using a combination of fixed-point
	// integer arithmetic and a cubic polynomial approximation to the 2^x term.
	//
	// In particular, the goal cubic polynomial approximation over the interval
	// 0 <= x < 1 is:
	//
	//   2^x ~= 1 + 0.695502049712533x + 0.2262697964x^2 + 0.0782318x^3
	//
	// This approximation provides an absolute error margin < 0.013% over the
	// aforementioned interval of [0,1) which is well under the 0.1% error
	// margin needed for good results.  Note that since the input domain is not
	// constrained to that interval, the exponent is decomposed into an integer
	// part, n, and a fractional part, f, such that f is in the desired range of
	// [0,1).  By exponent rules 2^(n + f) = 2^n * 2^f, so the strategy is to
	// calculate the result by applying the cubic polynomial approximation to
	// the fractional part and using the fact that multiplying by 2^n is
	// equivalent to an arithmetic left or right shift depending on the sign.
	//
	// In other words, start by calculating the exponent (x) using 64.16 fixed
	// point and decompose it into integer (n) and fractional (f) parts as
	// follows:
	//
	//       2^16 * (Δt - Δh*Ib)   (Δt - Δh*Ib) << 16
	//   x = ------------------- = ------------------
	//            halfLife              halfLife
	//
	//        x
	//   n = ---- = x >> 16
	//       2^16
	//
	//   f = x (mod 2^16) = x & 0xffff
	//
	// The use of 64.16 fixed point for the exponent means both the integer (n)
	// and fractional (f) parts have an additional factor of 2^16.  Since the
	// fractional part of the exponent is cubed in the polynomial approximation
	// and (2^16)^3 = 2^48, the addition step in the approximation is internally
	// performed using 16.48 fixed point to compensate.
	//
	// In other words, the fixed point formulation of the goal cubic polynomial
	// approximation for the fractional part is:
	//
	//                 195766423245049*f + 971821376*f^2 + 5127*f^3 + 2^47
	//   2^f ~= 2^16 + ---------------------------------------------------
	//                                          2^48
	//
	// Finally, the final target difficulty is calculated using x.16 fixed point
	// and then clamped to the valid range as follows:
	//
	//              startDiff * 2^f * 2^n
	//   nextDiff = ---------------------
	//                       2^16
	//
	//   nextDiff = min(max(nextDiff, 1), powLimit)
	//
	// NOTE: The division by the half life uses Quo instead of Div because it
	// must be truncated division (which is truncated towards zero as Quo
	// implements) as opposed to the Euclidean division that Div implements.
	idealTimeDelta := heightDelta * targetSecsPerBlock
	exponentBig := big.NewInt(timeDelta - idealTimeDelta)
	exponentBig.Lsh(exponentBig, 16)
	exponentBig.Quo(exponentBig, big.NewInt(halfLife))

	// Decompose the exponent into integer and fractional parts.  Since the
	// exponent is using 64.16 fixed point, the bottom 16 bits are the
	// fractional part and the integer part is the exponent arithmetic right
	// shifted by 16.
	frac64 := uint64(exponentBig.Int64() & 0xffff)
	shifts := exponentBig.Rsh(exponentBig, 16).Int64()

	// Calculate 2^16 * 2^(fractional part) of the exponent.
	//
	// Note that a full unsigned 64-bit type is required to avoid overflow in
	// the internal 16.48 fixed point calculation.  Also, the overall result is
	// guaranteed to be positive and a maximum of 17 bits, so it is safe to cast
	// to a uint32.
	const (
		polyCoeff1 uint64 = 195766423245049 // ceil(0.695502049712533 * 2^48)
		polyCoeff2 uint64 = 971821376       // ceil(0.2262697964 * 2^32)
		polyCoeff3 uint64 = 5127            // ceil(0.0782318 * 2^16)
	)
	fracFactor := uint32(1<<16 + (polyCoeff1*frac64+
		polyCoeff2*frac64*frac64+
		polyCoeff3*frac64*frac64*frac64+
		1<<47)>>48)

	// Calculate the target difficulty per the previous discussion:
	//
	//              startDiff * 2^f * 2^n
	//   nextDiff = ---------------------
	//                       2^16
	//
	// Note that by exponent rules 2^n / 2^16 = 2^(n - 16).  This takes
	// advantage of that property to reduce the multiplication by 2^n and
	// division by 2^16 to a single shift.
	//
	// This approach also has the benefit of lowering the maximum magnitude
	// relative to what would be the case when first left shifting by a larger
	// value and then right shifting after.  Since arbitrary precision integers
	// are used for this implementation, it doesn't make any difference from a
	// correctness standpoint, however, it does potentially lower the amount of
	// memory for the arbitrary precision type and can be used to help prevent
	// overflow in implementations that use fixed precision types.
	nextDiff := new(big.Int).Set(startDiff)
	nextDiff.Mul(nextDiff, big.NewInt(int64(fracFactor)))
	shifts -= 16
	if shifts >= 0 {
		nextDiff.Lsh(nextDiff, uint(shifts))
	} else {
		nextDiff.Rsh(nextDiff, uint(-shifts))
	}

	// Limit the target difficulty to the valid hardest and easiest values.
	// The valid range is [1, powLimit].
	if nextDiff.Sign() == 0 {
		// The hardest valid target difficulty is 1 since it would be impossible
		// to find a non-negative integer less than 0.
		nextDiff.SetInt64(1)
	} else if nextDiff.Cmp(powLimit) > 0 {
		nextDiff.Set(powLimit)
	}

	// Convert the difficulty to the compact representation and return it.
	return bigToDiffBits(nextDiff)
}

Pull Requests

Deployment

A reference implementation of the required agenda definition is implemented by pull request #3089.

Consensus Enforcement and Vote

A reference implementation of the required consensus changes to enforce the changes specified in this proposal is provided by pull request #3115.

Test Vectors

The following test vectors are provided in order to facilitate testing across implementations.

The test vectors are all provided as JSON so that implementations can easily copy them and programmatically parse them to avoid transposition errors.

BLAKE3 Proof of Work Hashes

The JSON test vectors for the BLAKE3 Proof of Work computation are available in blake3_powhash_test_vectors.json.

The test data consists of a series of block header fields to serialize for constructing BLAKE3 Proof of Work hashes along with the expected hashes. It also provides the expected serialized preimage for convenience so implementations can cross check the expected input to the hash function if they don't get the expected PoW hash.

Finally, comments are provided near the top that describe the format of each of the fields.

The following example Go code provides a test function that loads and parses the file, executes all tests, and validates the results:

import (
	"encoding/hex"
	"encoding/json"
	"os"
	"testing"

	"lukechampine.com/blake3"
)

func blake3PowHash(version int32, prevBlockHash, merkleRoot,
	commitmentRoot [32]byte, voteBits uint16, finalState [6]byte,
	numVoters uint16, freshStake, revocations uint8, poolSize,
	diffBits uint32, stakeDiff int64, blockHeight, blockSize uint32,
	timestamp uint32, extraData [36]byte, stakeVersion uint32) [32]byte {

	// Your implementation...
	return [32]byte{}
}

func TestPoWHash(t *testing.T) {
	// Read and parse the reference test vectors.
	f, err := os.ReadFile("blake3_powhash_test_vectors.json")
	if err != nil {
		t.Fatalf("failed to read test vectors: %v", err)
	}
	var testData struct {
		Comments []string `json:"comments"`
		Tests    []struct {
			Version        int32  `json:"version"`
			PrevHash       string `json:"prevHash"`
			MerkleRoot     string `json:"merkleRoot"`
			CommitmentRoot string `json:"commitmentRoot"`
			VoteBits       uint16 `json:"voteBits"`
			FinalState     string `json:"finalState"`
			NumVoters      uint16 `json:"numVoters"`
			FreshStake     uint8  `json:"freshStake"`
			Revocations    uint8  `json:"revocations"`
			PoolSize       uint32 `json:"poolSize"`
			DiffBits       uint32 `json:"diffBits"`
			StakeDiff      int64  `json:"stakeDiff"`
			BlockHeight    uint32 `json:"blockHeight"`
			BlockSize      uint32 `json:"blockSize"`
			Timestamp      uint32 `json:"timestamp"`
			ExtraData      string `json:"extraData"`
			StakeVersion   uint32 `json:"stakeVersion"`
			Serialized     string `json:"serialized"`
			PowHash        string `json:"powHash"`
		} `json:"tests"`
	}
	err = json.Unmarshal(f, &testData)
	if err != nil {
		t.Fatalf("parse failed: %v", err)
	}

	// Define a couple of convenience methods for parsing the hex-encoded data
	// and hashes.
	parseHex := func(data string) []byte {
		t.Helper()

		decoded, err := hex.DecodeString(data)
		if err != nil {
			t.Fatalf("unable to parse test data: %v", err)
		}
		return decoded
	}
	parseHash := func(data string) [32]byte {
		t.Helper()

		const hashSize = 32
		hash := parseHex(data)
		if len(hash) != hashSize {
			t.Fatalf("invalid hash len -- got %d, want %d", len(hash), hashSize)
		}

		// Reverse the hash so it matches the raw bytes that are produced by the
		// hash func.
		var reversed [hashSize]byte
		for i := 0; i < hashSize/2; i++ {
			reversed[i], reversed[hashSize-1-i] = hash[hashSize-1-i], hash[i]
		}
		return reversed
	}
	for i, test := range testData.Tests {
		// Ensure the test data for the serialized header actually produces the
		// provided pow hash.
		hashFromInput := blake3.Sum256(parseHex(test.Serialized))
		wantPowHash := parseHash(test.PowHash)
		if hashFromInput != wantPowHash {
			t.Fatalf("bad test data: provided input to the hash func does "+
				"not hash to the provided pow hash -- got: %x, want: %x",
				hashFromInput, wantPowHash)
		}

		gotHash := blake3PowHash(test.Version, parseHash(test.PrevHash),
			parseHash(test.MerkleRoot), parseHash(test.CommitmentRoot),
			test.VoteBits, *(*[6]byte)(parseHex(test.FinalState)),
			test.NumVoters, test.FreshStake, test.Revocations, test.PoolSize,
			test.DiffBits, test.StakeDiff, test.BlockHeight, test.BlockSize,
			test.Timestamp, *(*[36]byte)(parseHex(test.ExtraData)),
			test.StakeVersion)
		if gotHash != wantPowHash {
			t.Fatalf("test #%d: did not get expected proof of work hash -- "+
				"got: %x, want: %x", i, gotHash, wantPowHash)
		}
	}
}

ASERT Difficulty Calculations

The JSON test vectors for the ASERT difficulty algorithm are available in asert_test_vectors.json.

The test data is comprised of various scenarios that consist of a set of parameters, starting conditions, and a series of block heights and timestamps relative to those starting conditions along with the expected resulting difficulty bits for them. The test data also has comments near the top that describe the format of each of the fields.

The following example Go code provides a test function that loads and parses the file, executes all tests, and validates the results:

import (
	"encoding/json"
	"math/big"
	"os"
	"testing"
)

func calcASERTDiff(startDiffBits uint32, powLimit *big.Int, targetSecsPerBlock,
	timeDelta, heightDelta, halfLife int64) uint32 {

	// Your implementation...
	return 0
}

func TestCalcASERTDiff(t *testing.T) {
	// Read and parse the reference test vectors.
	f, err := os.ReadFile("asert_test_vectors.json")
	if err != nil {
		t.Fatalf("failed to read test vectors: %v", err)
	}
	var testData struct {
		Comments []string `json:"comments"`
		Params   map[string]struct {
			PowLimit           string `json:"powLimit"`
			PowLimitBits       uint32 `json:"powLimitBits"`
			TargetSecsPerBlock int64  `json:"targetSecsPerBlock"`
			HalfLifeSecs       int64  `json:"halfLifeSecs"`
		} `json:"params"`
		Scenarios []struct {
			Desc          string `json:"description"`
			Params        string `json:"params"`
			StartDiffBits uint32 `json:"startDiffBits"`
			StartHeight   int64  `json:"startHeight"`
			StartTime     int64  `json:"startTime"`
			Tests         []struct {
				Height           uint64 `json:"height"`
				Timestamp        int64  `json:"timestamp"`
				ExpectedDiffBits uint32 `json:"expectedDiffBits"`
			} `json:"tests"`
		} `json:"scenarios"`
	}
	err = json.Unmarshal(f, &testData)
	if err != nil {
		t.Fatal(err)
	}

	for _, scenario := range testData.Scenarios {
		// Lookup the associated network parameters and parse the proof of work
		// limit hexadecimal to a uint256.
		paramsKey := scenario.Params
		params, ok := testData.Params[paramsKey]
		if !ok {
			t.Errorf("%q: bad network params key %q", scenario.Desc, paramsKey)
			continue
		}
		powLimit, ok := new(big.Int).SetString(params.PowLimit, 16)
		if !ok {
			t.Errorf("%q: malformed pow limit %q", paramsKey, params.PowLimit)
			continue
		}

		for _, test := range scenario.Tests {
			// Calculate the time and height deltas from the test data.
			heightDelta := int64(test.Height - uint64(scenario.StartHeight))
			timeDelta := test.Timestamp - scenario.StartTime

			// Ensure the calculated difficulty matches the expected result.
			gotDiff := calcASERTDiff(scenario.StartDiffBits, powLimit,
				params.TargetSecsPerBlock, timeDelta, heightDelta,
				params.HalfLifeSecs)
			if gotDiff != test.ExpectedDiffBits {
				t.Errorf("%q@height %d: did not get expected difficulty bits "+
					"-- got %08x, want %08x", scenario.Desc, test.Height,
					gotDiff, test.ExpectedDiffBits)
				continue
			}
		}
	}
}

Acknowledgements

The ASERT algorithm in this proposal is heavily based on the work of the following sources:

Collaborators

Thanks to the following individuals who provided valuable feedback during the review process of the consensus code and this proposal (alphabetical order):

References

Inline References

  1. ^ Wikipedia: Floor and ceiling functions
  2. ^ Wikipedia: IEEE754
  3. ^ Wikipedia: Arithmetic shift
  4. ^ Wikipedia: Logical shift
  5. ^ Wikipedia: e (mathematical constant)
  6. ^ Wikipedia: Fixed-point arithmetic

Additional References

  1. BLAKE3 Specification
  2. Politeia Proposal - Change PoW/PoS Subsidy Split to 1/89 and Change PoW Algorithm to BLAKE3

Copyright

This document is licensed under the CC0-1.0: Creative Commons CC0 1.0 Universal license.

The code is licensed under the ISC License.