LWMA difficulty algorithm #3

Open

zawy12 opened this issue Dec 6, 2017 · 9 comments

zawy12 (Owner) commented Dec 6, 2017

CN coins: The last test of your fork is to make sure that, when you sync from 0, your new difficulties match the old difficulties produced by the pre-fork code. See this note.

FWIW, it's possible to do the LWMA without looping over N blocks, using only the first and last difficulties (or targets) and their timestamps. In terms of difficulty, I believe it's:

ts = timestamp
D_N = difficulty of the most recently solved block
D_{N+1} = next_D
k = N/2*(N+1)

S is the previous denominator:
S = D_N / [ D_{N-2} + D_{N-1}/N - D_{-1}/N ] * k * T

D_{N+1} = [ D_{N-1} + D_N/N - D_0/N ] * T * k /
          [ S - (ts_{N-1} - ts_0) + (ts_N - ts_{N-1})*N ]
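A related option, sketched below, is to keep the N-block window but update its sums incrementally, so each new block costs O(1) work instead of re-looping over N blocks. This is my illustration only, not production code and not the exact closed form above: it omits LWMA-1's out-of-sequence timestamp handling, the 6*T solvetime cap, the overflow guards, and the 99/200 adjust factor, and all names are hypothetical.

#include <algorithm>
#include <cstdint>
#include <deque>

// Rolling LWMA state. weightedST = sum of i*ST_i with i = 1..N (oldest..newest),
// sumST = plain sum of the window's solvetimes, sumD = sum of its difficulties.
struct LwmaRolling {
    uint64_t T, N;
    std::deque<uint64_t> st, d;
    uint64_t weightedST = 0, sumST = 0, sumD = 0;

    // Add the newest block's solvetime and difficulty; O(1) aside from the deque ops.
    void push(uint64_t solvetime, uint64_t difficulty) {
        if (st.size() == N) {
            // Sliding the window lowers every existing weight by 1 (subtract sumST)
            // and gives the newest solvetime the top weight N.
            weightedST = weightedST - sumST + N * solvetime;
            sumST = sumST - st.front() + solvetime;
            sumD  = sumD  - d.front()  + difficulty;
            st.pop_front(); d.pop_front();
        } else {
            weightedST += (st.size() + 1) * solvetime; // weight = position in window
            sumST += solvetime;
            sumD  += difficulty;
        }
        st.push_back(solvetime);
        d.push_back(difficulty);
    }

    // next_D = avg_D * T * k / L with k = N(N+1)/2, without the 99/200 adjust factor.
    uint64_t nextD() const {
        uint64_t k = N * (N + 1) / 2;
        uint64_t L = std::max(weightedST, N * N * T / 20); // same low clamp as LWMA-1
        return (sumD / N) * T * k / L;
    }
};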

I discovered a security weakness on 5/16/2019 due to my past FTL recommendations (which prevent bad timestamps from lowering difficulty). This weakness (aka exploit) does not seem to apply to Monero and Cryptonote coins that use node time instead of network time. If your coin uses network time instead of node local time, lowering FTL to less than about 125% of the "revert to node time" rule (70 minutes in BCH, ZEC, & BTC) will allow a 33% Sybil attack on your nodes, so the revert rule must be ~ FTL/2 instead of 70 minutes. If your coin uses network time without a revert rule (a bad design), it is subject to this attack under all conditions. See: zcash/zcash#4021

People like reading the history of this algorithm.

Comparing algorithms on live coins: Difficulty Watch
Send me a link to an open daemon or full API to be included.

LWMA for Bitcoin & Zcash Clones

See LWMA code for BTC/Zcash clones in the comments below. Known BTC clones using LWMA: BTC Gold, BTC Candy, Ignition, Pigeon, Zelcash, Zencash, BitcoinZ, Xchange, Microbitcoin.

Testnet Checking
Email me a link to your code and then send me 200 testnet timestamps and difficulties (CSV: height, timestamp, difficulty). To fully test it, you can send out-of-sequence timestamps to testnet by changing the clock on the node that sends your miner the block templates. There's a Perl script in my GitHub code that you can use to simulate hash attacks on a single-computer testnet. Here's example code for getting the CSV timestamp/difficulty data to send me:

curl -X POST http://127.0.0.1:38782/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"getblockheadersrange","params":{"start_height":300,"end_height":412}}' -H 'Content-Type: application/json' | jq -r '.result.headers[] | [.height, .timestamp, .difficulty] | @csv'

Discord
There is a discord channel for devs using this algorithm. You must have a coin and history as a dev on that coin to join. Please email me at zawy@yahoo.com to get an invite.

Donations
Thanks to Sumo, Masari, Karbo, Electroneum, Lethean, and XChange.
38skLKHjPrPQWF9Vu7F8vdcBMYrpTg5vfM or your coin if it's on TO or cryptopia.

LWMA Description
This sets difficulty by estimating current hashrate from the most recent difficulties and solvetimes. It divides the average difficulty by the Linearly Weighted Moving Average (LWMA) of the solvetimes, which gives more weight to the more recent solvetimes. It is designed to protect small coins against timestamp manipulation and hash attacks. The basic equation is:

next_difficulty = average(Difficulties) * target_solvetime / LWMA(solvetimes)
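Spelled out, with solvetime ST_i weighted by its position i in the window (i = N being the most recent) and N(N+1)/2 being the sum of the weights:

$$\text{next\_D} \;=\; \frac{\left(\frac{1}{N}\sum_{i=1}^{N} D_i\right)\cdot T \cdot \frac{N(N+1)}{2}}{\sum_{i=1}^{N} i\cdot ST_i}$$

The 99/200 factor in the LWMA-1 code below is this 1/2 combined with a small (~0.99) solvetime adjustment factor.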

LWMA-2/3/4 are no longer recommended because I could not show they were better than LWMA-1.

LWMA-1

Use this if you do not have NiceHash etc. problems.
See LWMA-4 below for more aggressive rules to help prevent NiceHash delays.

// LWMA-1 difficulty algorithm 
// Copyright (c) 2017-2018 Zawy, MIT License
// See commented link below for required config file changes. Fix FTL and MTP.
// https://github.com/zawy12/difficulty-algorithms/issues/3
// The following comments can be deleted.
// Bitcoin clones must lower their FTL. See Bitcoin/Zcash code on the page above.
// Cryptonote et al coins must make the following changes:
// BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW  = 11; // aka "MTP"
// DIFFICULTY_WINDOW  = 60; //  N=60, 90, and 150 for T=600, 120, 60.
// BLOCK_FUTURE_TIME_LIMIT = DIFFICULTY_WINDOW * DIFFICULTY_TARGET / 20;
// Warning Bytecoin/Karbo clones may not have the following, so check TS & CD vectors size=N+1
// DIFFICULTY_BLOCKS_COUNT = DIFFICULTY_WINDOW+1;
// The BLOCKS_COUNT is to make timestamps & cumulative_difficulty vectors size N+1
//  If your coin uses network time instead of node local time, lowering FTL < about 125% of 
// the "revert to node time" rule (70 minutes in BCH, ZEC, & BTC) will allow a 33% Sybil attack 
// on your nodes.  So revert rule must be ~ FTL/2 instead of 70 minutes.   See: 
// https://github.com/zcash/zcash/issues/4021

difficulty_type LWMA1_(std::vector<uint64_t> timestamps, 
   std::vector<uint64_t> cumulative_difficulties, uint64_t T, uint64_t N, uint64_t height,  
					uint64_t FORK_HEIGHT, uint64_t  difficulty_guess) {
    
   // This old way was not very proper
   // uint64_t  T = DIFFICULTY_TARGET;
   // uint64_t  N = DIFFICULTY_WINDOW; // N=60, 90, and 150 for T=600, 120, 60.
   
   // Genesis should be the only time sizes are < N+1.
   assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1 );

   // Hard code D if there are not at least N+1 BLOCKS after fork (or genesis)
   // This helps a lot in preventing a very common problem in CN forks: conflicting difficulties.
   if (height >= FORK_HEIGHT && height < FORK_HEIGHT + N) { return difficulty_guess; }
   assert(timestamps.size() == N+1); 

   uint64_t  L(0), next_D, i, this_timestamp(0), previous_timestamp(0), avg_D;

	previous_timestamp = timestamps[0]-T;
	for ( i = 1; i <= N; i++) {        
		// Safely prevent out-of-sequence timestamps
		if ( timestamps[i]  > previous_timestamp ) {   this_timestamp = timestamps[i];  } 
		else {  this_timestamp = previous_timestamp+1;   }
		L +=  i*std::min(6*T ,this_timestamp - previous_timestamp);
		previous_timestamp = this_timestamp; 
	}
	if (L < N*N*T/20 ) { L =  N*N*T/20; }
	avg_D = ( cumulative_difficulties[N] - cumulative_difficulties[0] )/ N;
   
	// Prevent round off error for small D and overflow for large D.
	if (avg_D > 2000000*N*N*T) { 
		next_D = (avg_D/(200*L))*(N*(N+1)*T*99);   
	}   
	else {    next_D = (avg_D*N*(N+1)*T*99)/(200*L);    }
	
	// Optional. Make all insignificant digits zero for easy reading.
	i = 1000000000;
	while (i > 1) { 
		if ( next_D > i*100 ) { next_D = ((next_D+i/2)/i)*i; break; }
		else { i /= 10; }
	}
	return  next_D;
}
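Below is a minimal sketch of a call site, assuming hypothetical accessors for stored per-block data (your chain's interfaces will differ). It only illustrates what the two vectors must contain: N+1 entries each, oldest first, ending at the chain tip.

// Hypothetical helpers (not from any particular codebase): get_block_timestamp(h) and
// get_cumulative_difficulty(h) return the stored values for the block at height h,
// and tip_height is the height of the most recently solved block.
uint64_t new_height = tip_height + 1;   // the block whose difficulty we want
std::vector<uint64_t> timestamps, cumulative_difficulties;
for (uint64_t h = new_height - N - 1; h < new_height; h++) {  // N+1 entries, oldest first
    timestamps.push_back(get_block_timestamp(h));
    cumulative_difficulties.push_back(get_cumulative_difficulty(h));
}
uint64_t next_D = LWMA1_(timestamps, cumulative_difficulties,
                         T, N, new_height, FORK_HEIGHT, difficulty_guess);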

The following is an idea that could be inserted right before "return next_D;":

	// Optional.
	// Make the last 2 digits show the hash rate change over the last 11 blocks if it's statistically significant.
	// D=2540035 => hash rate is 3.5x higher than D expected. Blocks coming 3.5x too fast.
	if ( next_D > 10000 ) { 
		uint64_t est_HR = (10*(11*T+(timestamps[N]-timestamps[N-11])/2)) / 
                                   (timestamps[N]-timestamps[N-11]+1);
		if (  est_HR > 5 && est_HR < 25 )  {  est_HR=0;   }
		est_HR = std::min(static_cast<uint64_t>(99), est_HR);
		next_D = ((next_D+50)/100)*100 + est_HR;  
	}

This is LWMA-2 versus LWMA if there is a 10x attack. There isn't any difference for smaller attacks. See further below for LWMA compared to other algos.
[image: LWMA-2 vs LWMA during a 10x attack]

Credits:

  • dgenr8 for showing LWMA can work
  • Aiwe (Karbo) for extensive discussions and motivation.
  • Thaer (Masari) for jump-starting LWMA and refinement discussions.
  • BTG (h4x3rotab) for finding an initial pseudocode error and writing a good clean target method.
  • gabetron for pointing out an "if ST<0 then ST=0" type of exploit in one version before it was used by anyone.
  • CDY for pointing out that the target method was not exactly the same as the difficulty method.
  • IPBC and Intense for independently suffering and fixing a sneaky but basic code error.
  • Stellite and CDY for independently modifying an idea in my D-LWMA, forking to implement it, and showing me it worked. (The one-sided jump rule). My modification of their idea resulted in LWMA-2.

Known coins using it
The names here do not imply endorsement or success or even that they've forked to implement it yet. This is mainly for my reference to check on them later.
Alloy, Balkan, Wownero, Bitcoin Candy, Bitcoin Gold, BitcoiNote, BiteCode, BitCedi, BBScoin, Bitsum, BitcoinZ(?), Brazuk, DigitalNote, Dosh, Dynasty(?), Electronero, Elya, Graft, Haven, IPBC, Ignition, Incognito, Iridium, Intense, Italo, Loki, Karbo, MktCoin, MoneroV, Myztic, MarketCash, Masari, Niobio, NYcoin, Ombre, Parsi, Plura, Qwerty, Redwind?, Saronite, Solace, Stellite, Turtle, UltraNote, Vertical, Zelcash, Zencash. Recent inquiries: Tyche, Dragonglass, TestCoin, Shield 3.0. [update: and many more]

Importance of the averaging window size, N
The size of an algorithm's "averaging" window of N blocks is more important than the particular algorithm. Stability comes at a loss in speed of response by making N larger, and vice versa. Being biased towards low N is good because speed is proportional to 1/N while stability is proportional to SQRT(N). In other words, it's easier to get speed from low N than it is to get stability from high N. It appears the top 20 large coins can use an N up to 10x higher (a full day's averaging window) to get a smooth difficulty with no obvious ill effects. But it's very risky if a coin does not have at least 20% of the dollar reward per hour of the biggest coin for a given POW. Small coins using a large N can look nice and smooth for a month, then go into oscillations from a big miner and end up with 3-day delays between blocks, having to rent hash power to get unstuck.

By tracking hashrate more closely, a smaller N is more fair to your dedicated miners, who are important to marketing. Correctly estimating current hashrate to get the correct block solvetime is the only goal of a difficulty algorithm; this includes the challenge of dealing with bad timestamps. An N that is too small disastrously attracts on-off mining by varying too much, and it doesn't track hashrate very well. A large N attracts "transient" miners by not tracking price fast enough and by not penalizing big miners who jump on and off, leaving your dedicated miners with a higher difficulty. This discourages dedicated miners, which causes the difficulty to drop in the next cycle when the big miner jumps on again, leading to worsening oscillations.
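One way to put rough numbers on the speed-vs-stability tradeoff above (my framing of the statement, not a derivation):

$$\text{response lag} \;\propto\; N, \qquad \text{random variation of } D \text{ at constant hashrate} \;\propto\; \frac{1}{\sqrt{N}}$$

so doubling N doubles the lag in responding to a hashrate or price change, but only reduces the accidental variation by a factor of about 1.4.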

Masari forked to implement this on December 3, 2017 and has been performing outstandingly.
Iridium forked to implement this on January 26, 2018 and reports success. They forked again on March 19, 2018 for other reasons and tweaked it.
IPBC forked to implement it March 2, 2018.
Stellite implemented it March 9, 2018 to stop bad oscillations.
Karbowanec and QwertyCoin appear to be about to use it.

Comparison to other algorithms:

The competing algorithms are LWMA, EMA (exponential moving average), and Digishield. I'll also include SMA (simple moving average) for comparison. This is the process I go through to determine which is best.

First, I set the algorithms' "N" parameter so that they all give the same speed of response to an increase in hash rate (red bars). To give Digishield a fair chance, I removed the 6-block MTP delay. I had to lower its N value from 17 to 13 blocks to make it as fast as the others. I could have raised the other algos' N values instead, but I wanted a faster response than Digishield normally gives (based on watching hash attacks on Zcash and Hush). Also based on those attacks and attacks on other coins, I made my "test attack" below 3x the baseline hashrate (red bars), lasting 30 blocks.

[image: compare1 - algorithms tuned for equal speed of response to a hashrate increase]

Then I simulate real hash attacks starting when difficulty accidentally drops 15% below baseline and ending when difficulty is 30% above baseline. I used 3x attacks, but I get the same results for a wide range of attacks. The only clear advantage LWMA and EMA have over Digishield is fewer delays after attacks. The combination of the delay and "blocks stolen" metrics closely follows the result given by a root-mean-square of the error between where difficulty is and where it should be (based on the hash rate). LWMA also wins on that metric for a wide range of hash attack profiles.

[image: compare4 - delays and blocks stolen during simulated hash attacks]

I also consider their stability during constant hash rate.

[image: compare3 - stability during constant hash rate]

Here is my spreadsheet for testing algorithms. I've spent 9 months devising algorithms, learning from others, and running simulations in it.

[image: compare_hash - testing spreadsheet]

Here's Hush with Zcash's Digishield compared to Masari with LWMA. Hush was 10x the market capitalization of Masari when these were done (so it should have been more stable). The beginning of Masari was after it forked to LWMA and attackers were still trying to see if they could profit.

[image: Hush difficulty under Zcash's Digishield]

[image: Masari difficulty under LWMA]

zawy12 changed the title from WWHM difficulty algorithm to LWWHM difficulty algorithm Dec 7, 2017
zawy12 changed the title from LWWHM difficulty algorithm to LW-WHM difficulty algorithm Dec 7, 2017
zawy12 changed the title from LW-WHM difficulty algorithm to WHM difficulty algorithm Dec 8, 2017
zawy12 changed the title from WHM difficulty algorithm to TWHM difficulty algorithm Jan 9, 2018
zawy12 changed the title from TWHM difficulty algorithm to WHM difficulty algorithm Jan 9, 2018
zawy12 changed the title from WHM difficulty algorithm to LWMA (WHM) difficulty algorithm Jan 11, 2018
h4x3rotab commented Feb 6, 2018

BTCGPU/BTCGPU@a3c8d1a

I'm on board :)

h4x3rotab commented Feb 24, 2018

Here is the Python implementation of the LWMA algo in Bitcoin Gold:

def BTG_LWMA(height, timestamp, target):
    # T=<target solvetime>

    T = 600

    # height -1 = most recently solved block number
    # target  = 1/difficulty/2^x where x is leading zeros in coin's max_target, I believe
    # Recommended N:

    N = 45 # int(45*(600/T)**0.3)

    # To get a more accurate solvetime to within +/- ~0.2%, use an adjustment factor.
    # This technique has been shown to be accurate in 4 coins.
    # In a formula:
# [edit by zawy: since he's using target method, adjust should be 0.998. This was my mistake. ]
    # adjust = 0.9989^(500/N)  
    # k = (N+1)/2 * adjust * T 
    k = 13632
    sumTarget = 0
    t = 0
    j = 0

    # Loop through N most recent blocks.  "< height", not "<=". 
    # height-1 = most recently solved block
    for i in range(height - N, height):
        solvetime = timestamp[i] - timestamp[i-1]
        j += 1
        t += solvetime * j
        sumTarget += target[i]

    # Keep t reasonable in case strange solvetimes occurred. 
    if t < N * k // 3:
        t = N * k // 3

    next_target = t * sumTarget // k // N // N
    return next_target

@zawy12 , please note that your original pseudocode has a mistake at the last line:

next_target = t * sumTarget / k

If I understand it correctly, it should be:

next_target = t * sumTarget / (k * N^2)

t is the weighted sum of solvetimes, which is on the order of T*N*(N+1)/2; sumTarget is the sum of the targets of the last N blocks, which equals N*avg_target.

Given k = (N+1)/2 * adjust * T and ignoring adjust, which is approximately 1, if we substitute the three variables into next_target = t * sumTarget / k, we get:

next_target = T*N*(N+1) / 2 * N*avg_target / ((N+1)/2 * T) = N^2 * avg_target

Apparently, there's a superfluous factor of N^2.
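For completeness, the same substitution with the corrected denominator gives the expected steady-state result when solvetimes average T:

$$\text{next\_target} \;=\; \frac{t\cdot \text{sumTarget}}{k\,N^2} \;\approx\; \frac{\frac{T\,N(N+1)}{2}\cdot N\cdot \text{avg\_target}}{\frac{N+1}{2}\,T\cdot N^2} \;=\; \text{avg\_target}$$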

zawy12 (Owner, Author) commented Feb 24, 2018

Thanks for the correction.

zawy12 (Owner, Author) commented Nov 27, 2018

LWMA for BTC / Zcash Clones

LWMA-1 (the previously used LWMA-3 is deprecated)

This version does not allow the "negative solvetimes" that the original BTG version allowed, which increases stability.

// LWMA-1 for BTC & Zcash clones
// Copyright (c) 2017-2019 The Bitcoin Gold developers, Zawy, iamstenman (Microbitcoin)
// MIT License
// Algorithm by Zawy, a modification of WT-144 by Tom Harding
// For updates see
// https://github.com/zawy12/difficulty-algorithms/issues/3#issuecomment-442129791
// Do not use Zcash's / Digishield's method of ignoring the ~6 most recent 
// timestamps via the median past timestamp (MTP of 11).
//  FTL should be lowered to about N*T/20.
//  FTL in BTC clones is MAX_FUTURE_BLOCK_TIME in chain.h.
//  FTL in Ignition, Numus, and others can be found in main.h as DRIFT.
//  FTL in Zcash & Dash clones need to change the 2*60*60 here:
//  if (block.GetBlockTime() > nAdjustedTime + 2 * 60 * 60)
//  which is around line 3700 in main.cpp in ZEC and validation.cpp in Dash
//  If your coin uses median network time instead of node's time, the "revert to 
//  node time" rule (70 minutes in BCH, ZEC, & BTC) should be reduced to FTL/2 
//  to prevent 33% Sybil attack that can manipulate difficulty via timestamps. See:
// https://github.com/zcash/zcash/issues/4021

unsigned int Lwma3CalculateNextWorkRequired(const CBlockIndex* pindexLast, const Consensus::Params& params)
{
    const int64_t T = params.nPowTargetSpacing;

   // For T=600, 300, 150 use approximately N=60, 90, 120
    const int64_t N = params.lwmaAveragingWindow;  

    // Define a k that will be used to get a proper average after weighting the solvetimes.
    const int64_t k = N * (N + 1) * T / 2; 

    const int64_t height = pindexLast->nHeight;
    const arith_uint256 powLimit = UintToArith256(params.powLimit);
    
   // New coins just "give away" first N blocks. It's better to guess
   // this value instead of using powLimit, but err on high side to not get stuck.
    if (height < N) { return powLimit.GetCompact(); }

    arith_uint256 avgTarget, nextTarget;
    int64_t thisTimestamp, previousTimestamp;
    int64_t sumWeightedSolvetimes = 0, j = 0;

    const CBlockIndex* blockPreviousTimestamp = pindexLast->GetAncestor(height - N);
    previousTimestamp = blockPreviousTimestamp->GetBlockTime();

    // Loop through N most recent blocks. 
    for (int64_t i = height - N + 1; i <= height; i++) {
        const CBlockIndex* block = pindexLast->GetAncestor(i);

        // Prevent solvetimes from being negative in a safe way. It must be done like this. 
        // Do not attempt anything like  if (solvetime < 1) {solvetime=1;}
        // The +1 ensures new coins do not calculate nextTarget = 0.
        thisTimestamp = (block->GetBlockTime() > previousTimestamp) ? 
                            block->GetBlockTime() : previousTimestamp + 1;

       // 6*T limit prevents large drops in diff from long solvetimes which would cause oscillations.
        int64_t solvetime = std::min(6 * T, thisTimestamp - previousTimestamp);

       // The following is part of "preventing negative solvetimes". 
        previousTimestamp = thisTimestamp;
       
       // Give linearly higher weight to more recent solvetimes.
        j++;
        sumWeightedSolvetimes += solvetime * j; 

        arith_uint256 target;
        target.SetCompact(block->nBits);
        avgTarget += target / N / k; // Dividing by k here prevents an overflow below.
    }
    nextTarget = avgTarget * sumWeightedSolvetimes; 

    if (nextTarget > powLimit) { nextTarget = powLimit; }

    return nextTarget.GetCompact();
}
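As a concrete illustration of the FTL comment above for Bitcoin-style clones (the value below is only an example; compute N*T/20 from your own N and T):

// chain.h -- lower the future time limit from the 2-hour default to about N*T/20.
// Example for N=45, T=600: 45*600/20 = 1350 seconds.
// static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 2 * 60 * 60;  // old default
static constexpr int64_t MAX_FUTURE_BLOCK_TIME = 45 * 600 / 20;   // = 1350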
zawy12 referenced this issue in cash2/cash2 Dec 30, 2018
…algorithm back to original

CryptoNoteConfig.h
- reduced initial block size
- reduced maximum number of blocks

Currency.cpp
- changed difficulty adjustment algorithm back to original
-LWMA difficulty does not work to produce 9 second blocks. Blocks were coming out too fast at 2 seconds per block and difficulty was not increasing.
cryptozeny commented Jan 7, 2019

For safety reasons, I suggest inserting this line at the beginning of the code. @zawy12

assert(pindexLast != nullptr);
cryptforall commented Jan 7, 2019

For safety reasons, I suggest inserting this line at the beginning of the code. @zawy12

assert(pindexLast != nullptr);

Sir, most BTC/Dash clones have it:

unsigned int GetNextWorkRequired(const CBlockIndex* pindexLast, const CBlockHeader *pblock, const Consensus::Params& params)
{
    assert( pindexLast != nullptr );
    int nHeight = pindexLast->nHeight + 1;
CjS77 added a commit to tari-project/tari that referenced this issue Nov 6, 2019
This PR adds in a difficulty adjustment algorithm.

Motivation and Context

We need to adjust the difficulty per block. This PR adds the LWMA
algorithm as per zawy12/difficulty-algorithms#3
CjS77 added a commit to tari-project/tari that referenced this issue Nov 6, 2019
Merge pull request #971

We need to adjust the difficulty per block. This PR adds the LWMA
algorithm as per zawy12/difficulty-algorithms#3
SWvheerden mentioned this issue Sep 10, 2020