LWMA difficulty algorithm #3

zawy12 opened this Issue Dec 6, 2017 · 6 comments

zawy12 commented Dec 6, 2017

CN coins: The last test of your fork is to make sure that when you sync from 0, your new difficulties match the old difficulties produced by the pre-fork code. See this note.

Comparing algorithms on live coins: Difficulty Watch
Send me a link to an open daemon or full API to be included.

LWMA for Bitcoin & Zcash Clones

I currently have LWMA and LWMA-3 code for BTC and Zcash clones; see the LWMA code for BTC/Zcash clones. BTC clones using LWMA include BTC Gold, BTC Candy, Ignition, Pigeon, Zelcash, Zencash, and BitcoinZ.

Testnet Checking
Email me a link to your code and then send me 200 testnet timestamps and difficulties (CSV height, timestamp, difficulty). To fully test it, you can send out-of-sequence timestamps to testnet by changing the clock on the node that sends your miner the block templates. There's a Perl script in my GitHub code that you can use to simulate hash attacks on a single-computer testnet. Example code for getting the CSV timestamp/difficulty data:

curl -X POST http://127.0.0.1:38782/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"getblockheadersrange","params":{"start_height":300,"end_height":412}}' -H 'Content-Type: application/json' | jq -r '.result.headers[] | [.height, .timestamp, .difficulty] | @csv'

Discord
There is a Discord channel for devs using this algorithm. You must have a coin and a history as a dev on that coin to join. Please email me at zawy@yahoo.com to get an invite.

Donations
Thanks to Sumo, Masari, Karbo, Electroneum, Lethean, and XChange.
38skLKHjPrPQWF9Vu7F8vdcBMYrpTg5vfM or your coin if it's on TO or cryptopia.

LWMA Description
This sets difficulty by estimating the current hashrate from the most recent difficulties and solvetimes. It divides the average difficulty by the Linearly Weighted Moving Average (LWMA) of the solvetimes, which gives more weight to the more recent solvetimes. It is designed to protect small coins against timestamp manipulation and hash attacks. The basic equation is:

next_difficulty = average(Difficulties) * target_solvetime / LWMA(solvetimes)
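As a stand-alone illustration of this equation, here is a minimal sketch of mine (the production code below adds limits, integer-overflow protection, and handling of bad timestamps; the names here are only illustrative):

// Minimal sketch of next_D = avg(D) * T / LWMA(solvetimes).
// The newest solvetime gets weight N, the oldest gets weight 1.
#include <cstdint>
#include <iostream>
#include <vector>
using std::uint64_t;

uint64_t lwma_next_D(const std::vector<uint64_t>& D,          // last N difficulties
                     const std::vector<uint64_t>& solvetime,  // last N solvetimes
                     uint64_t T) {                            // target solvetime
   uint64_t N = D.size(), sum_D = 0, weighted_ST = 0;
   for (uint64_t i = 0; i < N; i++) {
      sum_D += D[i];
      weighted_ST += solvetime[i] * (i + 1);   // weights 1 (oldest) ... N (newest)
   }
   uint64_t avg_D = sum_D / N;
   // LWMA(solvetimes) = weighted_ST / (1+2+...+N) = 2*weighted_ST / (N*(N+1))
   return avg_D * T * N * (N + 1) / (2 * weighted_ST);
}

int main() {
   // Example: T = 120 and recent blocks came faster than older ones,
   // so LWMA(solvetimes) = 90 < T and next_D rises above avg_D.
   std::vector<uint64_t> D(10, 1000000);
   std::vector<uint64_t> ST = {150, 140, 130, 120, 110, 100, 90, 80, 70, 60};
   std::cout << lwma_next_D(D, ST, 120) << "\n";   // prints 1333333 (= avg_D * 120/90)
}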

LWMA-2 is LWMA with an 8% jump when the last 3 solvetimes were < 0.8xT. Finished Aug 2018.
LWMA-3 fixes an exploit that enabled >50% miners to do block withholding to get unlimited blocks. Finished Oct 2018.
LWMA-4 (Nov 2018)

  1. Limited LWMA-2's jumps to 5% above avg D.
  2. Added 1 & 2 block triggers to LWMA-2's 3-block trigger to start jumping 10% per block.
  3. Drops slower if there are fast solvetimes followed by a really long solvetime (a NH problem).
  4. Converts difficulty to an easier-to-read number by converting 123,456,789 to 123,000,000.
  5. Makes the last 2 digits of the difficulty equal the estimated hashrate of the last 11 blocks: 123000053 means the last 11 blocks had a hashrate 5.3x higher than the difficulty expected (see the small decoding sketch below the list).

LWMA-4 is plain LWMA if all the options are removed (and the 97 is converted back to 99).
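As a small illustration of reading the hint that option 5 puts in the last two digits (a sketch of mine, using the 123000053 example above; the last two digits are 10x the estimated hashrate ratio, or 00 if the change was not statistically significant):

#include <cstdint>
#include <iostream>

int main() {
   std::uint64_t next_D = 123000053;          // example difficulty from option 5
   std::uint64_t est_HR_x10 = next_D % 100;   // last 2 digits = 10 * (observed HR / HR the difficulty expected)
   if (est_HR_x10 == 0) {
      std::cout << "no statistically significant hashrate change over the last 11 blocks\n";
   } else {
      std::cout << "last 11 blocks came in at about " << est_HR_x10 / 10 << "."
                << est_HR_x10 % 10 << "x the expected hashrate\n";   // prints 5.3x here
   }
}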

LWMA-1

Use this if you do not have NiceHash etc. problems.
See LWMA-4 below for more aggressive rules that help prevent NiceHash delays.

// LWMA-1 difficulty algorithm 
// Copyright (c) 2017-2018 Zawy, MIT License
// https://github.com/zawy12/difficulty-algorithms/issues/3
// See commented version for explanations & required config file changes. Fix FTL and MTP!

difficulty_type next_difficulty_v3(std::vector<uint64_t> timestamps, 
   std::vector<difficulty_type> cumulative_difficulties) {
    
   uint64_t  T = DIFFICULTY_TARGET_V2;
   uint64_t  N = DIFFICULTY_WINDOW_V2; // N=60, 90, and 120 for T=600, 120, 60.
   uint64_t  L(0), next_D, i, this_timestamp(0), previous_timestamp(0), avg_D;
        
    assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1 );

   // If it's a new coin, do startup code. Do not remove in case other coins copy your code.
   uint64_t difficulty_guess = 100; 
   if (timestamps.size() <= 12 ) {   return difficulty_guess;   }
   if ( timestamps.size()  < N +1 ) { N = timestamps.size()-1;  }
   
   // If hashrate/difficulty ratio after a fork is < 1/3 prior ratio, hardcode D for N+1 blocks after fork. 
   // This will also cover up a very common type of backwards-incompatible fork.
   // difficulty_guess = 100000; //  Dev may change. Guess lower than anything expected.
   // if ( height <= UPGRADE_HEIGHT + 1 + N && height >= UPGRADE_HEIGHT ) { return difficulty_guess;  }
 
   previous_timestamp = timestamps[0];
   for ( i = 1; i <= N; i++) {        
      // Safely prevent out-of-sequence timestamps
      if ( timestamps[i]  > previous_timestamp ) {   this_timestamp = timestamps[i];  } 
      else {  this_timestamp = previous_timestamp;   }
      L +=  i*std::min(6*T ,this_timestamp - previous_timestamp);
      previous_timestamp = this_timestamp; 
   }
   if (L < N*N*T/20 ) { L =  N*N*T/20; }
   avg_D = ( cumulative_difficulties[N] - cumulative_difficulties[0] )/ N;
   
   // Prevent round off error for small D and overflow for large D.
   if (avg_D > 2000000*N*N*T) { 
       next_D = (avg_D/(200*L))*(N*(N+1)*T*99);   
   }   
   else {    next_D = (avg_D*N*(N+1)*T*99)/(200*L);    }

   // Optional. Make all insignificant digits zero for easy reading.
   i = 1000000000;
   while (i > 1) { 
     if ( next_D > i*100 ) { next_D = ((next_D+i/2)/i)*i; break; }
     else { i /= 10; }
   }
// Make least 2 digits = size of hash rate change last 11 blocks if it's statistically significant.
// D=2540035 => hash rate 3.5x higher than D expected. Blocks coming 3.5x too fast.
 if ( next_D > 10000 ) { 
     uint64_t est_HR = (10*(11*T+(timestamps[N]-timestamps[N-11])/2)) / 
                                   (timestamps[N]-timestamps[N-11]+1);
      if (  est_HR > 5 && est_HR < 22 )  {  est_HR=0;   }
      est_HR = std::min(static_cast<uint64_t>(99), est_HR);
      next_D = ((next_D+50)/100)*100 + est_HR;  
}
   return  next_D;
}

Do not use LWMA-4 if you are a CN/Monero/Bytecoin/Forknote coin unless your pools are adjusting the timestamps during hashing. If your pools have not fixed this error, LWMA-4 will make their results worse and cause more delays in your coin, giving NiceHash an advantage over your pools.

LWMA-4 for CN / Monero coins

For dealing with NiceHash or other extensive on-off mining problems.

// LWMA-4 difficulty algorithm 
// Copyright (c) 2017-2018 Zawy, MIT License
// https://github.com/zawy12/difficulty-algorithms/issues/3
// See commented version for explanations & required config file changes. Fix FTL and MTP!

difficulty_type next_difficulty_v3(std::vector<uint64_t> timestamps, 
   std::vector<difficulty_type> cumulative_difficulties) {
    
   uint64_t  T = DIFFICULTY_TARGET_V2;
   uint64_t  N = DIFFICULTY_WINDOW_V2; // N=45, 60, and 90 for T=600, 120, 60.
   uint64_t  L(0), ST(0), next_D, prev_D, avg_D, i;
        
    assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1 );

   // If it's a new coin, do startup code. Do not remove in case other coins copy your code.
   uint64_t difficulty_guess = 100; 
   if (timestamps.size() <= 12 ) {   return difficulty_guess;   }
   if ( timestamps.size()  < N +1 ) { N = timestamps.size()-1;  }
   
   // If hashrate/difficulty ratio after a fork is < 1/3 prior ratio, hardcode D for N+1 blocks after fork. 
   // This will also cover up a very common type of backwards-incompatible fork.
   // difficulty_guess = 100000; //  Dev may change. Guess lower than anything expected.
   // if ( height <= UPGRADE_HEIGHT + 1 + N ) { return difficulty_guess;  }
 
   // Safely convert out-of-sequence timestamps into > 0 solvetimes.
   std::vector<uint64_t>TS(N+1);
   TS[0] = timestamps[0];
   for ( i = 1; i <= N; i++) {        
      if ( timestamps[i]  > TS[i-1]  ) {   TS[i] = timestamps[i];  } 
      else {  TS[i] = TS[i-1];   }
   }

   for ( i = 1; i <= N; i++) {  
      // Temper long solvetime drops if they were preceded by 3 or 6 fast solves.
      if ( i > 4 && TS[i]-TS[i-1] > 5*T  && TS[i-1] - TS[i-4] < (14*T)/10 ) {   ST = 2*T; }
      else if ( i > 7 && TS[i]-TS[i-1] > 5*T  && TS[i-1] - TS[i-7] < 4*T ) {   ST = 2*T; }
      else { // Assume normal conditions, so get ST.
         // LWMA drops too much from long ST, so limit drops with a 5*T limit 
         ST = std::min(5*T ,TS[i] - TS[i-1]);
      }
      L +=  ST * i ; 
   } 
   if (L < N*N*T/20 ) { L =  N*N*T/20; } 
   avg_D = ( cumulative_difficulties[N] - cumulative_difficulties[0] )/ N;
   
   // Prevent round off error for small D and overflow for large D.
   if (avg_D > 2000000*N*N*T) { 
       next_D = (avg_D/(200*L))*(N*(N+1)*T*97);   
   }   
   else {    next_D = (avg_D*N*(N+1)*T*97)/(200*L);    }

   prev_D =  cumulative_difficulties[N] - cumulative_difficulties[N-1] ; 

   // Apply 10% jump rule.
   if (  ( TS[N] - TS[N-1] < (2*T)/10 ) || 
         ( TS[N] - TS[N-2] < (5*T)/10 ) ||  
         ( TS[N] - TS[N-3] < (8*T)/10 )    )
   {  
       next_D = std::max( next_D, std::min( (prev_D*110)/100, (105*avg_D)/100 ) ); 
   }
   // Make all insignificant digits zero for easy reading.
   i = 1000000000;
   while (i > 1) { 
     if ( next_D > i*100 ) { next_D = ((next_D+i/2)/i)*i; break; }
     else { i /= 10; }
   }
   // Make least 3 digits equal avg of past 10 solvetimes.
   if ( next_D > 100000 ) { 
    next_D = ((next_D+500)/1000)*1000 + std::min(static_cast<uint64_t>(999), (TS[N]-TS[N-10])/10); 
   }
   return  next_D;
}

Old code for the last option, which reports the current hashrate as a multiple of what the difficulty expected.

// Make least 2 digits = size of hash rate change last 11 blocks if it's statistically significant.
// D=2540035 => hash rate 3.5x higher than D expected. Blocks coming 3.5x too fast.
 if ( next_D > 10000 ) { 
     uint64_t est_HR = (10*(11*T+(TS[N]-TS[N-11])/2))/(TS[N]-TS[N-11]+1);
      if (  est_HR > 5 && est_HR < 22 )  {  est_HR=0;   }
      est_HR = std::min(static_cast<uint64_t>(99), est_HR);
      next_D = ((next_D+50)/100)*100 + est_HR;  
}

LWMA-4 (long commented version)

// LWMA-4 difficulty algorithm 
// Copyright (c) 2017-2018 Zawy, MIT License
// https://github.com/zawy12/difficulty-algorithms/issues/3
// See commented version for explanations & required config file changes. Fix FTL and MTP!

// REMOVE COMMENTS BELOW THIS LINE. 

// The options are recommended. They're called options to show the core is a simple LWMA.
// Bitcoin clones must lower their FTL. 
// Cryptonote et al coins must make the following changes:
// #define BLOCKCHAIN_TIMESTAMP_CHECK_WINDOW    11
// #define CRYPTONOTE_BLOCK_FUTURE_TIME_LIMIT        3 * DIFFICULTY_TARGET 
// #define DIFFICULTY_WINDOW         100 //  N=60, 100, and 180 for T=600, 120, 60.
// Warning: Bytecoin/Karbo clones may not have the following, so check that the TS & CD vectors have size N+1.
// #define DIFFICULTY_BLOCKS_COUNT       DIFFICULTY_WINDOW+1
// The BLOCKS_COUNT is to make timestamps & cumulative_difficulty vectors size N+1
// Do not sort timestamps.  
// CN coins (Monero < 12.3) must deploy the Jagerman MTP Patch. See:
// https://github.com/loki-project/loki/pull/26   or
// https://github.com/graft-project/GraftNetwork/pull/118/files

difficulty_type next_difficulty_v3(std::vector<uint64_t> timestamps, 
   std::vector<difficulty_type> cumulative_difficulties) {
    
   uint64_t  T = DIFFICULTY_TARGET_V2;
   uint64_t  N = DIFFICULTY_WINDOW_V2; // N=45, 60, and 90 for T=600, 120, 60.
   uint64_t  L(0), ST(0), next_D, prev_D, avg_D, i;
        
    assert(timestamps.size() == cumulative_difficulties.size() && timestamps.size() <= N+1 );

   // If it's a new coin, do startup code. Do not remove in case other coins copy your code.
   uint64_t difficulty_guess = 100; 
   if (timestamps.size() <= 12 ) {   return difficulty_guess;   }
   if ( timestamps.size()  < N +1 ) { N = timestamps.size()-1;  }
   
   // If hashrate/difficulty ratio after a fork is < 1/3 prior ratio, hardcode D for N+1 blocks after fork. 
   // difficulty_guess = 100000; //  Dev may change. Guess lower than anything expected.
   // if ( height <= UPGRADE_HEIGHT + 1 + N ) { return difficulty_guess;  }

   // Recreate timestamps (TS) vector to safely handle out-of-sequence timestamps.
  std::vector<uint64_t>TS(N+1);
   TS[0] = timestamps[0];
   for ( i = 1; i <= N; i++) {        
      if ( timestamps[i]  > TS[i-1]  ) {   TS[i] = timestamps[i];  } 
      else {  TS[i] = TS[i-1];   }
   }
  // Calculate numerator of LWMA of STs.
   for ( i = 1; i <= N; i++) {  
      
      // Option 1. The next "if" & "else if" are optional. 
      // Temper long solvetime drops if they were preceded by 3 or 6 fast solves.
      if ( i > 4 && TS[i]-TS[i-1] > 5*T  && TS[i-1] - TS[i-4] < (14*T)/10 ) {   ST = 2*T; }
      else if ( i > 7 && TS[i]-TS[i-1] > 5*T  && TS[i-1] - TS[i-7] < 4*T ) {   ST = 2*T; }
      else { // Assume normal conditions, so get ST.
         // LWMA drops too much from long ST, so limit drops with a 5*T limit 
         ST = std::min(5*T ,TS[i] - TS[i-1]);
      }
      L +=  ST * i ; 
   }
  // Allow L small enough for fast start up, but large enough for protection.
   if (L < (N*N*T)/20 ) { L =  (N*N*T)/20; } 
   
   avg_D = ( cumulative_difficulties[N] - cumulative_difficulties[0] )/ N;

  // Do core calculation.  Math explanation:
  // 97/100 adjustment is to get correct avg ST b/c next_D is thrown ~1% high by each:
  // 1) 5*T above, 2) 8% jumps below, & 3) Poisson for low N is a gamma distribution.
  // N*N(+1)/(2*L) is just 1/LWMA(STs).   avg_D/LWMA(STs) is the estimated 
  // hashrate (HR).  T/LWMA(STs) is a ratio in 0.85 to 1.05 range that corrects 
  // avg_D to try to make avg ST occur in T.  

   // Prevent overflow for large D and round-off error for small D .
   if (avg_D > 2000000*N*N*T) { 
       next_D = (avg_D/(200*L))*(N*(N+1)*T*97);   
   }   
   else {    next_D = (avg_D*N*(N+1)*T*97)/(200*L);    }

   prev_D =  cumulative_difficulties[N] - cumulative_difficulties[N-1] ; 
   
   // Option 2.
   // Miners' decision to mine during low D is a non-linear function like a reverse 
   //  S-curve (D = x-axis, HR = y). This "if" statement counteracts that approximate S-curve:
   //  jump 10%, up to 5% above avg_D, if the last 1 to 3 STs are fast. Otherwise, keep next_D.
   if (  ( TS[N] - TS[N-3] < (8*T)/10 ) || 
         ( TS[N] - TS[N-2] < (5*T)/10 ) || 
         ( TS[N] - TS[N-1] < (2*T)/10 )    )
   {  
       next_D = std::max( next_D, std::min( (prev_D*110)/100, (105*avg_D)/100 ) ); 
   } 

  // Option 3. Convert next_D to 3 significant digits.
  // Round-off function: ((next_D+i/2)/i)*i
  i = 1000000000;
  while (i > 1) { 
     if ( next_D > i*100 ) { next_D = ((next_D+i/2)/i)*i; break; }
     else { i /= 10; }
  }
   // Make least 3 digits equal avg of past 10 solvetimes.
   if ( next_D > 100000 ) { 
    next_D = ((next_D+500)/1000)*1000 + std::min(static_cast<uint64_t>(999), (TS[N]-TS[N-10])/10);
   }
   return  next_D;
  
  // To show the difference:
  // next_Target = sumTargets*L*2/0.998/T/(N+1)/N/N;
}

This is LWMA-2 versus LWMA when there is a 10x attack. There isn't any difference for smaller attacks. See further below for LWMA compared to other algos.
[image: LWMA-2 vs LWMA under a 10x attack]

Credits:

  • dgenr8 for showing LWMA can work
  • Aiwe (Karbo) for extensive discussions and motivation.
  • Thaer (Masari) for jump-starting LWMA and refinement discussions.
  • BTG (h4x4rotab) for finding initial pseudocode error and writing a good clean target method.
  • gabetron for pointing out an "if ST<0 then ST=0" type of exploit in one version before it was used by anyone.
  • CDY for pointing out that the target method was not exactly the same as the difficulty method.
  • IPBC and Intense for independently suffering and fixing a sneaky but basic code error.
  • Stellite and CDY for independently modifying an idea in my D-LWMA, forking to implement it, and showing me it worked. (The one-sided jump rule). My modification of their idea resulted in LWMA-2.

Known coins using it
The names here do not imply endorsement or success or even that they've forked to implement it yet. This is mainly for my reference to check on them later.
Alloy, Balkan, Wownero, Bitcoin Candy, Bitcoin Gold, BitcoiNote, BiteCode, BitCedi, BBScoin, Bitsum, BitcoinZ(?), Brazuk, DigitalNote, Dosh, Dynasty(?), Electronero, Elya, Graft, Haven, IPBC, Ignition, Incognito, Iridium, Intense, Italo, Loki, Karbo, MktCoin, MoneroV, Myztic, MarketCash, Masari, Niobio, NYcoin, Ombre, Parsi, Plura, Qwerty, Redwind?, Saronite, Solace, Stellite, Turtle, UltraNote, Vertical, Zelcash, Zencash. Recent inquiries: Tyche, Dragonglass, TestCoin, Shield 3.0. [update: and many more]

Importance of the averaging window size, N
The size of an algorithm's "averaging" window of N blocks is more important than the particular algorithm. Stability comes at a loss in speed of response by making N larger, and vice versa. Being biased towards low N is good because speed is proportional to 1/N while stability is proportional to SQRT(N). In other words, it's easier to get speed from low N than it is to get stability from high N. It appears as if the top 20 large coins can use an N up to 10x higher (a full day's averaging window) to get a smooth difficulty with no obvious ill effects. But it's very risky if a coin does not have at least 20% of the dollar reward per hour of the biggest coin for a given POW. Small coins using a large N can look nice and smooth for a month and then go into oscillations from a big miner and end up with 3-day delays between blocks, having to rent hash power to get unstuck. By tracking hashrate more closely, a smaller N is more fair to your dedicated miners, who are important to marketing. Correctly estimating current hashrate to get the correct block solvetime is the only goal of a difficulty algorithm. This includes the challenge of dealing with bad timestamps. An N too small disastrously attracts on-off mining by varying too much, and it doesn't track hashrate very well. A large N attracts "transient" miners by not tracking price fast enough and by not penalizing big miners who jump on and off, leaving your dedicated miners with a higher difficulty. This discourages dedicated miners, which causes the difficulty to drop in the next cycle when the big miner jumps on again, leading to worsening oscillations.
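As a rough worked example of that 1/N vs SQRT(N) trade-off (illustrative numbers only): doubling the window from N = 60 to N = 120 roughly halves the speed of response (1/120 vs 1/60) but only improves stability by a factor of SQRT(120/60) ≈ 1.4, so you pay twice the lag for about a 30% reduction in random variation.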

Timestamp Manipulation
All fast algorithms (low N) can have the difficulty forced low by > 50% hashrate miners who assign bad timestamps. This can be reduced by raising N or by lowering the nodes' future time limit to something like FTL = 360 seconds. The amount they can lower difficulty is (N-FTL/T)/N, once about every 1.5xFTL. The FTL must be less than one T more than the absolute value of any limit on negative solvetimes (like -6xT) in order for the negative solvetimes to erase the effect of bad forward timestamps. SMA and Digishield algorithms that subtract the first and last timestamps in the window, rather than looking at individual solvetimes like LWMA, are affected to the same extent, so they also need a low FTL. Even without a low FTL, they are protected against bad timestamps from < 50% miners because they function the same as allowing negative solvetimes (an honest timestamp that follows a bad timestamp immediately erases the effect of the bad timestamp).
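A worked example of that limit (illustrative numbers, not a recommendation): with T = 120, N = 60, and FTL = 360 = 3xT, the factor is (N - FTL/T)/N = (60 - 3)/60 ≈ 0.95, so a > 50% miner using maximal forward timestamps can only push difficulty about 5% below its correct value, roughly once every 1.5 x 360 = 540 seconds.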

Why are your jumps in LWMA-2 and 3 not symmetrical? Isn't the lack of symmetry dangerous or causing some other problem?
I have been a big fan of using symmetry like this in the past, and I have seen big problems from not using symmetry, which is what I'm doing here, so the asymmetry was not chosen lightly. I tried many algos with symmetrical jumps, and the advantages sought were cancelled by the disadvantages. It took BTC Candy and Stellite a long time and live-coin results to convince me this asymmetry is the way to go. They implemented modified asymmetrical versions of my symmetrical algorithms. We don't want to attract miners "unjustly" by jumping low ... I mean we want the "long"-term average hashrate (about a 6-hour average) to be reflected in the difficulty and to change "slowly", going up or down over about 3 hours ... but there is an exception: we want to penalize the sudden heavy mining attack. These jumps have a type of "memoryless-ness" to them ... the difficulty immediately goes back down to the long-term average "without prejudice". In other words, the difficulty algo immediately forgives the attack so everyone else is not penalized with a higher difficulty after the attacker leaves. I can't think of a specific reason we would want symmetrical "memoryless" drops. There should not be a reason for the network to suddenly collapse in hashrate unless it is an attack that is ending, and that case is handled correctly by the jumps and forgiveness. LWMA is really good at dropping fast anyway, so much so that the +6xT limit in the loop helps prevent it from dropping too much too fast, which was causing a big problem during persistent attacks on certain coins (the attacker seemed to be their only miner, so there were long delays which caused a huge drop). If it were symmetrical, it could oscillate if an attacker found a favorable on-off pattern. One-sided jumps enable a larger N that "dampens" falls but does not dampen jumps. Stability is hardly affected and solvetimes remain exactly correct with this asymmetrical approach.

Using MTP for bad timestamps
Digishield / Zcash clones use Bitcoin's MTP for protection against bad timestamps. This delays the difficulty response by about 6 blocks. It's not protection against > 50% miners because they can "own" the MTP, deciding what the most recent solvetime is. But as explained above, a bad timestamp from a < 50% miner only lowers difficulty for 1 block, and if the FTL is reasonably low, it can't lower it by a lot.

Masari forked to implement this on December 3, 2017 and has been performing outstandingly.
Iridium forked to implement this on January 26, 2018 and reports success. They forked again on March 19, 2018 for other reasons and tweaked it.
IPBC forked to implement it March 2, 2018.
Stellite implemented it March 9, 2018 to stop bad oscillations.
Karbowanec and QwertyCoin appear to be about to use it.

Comparison to other algorithms:

The competing algorithms are LWMA, EMA (exponential moving average), and Digishield. I'll also include SMA (simple moving average) for comparison. This is the process I go through to determine which is best.

First, I set the algorithms' "N" parameter so that they all give the same speed of response to an increase in hashrate (red bars). To give Digishield a fair chance, I removed its 6-block MTP delay. I had to lower its N value from 17 to 13 blocks to make it as fast as the others. I could have raised the other algos' N values instead, but I wanted a faster response than Digishield normally gives (based on watching hash attacks on Zcash and Hush). Also based on those attacks and attacks on other coins, I make the "test attack" shown below 3x the baseline hashrate (red bars), lasting 30 blocks.

[image: compare1]

Then I simulate real hash attacks that start when difficulty accidentally drops 15% below baseline and end when difficulty is 30% above baseline. I used 3x attacks, but I get the same results for a wide range of attacks. The only clear advantage LWMA and EMA have over Digishield is fewer delays after attacks. The combination of the delay and "blocks stolen" metrics closely follows the result given by the root-mean-square of the error between where the difficulty is and where it should be (based on the hashrate). LWMA also wins on that metric for a wide range of hash-attack profiles.

[image: compare4]
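Here is a minimal sketch of that on-off attack profile (my illustration, not the spreadsheet's code): a plain SMA stands in for the difficulty algorithm, solvetimes are exponentially distributed, and the attacker switches on when difficulty drifts 15% below baseline and off when it is 30% above. T, N, and the hashrates are placeholder values.

#include <deque>
#include <iostream>
#include <random>

int main() {
   const double T = 120.0;                    // target solvetime in seconds (placeholder)
   const int    N = 60;                       // SMA averaging window (placeholder)
   const double base_HR   = 1000.0;           // baseline hashrate, arbitrary units
   const double attack_HR = 3.0 * base_HR;    // "3x attack" as in the tests above
   const double baseline_D = base_HR * T;     // equilibrium difficulty at baseline hashrate

   std::mt19937 rng(1);
   std::deque<double> D(N, baseline_D), ST(N, T);   // seed history at equilibrium
   bool attacking = false;

   for (int h = 0; h < 2000; h++) {
      double sumD = 0, sumST = 0;
      for (int i = 0; i < N; i++) { sumD += D[i]; sumST += ST[i]; }
      double next_D = sumD * T / sumST;       // SMA difficulty algorithm (stand-in)

      // Attack starts when D drifts 15% below baseline and ends when D is 30% above it.
      if (!attacking && next_D < 0.85 * baseline_D) { attacking = true;  }
      if ( attacking && next_D > 1.30 * baseline_D) { attacking = false; }

      double HR = attacking ? attack_HR : base_HR;
      std::exponential_distribution<double> solve(HR / next_D);   // Poisson solvetimes
      double st = solve(rng);

      D.pop_front();  D.push_back(next_D);
      ST.pop_front(); ST.push_back(st);
      std::cout << h << "," << next_D << "," << st << "," << attacking << "\n";
   }
}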

I also consider their stability during constant hash rate.

[image: compare3]

Here is my spreadsheet for testing algorithms. I've spent 9 months devising algorithms, learning from others, and running simulations in it.

[image: compare_hash]

Here's Hush with Zcash's Digishield compared to Masari with LWMA. Hush had 10x the market capitalization of Masari when these were taken (so it should have been more stable). The beginning of the Masari chart is just after it forked to LWMA, when attackers were still trying to see if they could profit.

[images: Hush (Digishield) and Masari (LWMA) difficulty charts]

@zawy12 zawy12 changed the title from WWHM difficulty algorithm to LWWHM difficulty algorithm Dec 7, 2017

@zawy12 zawy12 changed the title from LWWHM difficulty algorithm to LW-WHM difficulty algorithm Dec 7, 2017

@zawy12 zawy12 changed the title from LW-WHM difficulty algorithm to WHM difficulty algorithm Dec 8, 2017

@zawy12 zawy12 changed the title from WHM difficulty algorithm to TWHM difficulty algorithm Jan 9, 2018

@zawy12 zawy12 changed the title from TWHM difficulty algorithm to WHM difficulty algorithm Jan 9, 2018

@zawy12 zawy12 changed the title from WHM difficulty algorithm to LWMA (WHM) difficulty algorithm Jan 11, 2018

h4x3rotab commented Feb 6, 2018

BTCGPU/BTCGPU@a3c8d1a

I'm on board :)

h4x3rotab commented Feb 24, 2018

Here is the Python implementation of the LWMA algo in Bitcoin Gold:

def BTG_LWMA(height, timestamp, target):
    # T=<target solvetime>

    T = 600

    # height -1 = most recently solved block number
    # target  = 1/difficulty/2^x where x is leading zeros in coin's max_target, I believe
    # Recommended N:

    N = 45 # int(45*(600/T)**0.3)

    # To get a more accurate solvetime to within +/- ~0.2%, use an adjustment factor.
    # This technique has been shown to be accurate in 4 coins.
    # In a formula:
# [edit by zawy: since he's using target method, adjust should be 0.998. This was my mistake. ]
    # adjust = 0.9989^(500/N)  
    # k = (N+1)/2 * adjust * T 
    k = 13632
    sumTarget = 0
    t = 0
    j = 0

    # Loop through N most recent blocks.  "< height", not "<=". 
    # height-1 = most recently solved block
    for i in range(height - N, height):
        solvetime = timestamp[i] - timestamp[i-1]
        j += 1
        t += solvetime * j
        sumTarget += target[i]

    # Keep t reasonable in case strange solvetimes occurred. 
    if t < N * k // 3:
        t = N * k // 3

    next_target = t * sumTarget // k // N // N
    return next_target

@zawy12 , please note that your original pseudocode has a mistake at the last line:

next_target = t * sumTarget / k

If I understand it correctly, it should be:

next_target = t * sumTarget / (k * N^2)

t is the weighted sum of solvetimes, which is on the order of T*N*(N+1)/2; sumTarget is the sum of the targets of the last N blocks, which equals N*avg_target.

Given k = (N+1)/2 * adjust * T, and ignoring adjust, which is approximately 1, if we substitute the three variables into next_target = t * sumTarget / k, we get:

next_target = T*N*(N+1)/2 * N*avg_target / ((N+1)/2 * T) = N^2 * avg_target

Apparently, there's a superfluous factor of N^2.

zawy12 commented Feb 24, 2018

Thanks for the correction.

@Mojo-LB Mojo-LB referenced this issue Mar 7, 2018

Closed

Difficulty #91

@zawy12 zawy12 changed the title from LWMA (WHM) difficulty algorithm to LWMA difficulty algorithm Mar 21, 2018

qwertycoin-org pushed a commit to qwertycoin-org/qwertycoin that referenced this issue Mar 21, 2018

orangecoi: "Addet Zawy v2" (Addet Zawy difficulty algorithm in version 2; newest one: see zawy12/difficulty-algorithms#3)

aivve added a commit to seredat/karbowanec that referenced this issue Mar 22, 2018

@FndNur1Labs FndNur1Labs referenced this issue Jun 24, 2018

Closed

LWMA-2 Soon #1

@zawy12 zawy12 referenced this issue Oct 29, 2018

Closed

LWMA-4 #33
