Difficulty performance of various coins #6
Zcash performance. Zcash (and Hush, which follows) uses Digishield v3 ("Digi v3").

NOTE: the scale on the following charts has been changed from the above post. The following divides the avg-11 solvetimes and "hash attacks" by 4. A hash attack at 2x the baseline is at 0.5, and an avg-11 ST of 2 is at 0.5. These are the minimum values that trigger a count and the display spikes.

I had to write a special script to handle timestamps when trying to average Zcash's solvetimes, because Zcash has had a lot of out-of-sequence timestamps. A complicating factor is that after a past time was assigned, a timestamp just 1 second after it was often assigned next. Sometimes the "+1" was assigned twice in a row, so my script got more complicated. I didn't code for 3 times in a row, which you can see below in every large >1.5 hash-attack peak. Those were not increases in hash rate, but three +1 values assigned in a row. See data below the chart. So Zcash's "hash attack" values are even better than shown, indicating it did not attract hash attacks from accidental variation and responded fast enough to price changes.

The above is the first 60,000 blocks. The following is a more recent 60,000, starting at block 170,000. Notice the hash-attack spikes are not present, which means the timestamp problems mostly stopped (or were prevented) and the "blocks stolen" metric is more accurate. Also notice the avg solvetime is 1.004, a 0.4% error. This is the same as what I saw in experiments, and higher than the 0.2% error Zcash has claimed. They started with the very first blocks, which came really fast. From block 10,000 to 230,000 the average solvetime was 0.48% too high. I mention it to demonstrate that the experiments are accurate.

Here is an example of a lot of bad timestamps being assigned. For some reason, they assign the oldest-possible timestamp, as can be seen by the large negatives; then usually a large positive "solvetime" follows, which is a correct timestamp.
The correct timestamp minus the incorrect old timestamp appears to be a large solvetime. The first large negative solvetime results from the timestamp being set equal to the minimum allowed, which is the median time past (MTP, the median of the past 11 timestamps). The "1's" instead of large negatives result from 2 or more timestamps in a row being set to MTP when the MTP did not change (a +1 seems to be inserted by the code). The 1's occur only if MTP did not change, which is possible because of previous out-of-sequence timestamps. Long story short: the right column is the assigned timestamp for that block minus the MTP of the previous 11 timestamps. When it is "1", the timestamp was the oldest that would have been allowed.
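The right-column calculation above can be sketched in a few lines. This is a minimal illustration (the function names are mine, not from any coin's code), assuming MTP is the median of the previous 11 timestamps:

```python
from statistics import median

def mtp(timestamps, n=11):
    """Median time past: the median of the last n block timestamps."""
    return median(timestamps[-n:])

def timestamp_minus_mtp(timestamps, n=11):
    """For each block after the first n, report its timestamp minus the
    MTP of the previous n blocks.  A value of 1 means the block carried
    the oldest timestamp the consensus rules would have allowed."""
    return [timestamps[i] - mtp(timestamps[:i], n)
            for i in range(n, len(timestamps))]

# Example: 11 in-sequence timestamps 600 s apart, then a block stamped
# at MTP + 1 (the minimum allowed), reproducing the "1" pattern.
ts = [i * 600 for i in range(11)]
ts.append(mtp(ts) + 1)
print(timestamp_minus_mtp(ts))  # -> [1]
```

A real script also has to cope with the repeated "+1" cases described above; this sketch only shows the basic subtraction.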
Hush performance. Hush has the same PoW (Equihash) and difficulty algorithm as Zcash, but about 1% of the hash power, so it's a good test of whether the performance metrics are much worse. You can see its performance seems a little worse, but it had to deal with huge swings in hash rate in the beginning. Here are the first 60,000 blocks. Here are the most recent 60,000 blocks. I do not know why, but it has been a lot worse the past 100 days (57,600 blocks), when the hash rate has been a lot more stable. And for some reason the past week has been a lot better (last half of the last chart).
Masari performance. Masari started using the WHM algorithm on this page and is doing awesome. This shows their history of problems and the new algorithm's results. Masari first had Monero's default difficulty, which is like an SMA with N=730; then it switched to Sumokoin's pseudo-SMA with N=17, and recently it switched to WHM N=60. I should mention that the 3 coins here using the N=17 pseudo-SMA is my fault. N=30 if not N=60 would have been a lot better for all of them. But from this past mistake, I have a better estimate of how to select N. The next image starts where the above image left off. The WHM N=60 (the second of the 3 on that page) was employed at block 63,000.
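The exact WHM code lives on the algorithms page linked above. As a hedged illustration of the general idea only, here is a linearly weighted moving average in the same family, where recent solvetimes get more weight so the difficulty reacts faster to hash-rate changes; the naming and exact weighting here are mine, not the WHM specification:

```python
def lwma_next_difficulty(difficulties, solvetimes, T, N=60):
    """Illustrative linearly weighted difficulty adjustment: the most
    recent of the last N solvetimes gets weight N, the oldest weight 1.
    At steady state (all solvetimes equal to T) it returns the average
    difficulty; if solvetimes halve, the next difficulty doubles."""
    k = N * (N + 1) // 2                      # sum of the weights 1..N
    weighted_st = sum((i + 1) * st
                      for i, st in enumerate(solvetimes[-N:]))
    avg_d = sum(difficulties[-N:]) / N
    return avg_d * T * k / weighted_st
```

For example, with 60 blocks of difficulty 1000 and all solvetimes at T/2 = 60 s for T=120, this returns 2000: half the target solvetime implies roughly twice the hash rate.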
Sumokoin performance. Masari above got its algorithm from Sumokoin, but Sumokoin has T=240 as opposed to Masari's T=120. Possibly this is why Masari had a lot more trouble with the same algorithm: when you go to a lower T, you need to raise N, and vice versa. Like Masari, Sumokoin also started with a high N value; I'm not sure whether it was the same as Masari's (the Monero default) or not. N=17 worked a lot better for them than for Masari, but you can see it did not do well on the metrics.
Karbowanec performance. Like Sumokoin, it started with the Monero or Cryptonote default (N=300) and was forced to fork. They chose N=17 on my recommendation and have been happy with it, but it does not appear nearly as good as it could have been. My selection of N was too small. The solvetimes being too high is a natural result of a low-N SMA, not a deeper problem; it just needed a 0.96 adjustment factor. Since I didn't adjust these charts for the high avg ST, the "blocks stolen" metric is too low. This also applies to Sumokoin and Masari above, but not to the head-to-head comparisons at the very top, which have the adjustment. There are 3 images of 7 charts each, covering 60,000 blocks each. This covers blocks 0 to 180,000.
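For reference, the plain SMA rule discussed in the last few posts can be sketched as follows (a sketch under my own naming, not any coin's actual code); the adjust parameter illustrates the 0.96 correction factor mentioned above:

```python
def sma_next_difficulty(difficulties, solvetimes, T, N=17, adjust=1.0):
    """Simple moving average rule: next_D = adjust * avg(D) * T / avg(ST)
    over the last N blocks.  A small N makes the average solvetime run a
    few percent above T, which a factor like adjust=0.96 can correct."""
    avg_d = sum(difficulties[-N:]) / N
    avg_st = sum(solvetimes[-N:]) / N
    return adjust * avg_d * T / avg_st
```

At steady state (all solvetimes equal to T and adjust=1.0) it returns the current average difficulty unchanged.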
This shows the performance of 7 coins' difficulty algorithms for 3,800 blocks. Notice in the additional posts below that Masari, with the new WHM N=60, is doing better than the runner-up (Zcash) ever did in the past year.
I would like to investigate other coins with other algorithms. Please send me your coin data to include your coin here.
The "% blocks 'stolen'" (aka "cheap blocks" aka "hash attacks") metric means "blocks suddenly obtained with a high hash rate, as evidenced by fast solvetimes in excess of the expected fast solvetimes". It triggers at approximately 2x the baseline hash rate, when the avg of 11 ST < 0.385xT, so it is like the converse of the other metric, "avg 11 solvetimes > 2xT".
"Delays" are "avg of 11 ST > 2.1xT", and the >2.1xT values are printed on the charts. The values are divided by 4: for example, an avg-11 ST of 3xT is 3/4 = 0.75 on the charts, and a 3x-baseline hash-rate attack is also 0.75.
The average 11 ST includes a lot of logic to prevent out-of-sequence timestamps from throwing off the calculation for the two metrics.
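The two metrics can be sketched from a list of solvetimes as follows (thresholds are the ones stated above; the timestamp-sanitizing logic is omitted, and the function names are mine):

```python
def rolling_avg11(solvetimes):
    """Averages of every run of 11 consecutive solvetimes."""
    return [sum(solvetimes[i:i + 11]) / 11
            for i in range(len(solvetimes) - 10)]

def chart_metrics(solvetimes, T):
    """'Hash attacks' trigger when an avg of 11 solvetimes drops below
    0.385*T (roughly 2x the baseline hash rate); 'delays' trigger when
    it exceeds 2.1*T.  Plotted values are the estimated multiple of
    baseline divided by 4, so a 2x event shows as 0.5."""
    attacks, delays = [], []
    for a in rolling_avg11(solvetimes):
        if a < 0.385 * T:
            attacks.append((T / a) / 4)   # hash-rate multiple / 4
        elif a > 2.1 * T:
            delays.append((a / T) / 4)    # delay multiple / 4
    return attacks, delays
```

For example, with T=120 a run of 11 solvetimes of 30 s implies a 4x hash rate and plots as 1.0, while a run of 300 s solvetimes is a 2.5x delay and plots as 0.625.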
See also
the best difficulty algorithms
how to choose N for the averaging window
methods for handling bad timestamps
introduction to difficulty algorithms