
Difficulty performance of various coins #6

Open
zawy12 opened this issue Dec 6, 2017 · 5 comments
zawy12 commented Dec 6, 2017

This shows the performance of 7 coins' difficulty algorithms for 3800 blocks. Notice in the posts below that Masari, with the new WHM N=60, is doing better than the runner-up (Zcash) ever did in the past year.

I would like to investigate other coins with other algorithms. Please send me your coin data if you would like your coin included here.

The "% blocks 'stolen'" metric, aka "cheap blocks" aka "hash attacks", means "blocks suddenly obtained with a high hash rate, as evidenced by fast solvetimes in excess of the expected fast solvetimes". It is triggered at avg of 11 ST < 0.385xT, which corresponds to roughly 2x the baseline hash rate, so it is like the converse of the other metric, "avg 11 SolveTimes > 2xT".

"Delays" are "avg of 11 ST > 2.1xT", and the >2.1xT values are printed on the charts. The values are divided by 4. For example, an avg 11 ST = 3xT is plotted at 3/4 = 0.75. A 3x-baseline hashrate attack is also plotted at 0.75.

The average 11 ST includes a lot of logic to prevent out-of-sequence timestamps from throwing off the calculation for the two metrics.
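A minimal sketch of how these two metrics could be counted (function and variable names are mine; the real script additionally includes the out-of-sequence-timestamp logic described above):

```python
def metrics(solvetimes, T):
    """Count 'stolen block' and 'delay' events from a rolling average
    of the last 11 solvetimes, using the thresholds from the post.
    Sketch only: no bad-timestamp handling."""
    stolen = delays = 0
    for i in range(11, len(solvetimes) + 1):
        avg11 = sum(solvetimes[i - 11:i]) / 11
        if avg11 < 0.385 * T:      # very fast blocks => suspected hash attack
            stolen += 1
        elif avg11 > 2.1 * T:      # very slow blocks => delay
            delays += 1
    return stolen, delays
```

With T=120, a run of normal solvetimes triggers neither counter, while a run of 12-second solvetimes trips the "stolen" threshold on every window.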

See also
the best difficulty algorithms
how to choose N for the averaging window
methods for handling bad timestamps
introduction to difficulty algorithms

7_coins_compared


zawy12 commented Dec 17, 2017

Zcash performance.

Zcash (and Hush, which follows) uses Digishield v3, which is:
next_D = avg(past 17 D) * T / (0.75*T + 0.25*avg(past 17 ST delayed 5 blocks))
ST = solvetime, T = target time. The delay is to use MTP to prevent out-of-sequence timestamps. There are POW limits on the denominator that are rarely activated and actually greatly hurt the results when they are reached.
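A minimal sketch of the Digishield v3 formula above (ignoring the rarely-hit limits on the denominator; the function name is mine, and the 5-block MTP delay of the solvetime window is assumed to be handled by the caller):

```python
def next_D_digishield_v3(D, ST, T):
    """Digishield v3 next-difficulty, per the formula in the post.
    D and ST are lists ordered oldest-first; ST should already be the
    MTP-delayed window. Sketch only, no denominator limits."""
    avg_D = sum(D[-17:]) / 17
    avg_ST = sum(ST[-17:]) / 17
    # 75% weight on target time tempers the response to solvetime swings
    return avg_D * T / (0.75 * T + 0.25 * avg_ST)
```

When the average solvetime equals T, the formula returns the average difficulty unchanged; solvetimes below T raise it, damped by the 0.25 weight.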

NOTE: the scale on the following charts has been changed from the above post. The following divides the avg 11 solvetimes and "hash attacks" by 4. A hash attack with 2x the baseline is at 0.5, and an avg 11 ST of 2xT is at 0.5. These are the minimum values that trigger a count and the displayed spikes.

I had to write a special script to handle timestamps when averaging Zcash's solvetimes because Zcash has had a lot of out-of-sequence timestamps. A complicating factor is that after a past time was assigned, a timestamp just 1 second after it was often assigned next. Sometimes the "+1" was assigned twice in a row, so my script got more complicated. I did not code for 3 times in a row, which you can see in every large >1.5 "hash attack" peak below. Those were not increases in hash rate, but three +1 values assigned in a row. See the data below the chart. So Zcash's "hash attack" values are even better than shown, indicating it did not attract hash attacks from accidental variation and responded fast enough to price changes.
zcash1

The above is the first 60,000 blocks. The following is a more recent 60,000, starting at block 170,000. Notice the hash attack spikes are not present, which means the timestamp problems mostly stopped (or were prevented) and the "blocks stolen" metric is more accurate. Also notice the avg solvetime is 1.004, a 0.4% error. This is the same as what I saw in experiments, and higher than the 0.2% Zcash has claimed; their figure starts with the very first blocks, which came really fast. From block 10,000 to 230,000 the average solvetime was 0.48% too high. I mention it to demonstrate that the experiments are accurate.

zcash2_170k-230k

Here is an example of a lot of bad timestamps being assigned. For some reason, miners assign the oldest possible timestamp, as can be seen by the large negatives; then usually a large positive "solvetime" follows, which is a correct timestamp. The correct timestamp minus the incorrect old timestamp appears to be a large solvetime. The first large negative solvetime results from the timestamp being set equal to the minimum allowed, which is the median time past (MTP, the median of the past 11 timestamps). The "1's" instead of large negatives result from 2 or more timestamps in a row being set to MTP when the MTP did not change (a +1 seems to be inserted by the code). The 1's occur only if MTP did not change, which is possible because of previous out-of-sequence timestamps.

Long story short
It appears about 30% of the mining power was assigning bad timestamps and was a constant source of hashing, and the bad timestamps did not help them in any way.

Long story
It appears clear that the first bad timestamp is almost always negative, which does not help a miner. In some algos it can drive difficulty up a little. In others, there is a way it could drive difficulty to zero, but Zcash does not have that code error. Also, the -670 to 0 sequence of 5 blocks are timestamps all at the MTP, and they appear to be wrong, as evidenced by the 1916 that follows, which appears to be a correct timestamp, as evidenced by the 9 reasonable solvetimes after it. This is strange because the hash rate did not appear to have increased. If anything, the hash rate was lower than average while the timestamps were bad.

If hash rate did not increase, then this big miner (or pool, or group of miners) is usually there mining. Judging by the frequency of negatives, they were about 30% of the hashrate. The long sequence of 1's is consistent with 30% because it happened only about once per 3,000 blocks. You can take any pair of timestamps that appear to be correct and are some number of blocks apart, subtract them, and divide by the number of blocks between them to estimate the solvetime during this and other unusual periods; the average solvetime comes out about correct.

The right column is the assigned timestamp for that block minus the MTP of the previous 11 timestamps. When it is "1", the timestamp was the oldest that would have been allowed.
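The MTP referenced above can be sketched as follows, assuming the Bitcoin-style median of the last 11 timestamps (the function name is mine):

```python
def mtp(timestamps):
    """Median time past: the median of the last 11 block timestamps.
    A block's timestamp must be strictly greater than this, so a miner
    stamping the minimum allowed value produces timestamp = MTP + 1 --
    the '1' values in the right column of the table above."""
    last11 = sorted(timestamps[-11:])
    return last11[len(last11) // 2]   # middle of the 11 sorted values
```

If several consecutive blocks are stamped at MTP + 1 while the median itself does not advance (possible after earlier out-of-sequence timestamps), each shows a "solvetime" of 1, matching the runs of 1's in the data.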

image


zawy12 commented Dec 18, 2017

Hush performance.

Hush has the same POW (Equihash) and difficulty algorithm as Zcash, but about 1% of the hash power, so it is a good test of whether the performance metrics are much worse. You can see its performance seems a little worse, but it had to deal with huge swings in hash rate in the beginning. Here are the first 60,000 blocks.

hush1

Here are the most recent 60,000 blocks. I do not know why, but it has been a lot worse over the past 100 days (57,600 blocks) even though the hash rate has been a lot more stable. And for some reason the past week has been a lot better (last half of the last chart).

hush_170k-223k


zawy12 commented Dec 18, 2017

Masari performance.

Masari started using the WHM algorithm on this page and is doing very well. This shows their history of problems and the new algorithm's results.

Masari first had Monero's default difficulty algorithm, which is like an SMA with N=730, then it switched to Sumokoin's pseudo-SMA with N=17, and recently it switched to WHM N=60. The N=17 algorithm that Masari and Sumokoin use is
next_D = avg(17 D) * T / (0.8*avg(17 ST) + 0.3*median(17 ST))
ST = solvetimes. The 17 D and ST are 6 blocks behind the most recent block due to using MTP to prevent timestamp manipulation. The 0.3 (instead of 0.2) is because the median of exponentially distributed solvetimes is ln(2) ≈ 0.693 of the mean.
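A minimal sketch of this pseudo-SMA (the function name is mine; I assume the formula carries the usual * T factor, as in the Digishield formula earlier, and that the caller supplies the 6-block-delayed windows):

```python
from statistics import median

def next_D_pseudo_sma(D, ST, T):
    """Sumokoin/Masari N=17 pseudo-SMA, per the formula in the post.
    D and ST are lists ordered oldest-first, already shifted back
    6 blocks by the caller (MTP). Sketch only."""
    avg_D = sum(D[-17:]) / 17
    avg_ST = sum(ST[-17:]) / 17
    med_ST = median(ST[-17:])
    # 0.8 + 0.3*ln(2) ≈ 1.008, so for exponentially distributed
    # solvetimes at the target the denominator averages ≈ T
    return avg_D * T / (0.8 * avg_ST + 0.3 * med_ST)
```

Blending the median with the mean makes the denominator less sensitive to a single huge solvetime than a plain SMA would be.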

I should mention that the 3 coins here using N=17 are my fault. N=30, if not N=60, would have been a lot better for all of them. But from this past mistake, I have a better estimate of how to select N.

masari1

The next image starts where the above image left off. The WHM N=60 (the second of the 3 on that page) was employed at block 63,000.

masari_60k-74k


zawy12 commented Dec 18, 2017

Sumokoin performance.

Masari above got its algorithm from Sumokoin, but Sumokoin has T=240 as opposed to Masari's T=120. Possibly this is why Masari had a lot more trouble with the same algorithm. When you go to a lower T, you need to raise N, and vice versa.

Like Masari, Sumokoin also started with a high N value; I'm not sure whether it was the same as Masari's (the Monero default). N=17 worked a lot better for them than for Masari, but you can see it did not do well on the metrics.

sumokoin1

sumokoin2


zawy12 commented Dec 19, 2017

Karbowanec performance.

Like Sumokoin, it started with the Monero/Cryptonote default (N=300) and was forced to fork. They chose N=17 on my recommendation and have been happy with it, but it does not appear nearly as good as it could have been. My selection of N was too small. The solvetimes being too high is the natural result of a low-N SMA, not a deeper problem. It just needed a 0.96 adjustment factor.

Since I didn't adjust these charts for the high avg ST, the "blocks stolen" metric is too low. This also applies to Sumokoin and Masari above, but not to the head-to-head comparisons at the very top, which include the adjustment.

There are 3 images of 7 charts each, covering 60,000 blocks each. This covers block 0 to 180,000.

karb1
karb2
karb3
