
Difficulty algo

Difficulty algo cleanup, based on suggestions by Eugene
aivve committed Nov 18, 2016
1 parent 82e7489 commit 231db5270acb2e673a641a1800be910ce345668a
Showing with 25 additions and 20 deletions.
  1. +8 −0 ReleaseNotes.txt
  2. +17 −20 src/CryptoNoteCore/Currency.cpp
ReleaseNotes.txt
@@ -1,3 +1,11 @@
+Release notes Karbowanec 1.3.0
+
+- Bytecoin core 1.0.11 transition
+- New difficulty algorithm
+- Tail emission
+- Daemon restricted RPC mode
+- Fees for open remote node
+
 Release notes 1.0.11
 - New Bytecoin Wallet file format

src/CryptoNoteCore/Currency.cpp
@@ -407,10 +407,13 @@ namespace CryptoNote {
 difficulty_type Currency::nextDifficulty(uint8_t blockMajorVersion, std::vector<uint64_t> timestamps,
   std::vector<difficulty_type> cumulativeDifficulties) const {
-  if (blockMajorVersion >= BLOCK_MAJOR_VERSION_2) {
   // new difficulty calculation
   // based on Zawy difficulty algorithm v1.0
   // next Diff = Avg past N Diff * TargetInterval / Avg past N solve times
   // as described at https://github.com/monero-project/research-lab/issues/3
+  // Window time span and total difficulty is taken instead of average as suggested by Eugene
-  // default CN with smaller window DIFFICULTY_WINDOW_V2
-  // without DIFFICULTY_CUT it gives very similar results to the Zawy's formula below
+  if (blockMajorVersion >= BLOCK_MAJOR_VERSION_2) {
   size_t m_difficultyWindow_2 = CryptoNote::parameters::DIFFICULTY_WINDOW_V2;
   assert(m_difficultyWindow_2 >= 2);
@@ -429,37 +432,31 @@
   sort(timestamps.begin(), timestamps.end());
-  /* uint64_t timeSpan = timestamps[length - 1] - timestamps[0];
+  uint64_t timeSpan = timestamps.back() - timestamps.front();
   if (timeSpan == 0) {
     timeSpan = 1;
   }
-  difficulty_type totalWork = cumulativeDifficulties[length - 1] - cumulativeDifficulties[0];
+  difficulty_type totalWork = cumulativeDifficulties.back() - cumulativeDifficulties.front();
   assert(totalWork > 0);
-  uint64_t low, high;
-  // uint64_t nextDiffZ = totalWork * m_difficultyTarget / timeSpan;
+  uint64_t low, high;
   low = mul128(totalWork, m_difficultyTarget, &high);
   // blockchain error "Difficulty overhead" if this function returns zero
   if (high != 0 || low + timeSpan - 1 < low) {
     return 0;
   }
-  uint64_t nextDiffAlt = (low + timeSpan - 1) / timeSpan; */
-  // return nextDiffAlt;
-  // Zawy difficulty algorithm v1.0
-  // next Diff = Avg past N Diff * TargetInterval / Avg past N solve times
-  // this gives almost same results as modified CN version without cut above
-  }
-  uint64_t avgWindowDiff = (cumulativeDifficulties.back() - cumulativeDifficulties.front()) / cumulativeDifficulties.size();
-  uint64_t avgSolveTime = (timestamps.back() - timestamps.front()) / timestamps.size();
-  uint64_t nextDiffZ = avgWindowDiff * m_difficultyTarget / avgSolveTime;
+  uint64_t nextDiffZ = low / timeSpan;
   // minimum limit
   if (nextDiffZ <= 100000) {
     nextDiffZ = 100000;
   }
   return nextDiffZ;
   // end of new difficulty calculation
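
For readers skimming the thread, a minimal standalone sketch of the rule this commit settles on (window time span and total work instead of averages); names are simplified, and unsigned __int128 stands in for the repo's mul128 helper:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // next_D = totalWork * T / timeSpan over the DIFFICULTY_WINDOW_V2 blocks.
    uint64_t nextDifficultySketch(std::vector<uint64_t> timestamps,
                                  const std::vector<uint64_t>& cumulativeDifficulties,
                                  uint64_t target /* T, e.g. 240 s */) {
        std::sort(timestamps.begin(), timestamps.end());
        uint64_t timeSpan = timestamps.back() - timestamps.front();
        if (timeSpan == 0) timeSpan = 1;  // guard against identical timestamps
        uint64_t totalWork = cumulativeDifficulties.back() - cumulativeDifficulties.front();
        unsigned __int128 product = (unsigned __int128)totalWork * target;
        uint64_t nextDiffZ = (uint64_t)(product / timeSpan);
        if (nextDiffZ <= 100000) nextDiffZ = 100000;  // minimum difficulty limit
        return nextDiffZ;
    }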

165 comments on commit 231db52

@aivve (Collaborator) commented Aug 1, 2017

There is a default CryptoNote timestamp limit: 2 hours from the average of the 60 previous blocks.

@zawy12 commented Aug 1, 2017

I think < 9xT and > -8xT from the previous timestamp is best.

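A sketch of what these two limits look like in code; the 2-hour future limit and 60-block check window are the stock CryptoNote rules (which compare against the median of the last 60 blocks), while the ±kT clamp is zawy12's suggestion above, not shipped code:

    #include <cstdint>
    #include <ctime>

    // Stock CryptoNote-style acceptance check (sketch): reject a block whose
    // timestamp is more than 2 hours in the future or does not exceed the
    // median of the last 60 block timestamps.
    const uint64_t FUTURE_TIME_LIMIT = 2 * 60 * 60;  // seconds
    bool timestampAcceptable(uint64_t ts, uint64_t medianOfLast60) {
        uint64_t now = (uint64_t)time(nullptr);
        return ts <= now + FUTURE_TIME_LIMIT && ts > medianOfLast60;
    }

    // zawy12's proposed clamp, applied only inside the difficulty calculation
    // (T = target solvetime):
    int64_t clampSolvetime(int64_t st, int64_t T) {
        if (st > 9 * T)  st = 9 * T;   // limit forward-stamped solvetimes
        if (st < -8 * T) st = -8 * T;  // limit negative (out-of-order) solvetimes
        return st;
    }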

@seredat (Owner) commented Aug 1, 2017

Yes, there's the default CryptoNote two-hour future time limit.

@zawy12 commented Aug 1, 2017

Zawy v1c is a lot better than I thought. It's very hard to see any benefit by eye, but when measuring it with the short routine I gave in a previous post, a single number shows that it is better. It beats N=8 a little bit. To my surprise, it even beats N=8 and N=4 in responding to post-attack delays.

The titles should be a little clearer: they are looking for incorrect difficulty in the previous 10 blocks, so you can see the lines are shifted to the right.

[attached plots 1-5]

@zawy12 commented Aug 1, 2017

Actually, with some tweaking, I can make v1b N=8 look almost exactly like v1c. The tweak is replacing the divisor 1/(1+0.693/N) with a multiplier of 0.89. So maybe v1b with N=8 is best due to simplicity. On both of these, if a +9xT timestamp is used when difficulty is accidentally 0.5 of the correct value, then it will send difficulty down to 0.25. The next block will get it back to 0.5. This is unavoidable. v3c will also have problems with a +9x timestamp because it also gives more weight to the most recent blocks. v3b will not have that problem, but it responds very slowly.

[attached plots 6-7]

@zawy12 commented Aug 1, 2017

Summary of versions:

  • v1 = simple avg
  • v1b = simple avg that refines limits and uses the 1/(1+0.693/N) factor
  • v1c = v1b with an avg of 3 different averaging windows; higher weight to most recent blocks
  • v1d = canceled; it became v1c
  • v2 = variable averaging window
  • v3 = new "discrete" avg
  • v3b = v3 with a trigger to N=4 simple avg if a post-attack is detected
  • v3c = v3b with higher weight given to most recent blocks

@seredat (Owner) commented Aug 1, 2017

> v3b = v3 with trigger to N=4 simple avg if post-attack is detected.

In our implementation this was triggering when blocks were mined too fast, rapidly raising difficulty and dropping it after a long delay, but only partially, because the trigger factor ceased turning it on after the gap. This must be because we did not get the 'wait' variable working.

Let's try to correctly implement v1c with limits and negative timestamps.

@zawy12 commented Aug 1, 2017

I think I have done all this work and shown that my original post must be true: it is not possible to do better than a simple average, and low N is best. v1b N=8, replacing 1/(1+0.693/N) with a *0.89 reduction factor, seems to be the best. But it's good to understand the timestamp limits and to finally have a good and quick way to measure the effectiveness of an algorithm, assuming it maintains a good average ST and that the ST does not increase as difficulty increases (the SQRT formula had this problem). Each of the above was tested many different ways, trying different constants. For example, the v3's were tested with things other than c*e^(k*x), where c, e, and k were changed. I also did some testing on using least-squares curve fitting and a lot of work on using slope prediction.

@zawy12 commented Aug 1, 2017

> In our implementation this was triggering when blocks were mined too fast,

Sorry, replace

if (std_dev > 2.1 OR wait > 0 ) then

with

if (std_dev < -2.1 OR wait > 0 ) then

@seredat (Owner) commented Aug 1, 2017

Shouldn't we apply limits to all timestamps in window N, not just on first occurrence? When they move down the window N they also affect difficulty.

@zawy12 commented Aug 1, 2017

I think if the timestamp limit is applied, it should be written to the block as the true timestamp so that it does not have to be figured out again.

@seredat (Owner) commented Aug 1, 2017

OK, I guess it's easier to loop through N and correct it only for the difficulty calculation.
You made fantastic revelations, enough material for a good article!
We will try different versions on testnet.

@seredat (Owner) commented Aug 1, 2017

Testing v1c without correction of negative difficulties with averages. Found this interesting: after forward-stamping a block one hour ahead, the too-large solve time and the negative solve time do not counterweigh each other when the first one goes past the M window:

    Solve time: 242
    Solve time: 284
    Solve time: 291
    Solve time: 127
    Solve time: 157
    Solve time: 54
    Solve time: 339
    Solve time: 2160
    Solve time: -1863
    Solve time: 3
    Solve time: 369
    Solve time: 556
    Solve time: 30
    Solve time: 546
    Solve time: 287
    Solve time: 53
    Temp N T: 3635, M T: 19, O T: 916
    Difficulty for N: 3271.91, M: 248966, O: 1864.06
    Next difficulty: 84700.6

Upd. This results in high difficulty. Average difficulty for that hashrate on the testing machine is 4000.

@zawy12 commented Aug 1, 2017

OK, yeah, you've found something really important. This will be a problem in v1c even at the beginning, even if STs are stored on the blockchain. [edit: but not in v1b, because it has the limits on next_D which provide protection] The negative difficulties still need to be prevented in the v1 methods, and the 2x rise and probably 0.5x fall limits on diff should also not be abandoned.

This makes the v3 methods interesting in not needing the limits, but they are more difficult to write in order to prevent negative solvetimes, and the "wait" would need to be handled through another loop.

So v1c should not lose the way v1b handles limits.

@seredat (Owner) commented Aug 1, 2017

I think in v1b the same will occur when a long gap passes beyond window N but the negative solve time is still there. There should be a spike in difficulty.

@zawy12 commented Aug 1, 2017

Yes. That's why the 2x and 0.5x limits are needed. Don't forget the negative could be the one that comes first, so difficulty rises when it enters the window and lowers when it exits. I had never thought about the "tail end" like this, so I'm glad you found it. This tail-end effect is not corrected by the block behind it, so it will stay either too high or too low for too long. I had previously chosen the +6x and -5x for as long as N>=12 because I realized there might be some effects I was not considering. Even with N=12 there is a chance something bad could happen. So I definitely can't use 9x and -8x with a window of only N=8. Maybe the tail end can easily be fixed: if the N or N+1 block is negative, then the time used for it would be the average of it plus the one in front of it and the one behind it. But today I saw 3 blocks in a row that were negative in HUSH about 60 days ago. They were all small, but it shows it can happen. I think v1b as it is provides good and safe protection.

@seredat (Owner) commented Aug 1, 2017

Negative solvetimes counterweighting long gaps, and vice versa, is an interesting idea. It would be a pity to abandon it without trying to work around the situation where one half of the pair falls off the window.

Edit. In the meanwhile you actually proposed a workaround :)

@zawy12 commented Aug 1, 2017

This week I have a lot to do, but I'll work on it. The problem is that I need to think about what effect a "fix" will have. What if 2 are negative in a row? It may make my fix kind of dangerous. On the other hand, the limits on the next_D rise and fall protect against the disasters.

@zawy12 commented Aug 1, 2017

It is interesting and strange to me that it is a problem in the front for only one block, but a bigger problem on the tail.

[edit: correction. It affects the end the same way as the beginning. I would not make these changes below that I suggested.]

Maybe just do this at the correct place:

# fix to v1b and v1c
if ST[N] < 0 then ST[N]=(ST[N-1] + ST[N] + ST[N+1])/3;  
if ST[N+1] < 0 then ST[N+1]=(ST[N] + ST[N+1] + ST[N+2])/3;  

And in another place for v1c. Of course it should be a subroutine in v1c.

# additional fix for v1c
if ST[M] < 0 then ST[M]=(ST[M-1] + ST[M] + ST[M+1])/3;  
if ST[M+1] < 0 then ST[M+1]=(ST[M] + ST[M+1] + ST[M+2])/3;  

@zawy12 commented Aug 2, 2017

I was not thinking about the tail end correctly. It will cause a problem on the end for only 1 block, the same way as at the beginning but in the opposite direction. So I would keep v1b like it is.

@zawy12 commented Aug 2, 2017

I modified v1b above a little bit to be what I think is best. I'm going with N=12. N=8 is my logical choice, but it requires a small timestamp limit and adds variation. Logically N=8 seems better, but N=12 seems safer in some vague, conservative sense.

Since v3b/c depend on N=4 with a simple average, and since the needed negative-solvetime routine does not protect against a large forward timestamp, the lower limit on next_D is needed. I've made that change. If bad timestamps do not cause a problem with v3c, then it is better than v1b with N=12 but not better than N=8. For simplicity reasons, v1b seems best.

@zawy12 commented Aug 4, 2017

There was one other thing I wanted to try again and figure out why it does not work:

I tried to detect and respond to step functions with v1b by looking at the last 1/2 N blocks for an increase or decrease from the previous 1/2 N, and it works great when the attack lasts >N. The problem is that attacks like to last N/2 (before difficulty rises), and then the results are worse than leaving it alone. Checking N/4 blocks compared to the previous N/4 blocks works great: any attack is going to quit after ~N/8 blocks, or definitely by N/4 blocks. But it is more unstable during constant hashrate, jumping up 3x on occasion for 1 block. You can lower this only by sacrificing the ability to detect a hashrate change, and the trade-off is almost exactly equal and opposite. It turns out using ~N/3 responds just as well, with better stability during constant hashrate.

Also, looking at short windows for step changes opens it up to more timestamp manipulation, although the damage is limited by the limit on diff change per block and erased by the next block.

@zawy12 commented Aug 8, 2017

There seems to be a big error in getting away from avg(ST) and using [max(ST) - min(ST)]. Consider these timestamps, where the 5th one was a lie. The real solvetime for all of them was 1:

1,2,3,4,10,6

The measured solve times were
1,1,1,6,-4

(max-min)/5 = 1.8, but the average is (1+1+1+6-4)/5 = 1.

I'll correct v1b to reflect this.

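A quick standalone check of this arithmetic (hypothetical snippet, not code from the repo):

    #include <cstdio>
    #include <vector>

    int main() {
        // Timestamps where the 5th is a lie; every real solvetime was 1.
        std::vector<int> ts = {1, 2, 3, 4, 10, 6};
        int sum = 0, mx = ts[0], mn = ts[0];
        for (size_t i = 1; i < ts.size(); ++i) {
            sum += ts[i] - ts[i - 1];            // measured solvetimes: 1,1,1,6,-4
            if (ts[i] > mx) mx = ts[i];
            if (ts[i] < mn) mn = ts[i];
        }
        printf("(max-min)/5 = %.1f\n", (mx - mn) / 5.0);  // 1.8: distorted by the lie
        printf("avg         = %.1f\n", sum / 5.0);        // 1.0: the lie cancels out
        return 0;
    }

The sum of consecutive differences telescopes to the last timestamp minus the first, which is why the fake timestamp cancels out of the average, while max-min picks up the outlier 10.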

@aivve (Collaborator) commented Aug 8, 2017

Yes, you are right. This should reduce the damage from a wrong timestamp. Actually, I started to use timespans in the latest attempts to implement the different versions. I'll try to correct v1b.

@zawy12 commented Aug 20, 2017

I finally realized timestamp limits are not needed if I give appropriate limits to the rise and fall of next_D. Long and short timestamps near the max would change the next D more than what we should allow for reasonable attacks. For example, an attack of X=10x with a nice small averaging window of N=10 would be expected to increase and decrease next difficulty by X^(+/-2/N) = 0.63 to 1.58. The "2" in the exponent is statistically correct and not arbitrary: I said "expected", which means the average change per block plus or minus the standard deviation per block, which is SQRT(1)=1. If we allow a pretty short timestamp limit of 6x for N=10, then it can decrease difficulty 15/10 = 50%. A limit of 10x would cause a 20/10 = 100% increase in 1 block. So the max change we can expect a huge attack to cause is less than what timestamp manipulation allows, and we can just use the limits on next_D. Here's the example for N=20: 10^(2/20) = 26% increase should be our limit. The worst choice of timestamp limit, 6x, gives 25/20 = 25% increase.

Does this allow it to drop fast enough after an attack? We statistically expect it to drop fast enough. "Expect" (the 1+1=2 in the exponent) gives 84% confidence (single-tail) that our X^(-2/N) next_D limit is small enough. I would have thought this is enough, but we know the 6x limit was not ideal. Maybe your 6x is what I would call 5x. To be more sure, I think raising the 84% to 90% would be better. Looking at Wikipedia, I see 1+1.28 = 2.28 gives 90% correctness on a single tail.

My protection against negative D is also not needed thanks to these limits. So I corrected Zawy v1b above.

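A worked check of those limit numbers (assumed standalone snippet):

    #include <cmath>
    #include <cstdio>

    int main() {
        double X = 10.0;                    // expected attack size, multiple of base hashrate
        int windows[] = {10, 20};           // the two window sizes from the example
        for (int N : windows) {
            double up   = pow(X, 2.0 / N);  // allowed rise per block
            double down = pow(X, -2.0 / N); // allowed fall per block
            printf("N=%d: rise limit %.2f, fall limit %.2f\n", N, up, down);
        }
        // Prints: N=10: rise limit 1.58, fall limit 0.63
        //         N=20: rise limit 1.26, fall limit 0.79
        return 0;
    }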

@seredat (Owner) commented Aug 20, 2017

That's amazing. I was playing around with timestamp limits at the beginning and the end of the window, for when one of two counterweighting solve times (a fake long solve time and the correct negative one, or vice versa) goes beyond the window. But I will put it aside and will make a testnet with the latest v1b.

@zawy12 commented Aug 20, 2017

Let's call Zawy v1b with N=8 and X=5 "Gandalf's difficulty algorithm", as in "You shall not pass!" I really recommend this for all small coins.

# Gandalf's difficulty algorithm v1.  "Hash attacks shall not pass!"
# This is Zawy v1b with low N=8 (and X=5)
# Timestamp manipulation protection is not needed due to limits on next_D.  
# Negative timestamps must be allowed.
# 0.92 keeps avg solvetime on track but makes median come out 8% low. I can't find a fix.
# Increase N to 30 if you want correct avg & median, but expect 10% of blocks in small 
# coins to be stolen.   Median mixed in w/ avg will not help. 
# Do not use sum(D's) * T / [max(timestamp)-min(timestamp)] method.
#  5% of block solvetimes are > 5x target solvetime under constant HEAVY attack.
#  1.3% of block solvetimes are > 5x target solvetime when hashrate is constant. 
#  3% of blocks will be < 0.5x or > 2x the correct difficulty on accident.
#  When N=15 these numbers are 5%, 1%, and 0.5%.   
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a

avg(last 8 solvetimes) = 1 if avg(last 8 solvetimes) < 1; # prevent divide by zero 
next_D = avg(last 8 Ds) * T / avg(last 8 solvetimes) * 0.92;
next_D = 1.6*previous_D  if next_D/previous_D > 1.6;
next_D = 0.63*previous_D  if next_D/previous_D < 0.63;

The only potential problem I see is that the D will swing below 0.5 of the correct D a lot, attracting an attack. But they can only get ~4 blocks at low difficulty. A question I need an answer to is this: how short an attack can an attacker tolerate before it is not worth switching to the coin? I mean, does he lose 1 or 2 blocks when switching between coins?

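A possible C++ rendering of that pseudocode (a sketch under the stated constants; hypothetical, not code from the Karbowanec tree):

    #include <vector>

    // Zawy v1b / "Gandalf" with N=8. Index 0 is the most recent block.
    // T is the target solvetime; solvetimes may be negative and are kept as-is.
    double gandalfNextD(const std::vector<double>& D,
                        const std::vector<double>& ST, double T) {
        const int N = 8;
        double avgD = 0, avgST = 0;
        for (int i = 0; i < N; ++i) { avgD += D[i] / N; avgST += ST[i] / N; }
        if (avgST < 1) avgST = 1;                // prevent divide by zero
        double nextD = avgD * T / avgST * 0.92;  // 0.92 keeps avg solvetime on track
        double prevD = D[0];
        if (nextD > 1.6 * prevD)  nextD = 1.6 * prevD;   // limit rise
        if (nextD < 0.63 * prevD) nextD = 0.63 * prevD;  // limit fall
        return nextD;
    }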

@zawy12 commented Aug 21, 2017

Trying to determine whether they sent a long positive timestamp or a negative one seems to require making an assumption. Making an assumption will make the algorithm asymmetrical. It will enable them to do the opposite of what you expect, in a way that cancels the benefit or enables them to do more damage. They can get 1 block at less difficulty at the beginning of the window, and when that block passes out, everyone else will be left with 1 block whose difficulty is too high.

N=8 may not be worthy of the "Gandalf" name, but N=5 would be. I am frightened of N=5 because 1% of the time the difficulty will be 1/3 of the correct difficulty on accident. They will definitely start when it is 1/3, and let's say they assign a long timestamp on that block. With N=5 I would allow next_D limits of 0.5 to 2.0. So they can get the 2nd block at 1/6 difficulty with a bad timestamp. If they have >50% (which is normal) they could keep making it go lower, but this is a problem with any N, so I will assume they assign only the 1st timestamp bad (or that someone else got the 2nd timestamp), cancelling the 1st one. The 3rd block difficulty will be 1/3 the correct difficulty. But if they are a 3x attacker, then the "correct" difficulty is 3x the previous difficulty, so the 3rd block is really only 1/9 their difficulty. The 4th block, with the 2*next_D limit, is 2/9. The 5th block is 4/9. The 6th block is 8/9, so they are finally paying the correct difficulty. But thanks to you I now know the 6th block is when their 1st bad timestamp passes out of the window, so really the 6th block will be 16/9 the correct difficulty... if they were still present. So we assume they leave, and everyone is stuck with 16/3 = 5.3x too-high difficulty in the 6th block. The 7th block means the negative passes out, and the 1st block of the new window was low hashrate, so the 7th block will be 0.822*8/3 = 2.2x too high. 0.822 comes from (3/8)^(1/5), which is the per-block rate at which it lowers back to the correct D: 2.2, 1.8, 1.5, 1.22, 1.0. OK, my point is that with N=5 they get ~4 blocks free at max. Without timestamp manipulation (like most) they would get ~2.5.

With N=8, they can get ~5.5 blocks with a bad timestamp and ~4 blocks without it. With N=17, they can get about 10 and 8.5.

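The per-block recovery rate in that walk-through can be reproduced directly (hypothetical snippet):

    #include <cmath>
    #include <cstdio>

    int main() {
        // After the attack the difficulty sits at 8/3 of correct and decays
        // by (3/8)^(1/5) per block over the next 5 blocks.
        double rate = pow(3.0 / 8.0, 1.0 / 5.0);  // ~0.822
        double d = 8.0 / 3.0;
        for (int block = 1; block <= 5; ++block) {
            d *= rate;
            printf("block %d: %.2fx correct difficulty\n", block, d);
        }
        // Prints ~2.19, 1.80, 1.48, 1.22, 1.00, matching 2.2, 1.8, 1.5, 1.22, 1.0 above.
        return 0;
    }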

@zawy12 commented Aug 22, 2017

I just realized that when the difficulty passes out of the rear window, the opposite effect it has might exactly cancel the effect of a <= 50% timestamp attack.

I finally figured out Zcash's difficulty algorithm and can identify the problems: MyHush/hush#24

It's very slow to respond and is basically:

next_D = AVG(D) * 150 / ( 0.75*150 + 0.25*AVG(ST) )
D = previous 1 to 17 difficulties 
ST = previous 6th to 22nd solvetimes

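In code form, that reading of Zcash's rule would look roughly like this (sketch; the newest-first indexing is an assumption):

    #include <vector>

    // zawy12's summary of Zcash's algorithm, T = 150 s.
    // D: the previous 1..17 difficulties; ST: the previous 6th..22nd solvetimes.
    double zcashNextD(const std::vector<double>& D, const std::vector<double>& ST) {
        const double T = 150.0;
        double avgD = 0, avgST = 0;
        for (int i = 0; i < 17; ++i) avgD += D[i] / 17.0;
        for (int i = 5; i < 22; ++i) avgST += ST[i] / 17.0;  // offset window: 6th..22nd
        return avgD * T / (0.75 * T + 0.25 * avgST);
    }
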
@zawy12 commented Aug 23, 2017

Gandalf and Zawy v1b were modified to include
avg(last N solvetimes) = 1 if avg(last N solvetimes) < 0.1; # prevent divide by zero
because an attacker could assign a negative timestamp to cause a divide-by-zero error.

@seredat (Owner) commented Aug 23, 2017

I am aware of it.

    double avgST = std::accumulate(solveTimes.begin(), solveTimes.end(), 0.0) / solveTimes.size();

    // just in case
    if (avgST == 0)
        avgST = T;
    if (avgST < 0)
        avgST *= -1;

@zawy12 commented Aug 23, 2017

I can't think of any reason to convert < 0 into a positive, nor to turn 0 into T. I would do

    if (avgST < 1)
        avgST = 1;

@zawy12 commented Aug 23, 2017

I did almost the same thing with
next_D = 1.2*previous_D if next_D < 1;
because that implies the ST was 0 or negative.
But I was not comfortable with it.

@seredat (Owner) commented Aug 23, 2017

Yes, I noticed that with D and left it intact in the current version that I am testing. I will change avgST as recommended.

@zawy12 commented Aug 23, 2017

I applied v1b N=8 to real Zcash data from when they were having a lot of timestamp manipulation. I did it with and without a +/- 7x timestamp limit. The std dev was less (better) without using the timestamp limit.

@zawy12 commented Aug 25, 2017

In getting more serious about applying N=8 to HUSH, I modified my spreadsheet to simulate attacks that trigger on low D. This shows N < 30 can be anything and it does not matter (the same maximum harm can be done without regard to N) if the attacker triggers on the correct low D. But when N/2 is 1/2 of the time it takes to switch which coin is being attacked, then that low N is twice as good as the higher Ns.

MyHush/hush#24 (comment)

@zawy12 commented Aug 29, 2017

Being able to "hack" my own algorithms for profit made me look for something more durable against attacks that trigger on low difficulty. I like this one best. Please let me know if there is an error. I have only implemented it in a spreadsheet and have not tested it against bad timestamps.

[edit: once again, no, this seems to be only as good as Zawy v1b, not better]

# Zawy v4 difficulty algorithm for small coins
# Designed for small coins suffering "hash attacks". 
# Summary:
# Use Zawy v1b N=12 if difficulty needs decreasing.
# Use simplified Zawy v3 M=6 if difficulty needs increasing.
# Zawy v3 gives a more stable rise and is reluctant to jump up like N=12 at N~9.  
# This creates a more stable rise without slowing the average rise of an N=12, 
# so it is less likely to overshoot the difficulty.  By not overshooting 
# the difficulty, the v1b N=12 after an attack will not accidentally oscillate back 
# down below the correct difficulty, which would otherwise invite another attack and oscillations.
#
# For background and history of the development see
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
#
# D = difficulty, T = TargetInterval, TS = timestamp, 

#####  Constants   #####
N=12;  # N=12 recommended for fast but stable response.
M=int(N/2);
X=8;  # Size of hash jumps/falls expected as multiple of avg hashrate. X=8 for small coins.  
limit=X^(2/N); # used to limit the rise and fall of difficulty.
adjust = 0.91; # 0.91 for N=12.  Helps keep avg solvetime.

avg_ST=0;  K=0; sumHashrate=0;
for (i=1 to M ) {  avg_ST +=ST[i]/M;  }

#### Do Zawy v3 if the difficulty needs to increase. ###

if ( avg_ST < T )    { 
   for (i=1 to M) {
        # It can't do negative numbers. Discarding a negative means 
        # there might be a big positive before or after it that also needs discarding.
        if ( ST[i] > 0 and ST[i] < 2*avg_ST) {   
               K++;
               sumHashrate += D[i]/ST[i] * (1-e^(-ST[i]/T)) * e^(-ST[i]/T);
        }
   }   
   return next_D = T * 2.51 * sumHashrate / K; 
} 

#### Do Zawy v1b if difficulty needs to decrease  ####

else {
avg_ST=0; avg_D=0; 
for (i=1 to N) {  
   avg_ST +=ST[i]/N; 
   avg_D += D[i]/N;
}
# prevent divide by zero, just in case
avg_ST = 1 if avg_ST < 1; 
next_D = avg_D * T / avg_ST * adjust;   

# Do not use the following, even though it looks like the N's divide out:
# next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)];

# limit it against outliers and timestamp errors / manipulation
previous_D=D[1]; #  for clarity
next_D = limit*previous_D  if next_D/previous_D > limit;
next_D = 1/limit*previous_D  if next_D/previous_D < 1/limit; # safer to make it symmetrical

return next_D;
}

@aivve (Collaborator) commented Aug 30, 2017

I will test it and report back.

@zawy12 commented Aug 30, 2017

Hold off on testing it; I am still having problems and making a lot of changes.

A completely different idea: using some byte of the previous block's solve that is random (like a byte in the solution hash), trigger difficulty to go to 1/20 of the correct value for 1 block. Then the next 2 difficulties will be made 47.5% too high to make up for the "instant solve", to keep the avg solvetime on track. The probability of the trigger is 33%, so that on average the "instant solve" occurs 1/6 of the time. (The 2 blocks after the low one are not subject to the trigger.) So a 16.7% reward goes to constant-on miners at the expense of miners triggering on low difficulty. I assume they can't switch coins in the 12 seconds it will take regular miners to find the "instant block". If they take the bait, they are then faced with 2 difficulties 47.5% higher than our average. I can make a difficulty algorithm respond in 4 blocks and stop the trigger if the hashrate has increased a lot in those 4 blocks. This would not work if the difficulty algorithm is not really fast. Another idea is to put an encumbrance on 75% of mined coins that says they can't be transferred until 210,000 blocks have passed since the mined block, so miners would be heavily invested in the success of the project for the year it takes to get there.

I want constant-on miners to make more than 16.7%, but I can't trigger it more often. Actually the trigger should be more like 1 in every 12 blocks so that I can easily measure an increase in hashrate. I want constant-on miners to make 2x more than everyone else, so if it happens once every 12 blocks, then the reward should be about 10x more than the other blocks.

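A sketch of that trigger as a state machine (purely illustrative; the 33% probability, 1/20 drop, and 47.5% bump are the numbers from the comment, everything else is assumed):

    #include <cstdint>

    // Use a byte of the previous block's solution hash as the random trigger.
    // After a triggered block, the next two blocks are exempt and carry the
    // compensating higher difficulty.
    double baitedDifficulty(double correctD, uint8_t hashByte, int& exemptLeft) {
        if (exemptLeft > 0) {         // one of the 2 blocks after an "instant solve"
            --exemptLeft;
            return correctD * 1.475;  // 47.5% too high, to keep avg solvetime on track
        }
        if (hashByte < 85) {          // 85/256 is roughly the 33% trigger chance
            exemptLeft = 2;
            return correctD / 20.0;   // bait block at 1/20 the correct difficulty
        }
        return correctD;
    }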

@seredat (Owner) commented Aug 30, 2017

I'm still thinking about punishing by reducing the scheduled reward in direct proportion to the deviation from the target solve time. This will encourage keeping a steady hashrate. In the case of small average intervals the reward will be proportionally small.

@zawy12 commented Aug 30, 2017

I forgot about that. It sounds good, but I wonder if there are consequences. They will be motivated to not report a solved block. If everyone estimates that submitting late is more likely to be profitable, the difficulty could be sent a little lower. Miners not doing this would start getting more money for not trying to trick it, so the difficulty will not keep going low. It sounds like a great idea.

@seredat (Owner) commented Aug 30, 2017

It's used in other currencies; for instance, in Dash the block reward is controlled by: 2222222/(((Difficulty+2600)/9)^2)

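Written out as code, that Dash rule is just (sketch, units as quoted):

    // Dash-style difficulty-dependent block reward, as quoted above:
    // reward = 2222222 / (((difficulty + 2600) / 9)^2)
    double dashReward(double difficulty) {
        double x = (difficulty + 2600.0) / 9.0;
        return 2222222.0 / (x * x);
    }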

@zawy12 commented Aug 30, 2017

I don't see solve time in that. You're talking about basing it on solve time.

For example, Coins = k * ST / T

@seredat (Owner) commented Aug 30, 2017

Yes, they are taking difficulty, and difficulty depends on solve time too. I am not sure how it plays with the emission curve.

@zawy12 commented Aug 30, 2017

I think Coins = k * ST / T is perfect. If you come in averaging 3x faster ST, then you get 3x fewer coins. The problem is timestamps. They can fake a 5x timestamp, and the block after it can't correct their timestamp unless an encumbrance is placed on the coins. That is, the other blocks detecting the timestamp manipulation would invalidate their coins, but I believe that requires smart-contract ability.

@seredat (Owner) commented Aug 30, 2017

What does 'k' stand for? We have the base reward scheduled by the emission curve for the current block. We need to adjust it according to solve time.

@zawy12 commented Aug 30, 2017

k = base coins per block. So k average coins per block is accurate if avg ST is accurate.

@zawy12 commented Aug 30, 2017

So k average coins per block is accurate if avg ST is accurate.

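Putting the last few comments together as a sketch (the clamp on ST is an assumption, not from the thread):

    #include <algorithm>
    #include <cstdint>

    // Reward scaled by solvetime: reward = k * ST / T, where k is the base
    // coins per block from the emission curve and T the target solvetime.
    uint64_t solvetimeReward(uint64_t k, int64_t solvetime, int64_t T) {
        // Clamp to [0, 5T] to blunt faked timestamps (assumed safeguard).
        int64_t st = std::max<int64_t>(0, std::min<int64_t>(solvetime, 5 * T));
        return (uint64_t)((unsigned __int128)k * (uint64_t)st / (uint64_t)T);
    }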

@seredat

seredat (Owner) replied Aug 30, 2017

As it is, this formula makes the reward smaller when solve time is smaller than T and increases the reward when solve time is bigger than T.

Maybe something like this:

dev = (ST - T) / T
if (dev < 0)
    dev = dev * -1
R = R - R * dev
if (R < 0)
    R = 0

where dev is deviation, R is reward.
Maybe it should be toned down, not so draconian: dev = (ST - T) / T * 0.693

Upd. We can also make it asymmetrical, i.e. when ST < T punish more than if ST > T.
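
A runnable sketch of that penalty (Python; the 0.693 damping factor is the optional toning-down from the comment above):

def penalized_reward(base_reward, solve_time, target, damping=1.0):
    # dev is the relative deviation of the solve time from the target T.
    dev = abs(solve_time - target) / target * damping  # damping=0.693 tones it down
    return max(base_reward * (1.0 - dev), 0.0)         # reward never goes negative

# e.g. with damping=1.0, a solve time of 2*T gives dev=1 and a zero reward.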

@zawy12

zawy12 replied Aug 30, 2017

It would average out OK if the miner or pool gets >30 blocks. Yours is not right. To make it not draconian and come out right, it will need to use something like the Zawy v3 method, because the median is not equal to the average. I have not tested this but I think it might work. The 0.628 may need changing.

R = k * 0.628 * 4 * (1 - e^(-ST/T)) / e^(-ST/T)

@seredat

seredat (Owner) replied Aug 30, 2017

I think the average solve time should be used instead of ST to make it smoother.

@zawy12

zawy12 replied Aug 30, 2017

Avg ST for reward? The timestamp problem with this idea is a big one, so maybe the avg will help. But I really like it per block because different miners get different blocks. But it would encourage a big miner to come in when blocks are slow. He can solve a few blocks and get a reward that is too big. So the avg window for "k" should be small.

@seredat

seredat (Owner) replied Aug 30, 2017

That's right, I'm thinking about average ST because of inaccurate timestamps.

@zawy12

zawy12 replied Aug 31, 2017

This:
R = k * avg(D, N=10) / avg(D, N=50)

So they are discouraged to leave when difficulty is low, but come back when it's high. But it could very possibly cause unwanted oscillations. Also it may encourage non-constant mining, which means it could make selling pressure worse. My random byte idea might be better for price because it encourages constant mining. More coins are getting smart contract features, so people will probably encumber coin issuance to miners based on block number so that miners remain invested in the coin like devs.

@zawy12

zawy12 replied Aug 31, 2017

Zawy v4 is finished. It deals with hash attacks substantially better. It's not obvious that it's better, but the measurements are about 30% fewer blocks "stolen" and shorter post-attack delays.

@zawy12

zawy12 replied Aug 31, 2017

You could use Zawy v4 for the N=12 average in R = k * avg(12 D) / avg(50 D).
This is not perfect because someone with 3x hashpower who starts mining when avg(12 D) = 0.7 * avg(50 D) will get blocks at 1/(1+3) = 1/4 his appropriate difficulty. But Coins = k * ST / T is perfect because they will get 1/4 as many coins. Someone in a pool that gets > 30 blocks will get the correct number of coins, so it's not "draconian" but completely fair, if the miners stay around for > 30 blocks.

@zawy12

zawy12 replied Sep 1, 2017

I had a small but significant error in Zawy v4. I just corrected it by changing if ( avg_ST > T ) { to
if ( avg_ST < T ) {

Zawy v4 gives only 5% fewer "free" blocks to the attacker than Zawy v1b.

@zawy12

zawy12 replied Sep 1, 2017

In order to determine if v4 is better than v1b, I had to find a better measure of "best". This new measure indicates v4 is not better than v1b. Zawy v1b still seems the best.

# measure of how "bad" a difficulty algorithm is subject to sudden hash attacks

for all D[i] during H[i] hashrates {
    if ( D[i]/H[i] > 1.2 )  { good_miner_losses += D[i]/H[i] - 1 ; }
    if ( H[i]/D[i] > 1.2 )  { attacker_profits += 1 - H[i]/D[i] ; }
}
ineffectiveness_of_diff_algo = attacker_profits + good_miner_losses;

Attacker profits are bad because they motivate more attacks. They are equal and opposite to the good miner losses if the diff algo is balanced.

The above measure seems to correspond well with the std dev of the difficulty. In other words, a lower std dev during hash attacks may be the best measure of the effectiveness of an algorithm (assuming the avg solvetime is sufficiently correct).
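
The same measure as a runnable sketch (Python). Note it keeps the sign convention of the pseudocode above, where attacker profits accumulate as negative values, so the two terms offset each other when the algorithm is balanced:

def ineffectiveness(D, H, threshold=1.2):
    # D[i]: difficulty of block i; H[i]: hashrate active while block i was solved.
    good_miner_losses = 0.0
    attacker_profits = 0.0
    for d, h in zip(D, H):
        if d / h > threshold:                # difficulty too high for current hashrate
            good_miner_losses += d / h - 1
        if h / d > threshold:                # hashrate too high for current difficulty
            attacker_profits += 1 - h / d    # negative, as in the original pseudocode
    return attacker_profits + good_miner_losses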

@zawy12

zawy12 replied Sep 2, 2017

# Zawy v5 difficulty & block reward algorithm
# Adjusts reward and difficulty to benefit constant miners at expense of hash-attackers.
# D=Difficulty, T=TargetSolvetime, N=averaging window
# TSL = TimeStamp limit as multiple of T
# TS = TimeStamp
# ST = SolveTime

N=12;
TSL = 6; # needs to be restrictive due to small N

for ( i=height;  i>height-N;  i-- ) {
    ST = TS[i] - TS[i-1];
    if ( ST > TSL*T ) { ST = TSL*T; }
    elsif ( ST < -(TSL-1)*T ) { ST = -(TSL-1)*T; }
    avg_ST += ST / N;
    avg_D += D[i] / N;
}
avg_ST = 1 if avg_ST < 1; # prevent divide by zero
next_D = avg_D * T / avg_ST / (1+0.7/N);

# amplify the dis-motivation effect of the above increased difficulty that occurs
# in response to hash attacks by also reducing the reward by the same amount.
# The avg reward is 5% more than basereward if worst-case attacks occur all day.
# The excess is split between attacker and constant miners.

Reward = BaseReward * avg_ST / avg(past 2*N ST);
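
The same v5 logic as a Python sketch (assuming `timestamps` holds the most recent N+1 timestamps and `difficulties` the most recent N difficulties, both oldest-first; the reward factor from the last line is left as a comment):

def zawy_v5_next_difficulty(timestamps, difficulties, T=240, N=12, TSL=6):
    avg_st = 0.0
    avg_d = 0.0
    for i in range(1, N + 1):
        st = timestamps[i] - timestamps[i - 1]
        st = min(st, TSL * T)            # clip forward timestamp manipulation
        st = max(st, -(TSL - 1) * T)     # clip backward timestamps
        avg_st += st / N
        avg_d += difficulties[i - 1] / N
    avg_st = max(avg_st, 1.0)            # prevent divide by zero
    return avg_d * T / avg_st / (1 + 0.7 / N)

# Reward factor from the post: Reward = BaseReward * avg_ST / avg(past 2*N ST)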

@zawy12

zawy12 replied Sep 2, 2017

I made a couple of corrections to v5 and realized my "adjust difficulty based on random byte" idea would not work. This is because a miner would only need to attack a few times to get the same average benefit as a constant-on miner. Worse, they can choose to quit after the "free block", allowing them to average better.

HUSH coin is interested if you implement Zawy v5. They may in the future decide to copy whatever your "live" difficulty algorithm is. They do not immediately need a change away from Zcash's difficulty algorithm but would like to have a faster responding algorithm and especially reward constant-on miners.

@seredat

seredat (Owner) replied Sep 2, 2017

Big miners can still cease mining when difficulty rises (reallocate their hash power to other coins), wait for an hour, throw in massive hashrate "helping" constant miners to find a block or blocks, and get the next block(s) at lower difficulty. Moreover, they will get the bigger reward that was intended to go to constant miners for the big gap. That's why I was thinking of a different formula for reward adjustment.

Your formula is perfect if you want to reward more coins after big gaps between blocks and after blocks with shorter solve times and therefore with smaller than scheduled reward. This is a good formula to keep the average emission rate as planned, but it is not needed, because the emission schedule algo will take care of this by changing the scheduled reward according to already mined coins. Besides, our 'friends' will abuse that in the way I pointed out above.

What I was thinking of was to never give more than was scheduled, and always give less than was planned for a block if solve time is not T (in both directions - bigger or smaller solve time). I.e. the closer ST is to T, the closer the reward will be to the scheduled reward, and vice versa - the bigger or smaller the solve time than T, the smaller the reward.

@zawy12

zawy12 replied Sep 2, 2017

I had my equation upside down. Corrected:
Reward = BaseReward * long_avg_D / next_D;

So when big miners start, the N=60 average is typically larger than N=12, because they will choose to start when N=12 is causing a low next_D. So their reward is larger. So they are really motivated. But after about 6 blocks the reward is normal, and by 12 blocks they are getting 1/2 the reward of a constant miner (1/4 the reward from when they started) and the difficulty has risen to match their hash power. Then when they leave, the N=12 average drops so that constant-on miners get a big reward, equaling what the big miner took. It may be that instead of N=60 I need N=24. You made me think about it more and I see I need to think about it more. There seems a danger of them constantly coming in to solve for only 6 blocks. In one sense we want them to start mining, because a lower D for N=12 means the hash rate is seen dropping. But it will also drop by accident. There just seems to be a danger of encouraging oscillations.
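
As a sketch (Python, windows of 12 and 60 as in the discussion), the corrected reward is simply the ratio of the slow difficulty average to the fast next difficulty:

def ratio_reward(base_reward, difficulties, next_d, long_n=60):
    # long_avg_D over the slow window; next_D comes from the fast (N=12) window.
    long_avg_d = sum(difficulties[-long_n:]) / min(long_n, len(difficulties))
    # A hash attack pushes next_D above the long average, shrinking the reward;
    # after the attacker leaves, next_D falls below it and constant miners get more.
    return base_reward * long_avg_d / next_d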

@seredat

seredat (Owner) replied Sep 2, 2017

My intention is to make them keep close to T to get the max reward. Deviations from T in any direction have to cause a decrease of the reward, so there will be no point in getting blocks quicker with smaller solve times on low difficulty, because the reward will be less. I am not sure about decreasing the reward in case ST is bigger than T. Intuitively I am inclined to decrease the reward on long gaps as well, for example to discourage timestamp forwarding and to make them keep on mining.

@seredat

seredat (Owner) replied Sep 2, 2017

Again, there's no need to increase the reward above what is scheduled by the emission curve to compensate a previous decrease, because the curve will take care of that itself. It will adapt the scheduled reward according to already mined coins. It's doing this in the case of the punishment for crossing the blocksize median, so it will do this in the case of solve time punishment too.

@zawy12

zawy12 replied Sep 2, 2017

I see your emission curve. I view that and Dash's difficulty-based emission as a long-term setting that changes very slowly. What I'm trying to do is a short-term motivation to not deviate in the short term away from that trend while keeping hashrate more stable. I want to decrease the reward if a big miner suddenly comes on and increase the reward to constant miners. If I chose one of them and not both, then it seems your coin will either emit coins faster or more slowly than you anticipate. My thought is that if I only discourage increases in hashrate, then the hashrate of the network is being discouraged, making it subject to 51% attack. If I only motivate constant mining by giving more reward when hashrate is low, then the coins will be issued faster and faster if miners start leaving your network. By issuing coins too fast, it could cause the price to drop faster than the reward is increasing, which is why they would leave. For some reason, always being "symmetrical" is important.

@zawy12

zawy12 replied Sep 2, 2017

Your current coin emission is a strict function of block number. It is not directly in the equation, but since current coin per block is based on past coin/block, going all the way back to the beginning, it is still a function of the block number which I could derive. If you change it based also on difficulty in a way that is not symmetrical, then the coins emitted will not be a function of the block number. If the difficulty algorithm maintains a good average solvetime, then the coins emitted will not be a function of time like they are now.

@seredat

seredat (Owner) replied Sep 2, 2017

I see your point, maybe you're right; an increase of reward just frightens me :) We need to take care of timestamp manipulations used to get more coins.

@zawy12

zawy12 replied Sep 5, 2017

Corrected another error in v5 (I had < where I needed >= in two places).

@zawy12

zawy12 replied Sep 5, 2017

In testing I'm still having trouble with v5. I can't get it to reward in a good way under all conditions.

@zawy12

zawy12 replied Sep 5, 2017

OK, my original idea R = C * avg(12) / avg(24) seems to be working, but there are many things to check.

@zawy12

zawy12 replied Sep 5, 2017

Here's the corrected Zawy v5. Notice the reward / difficulty drops to 1/3 in less than 12 blocks. Reward / difficulty is the real net "reward" that should measure its success. For some reason, the average reward is staying perfectly correct, better than the solvetime.

[chart: zawy_v5 simulation]

@zawy12

zawy12 replied Sep 5, 2017

I inverted the above to show diff/reward to look more like my difficulty charts, and it does not look like it's working well.

@zawy12

zawy12 replied Sep 6, 2017

I'm starting to think modifying the reward will not make any difference, because increasing the reward seems to have exactly the same effect, as far as the miners are concerned, as lowering the difficulty. However, there is one potential use for the change. You can make the difficulty an average of the past 1000 blocks, so it changes very slowly and is very smooth. Then use Reward = C * avg(past 12 solvetimes) / T. This should be exactly the same, as far as the number of coins constant miners and hash attackers get, as N=12 in Zawy v1b. But it's interesting that difficulty stays very smooth.

@seredat

seredat (Owner) replied Sep 6, 2017

This is an amazing idea! The only concern I have is what will happen if a very big miner gradually raises the difficulty really high and leaves? The rest of the small miners will struggle to get blocks for a long period of time. This would result in the same situation we had a year ago when we had a large difficulty window. The ideal solution would be one that keeps a stable solve time and therefore stable confirmations.

@zawy12

zawy12 replied Sep 6, 2017

Yes, he could do that, but his reward will be very small. If he has 2x hashrate, then his rewards will go to 1/3 after 12 blocks. When he leaves after 100 blocks, blocks will take 3x longer but the rewards will be 3x for as many blocks as he was mining. A 10x miner for 500 blocks would get about 1/8 the average reward, and after he leaves it will take 10x longer for 500 blocks and I think rewards will be 8x more. At first they will be 10x more when he leaves.

@seredat

seredat (Owner) replied Sep 6, 2017

But this is an attack vector if someone doesn't care about reward.

@zawy12

zawy12 replied Sep 6, 2017

Right. The combination is looking really good. It's like using (difficulty)^2 to dis-motivate attackers. I can't use higher difficulty squared directly because it throws off the average solvetime. And I can't use (reward)^2 because it would throw off the reward.

next_D = avg(last 12 D) * T / avg(last 12 ST)
reward = C * avg(last 12 ST) / T

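Both lines together as a Python sketch, driven by the same 12-block window:

def combined_step(difficulties_12, solvetimes_12, C, T=240):
    avg_d = sum(difficulties_12) / len(difficulties_12)
    avg_st = max(sum(solvetimes_12) / len(solvetimes_12), 1)
    next_d = avg_d * T / avg_st    # difficulty rises as solvetimes shrink...
    reward = C * avg_st / T        # ...while the reward falls by the same factor
    return next_d, reward
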
@zawy12

zawy12 replied Oct 1, 2017

[ I deleted this post because the idea did not work at all ]

@zawy12

zawy12 replied Oct 5, 2017

My comments about how > 50% hashrate could enable a miner to drive the difficulty to zero by giving forward timestamps are not exactly correct for coins that use bitcoin's code that limits the timestamp to 2 hours ahead of the node's time. It appears nodes in at least bitcoin will reject blocks (not propagate them) if the timestamp is too far ahead. So a > 50% miner can bring down difficulty, but he can't keep it down for a long time. He'll have to leave to let other miners correct the time (or correct it himself) before he resumes the attack. It may take only 1 block to correct the time, and the settings will determine what this will do to the difficulty. The attacker may have to leave for 1 window length before resuming the attack.

@zawy12

zawy12 replied Oct 6, 2017

Here is v1b rewritten to be more clear and concise. It includes the option to change the reward in a way that helps and does not hurt.

# Zawy v6 difficulty algorithm
# Newest version of Zawy v1b with option for reward changes based on attacks
# Based on next_diff = average(prev N diff) * TargetInterval / average(prev N solvetimes)
# Thanks to Karbowanec and Sumokoin for supporting, testing, and using.
# (1+0.67/N) keeps the avg solve time at TargetInterval.
# Low N has better response to short attacks, but wider variation in solvetimes.
# Sudden large 5x on-off hashrate changes with N=12 sometimes have 30x delays versus
# 20x delays with N=18. But N=12 may lose only 20 blocks in 5 attacks versus 30 with N=18.
# For discussion and history of all the alternatives that failed:
# https://github.com/seredat/karbowanec/commit/231db5270acb2e673a641a1800be910ce345668a
#
# D = difficulty, T = TargetInterval, TS = TimeStamp, ST = solveTime

# set this coin's constants

T = <targetinterval>;
N = 18;  # Averaging window. Can conceivably be any N > 6. N=18 seems to be a good idea for small coins.
X = 5;   # Size of expected "hash attacks" as multiple of avg hashrate. X=5 for small coins.
# An X too small is unresponsive. X too large is subject to timestamp manipulation.
# The following is how X is used.
limit = X^(2/N);  # Protect against timestamp error. Limits avg_ST and thereby limits next_D.
adjust = 1/(1+0.67/N);  # Keeps correct avg solvetime.

# get next difficulty

ST = 0; D = 0;
for ( i = height;  i > height-N;  i-- ) {  # go through N most recent blocks
    # Note: TS's mark the beginning of blocks, so the ST's below are shifted back 1
    # block from the D for that ST, but it does not cause a problem.
    ST += TS[i] - TS[i-1];  # Note: ST != TS
    D += D[i];
}
ST = N*T*limit if ST > N*T*limit;  # ST is a sum of N solvetimes, so the limit scales by N
ST = N*T/limit if ST < N*T/limit;

next_D = D * T / ST * adjust;

# Do not use the following, even though it looks like the N's divide out:
# next_D = sum(last N Ds) * T / [max(last N TSs) - min(last N TSs)];
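
A compact Python rendering of v6 (a sketch under the constants above, not a reference implementation; the clamp is applied to the summed solvetime, i.e. at the N*T scale):

def zawy_v6_next_difficulty(timestamps, difficulties, T, N=18, X=5):
    # timestamps: last N+1 block timestamps (oldest first); difficulties: last N.
    limit = X ** (2.0 / N)
    adjust = 1.0 / (1.0 + 0.67 / N)
    st_sum = 0
    for i in range(1, N + 1):
        st_sum += timestamps[i] - timestamps[i - 1]          # telescopes to last - first
    st_sum = min(max(st_sum, N * T / limit), N * T * limit)  # timestamp-manipulation guard
    d_sum = sum(difficulties[-N:])
    return d_sum * T / st_sum * adjust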

@zawy12

zawy12 replied Oct 13, 2017

I'm probably going to write a BIP for bitcoin to recommend going to Zawy v6. This is because if SegWit2x is going to have as much mining support as they claim, and if the prices are comparable, Bitcoin Core will be under threat of a hash attack because of its difficulty window of 2016 blocks. Even BCH's asymmetrical difficulty algorithm is better than BTC's, because BCH would not have lasted this long if they had kept bitcoin's. But since they made it fall fast but rise slowly, they are suffering a lot. They should have made it symmetrical in the rise and fall, and they probably would have done a lot better.

So if Karbowanec forks, I hope you can use Zawy v6 exactly, especially because the reward function needs to be demonstrated. I should mention that my previous reward function was upside down, so your complaint was correct. I edited it to invert it.

I can design a hash attack to get 1/3 of coins in any difficulty algorithm. The reward function seems to reduce it to 1/6. I'm recommending N=30 and X=5 for bitcoin. Even small coins should find this nearly ideal.

@seredat

seredat (Owner) replied Oct 13, 2017

Consider it included in our roadmap. We will test it and adopt it after the hardfork. We are just limited in resources to implement it quickly. But we will.

@zawy12

zawy12 replied Oct 13, 2017

Just leave the reward out. It looks like it could be attacked all the same if the miner is smart enough to pick the times when reward / difficulty is low. You can choose N to balance stability in diff with preventing long post-attack delays. N=17 might be the best choice. N=30 would have smoother difficulty and post-attack delays may not be bad. Theoretically, I can design an attack to always get 1/3 of the blocks no matter what N is, unless N is small enough to be comparable to the time it takes to switch coins. If it only takes 1 block to switch and miners don't mind attacking for only 6 blocks and can attack up to 30, then N=12 is no better than N=60. Attacking for 6 blocks on an N=12 is the same thing as attacking for 30 blocks on N=60, if they attack 5 times more often. In both cases they can get 1/3 of the blocks at a zero net increase in difficulty. Unless they have competition that's trying to do the same as them. But from the point of view of constant miners, they will still lose 1/3 of the blocks that should have been theirs.

@seredat

seredat (Owner) replied Oct 14, 2017

I was just going to write about this possibility: a selfish big miner can leave on high diff/low reward and come back when diff is falling and reward increasing. All that is needed is good timing, and sooner or later they learn how to abuse it :)

@zawy12

zawy12 replied Oct 29, 2017

Someone working on fixing bitcoin cash found a better difficulty algo than Zawy v1b aka Zawy v6. You remember I said avg(D/ST) gave bad results, but I wanted to use something like it. But apparently I have been missing something that might be obvious to others doing difficulty: 1/avg(ST/D). I never tested it because it superficially looks the same as avg(D/ST), but it's a lot different. It seems to work exactly like Zawy v6, which is avg(D) / avg(ST). But when combined with a weighted average, it is better. Their weighting is the same I used in v3c.

Also, I think I have been wrong to push low N too much. With low N, difficulty accidentally goes lower more often, which invites miners to attack. It's true they can't get as many blocks, but they are encouraged to attack more often. As I previously said, I can develop an attack pattern for any N and always get 1/3 of the blocks for "zero excess difficulty" cost. So I think using this with N=30 to 50 would be best for all coins.

# Degnr8 (Tom Harold) WMA (weighted moving average)
# see  https://github.com/kyuupichan/difficulty/issues/11
# and newest version at:
# https://github.com/kyuupichan/difficulty/issues/21
# TS = timestamp
# constants:
# N=30
# T=240
# k=N*(N+1)/2
# limit = 10^(3/N)
# adjust = (1+1.3/N) # keeps correct avg solvetime for N < 150
wt=0; weight=0
for  (i=height-N+1 ; i<height+1 ; i++ ) {
   weight++
   wt += ( TS[i] - TS[i-1] ) / D[i] * weight
}
wt=1 if wt < 1
next_D = T / wt * k
# use the following if bitcoin -6*T MTP and +12*T node times are not the limits
# next_D = D[height]*limit if next_D > D[height]*limit
# next_D = D[height]/limit if next_D < D[height]/limit

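The WMA as a Python sketch (same constants; recent blocks get linearly more weight, and dividing T by the weighted mean of ST/D gives the next difficulty):

def wma_next_difficulty(timestamps, difficulties, T=240, N=30):
    # timestamps: last N+1 (oldest first); difficulties: last N (oldest first).
    k = N * (N + 1) / 2.0
    wt = 0.0
    for i in range(1, N + 1):
        solvetime = timestamps[i] - timestamps[i - 1]
        wt += solvetime / difficulties[i - 1] * i   # weight i grows toward recent blocks
    wt = max(wt, 1.0)                               # guard against manipulated sums
    return T * k / wt
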
@zawy12

zawy12 replied Nov 3, 2017

Bitcoin ABC developer Amaury Séchet's difficulty algorithm was inspired by this discussion. It is going to be used by bitcoin cash. So we can say we were part of the fix to the big problem with Bitcoin Cash's difficulty.

@aivve

aivve (Collaborator) replied Nov 3, 2017

This is really cool. We have further improvement of the difficulty algorithm on our roadmap, so we will study this carefully. We were going to use v6, but this new version is very interesting; we will try it.

@zawy12

zawy12 replied Nov 3, 2017

I'm going to try to finalize something for you. Does Karbowanec use bitcoin's -6 MTP and +12 node time as the limits on timestamps?

@aivve

aivve (Collaborator) replied Nov 3, 2017

The timestamp can't be less than the median of the last 60 blocks' timestamps and can't be more than 2 hours ahead.
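
That rule as a sketch (Python; the 60-block median and 2-hour future window are the CryptoNote defaults mentioned above, names are illustrative):

import statistics

def timestamp_acceptable(block_ts, last_60_timestamps, node_time):
    if block_ts < statistics.median(last_60_timestamps):
        return False                 # behind the median of the last 60 blocks
    if block_ts > node_time + 2 * 60 * 60:
        return False                 # more than 2 hours in the future
    return True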

@zawy12

zawy12 replied Nov 20, 2017

For those looking for my final recommendation for difficulty algos, see this issue.
