Adjust difficulty calculation algorithm #10

Merged
merged 7 commits into sumoprojects:master on Jun 18, 2017


haruto-tanno commented Jun 13, 2017

  • Problem: The current difficulty calculation, inherited from the CryptoNote
    codebase, has a serious flaw: it cannot adjust fast enough to multi-pool hashrate
    spikes (from NiceHash, for example). This was pointed out in MRL-0006 here:

https://www.overleaf.com/articles/difficulty-adjustment-algorithms-in-cryptocurrency-protocols/ytcxbjvzrpbp/viewer.pdf

  • Some suggestions to solve the issue can be found here:

monero-project/research-lab#3

and here:

zcash/zcash#147

an especially valuable one is from @zawy12 (Zcash).

  • Sumokoin follows the Zcash approach to make difficulty more responsive to hashrate
    surges, leaving attackers much less profit.

Please note, however, that this will require a hard fork. Comments are welcome.

sumoshi commented Jun 13, 2017

Thanks Haruto, the algo looks nice. Still testing on testnet to see how well it responds to multi-pool hashrate.

billaue2 commented Jun 13, 2017

Why did you double the diff window (35) compared to Zcash (17)? Wouldn't that make the network less responsive to a surge and leave too many blocks to attackers?

haruto-tanno commented Jun 13, 2017

Yes, it could be a bit less responsive, but it would be safer and the network difficulty better reflects the actual hashrate. In fact, the network hashrate will adjust within 5-6 blocks after the surge ;)

sumoshi commented Jun 13, 2017

Did you try it at the testnet pool, Bill?

billaue2 commented Jun 13, 2017

The network hashrate seems too high compared to the actual hashrate?

sumoshi commented Jun 13, 2017

@haruto-tanno I think the gap between the median and the average hashrate appears a little too large. Did you see that?

zawy12 commented Jun 13, 2017

[ edit March 2018: My recommendation here of N=17 that Sumokoin used was a bad one. It should have been N=60. I did not realize how much the random variation would attract miners and invite them to come on and off a lot. A much better algorithm is my new one LWMA ]

After much consideration, I strongly recommend a simple average of the most recent solve times. By increasing N, you get stability at the expense of responsiveness. Digishield was using N=17 or N=18 in a more complicated way. Last I checked, this is what Zcash was doing (N=17), but they might be doing something different. People still refer to them as using a "modified digishield v3", but it's just an average that I believe came as a result of my pestering them about it.

Next difficulty = (avg last N difficulties) x (target solve time) / (avg of past N solve times)

As a result of improper times being reported by miners, the "average" solve time is not used, but the median is. If you have accurate solve times, use the average.
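
A minimal C++ sketch of that rule, assuming the last N difficulties and solve times are already collected (function and variable names here are illustrative, not the actual Zcash or Sumokoin code):

```cpp
#include <cstdint>
#include <numeric>
#include <vector>

// next_difficulty = avg(last N difficulties) * target / avg(last N solve times).
// If reported solve times cannot be trusted, a median can be used in place of the mean.
uint64_t next_difficulty_simple_avg(const std::vector<uint64_t>& difficulties, // last N difficulties
                                    const std::vector<int64_t>& solve_times,   // last N solve times (seconds)
                                    uint64_t target_seconds)                   // e.g. 240
{
    const size_t n = difficulties.size();
    long double avg_diff = std::accumulate(difficulties.begin(), difficulties.end(), 0.0L) / n;
    long double avg_time = std::accumulate(solve_times.begin(), solve_times.end(), 0.0L) / n;
    if (avg_time < 1) avg_time = 1;  // guard against a zero or negative average solve time
    return static_cast<uint64_t>(avg_diff * target_seconds / avg_time);
}
```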

I could not confirm the Zcash code was getting a good median (I could not trace the variables back to their origins), but it seemed to work.

The lack of a known time allows time-warp attacks, especially when mining is concentrated. It was observed on the Zcash testnet. To help minimize this, you limit how far ahead or behind the previous time the miner can report their time. I believe Zcash made the limit the same as bitcoin's, 3600 seconds, but the limit should have been 1/4 of bitcoin's since their blocks come 4x faster. 4x more blocks in that time limit means more opportunity for a time warp attack. The attack seems to apply only when mining is concentrated, having at least maybe 20% of network power. Also, it is used if electricity or computer time is their main expense. They crunch a lot to get maybe up to 1/2 the blocks in 1/2 of N, so that the calculated median is artificially old because they are reporting old times to trick the algorithm into lowering the difficulty. They continue for another 1/2 of N, which is when the algo finally figures it out, thanks to the limit on how far back they can set times. Then they stop to let others suffer a high difficulty for 1 N, then they do it again. I think that's a good summary.

This is not to say a smaller N prevents or reduces a time warp attack. The median and the limit on the reported time help reduce the possibility of it occurring. I'm describing the attack in order to check for it, by watching the times being reported and seeing if it makes the difficulty oscillate. You look for a series of old times being reported on the scale of 1/4 to 1 N, then it stopping for a while, to be repeated later.

The problem with many difficulty algorithms is that they try to "think too much" instead of just looking at recent data, and going from there. If you try to look at a recent increase in hash rate and let the difficulty "jump ahead" of the actual recent average in a predictive manner, then it invites oscillations, both natural and intentional. A similar thing occurs if you try to limit the increase or decrease in the difficulty, like Digishield did, especially if the limits are not the same (symmetrical). Just looking at the avg of the most recent past is the most scientific method.

You may be fully aware of all the above, but I wanted to distill what I had found out in the past.

[edit: corrected big error in 2nd sentence and other changes]
[edit to summarize time warp attack and how it relates to N]

sumoshi commented Jun 13, 2017

Thank you very much for your input @zawy12; it enlightens us about many things. Sumokoin, as a CryptoNote-based cryptocurrency, inherits a (serious) flaw in the way it calculates difficulty, especially when there are surges in hashrate. People have exploited this rather often recently to grab easy coins and leave the network almost stalled for days, so we are going to release a hard fork to address the problem.

The current implementation of Sumokoin (and Monero) sets the future time limit to 7200 seconds, and a timestamp must not be less than the median of the 60 most recent blocks (so it may leave 29 blocks open to timewarping, is that correct?). I think the current algorithm tries to remove this by cutting the timestamp series used for difficulty calculation by 60 blocks (30 at each end). With N=17 or even N=35, we may well be within the possibly attacked blocks. What do you think?
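
For reference, the two timestamp checks described above amount to something like the following sketch (constants and names follow this description, not the exact CryptoNote source):

```cpp
#include <algorithm>
#include <cstdint>
#include <ctime>
#include <vector>

// A block timestamp is accepted if it is no more than 7200 s in the future and
// not less than the median of the 60 most recent block timestamps.
bool timestamp_acceptable(uint64_t timestamp, std::vector<uint64_t> last_60_timestamps)
{
    const uint64_t future_limit = 7200;  // seconds, per the setting quoted above
    if (timestamp > static_cast<uint64_t>(std::time(nullptr)) + future_limit)
        return false;

    std::sort(last_60_timestamps.begin(), last_60_timestamps.end());
    const uint64_t median = last_60_timestamps[last_60_timestamps.size() / 2];
    return timestamp >= median;
}
```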

Haruto is looking for a way to balance between the median and average values (both have their advantages); as long as timewarp is not a serious problem, we can move closer to the average and better reflect the actual hashrate.

billaue2 commented Jun 13, 2017

As @zawy12 advises, we should lower both the timestamp limit (3600 s) and the 30-block median check. Now, say there are 14 timewarped blocks; we'll have to find some way to minimize their impact on the selected N. Just my initial thought.

Edit: I recalculated the math for the time limit.

sumoshi commented Jun 13, 2017

Why not 1800 seconds and a 15-block median? No more than about 7 blocks' worth of time difference, or it would be a sign of a timewarp or of a computer without a synced date/time.

haruto-tanno commented Jun 14, 2017

Guys, the next patch will narrow down the time limit/median window as @sumoshi suggested, and with N=17 that can reduce the impact of a timewarp attack. It will also reduce the gap between the average and median values of the selected N to better reflect the network hashrate.

haruto-tanno added some commits Jun 14, 2017

Adjust difficulty calculation algorithm (ver 2)
1. Narrow gap for timewarp attack:
+ Set "block future limit" to 30 minutes (vs old 2 hours)
+ Set "block timestamp check window" to 15 block (vs old 60 block)

2. Adjust diff calc algorithm:
+ Add sorted and cut old timestamps
+ Bring median nearer to average value

Adjust difficulty calculation algorithm (ver 2.1)
- Fixed miscalculation of timespan adjustment
- Lower minimum timespan limit
- Lower log level
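
In configuration terms, the first commit above boils down to roughly the following (constant names are illustrative, not necessarily the ones used in the codebase):

```cpp
#include <cstdint>

// ver 2: tighten the window a timestamp can lie in
constexpr uint64_t BLOCK_FUTURE_TIME_LIMIT      = 30 * 60; // seconds, was 2 hours (7200)
constexpr uint64_t BLOCK_TIMESTAMP_CHECK_WINDOW = 15;      // blocks, was 60
```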
haruto-tanno commented Jun 14, 2017

Well, ver 2.x worked for a while, yet it created artificially high difficulty when hashrate surged at the test pool. I have an average-based version on the new-difficulty-debug branch; please give it a try first.

sumoshi commented Jun 14, 2017

OK, I'll see how it adjusts diff on testnet for a while.

sumoshi commented Jun 14, 2017

@haruto-tanno So far this is the best version: it is both responsive and does not push diff too high during a 10x hashrate surge. In a few hours I'll see how it adjusts to a sudden hashrate disappearance, but it looks very promising. Prepare to merge. Thank you very much.

zawy12 commented Jun 14, 2017

I think N from 14 to 50 might be OK. I would select N=30 to make difficulty more predictable for everyone, and there is a statistical standard in letting 30 be a cut-off. A lot of research is immediately rejected if N < 30. Large N also makes it more obvious a big miner is present. It does not theoretically affect big-miner "cheating" because if N is small, they just come back more often. They profit during N/2 then have to go away for N/2 to wait for the median to go back to normal. Since it's the avg of N difficulties, it is better for them to wait N. This is without a time-warp attack. It's just what they can do if they have a large percentage of hashrate. There is no fix. It's only profitable if they can easily switch what coin they are mining, or if computer time or electricity are their largest expense and it's OK to let their equipment go idle (unlikely).

I think 6x the target solve time, to copy bitcoin, should be the maximum difference in the timestamps: 6x4x60 = 1440 seconds. I think it is risky to adjust the median by combining it with the average, if I understand your comments.

I've said the N value is not critical, but it may interact with the 6x. I can't figure out if N=17 or N=30 is better if the limit is 6x the 4 minutes. So I'm not sure N=30 is better.

A timewarp attack can be seen when timestamps are set as far ahead in the future as possible to cause the difficulty to drop; then they'll mine at full power and set timestamps as low as possible. The difficulty will stay low until the median time-to-solve catches up to the forwarded time. This is why 6x loses half as many blocks to the attacker as 12x. You can see the actual hashrate of the attacker by looking at how fast block solves are coming into your personal node. If he has 2x the network hashrate, you'll see solves coming in about 2x faster than your 4 minutes.

I think other miners are paying the price when someone has a lot of hash power and is cycling through several alt coins whenever they see a low difficulty. So the other miners have to suffer a higher difficulty when the big miner leaves, and do not benefit when the difficulty is low. But it's hard to call it an attack. The big miner profits from the difficulty accidentally being low, and their actions cause the difficulty to rise and fall more, adding excess variability to solve times. If the big miner stayed around all the time, the other miners would get the same amount. So the other miners are paying the price in the sense that if they have 1% of average network hashrate, they get something like only 0.66% of the blocks if the big miner mines N/2 and then waits N to return. So he needs to find 3 small alt coins with N=17. Again, this is not related to a time warp attack.

I may have made an error in my previous post: the timewarp attack may only cause more coins to be issued than the protocol wants (not really harming other miners directly). It's an attack on the value of everyone's coins, not on other miners, unless they are combining it with cycling through to other coins. It's an "attack" because it is lying about the time. It's an unsolvable deficiency in the protocol in not being able to know the real time itself. It's unsolvable without relying on a trusted third party, like a group of peers it trusts (a trusted consensus), which I believe is what ETH does to get an accurate timestamp.

The following is an actual timewarp attack on the Zcash testnet. The first chart is the rate at which blocks are being issued. The target is 24 per hour. You can see too many blocks were issued during the attack. The second chart shows a positive spike when the timestamp was set >2000 seconds into the future from the previous block. It shows a negative spike when blocks had timestamps less than 10 seconds apart. The darkness of the downward spikes shows they got a lot of blocks quickly. They stopped when the difficulty returned to normal.

[image: tw (charts of block issuance rate and timestamp spikes during the Zcash testnet timewarp attack)]

zawy12 commented Jun 14, 2017

How exactly is block solve time calculated? I mean can a timestamp be set forward 1000 seconds from actual time and the next block set at -1000 seconds from the actual time or -1000 seconds from the previous timestamp?

haruto-tanno commented Jun 14, 2017

Thanks again for the invaluable comments and great questions. Let's begin with your questions about the latest code. Here is some math; correct me if I'm wrong:

  1. If a block's timestamp is 1000 seconds in the future, it adds the same amount to the total timespan, or 1000/16 (≈ 62) seconds to the average block timespan. That means roughly a 25% increase in timespan (taking 240 seconds as the target for simplicity), which would in turn reduce difficulty by about 25%. There is an adjustment using the median, so the actual average timespan would be, given that the median timespan remains 240 seconds:

(240 + 62) - 62/4 ≈ 286 seconds, or about a 19% increase in average timespan.

  2. Then, every block with a -1000 s timestamp (or any valid timestamp) will decrease the total timespan by one block time (240 s), so the attacker will get roughly 4 blocks of benefit from the lower difficulty (but the difficulty will increase proportionally with every block inserted, as the total timespan shortens).

Now, if the attacker has 20% of total hashpower, the expected benefit from the lower diff would be only about one block, or none.

What do you think?

sumoshi commented Jun 14, 2017

Shortening the block timestamp limit to 1440 and the median check to 12 blocks accordingly, as @zawy12 suggests, would reduce timewarp effectiveness, or at least discourage attackers from doing it for very little benefit (if your math is correct).

zawy12 commented Jun 14, 2017

I can't follow the code or completely understand your description. I know what I know from working backwards from the top-level descriptions I could find on the internet. I never did understand the reason for the -62/4 part. The 1/4 factor helps in your example, but it may hurt in an unnoticed way, so I do not trust it. But I would stick with it to make people more comfortable. My position is to try N=30 (but N=17 is good and N=12 is risky), let the time limit be 1440, do not adjust the median timestamp by the average (if I understand you correctly), and do not place a limit on how fast the difficulty rises and falls.

[edit: all the following is probably wrong. see subsequent post way below]

Let me describe the problem if the average instead of the median is used. I am trying to argue that the median should not be influenced by the average. Let's say you use my suggested time limit 6x240 = 1440. Suppose they have 1x network hashrate and get 50% of 8 blocks (4 blocks) and make them all >1440 ahead, so that the program reduces them to 1440 ahead because of the limit. Then they mine 50% of the following blocks and give a timestamp just 1 second newer than the average. Let's say the average solve before they came was 240, so they made it 240 x (13/17) + 1440 x (4/17) = 522, so the difficulty decreases by about (1 - 240/522) = 54%. Difficulty started at 100 and it should have gone to 200 because they are present, but it becomes 46 after 8 blocks where they got 4.

Then they keep mining but give old timestamps. If they are disallowed timestamps that are less than the average and add just 1 second to the average, then the system still thinks the hashrate is the same as before they began, so it slowly rises back to 100% of the previous difficulty, but it really needs to be 200% because they are present. (I might have this wrong; maybe they need to use -1440, or maybe they need to keep adding 1440.) They might get 9 more blocks before it's above 100% because the difficulty started at an artificial low. So they get about 13 blocks and everyone else gets 13 blocks when only about 10 blocks total were supposed to be issued. The difficulty was supposed to be 200% when they arrive, but it was at an average of about 75% for the 26 blocks that were issued in 240x10. So about 3x more blocks can end up being issued. Now when they leave, the difficulty keeps rising and other miners suffer.

So the above is if the median is not used. So I would try to avoid letting the average influence the median.

billaue2 commented Jun 14, 2017

Trying to catch up with your math, without much success ;)

I think in the latest code version (still on Haruto's repo branch, not submitted here yet), he is trying to solve hashrate surging rather than tackle the timewarp problem. Though it works excellently on testnet now, responding well to multi-pool flash hashrate, the timewarp must be addressed or we'll introduce another flaw.

zawy12 commented Jun 14, 2017

Edit: ignore this comment. see next post
Edit: no, don't ignore this. median might be better than average
Edit: no, I can't make up my mind. I'll have to see a good explanation for why people are using median. It's bad. It also needs to be divided by 0.7 if you want the median of a Poisson to be close to the average.

To summarize what I think is best:

Next difficulty = (avg last 17 difficulties) x 240 / (median of past 17 solve times)

and don't allow a timestamp more than 1440 above or 1440 below the previous timestamp. I use 17 instead of 18 because the median is the 9th one, with 8 above and 8 below. And I would not use the 1/4 because I don't know the reasoning. But if everyone is doing it, then I guess there must be a good reason. It seems to just slow the change in difficulty, both up and down. It adds skepticism to the most recent timestamp.
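
A sketch of this variant, with the +/-1440 s limit applied to each reported solve time (illustrative only; N=17 and a 240 s target are assumed):

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// next_difficulty = avg(last 17 difficulties) * 240 / median(last 17 solve times),
// where each solve time (timestamp minus previous timestamp) is clamped to [-1440, +1440].
uint64_t next_difficulty_median17(const std::vector<uint64_t>& difficulties, // last 17
                                  std::vector<int64_t> solve_times,          // last 17, may be negative
                                  int64_t target = 240)
{
    const int64_t limit = 6 * target;  // 1440 s
    for (int64_t& st : solve_times)
        st = std::max(-limit, std::min(limit, st));

    std::sort(solve_times.begin(), solve_times.end());
    int64_t median = solve_times[solve_times.size() / 2];  // the 9th of 17
    if (median < 1) median = 1;                            // avoid division by zero

    long double avg_diff = std::accumulate(difficulties.begin(), difficulties.end(), 0.0L)
                           / difficulties.size();
    return static_cast<uint64_t>(avg_diff * target / median);
}
```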

If hashrate surging is a real problem, then N=13 is good, but expect difficulty to swing way up and down and the occasional long solve time. Even with N=17 I think you will see an hour-long solve about once per day.

zawy12 commented Jun 14, 2017

Is this what your code is doing?

[edit: I changed the eq below to correct an error]

Next Diff = (avg past 17 Diff) x 240 / (3/4 of median past 16 block solvetimes + 1/4 x (240-ST) )

where ST = most recent block solve time = current timestamp minus AVG of past 16 timestamps.

I think bitcoin says the current timestamp is limited to +/- 6x (target time) when subtracted from the previous timestamp, but I wonder if it would be better to let the maximum forward time be (6+8.5)x from the median timestamp of the past 16 timestamps and let (8.5-6)x from the median be the minimum. The 8.5 is based on N=17. The median of 16 is the avg of the 8th and 9th positions. So the most recent timestamp is expected to be 8.5x after the median of the previous 16 if hashrate has not changed.

ST and some of the previous block solvetimes are allowed to be negative.
Is each of the block solvetimes used for the median = its timestamp minus the previous timestamp,
or = its timestamp minus the AVG of the previous 16 timestamps?

[edit: I changed my mind back to my original thinking. Ignore the following comment.]
Either way, I like it better than my previous comment. My equation in the previous comment basically ignores the most recent time-to-solve until it becomes the median.[edit, no it does not ignore it. It uses it immediately if it is the median, doh]

Adjust difficulty calculation algorithm (ver 3.0)
- This version calculates diff mostly based on the average timespan rather than
on the median like the previous one

- Adjust block timespan limit + median block limit to (1440 + 12 blocks) as
suggested by @zawy12
sumoshi commented Jun 15, 2017

The version 3.0 code for calculating diff is quite clear, as follows:

Next = (avg past 17 Diff) x 240 / (A + 1/4 x (A - M))

A = average of past 16 block solvetimes
M = median of past 16 block solvetimes

So it is mostly based on the average of block solvetimes, with a (25%) adjustment from the median value, and it seems to be working very well on testnet: the network diff/hashrate easily caught up with hashrate pouring in from a multi-pool (5x or 10x), and didn't create too-high/too-low diff at times. The fundamental flaw, I think, is that it would not deal with timewarp properly. So my idea is to cut the 6 highest timestamps from the selected N for the average calculation. The result is the algo would lag 6 blocks in flash-hashrate attacks, but you may reduce the difference by making some adjustment toward the median of the first 17 blocks, which won't be affected by timewarp.

What do you think, @haruto-tanno, @zawy12?

Edit: The median adjustment is 1/4 (A - M), i.e. 25%, not 3/4 (A - M), which would be 75%.
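
A sketch of that formula as read from the description above (a reading of the comment, not the actual repository code):

```cpp
#include <algorithm>
#include <cstdint>
#include <numeric>
#include <vector>

// ver 3.0 as described: Next = avg(past 17 difficulties) * 240 / (A + (A - M) / 4),
// with A = average and M = median of the past 16 block solve times.
uint64_t next_difficulty_v3(const std::vector<uint64_t>& difficulties, // past 17
                            std::vector<int64_t> solve_times,          // past 16
                            int64_t target = 240)
{
    long double A = std::accumulate(solve_times.begin(), solve_times.end(), 0.0L)
                    / solve_times.size();

    std::sort(solve_times.begin(), solve_times.end());
    long double M = (solve_times[7] + solve_times[8]) / 2.0L;  // median of 16 values

    long double adjusted = A + (A - M) / 4.0L;  // the 25% adjustment relative to the median
    if (adjusted < 1) adjusted = 1;

    long double avg_diff = std::accumulate(difficulties.begin(), difficulties.end(), 0.0L)
                           / difficulties.size();
    return static_cast<uint64_t>(avg_diff * target / adjusted);
}
```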

billaue2 commented Jun 15, 2017

> I think bitcoin says the current timestamp is limited to +/- 6x (target time) when subtracted from the previous timestamp, but I wonder if it would be better to let the maximum forward time be (6+8.5)x from the median timestamp of the past 16 timestamps and let (8.5-6)x from the median be the minimum. The 8.5 is based on N=17. The median of 16 is the avg of the 8th and 9th positions. So the most recent timestamp is expected to be 8.5x after the median of the previous 16 if hashrate has not changed.

If I understood this correctly, the max forward time would be 14.5x the target block time (14.5 x 240) from the median timestamp of the 17 recent blocks (and the min is 2.5x block time from the median), right?

Interesting.

sumoshi commented Jun 15, 2017

> I think bitcoin says the current timestamp is limited to +/- 6x (target time) when subtracted from the previous timestamp, but I wonder if it would be better to let the maximum forward time be (6+8.5)x from the median timestamp of the past 16 timestamps and let (8.5-6)x from the median be the minimum. The 8.5 is based on N=17. The median of 16 is the avg of the 8th and 9th positions. So the most recent timestamp is expected to be 8.5x after the median of the previous 16 if hashrate has not changed.
>
> If I understood this correctly, the max forward time would be 14.5x the target block time (14.5 x 240) from the median timestamp of the 17 recent blocks (and the min is 2.5x block time from the median), right?
>
> Interesting.

I'm not sure, Bill. The block time limit is calculated from the current time (i.e. TIME(NULL) in the code). I don't think the median is a good anchor because it and the current time can be vastly different if there are some long-to-mine blocks. But I'm not sure if you understood @zawy12's idea correctly.

billaue2 commented Jun 15, 2017

> So my idea is to cut the 6 highest timestamps from the selected N from the average calculation. The result is the algo would lag 6 blocks in flash-hashrate attacks, but you may reduce the difference by making some adjustment toward the median of the first 17 blocks, which won't be affected by timewarp.

So basically it would be back to the current CryptoNote algo with much shorter selected time ranges?

sumoprojects merged commit 200ea1b into sumoprojects:master on Jun 18, 2017

zawy12 commented Jun 18, 2017

Ignoring blocks corrects accidental timestamp errors, but it helps intentional timestamp attackers get 28% more blocks. If they have 28% of network hashrate, they just take the 6 high and 6 low timestamps without increasing difficulty. Difficulty will be 28% too low (or rather, it should be 100/(100-28) = 39% higher) because they are invisible. If they have 50% of network hashrate, they get those 12 blocks; then the difficulty only sees 22% of their 50%. So they get 0.50 of 43 = 21.5 blocks, but the difficulty still needs to be 100/72 = 39% higher.

Despite my previous comments, I think that as long as they have <50%, there is no real timewarp attack danger for rolling averages, if cuts are not used. As I mentioned in a previous post, a bitcoin-like timewarp attack is not possible for rolling averages if negative solve times are allowed. For example, if they have 50% of hashrate and are assigning +1440 in an attempt to lower diff, then honest miners behind them will assign -1440, which erases their +1440 lie. Actually, honest miners will assign about -1440+480, which will erase the lie and give the correct average for both the lie and the truth.

Cutting blocks creates timestamp attacks where there was no possibility of a timestamp attack. If they have a lot more than 50%, then there is nothing you can do except a hard fork, because they control the timestamp.

You can stop them by advertising everywhere "we will do a hard fork and you will lose your coins if too many timestamp errors cause coins to be released too fast, or if people with >2x hashrate keep turning on and off in a way that prevents them from suffering the correct difficulty." Threatening a hard fork might be the solution for both.

haruto-tanno commented Jun 18, 2017

I'll put the timelines + calculations into a spreadsheet to get a better view of what happens during a timewarp attack, and post it here for further discussion. But here are things to consider:

  1. In CryptoNote, attackers cannot set timestamps lower than the median of the N most recent block timestamps; in our new config N=12, so they cannot add any block with a low timestamp that stays invisible to the difficulty calculation.

  2. Now they can only add blocks with the max future timestamp, and if they successfully add 6 uninterrupted blocks they can get their timewarp to work fully (resulting in a 16.7% lower diff); the probability of that with 50% of total hashpower is (0.5)^6, or about 1.56%.

Edit: After review, I think I missed something; too late now, sorry.

zawy12 commented Jun 18, 2017

By invisible for the low end I mean it is one of the 6 you have cut for having a fast solve time. With a limit of -6x240 they are not likely to get near the median, which is around -15x240.

sumoshi commented Jun 19, 2017

Still working on my spreadsheet; initial simulations confirm that our algo doesn't dismiss the timewarp effect but only delays it. Coming to a solution soon.

@haruto-tanno Did you see the same?

[image: timewarp_attack2 (spreadsheet simulation of the timewarp attack)]

Note: Pink cells are timewarp, green cells are honest blocks

File sent to your email; please confirm the bug.

sumoshi commented Jun 19, 2017

> By invisible for the low end I mean it is one of the 6 you have cut for having a fast solve time. With a limit of -6x240 they are not likely to get near the median, which is around -15x240.

You are right, too much delay; we'll probably have to find more responsive parameters.

haruto-tanno commented Jun 19, 2017

Confirmed the bug, @sumoshi. I know what causes it; we are still haunted by the old CryptoNote algo in some ways 😞

zawy12 commented Jun 19, 2017

If they come online with 3x hashrate (75% of the new total) and always assign the +1440 timestamp, the difficulty will keep dropping. At 50%, the difficulty will be correct, but anything above 50% will keep it dropping. But if you allow +1200 and -4800 timestamps (+5x and -20x), then they will need over 4x hashrate to make the difficulty keep dropping. If they assign -20x all the time, they can make the difficulty keep rising to, I believe, 20x their hashrate, if they want to waste that much computing power to slow your coin release (they have to stick around for as much as 20xN). My calculation indicates that 5x instead of 6x with the -20x multiplier means blocks will be issued 4% too fast (5x) instead of 1.7% too fast (6x), due to the lack of symmetry. Sometimes there should be a >5x and a <-5x, but with the >5x block, the <-5x takes some advantage and makes the difficulty a little bit too low.

I'll work on an old idea I had for Zcash to make the difficulty more responsive but smoother. Basically you use N=60 or something, but keep checking the most recent N=12, and if it changes so much that there was only about a 1% chance of it happening by chance (there were <=4 or >=20 blocks when 12 were expected), then you switch to N=12 and keep N=12 for 12 blocks to let any outliers fall out of the average, then let N slowly rise back to N=60. This means that if 4 occur, then D drops from 100 to 33 in 1 block. If 20, then it rises to 100 x 20/12 = 166 in 1 block.
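[Editor's illustration] The second paragraph's trigger can be made concrete with a small sketch. This is only an illustration of the idea as described: the 4/20 thresholds, the N=12 hold, and the N=60 ceiling come from the comment above, while the 240-second target and all names are assumptions, not code from this PR.

# Minimal sketch of the "drop from N=60 to N=12 on an improbable block count" idea above.
TARGET = 240  # assumed target block interval in seconds

def choose_window(solve_times, n_current=60, hold=0):
    """Count blocks found in the last 12*TARGET seconds; if <=4 or >=20, switch to N=12."""
    elapsed, found = 0, 0
    for st in reversed(solve_times):        # most recent solve time first
        elapsed += max(st, 1)               # ignore negative solve times for this rough count
        if elapsed > 12 * TARGET:
            break
        found += 1
    if found <= 4 or found >= 20:           # ~1% likely by chance at a steady hashrate
        return 12, 12                       # use N=12 and hold it for the next 12 blocks
    if hold > 0:
        return n_current, hold - 1          # still holding the shortened window
    return min(n_current + 1, 60), 0        # otherwise let N drift back up toward 60

Each new block, a caller would pass the recent solve times and the previous (N, hold) pair, then average the last N difficulties and solve times as usual.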

@haruto-tanno

Contributor

haruto-tanno commented Jun 19, 2017

@sumoshi The latest version on testnet has fixed the flaw. It's a rather complicated issue and easy to miss. In brief, the old algo didn't account for timewarp blocks shifting the total time span up in the calculation, so that once they became the median they could do their job. The new algo adjusts for that cheat. Here is a screenshot from the debug log after I simulated a timewarp attack to show the difference between the two algos: with the old algo, the timewarp caused a 1140-second increase in total timespan compared to the new one (spreadsheet sent to you).

timewarp_attack_log

@haruto-tanno

Contributor

haruto-tanno commented Jun 19, 2017

@zawy12 I think the median of the first 13 blocks can act as an adjustment, since it sees changes in solve times much earlier than the average over the next 30 blocks does. Would love to try it.
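[Editor's illustration] A tiny sketch of how such a median-based early signal could look, purely for illustration: the 13-block window is from the comment above, while the 240-second target, the 0.693 rescaling, and all names are assumptions rather than the PR's code.

# Illustrative only: use the median of the last 13 solve times as an early hashrate signal.
import statistics

TARGET = 240  # assumed target block interval in seconds

def early_signal(solve_times, target=TARGET):
    """Return a rough hashrate factor; >1 means blocks are coming faster than targeted."""
    med = statistics.median(solve_times[-13:])
    est_mean = med / 0.693          # median of exponential solve times ~= 0.693 * mean
    return target / max(est_mean, 1)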

@zawy12

zawy12 commented Jun 19, 2017

Pseudocode for switching to a lower averaging period when an unlikely change in hashrate is detected. The post after this one shows how to make it more dynamic, choosing any N as needed. This one includes protection against timestamp errors that the other one does not (yet).

# Perl pseudocode. 
# Must allow negative solve times and there can't be any cutting of blocks.

# 6x in the following worked better than 5x because it allows difficulty to drop 
# faster after a hash attack. It can't be set high because timestamp manipulation
# could drop diff low. Always a trade-off between hash and timestamp attacks.

if ( this_timestamp - last_timestamp > 6*TargetInterval )  {
    this_timestamp = 6*TargetInterval + last_timestamp; 
}
if ( this_timestamp - last_timestamp < -6*TargetInterval ) {
    this_timestamp = -6*TargetInterval + last_timestamp; 
}
# next block prevents an artificially low timestamp from affecting it.
if ( current minus last timestamp is not negative AND N >= 19 )  { 
    # if we just saw 19 blocks when we expected 12, go to N=19
    # will trigger on average once per 50 blocks by accident
    if  ( average(last 19 solvetimes)  <  TargetInterval/1.66 )  {   N=19;  wait=N;  i=0; }
}
# next block prevents the 8th solvetime from being artificially large
if ( none of past 7 solvetimes were negative AND N >= 6 )  { 
    # if we just saw 6 blocks when we expected 12, go to N=6. 
    if  ( average(last 6 solvetimes)  >  TargetInterval/0.50 )  {   N=6;  wait=N;  i=0; }
}
# If we saw 5 blocks when we expected 1, go to N=5. This needs to be last. 
# Will trigger about once per 250 blocks by accident. Detects >2x hash rate quickly.
# Since it is one-sided (for a rise but not a fall in hashrate) it may slow coin release a little.
if ( none of past 5 solvetimes are negative )  { 
    if  ( sum(last 5 solvetimes) / TargetInterval < 1 )  {   N=5;  wait=N;  i=0; }
}
# give outliers a chance to get out of the new averaging range assigned above before letting 
# N increase, but it did not seem to have a large effect. Debating it. 
if ( wait > 0 )  { wait = wait - 1; }
else { N = N + 1; }
if ( N > 14 ) { attack = 'no'; }

Next_D = avg(past N Ds) * TargetInterval / avg(past N solvetimes);

# Allowing more than a 1 +/- 2.5/N change to diff per block seems unnecessary, and 
# limiting it to this provides protection against timestamp errors and statistical accidents 
# that can make it rise or fall a lot. Limiting to a 1.33x increase per block can help prevent 
# a 10x attack from jumping to a diff that is 40x higher than you want for 2 or 3 blocks 
# when they suddenly leave. 1.33x will still result in 20x for 2 or 3 blocks, but this is 
# unavoidable if you want good protection when they first attack.
# The 1440 limit on forward timestamps, which protects against timestamp attack/error,
# will cause a post-10x-hash-attack to remain at high diff longer.

if ( Next_D > 1.33 * avg(past N Ds) ) { Next_D = 1.33 * avg(past N Ds); }
if ( Next_D < 0.50 * avg(past N Ds) ) { Next_D = 0.50 * avg(past N Ds); }
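[Editor's illustration] For anyone who wants to experiment outside a spreadsheet, here is a rough, runnable Python transcription of the pseudocode above. It is only a sketch: the 240-second target, the function and state names, and the guards against short lists are assumptions; the 6x clamp, the three triggers, and the 1.33x/0.5x per-block limits follow the pseudocode.

# Rough Python transcription of the pseudocode above (illustrative, not the PR's C++ code).
TARGET = 240        # assumed target block interval in seconds
N_MAX = 36          # let N drift back up toward a long averaging window

def avg(xs):
    return sum(xs) / len(xs)

def next_difficulty(timestamps, difficulties, state):
    """timestamps/difficulties are most-recent-last; state carries N and wait across blocks."""
    N, wait = state.get("N", 30), state.get("wait", 0)

    # solve times, clamped to +/- 6x the target as in the first two conditionals
    solve = [max(-6 * TARGET, min(6 * TARGET, t2 - t1))
             for t1, t2 in zip(timestamps[:-1], timestamps[1:])]

    # trigger 1: ~19 blocks where 12 were expected -> moderate hashrate rise, use N=19
    if len(solve) >= 19 and solve[-1] >= 0 and N >= 19 and avg(solve[-19:]) < TARGET / 1.66:
        N, wait = 19, 19
    # trigger 2: ~6 blocks where 12 were expected -> hashrate fell, use N=6
    if len(solve) >= 7 and all(s >= 0 for s in solve[-7:]) and N >= 6 \
            and avg(solve[-6:]) > TARGET / 0.50:
        N, wait = 6, 6
    # trigger 3: 5 blocks where ~1 was expected -> sudden large hashrate rise, use N=5
    if len(solve) >= 5 and all(s >= 0 for s in solve[-5:]) and sum(solve[-5:]) / TARGET < 1:
        N, wait = 5, 5

    # let N relax back toward the long window once the event has passed
    if wait > 0:
        wait -= 1
    elif N < N_MAX:
        N += 1

    N = min(N, len(solve), len(difficulties))
    mean_d = avg(difficulties[-N:])
    next_d = mean_d * TARGET / max(avg(solve[-N:]), 1)   # guard against non-positive averages

    # per-block change limits, as in the last two lines of the pseudocode
    next_d = min(next_d, 1.33 * mean_d)
    next_d = max(next_d, 0.50 * mean_d)

    state["N"], state["wait"] = N, wait
    return next_d

A caller would feed it the last few dozen timestamps and difficulties each block and keep the state dict between calls.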

@zawy12

zawy12 commented Jun 20, 2017

@haruto-tanno Using the median at N=13 is a good idea, but I just realized the median of the (exponentially distributed) solve times is only about 70% of the mean. Instead of my 1.12x factor, we should have been using p=0.75 and q=0.25/0.7=0.357. This does not make much difference, but if using a 100% median in the "trigger", it makes a big difference.

I hope to get the above code working in my spreadsheet today. If it works, I want to make a continuous version from N=5 to N=36 instead of the 3 conditions. It will need to go to a higher N if negative solve times are encountered. If there is no negative time, then a forward-stamp can only be in the most recent block. So maybe throw out the most recent block and use N=12 of the past N=13, but only if none of blocks 1 to 13 were negative. By using this, I hope to use the average instead of the median.
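[Editor's note] As a quick numerical check of that ~70% figure (not part of the thread's code): for a steady hashrate, solve times are exponentially distributed, and the median of an exponential distribution is ln 2 ≈ 0.693 of its mean.

# Quick check that the median of exponential solve times is ~70% (ln 2) of the mean.
import math, random

random.seed(1)
mean = 240.0                                   # assumed target interval in seconds
samples = sorted(random.expovariate(1 / mean) for _ in range(100_000))
median = samples[len(samples) // 2]
print(median / mean, math.log(2))              # both close to 0.693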

@haruto-tanno

Contributor

haruto-tanno commented Jun 20, 2017

Yep, it looks much better than the adjustment

@zawy12

zawy12 commented Jun 20, 2017

My theory and testing indicate it should be pretty much the same. The 1/1.1 adjustment is the same as p=0.825 and q=0.275, so it should be unnoticeably better when timestamps are not manipulated, because it is more average than median. For large N like N=30, I think using only the average is better.

@sumoshi

Contributor

sumoshi commented Jun 20, 2017

M can cool down the hashrate (especially when it becomes too high) during attacks; that's why I insisted on putting it into the algo. It works for my coin, but a pure average may work better for most others.

@zawy12

zawy12 commented Jun 20, 2017

I don't understand. I view median as only useful for bad timestamps. During high hash attacks with no timestamp manipulation, it only slows down the increase in difficulty, and slows down the return to normal difficulty when the hash-attack ends.

@zawy12

zawy12 commented Jun 20, 2017

I have what I think could be an excellent algorithm for all coins that would be especially beneficial to small coins, but it's a little difficult to understand. I would like sumokoin to eventually employ it so that monero and cryptonote can believe in it. They are currently showing an interest here and here but they will lose interest if we are not able to demonstrate something better on a live coin.

I want to demonstrate it in a spreadsheet today or tomorrow that it is noticeably superior, at least when timestamps are accurate. Then I want to harden it against timestamp errors as in the previous pseudocode. The long post at cryptonote above describing "Zawy v1.0" is my recommendation until this continuous version is finished.

In general, I want it to check for sudden hashrate increases or decreases and switch to the correct N if it detects an unlikely event. I want it to do it continuously, minimizing any constants that people select. Actually, people should not select any constants in cryptocoins. For example, total coin and coin release rate should not be set by devs, but by the marketplace. Likewise, the following will let the "market" of "network hashrate" "bid" for the correct N averaging time.

# This demonstrates the idea. I tested it last year. 
# This is to be substituted in place of the 3 conditionals in the previous pseudocode.
# I'll employ it in a spreadsheet to prove it's better than simple averaging.  
# It will not work well until it is hardened against timestamp errors.  
# The previous pseudocode is hardened against timestamp errors and 
# shows generally how the following needs to be changed.

Nstart = 5    # minimal number of blocks we will check for a statistical event
Nend = 36     # max number of blocks we will check for a statistical event

# Go to N averaging = Nstart=5 only ~1% of the time by chance alone
STDEVstart = 4;
# Go to N averaging = Nend=36 32% of the time by chance alone
STDEVend = 1;

# Now create a function using the above that will determine the amount of 
# statistical significance we require before we switch to an N averaging that 
# is between Nstart and Nend.
# I'll substitute the above assigned values for clarity.
function STDev(NA) = 4 - (4-1)/(36-5)*(NA-5)

N = current number of blocks used in averaging, determined in previous code

# Test all past-block-ranges for a statistical event, from Nstart to N
for NA = Nstart to N  {
     NE = N_Expected_blocks_in_NAs_time = sum(NA previous solvetimes) / TargetInterval
     S = STDev(NA)
     # NH = an NA above this should not have occurred in NE time within the bound of STDev 
     # NL = an NA below this should not have occurred in NE time within the bound of STDev 
     NH = NE + S*SQRT(NE)
     NL = NE - S*SQRT(NE) + 1   # the +1 was needed in testing to make it symmetrical.
     if ( NA > NH or NA < NL )  { 
        # throw out the earliest blocks in case they were before the attack or outliers. The +2
        # prevents throwing out 2 out of 5. +3 might be better.
        N = int(2*NA/3 + 2) 
        exit for loop, last NA;
     } 
}  

I may edit this post all day as I find errors or improvements.
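[Editor's illustration] To make that loop easier to play with, here is a compact, runnable Python version of the same trigger. It is only a sketch: Nstart/Nend, the 4-to-1 sigma ramp, the +1 on the lower bound, and the 2*NA/3+2 truncation come from the pseudocode, while the 240-second target and the names are assumptions.

# Illustrative Python version of the continuous N-selection idea above (not the PR's code).
import math

NSTART, NEND = 5, 36          # smallest and largest look-back windows to test
SDEV_START, SDEV_END = 4, 1   # required significance at NSTART and NEND
TARGET = 240                  # assumed target block interval in seconds

def stdev_required(na):
    """Linear ramp: demand ~4 sigma at N=5, only ~1 sigma at N=36."""
    return SDEV_START - (SDEV_START - SDEV_END) / (NEND - NSTART) * (na - NSTART)

def pick_n(solve_times, n_current):
    """Scan look-back windows from NSTART up to n_current; shrink N on an unlikely event."""
    for na in range(NSTART, min(n_current, len(solve_times)) + 1):
        expected = sum(solve_times[-na:]) / TARGET      # blocks expected in that much time
        if expected <= 0:
            continue                                    # skip windows dominated by negative times
        s = stdev_required(na)
        high = expected + s * math.sqrt(expected)
        low = expected - s * math.sqrt(expected) + 1
        if na > high or na < low:                       # seeing na blocks was statistically unlikely
            # drop the earliest blocks of the window in case they predate the attack
            return int(2 * na / 3 + 2)
    return n_current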

zawy12 referenced this pull request in seredat/karbowanec Jun 20, 2017

Difficulty algo
Difficulty algo cleanup, based on suggestions by Eugene
@zawy12

zawy12 commented Jun 21, 2017

Here are the results for 10x hashrate attacks. "Blue" areas indicate blocks that were obtained at a low difficulty. Black areas that are not on top of "blue" are blocks obtained at a costly difficulty. Everything is correct when black is on top of blue.

The "Zawy v2" algorithm behaves almost exactly like the theory-based pseudocode, which shows it is based on good theory. N=30 is not shown because the thin high-hashrate spikes are only 15 blocks wide, and N=30 does not give good results on 15-block attacks. Before the attacks, you can see a gentle rise to 2x hashrate and a drop down to 1/2 hashrate. Default hashrate and difficulty = 1, so the scale is accurate.

edit:

  • with zawy v1 N=18 and these hash attacks, 10% of blocks were gained at < 1/2 the appropriate difficulty (attacker) and 10% suffered > 2x the appropriate difficulty (constant miners).
  • with zawy v2 it was 4% and 7%.
  • since the v2 dynamic averaging period went as low as 5 and there were 6 attacks in 970 blocks, 6x5/970 = 3% makes sense, as does 6x18/970 = 11% for v1. The 7% is a little high because there is a 1440-second limit on the timestamp that prevents long solve times from affecting it (it's there to prevent timestamp manipulation from forcing difficulty low).

difficulty_zawy_v1
difficulty_zawy_v2

@sumoshi

Contributor

sumoshi commented Jun 21, 2017

Hi @zawy12. I've sent you an email. Please read it. Thanks ;)

@zawy12

zawy12 commented Jun 21, 2017

Thanks for the email. I knew you probably needed a rapid fix and a new algorithm needs a lot of time to verify. I'll continue working on this new one to get it hardened against timestamp errors and implemented in Perl instead of spreadsheet.
