Security Analysis of Difficulty Adjustment Algorithm #998

Open
defuse opened this Issue Jun 7, 2016 · 13 comments

defuse commented Jun 7, 2016

We selected the difficulty adjustment algorithm in #147. There's a ticket about the timewarp attack #696 which is currently being fixed by switching difficulty adjustment algorithms. All of the difficulty adjustment algorithms in #147 seem ad-hoc to me. If we have bandwidth before launch, or can find a security auditor competent to carry out an analysis, we should get a better understanding of the one we picked.

defuse commented Jun 7, 2016

Here's the best "timewarp-like" attack on DigiShield v3 with NUM_ALGOS=1 I can think of. I can't say if it's of any practical use, since I don't really understand how the original timewarp attack could be exploited in the first place.

DigiShield v3 Summary

The following ignores special cases that happen when the blockchain is just starting and when NUM_ALGOS > 1. I may have also made errors in interpreting the code linked to above.

Constants

nAveragingInterval=10
nMedianTimespan=11
nAveragingTargetTimespan=nAveragingInterval * <desired block interval>
nMinActualTimespanV3 = nAveragingTargetTimespan * (100 - nMaxAdjustUpV3) / 100;
nMaxActualTimespanV3 = nAveragingTargetTimespan * (100 + nMaxAdjustDownV3) / 100;

Algorithm

  1. Get the last block B.
  2. Go backwards nAveragingInterval blocks from B to get block A.
  3. Get the nMedianTimespan blocks prior to and including B: B_1, ..., B_nMedianTimespan, where B = B_nMedianTimespan.
  4. Sort B_1, ..., B_nMedianTimespan by block time (the wall-clock time each block was claimed to be mined), then set B' to the middle element, B_(nMedianTimespan/2).
  5. Get the nMedianTimespan blocks prior to and including A: A_1, ..., A_nMedianTimespan, where A = A_nMedianTimespan.
  6. Sort A_1, ..., A_nMedianTimespan by block time (the wall-clock time each block was claimed to be mined), then set A' to the middle element, A_(nMedianTimespan/2).
  7. Compute nActualTimespan = B'.time - A'.time.
  8. Compute nActualTimespan = nAveragingTargetTimespan + (nActualTimespan - nAveragingTargetTimespan)/6.
  9. Bound nActualTimespan between nMinActualTimespanV3 and nMaxActualTimespanV3 by saturation.
  10. Scale the target by multiplying the last block's target by nActualTimespan/nAveragingTargetTimespan (equivalently, divide the difficulty by that factor).
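
A minimal Python sketch of the above computation, for concreteness. This models only my reading of the algorithm: the real code works on the compact nBits target with integer arithmetic, and the nMaxAdjustUpV3/nMaxAdjustDownV3 percentages below are assumptions (16 is consistent with the 11.6 upper bound used in the example further down).

```python
# Toy model of the DigiShield v3 retarget summarized above (NUM_ALGOS=1,
# no chain-startup special cases).  The returned factor is applied to the
# target, so a factor > 1 means the difficulty goes down.

nAveragingInterval = 10
nMedianTimespan = 11
BLOCK_INTERVAL = 1                      # work in units of the desired block interval
nAveragingTargetTimespan = nAveragingInterval * BLOCK_INTERVAL
nMaxAdjustUpV3 = 16                     # assumed percentage
nMaxAdjustDownV3 = 16                   # assumed percentage
nMinActualTimespanV3 = nAveragingTargetTimespan * (100 - nMaxAdjustUpV3) / 100
nMaxActualTimespanV3 = nAveragingTargetTimespan * (100 + nMaxAdjustDownV3) / 100

def median_time(block_times):
    """Middle element of the last nMedianTimespan claimed block times (steps 3-6)."""
    window = sorted(block_times[-nMedianTimespan:])
    return window[len(window) // 2]

def retarget_factor(block_times):
    """block_times: claimed times, oldest first, ending at the last block B."""
    b_prime = median_time(block_times)                         # B'
    a_prime = median_time(block_times[:-nAveragingInterval])   # A' (window ending at A)
    nActualTimespan = b_prime - a_prime                        # step 7
    nActualTimespan = nAveragingTargetTimespan + \
        (nActualTimespan - nAveragingTargetTimespan) / 6       # step 8 (dampening)
    nActualTimespan = max(nMinActualTimespanV3,
                          min(nMaxActualTimespanV3, nActualTimespan))  # step 9
    return nActualTimespan / nAveragingTargetTimespan          # step 10, target scale
```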

A Difficulty-Lowering Attack

Let's work in units where the block target time is 1. An attacking miner has ~25% of the network's hashrate and, as an example, the past 21 block times look like [..., 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120], all having about the same difficulty D. The algorithm will find B.time = 120 and A.time = 110, and hence A'.time = 105 and B'.time = 115. The following steps compute nActualTimespan = 10 = nAveragingTargetTimespan, so the next block's difficulty is D*1, as we expect.

Attack idea: Taking the median of a list of values that are supposed to be monotonically increasing doesn't really make sense. If they are monotonically increasing, you always get the middle value. Forged timestamps can "shift everything over" to make the median higher (or lower) than it should be. The amount of shift you can add is proportional to the number of input elements you can "replace" before (or after) what would nominally be the median element.

Here's how an attack (goal: lower the difficulty) could work if it started from the block with time=110 onward for the next 11 blocks. The 25%-hashrate attacker gets to choose the timestamps of roughly 1/4 of them, i.e. about 3. Suppose they do this (the forged timestamps are 120.1, 120.2, and 120.3; which blocks the attacker wins is random): [..., 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 120.1, 113, 120.2, 115, 116, 117, 118, 120.3, 120]. Re-running the difficulty adjustment gives A.time = 110 and A'.time = 105 (since we haven't changed those blocks), and B.time = 120. But now the list over which the later median is taken is (in sorted order) [110, 111, 113, 115, 116, 117, 118, 120, 120.1, 120.2, 120.3], and so B'.time = 117. Step 7 computes nActualTimespan = 117 - 105 = 12 (it was 10 nominally). In step 8 this is adjusted to 10.3333, which doesn't hit the upper bound of nMaxActualTimespanV3 (11.6). In step 10 the target is scaled by a factor of 10.3333/10.0 = 1.0333, i.e. the difficulty is lowered by ~3%.

(Notice the value of 120.3 doesn't matter, as long as it's higher than 117. The only forged timestamps that matter are the ones that replaced valid timestamps which would have ordinarily fallen below the middle value.)
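
For what it's worth, plugging the two timestamp lists into a self-contained version of the sketch above reproduces these numbers (the 8.4 lower bound assumes nMaxAdjustUpV3 = 16, as in the sketch; 11.6 is the upper bound quoted above):

```python
def mid(ts):
    """Middle element of the last 11 claimed block times."""
    return sorted(ts[-11:])[5]

honest = list(range(100, 121))                               # 100, 101, ..., 120
forged = list(range(100, 121))
forged[12], forged[14], forged[19] = 120.1, 120.2, 120.3     # replaces 112, 114, 119

for times in (honest, forged):
    span = mid(times) - mid(times[:-10])          # B'.time - A'.time (step 7)
    damped = 10 + (span - 10) / 6                 # step 8, target timespan = 10
    damped = max(8.4, min(11.6, damped))          # step 9 saturation
    print(span, round(damped, 4), round(damped / 10, 4))
# honest: 10 10.0     1.0     -> difficulty unchanged
# forged: 12 10.3333  1.0333  -> target scaled up ~3.3%, difficulty down ~3%
```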

In this particular case, assuming the next blocks all have valid timestamps greater than the largest forged timestamp, the attack's full effect lasts until the first forged timestamp falls out of the last nMedianTimespan blocks (the B_i's) into the A_i's, where it begins to have the opposite effect (the difficulty is adjusted to be harder than it should be).

Other miners can use the very same attack in the opposite direction to counteract the effect of the first attack (i.e. forge early timestamps to push the middle element back where it should be!).

In general, roughly, if the attacker can forge N of every nMedianTimespan/2 timestamps, then they can shift which block gets selected as the median by N positions on average, artificially lowering (or raising) the difficulty. I have no idea if this is actually useful to an attacker.

nathan-at-least commented Jun 20, 2016

Note that #764 has security analysis discussion which may be relevant here.

mcelrath commented Jun 20, 2016

Retargeting the difficulty can be done independently from retargeting the reward.

In particular: allow a block to write a coinbase that is proportional to the target difficulty over the "high water mark" of the highest difficulty ever seen. This will allow difficulty retargets (which keep blocks coming at a reasonable pace) while not allowing an attacker to slew the hashrate in order to win rewards.

It is the ability of a multipool to remove its hashrate, thereby increasing the reward-to-difficulty ratio, that lets multipools earn more than their share by making the hashrate oscillate.

The solution to this is simply to ensure that d(reward)/d(time) is monotonically decreasing in time. By using a high water mark like this, the details of the retargeting algorithm become pretty much irrelevant.

It's left as an exercise to the reader to figure out what to do with coins that didn't get allocated in epochs where the hashrate decreases. Just don't give them to the entities that caused the hashrate decrease.
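
A rough sketch of the high-water-mark rule in Python, just to make the proposal concrete (the function and parameter names are mine, and the treatment of the unallocated remainder is deliberately left out, as above):

```python
def allowed_coinbase(target_difficulty, high_water_mark, base_reward):
    """Maximum coinbase a block may claim under a high-water-mark rule.

    high_water_mark is the highest difficulty ever seen on the chain; it only
    ratchets upward.  When a retarget drops the difficulty below that mark,
    the permitted reward drops proportionally, so pulling hashrate off the
    chain (and thereby lowering the difficulty) no longer yields coins at a
    discount.
    """
    hwm = max(high_water_mark, target_difficulty)
    return base_reward * (target_difficulty / hwm)
```

Under such a rule the per-block reward can only fall below the schedule while the difficulty sits under its historical maximum, which is one way to get the monotonic d(reward)/d(time) property described above.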

Cryptoslave commented Jun 20, 2016

Just a few comments...

I agree with @mcelrath that a multipool withdrawing its hashrate is one of the ways it increases its earnings: the faithful miners are left mining long blocks alone, and once one is found the retarget quickly reduces the difficulty to keep the target spacing, and just as the quick blocks start arriving the multipools hop back onto the chain.

When tuning GroundRod's RetargetPID I found that the way to make this less lucrative was to increase the noise, i.e. the block-to-block fluctuation in difficulty, via the strength and speed of retargeting. Noise is generated automatically by mining, since the Poisson nature of block arrival produces the occasional very short block, much shorter than the target spacing. Of course, those very quick blocks may not happen with the current Equihash solve time of ~55 seconds.

If the difficulty retargeting reacts to a hashrate spike in as few blocks as possible, the reward for multipools is smaller. With each additional quick block in a row, the probability that the run is mere noise decreases, so such runs stand apart from noise. The difficulty algorithm should identify those non-random occurrences quickly and raise the difficulty exponentially until it is high enough to stop the run of quick blocks. The worst thing a retarget algorithm can do is react very slowly, like KimotoGravityWell, which regularly gives 150+ easy blocks to mine after a standstill of the blockchain caused by multipool hashrate withdrawal; that is pure profit for a multipool hopping back on when the easy blocks start. The best retargeting I could achieve by tuning GroundRod's RetargetPID was when the easy blocks were almost lost in the noise, with the difficulty rising to its new level in about 8 blocks (after an 800% attack). For less blatant attacks, the noise covers most of the potential reward.
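
(For illustration only, and emphatically not GroundRod's actual code: a generic per-block PID-style controller of the kind being described, with made-up gains, that pushes the difficulty up within a few blocks when blocks arrive faster than the target spacing.)

```python
class ToyPidRetarget:
    """Toy PID-style per-block retarget; all constants are illustrative."""

    def __init__(self, target_spacing=150.0, kp=0.3, ki=0.05, kd=0.1):
        self.target = target_spacing
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def next_difficulty(self, difficulty, last_solve_time):
        error = (self.target - last_solve_time) / self.target   # > 0: blocks too fast
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjust = self.kp * error + self.ki * self.integral + self.kd * derivative
        adjust = max(-0.3, min(0.3, adjust))                    # clamp one-block swing
        return difficulty * (1.0 + adjust)
```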

About having a decreasing difficulty (not reward) over time:
GroundRod's RetargetPID has two modes of operation. The second mode uses a constantly decreasing difficulty that follows exponentially decreasing curves and is updated every two seconds whenever getblocktemplate requests a new header. It was really excellent for keeping the target block spacing at 180 seconds in Anoncoin; it was remarkably regular. But we decided to deactivate it for production because of the problem of reaching time consensus between peers. Without time consensus you can request an easier header for a time in the future, find a solution, and broadcast it just before the others start hashing on the header corresponding to that time. Lots of attacks can be carried out by manipulating the time, and PoW's solution to the Byzantine generals problem is then circumvented.

The first mode of RetargetPID is the classical one: a single nBits value determined once at each block, the same for everybody. It is much harder to get a good retarget and good target spacing this way, but at least it cannot be attacked by changing the time on miners. At some point I also tried a time-bound rule that grants an easier block according to some relation between the current time and the block's time, but again that system turned out to be readily attackable by advancing the clock on the fly, etc. IMO the only secure way is an immutable, block-stamped nBits calculated once for each block found and required to win the next block. Even if this sometimes produces high-difficulty blocks that are hard to solve because a multipool changed the hashrate, it is the only way to have consensus on the next required difficulty.

Although your d(reward)/d(time) idea may not be equivalent, I would be very afraid of time exploits on such a setup too, due to the lack of time consensus and the Byzantine generals problem. Also, I think linking the reward to the difficulty is unwise, because streaks of lucky blocks can readily change the difficulty even at constant hashrate.

mcelrath commented Jun 21, 2016

@Cryptoslave you've identified an attacker model which may not hold true in the future. If you know how the attacker behaves, of course one can optimize an algorithm to work against him. But then, as soon as the attacker changes behaviour, your algorithm ceases to work...

For the sake of argument, let's consider multipools to be "profit maximizing miners" and not "attackers". According to Satoshi, his system was exactly designed to incentivize the former.

For "quick blocks" the best thing to do is retarget with every block. Variance due to the statistical nature of mining will be averaged out, but with jitter. A worse problem is that you can't expect that the new target can be communicated to all miners at the same time. So there will be "difficulty orphans" -- blocks who are otherwise fine, but didn't receive the difficulty adjustment. (depending on the setup, this is a normal orphan occuring on top of a difficulty adjustment) This is why Satoshi made blocks slow and difficulty adjustments slow. Since Zcash wants to target 2.5m blocks, they will have an orphan rate around 4-5%, and a corresponding selfish mining problem.

There is a "retarget winow" which is around 5s, and is the synchronicity time of the network -- the propagation time of a block, or retarget. The best way to deal with this, in my opinion, is not to have a single synchronous difficulty that everyone must agree on. Instead, allow a range of difficulties that are allowed, that encompasses the previous dificulty and the new difficulty (for a range of time ~5s) before disallowing the old difficulty (but still allowing a range, in case some other node decides it needs to retarget difficulty before I see the corresponding block!)

Cryptoslave commented Jun 21, 2016

@mcelrath I agree with you that there is a network propagation time, which can be small if blocks are slow or quite large if blocks are big (or if a high-latency network such as Tor or I2P is used). With the slow PoW of Zcash there is also the case where the PoW is much slower than propagation to every node. Also, no block is orphaned right away: the client receives each competing chain, automatically selects the first one as the active chain and works on it, but the best chain may change if other nodes find another block on the second chain, which then becomes the new active chain on receipt. But you know this already. I don't think the PoW model should be changed more than Bitcoin's, e.g. by allowing a range of difficulties or changing the reward over time; this opens a can of worms and could make Zcash less resilient. I am already worried enough about the slow PoW (#764). But let me check your braid examples, they look interesting: https://rawgit.com/mcelrath/braidcoin/master/Braid%2BExamples.html

defuse commented Jul 26, 2016

See #764 (comment) for an attack idea.

nathan-at-least commented Aug 1, 2016

@defuse Why does this have the agenda label on it? What decision needs to be made?

nathan-at-least commented Aug 1, 2016

I need to decide when this must be completed and by whom.

nathan-at-least self-assigned this Aug 1, 2016

nathan-at-least commented Aug 8, 2016

We're going to postpone this analysis until after launch. If any volunteers wish to post more analyses, we'd appreciate that, though.

nathan-at-least commented Aug 31, 2016

This ticket isn't closed, but we've already selected an algorithm to launch with. I've labeled it as a potential future audit task.

zawy12 commented Sep 8, 2016

The recent time warp attack on z9 looked pretty simple, apparently using ~2.5x the network's hashrate. They forward-timestamped a block by 3000 seconds, then mined faster than everyone else for about 30 minutes, getting about 30 blocks and losing a few to others. They timestamped the blocks they won all about 30 minutes behind the first one, within 1 second of each other. This forced the median calculation to conclude that it was taking more than the target time to get the first 17 blocks they timestamped together, lowering the difficulty. And they started on an already-low swing in the difficulty. They averaged 1 minute per block in real time, which I was logging, indicating roughly 150/60 = 2.5x the network hashrate. After 17 blocks passed, the forward-timestamped block dropped out of the window and the difficulty started to rise. They seemed to keep mining about another 15 blocks until it reached the pre-attack difficulty. Too many blocks were released for that hour and not subsequently corrected for; at least 70 extra blocks were released for the day. There were several attacks, as shown in the chart below, covering about 2 days. The top marks are forward-timestamped blocks. I am not sure why there were so many, but there was other manipulation at those points, with zero-solution-time blocks mixed in with the forward timestamps (they were not all together, so I did not plot them). It may be a different way of doing the same thing: getting a lot of blocks quickly without the difficulty rising and getting the coin release ahead of schedule, but I did not look into it. The bottom marks are blocks where more than one was timestamped at about the same second.
[chart: screenshot from 2016-09-08 18-31-25]

zawy12 commented Nov 11, 2016

Zcash 1.0.2 is following Poisson statistics, and I can find no evidence of a time warp attack. https://forum.z.cash/t/zcash-following-poisson-no-observable-problem/8061
