
Mining in Logarithmic Space - adaptations for variable difficulty and DAG #3

someone235 opened this issue May 26, 2021 · 6 comments



someone235 commented May 26, 2021

Mining in Logarithmic Space (MLS) is a work by Kiayias et al. that enables pruning headers by saving only a logarithmic number of super blocks. Super blocks are divided into levels: a level-1 super block is a block with hash < target/2, a level-2 super block is a block with hash < target/4, a level-3 super block is a block with hash < target/8, and so on.

We propose two changes to MLS:

  1. An adaptation for variable difficulty.
  2. An adaptation for DAGs.

MLS with variable difficulty

The idea is very simple: instead of considering a level-μ super block as a block with hash < target/2^μ, we define it as a block with hash < max_target/2^μ (max_target can be 2^256 − 1, or any smaller number defined by the protocol). The rest of the protocol works exactly the same, and the proof size will be log(accumulated_work) instead of log(chain_length) as in MLS.
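Under this rule the level of a block can be computed directly from its hash; a minimal sketch (illustrative Python; the function name and the convention that any valid block has level ≥ 0 are assumptions, not part of the proposal):

```python
def superblock_level(block_hash: int, max_target: int) -> int:
    """Largest mu such that block_hash < max_target / 2**mu.

    Any valid block (hash < max_target) has level >= 0; halving the
    threshold per level is what makes the proof size logarithmic.
    """
    mu = 0
    while block_hash < max_target >> (mu + 1):
        mu += 1
    return mu
```

Since max_target is a protocol constant, the level depends only on the hash and is unaffected by difficulty changes, which is exactly the point of the adaptation.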


MLS for DAGs

Instead of maintaining a chain for each level μ, we maintain a DAG ↑ μ for each μ. Each block has to point to all level tips in its past.
We make some minor changes to the notation proposed in the paper:

  • DAG[i] means the highest block on the GHOST chain of this DAG whose past contains more than i blocks
  • DAG[−i] means the highest block on the GHOST chain of this DAG whose future contains more than i blocks
  • DAG[i:] means the future of DAG[i]
  • DAG{b:} means the future of block b
  • DAG{:b} means the past of block b
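These operators can be sketched on a toy DAG; a minimal illustration in Python (the dict-of-parent-sets representation and all function names are assumptions, and the GHOST chain is taken as a given ordered list):

```python
def past(dag, b):
    """All blocks reachable from b via parent links, excluding b."""
    seen, stack = set(), list(dag[b])
    while stack:
        p = stack.pop()
        if p not in seen:
            seen.add(p)
            stack.extend(dag[p])
    return seen

def future(dag, b):
    """All blocks that contain b in their past."""
    return {x for x in dag if b in past(dag, x)}

def dag_index(dag, ghost_chain, i):
    """DAG[i] for i >= 0: highest GHOST-chain block with past larger than i.
    DAG[-i] (negative argument): highest GHOST-chain block with future larger
    than -i. ghost_chain is ordered genesis -> tip, so "highest" means the
    last qualifying block on the chain."""
    if i >= 0:
        ok = [b for b in ghost_chain if len(past(dag, b)) > i]
    else:
        ok = [b for b in ghost_chain if len(future(dag, b)) > -i]
    return ok[-1]
```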

Our Dissolve function will now look like this:

function Dissolve(m, k, DAG)
  DAG* = DAG[:−k]
  D = ∅
  l = 0
  if |DAG*| ≥ 2m then
    l = max{μ : |DAG* ↑ μ| ≥ 2m}
    D[l] = DAG* ↑ l
    for μ = l − 1 down to 0 do
      b = DAG* ↑ (μ+1) [−m]
      D[μ] ← DAG* ↑ μ [−2m:] ∪ DAG* ↑ μ {b:}
    end for
  else
    D[0] = DAG*
  end if
  χ = DAG[−k:]
  return (D, l, χ)
end function
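To make the control flow concrete, here is a chain-shaped sketch of Dissolve (illustrative Python only: blocks are (name, level) tuples ordered genesis to tip, list slicing stands in for the [−2m:] and {b:} operators, and a real DAG version would use reachability-based past/future instead):

```python
def upchain(chain, mu):
    """DAG ^ mu restricted to a chain: keep blocks of superblock level >= mu."""
    return [b for b in chain if b[1] >= mu]

def dissolve(m, k, chain):
    star = chain[:len(chain) - k]          # DAG* = DAG[:-k]
    D, l = {}, 0
    if len(star) >= 2 * m:
        # deepest level that still holds at least 2m blocks (256 hash bits
        # bound the number of possible levels)
        l = max(mu for mu in range(257) if len(upchain(star, mu)) >= 2 * m)
        D[l] = upchain(star, l)
        for mu in range(l - 1, -1, -1):
            b = upchain(star, mu + 1)[-m]          # pivot from the level above
            up = upchain(star, mu)
            tail = up[-2 * m:]                     # DAG* ^ mu [-2m:]
            after_b = up[up.index(b):]             # DAG* ^ mu {b:}
            # union of the two suffixes, preserving chain order
            D[mu] = [x for x in up if x in tail or x in after_b]
    else:
        D[0] = star
    chi = chain[len(chain) - k:]           # the k most recent, unstable blocks
    return D, l, chi
```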

Every other aspect of the protocol will look exactly the same.


Why GHOST and not GHOSTDAG?

Our first approach was to run GHOSTDAG on each DAG ↑ μ and to use the GHOSTDAG selected chain as an alternative to the block chain in MLS. This works well under constant hashrate, but it breaks when we apply "MLS with variable difficulty": because the super block level is defined by log(max_target/hash) and not log(target/hash), the number of super blocks on each level is not regulated by the DAA, so we can't use GHOSTDAG, which requires the network block rate as an argument.
Instead we chose to use the GHOST chain. The GHOST protocol doesn't require the block rate as an argument, and although it doesn't guarantee liveness, it does guarantee that the honest majority respects the given chain.


hashdag commented May 26, 2021

To be precise, when we refer here to the GHOST chain we mean an iterative procedure that traverses the DAG from past (genesis) to future (virtual) and at every step picks the child block with the maximum future size.
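A minimal sketch of that traversal (illustrative Python; the dict-of-parent-sets representation and the lexicographic tie-break are assumptions):

```python
def ghost_chain(dag, genesis):
    """Walk from genesis toward the virtual, greedily following the child
    with the largest future set. dag maps each block to its parent set."""
    children = {b: set() for b in dag}
    for b, parents in dag.items():
        for p in parents:
            children[p].add(b)

    def future_size(b):
        seen, stack = set(), list(children[b])
        while stack:
            x = stack.pop()
            if x not in seen:
                seen.add(x)
                stack.extend(children[x])
        return len(seen)

    chain, cur = [genesis], genesis
    while children[cur]:
        # tie-break by block id, purely for determinism in this sketch
        cur = max(children[cur], key=lambda c: (future_size(c), c))
        chain.append(cur)
    return chain
```

Note that only future sizes are compared at each step; no block-rate parameter is needed, which is what makes this usable with variable difficulty.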


This comment is also relevant here


someone235 commented Aug 9, 2021

One problem with the said protocol is that it should be relatively easy for an attacker to build a proof of blocks with very low (and fake) difficulty on top of the DAG's selected tip, and then the attacker's proof will be chosen by the verifier.
We can prevent such an attack as follows:

  1. If the verifier needs to choose between two proofs (P1, P2) where P1.PruningPoint is in the past of P2.PruningPoint, it will do as follows:
  • If P2.SelectedTip.Timestamp is smaller than P1.SelectedTip.Timestamp, choose P1.
  • If P2.SelectedTip.Timestamp − P1.SelectedTip.Timestamp is lower than some small number (let's say 10 minutes), choose P1.
  • Otherwise, choose P2.
  2. If P2.PruningPoint doesn't have P1.FinalityPoint in its past, trigger a finality conflict.
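The decision rule in step 1 can be sketched as follows (illustrative Python; the timestamps are passed in as plain integers, the 10-minute slack comes from the list above, and the function name is an assumption):

```python
TIMESTAMP_SLACK = 10 * 60  # "some small number", e.g. 10 minutes in seconds

def choose_proof(p1_tip_timestamp, p2_tip_timestamp):
    """Return 1 to keep P1, 2 to switch to P2, assuming P1.PruningPoint
    is already known to be in the past of P2.PruningPoint."""
    if p2_tip_timestamp < p1_tip_timestamp:
        return 1                      # P2's tip is older: keep P1
    if p2_tip_timestamp - p1_tip_timestamp < TIMESTAMP_SLACK:
        return 1                      # too close to call: keep P1
    return 2                          # P2's tip is clearly newer: switch
```

The asymmetry is deliberate: an attacker minting fake-difficulty blocks cannot also fake a significantly newer honest-looking tip timestamp without sustained real work.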


someone235 commented Aug 11, 2021

The finality violation detection that is mentioned here has some flaws:

  1. Because the finality point moves with a sliding window, two finality chains usually won't share any blocks except genesis.
  2. Pruning points are not necessarily finality points of each other, so every time a pruning point changes, you would have to recreate the chain of finality points back to genesis, while some of those blocks might already be deleted.

To address this:

  1. When checking for finality violations, you should check if the finality point of your finality point is in the selected chain of the most recent known block in the provided finality chain. Unfortunately, this only detects finality violations of depth 2F.
  2. A block header should also have a commitment to the pruning point from its point of view. When a node moves the pruning point, it just needs to append the chain of finality points between the previous pruning point and the new one (including the previous one). When the verifier gets the chain, he should also allow blocks that are not the previous block in the finality chain, but rather a pruning point of a future block in the chain.


Note: unlike what is mentioned above, the final implementation does use GHOSTDAG and not GHOST. Although the block rate of super levels does drop exponentially, we can still use the same K for each level, since K is an upper bound (and confirmation times are irrelevant when discussing the pruning proof).


zawy12 commented Dec 13, 2023

I don't know if this is useful, but the total chain work, using the MLS paper notation, is 2m · 2^u with relative standard error 1/sqrt(2m). It's interesting that max target isn't needed; it cancels out of: work = (2^256 − 1) · 2m / 2^(256−u), where 2m is the total count of hashes below the target 2^(256−u). This holds for any varying difficulty algorithm as long as there are no difficulty targets below 2^(256−u), which is the case for their suggested 2m = 6k = 36 hashes unless the difficulty target decreases extremely fast (an increase in difficulty). Specifically, a difficulty would have to be D > chain_work/36. My thinking is based on single chains, but I think the same equations may apply to DAGs.
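A quick numeric check of that cancellation (illustrative Python; the particular u is arbitrary):

```python
import math

m, u = 18, 40                       # 2m = 36 counted hashes at level u
max_target = 2**256 - 1
target = 2**(256 - u)               # all 2m counted hashes fall below this

# Each hash below `target` represents roughly max_target/target = 2^u
# expected hashes of work, so the estimate collapses to 2m * 2^u and
# max_target drops out entirely:
work_estimate = (max_target + 1) * 2 * m // target   # = 2m * 2^u
rel_stdev = 1 / math.sqrt(2 * m)                     # about 0.17 for 2m = 36
```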
