From the original article by Shai (Deshe) Wyborski. Outdated – the most recent work on this can be found here: Proof-of-Work: the Book

The Nakamoto Consensus Scaling Problem

So How Can We Allow Parallel Blocks?

  1. It has to be topological: a block cannot appear in the ordering before any of its parents (a minimal sketch of this check follows the list)
  2. It has to be in consensus: at any point in time all of the nodes in the network must agree on the ordering of all but a constant number of new blocks
  3. It has to be secure: a computationally inferior adversary cannot revert the ordering of blocks retroactively
  4. It has to offer liveness: there should be a clear criterion for when a block is “finalized” in the sense that it will never change its place in the ordering, and every block should satisfy this criterion within a constant amount of time
  5. It has to be efficient: the problem of determining, computing, and maintaining the order should be feasible for today’s computers even in light of an ever-growing DAG
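To make the first requirement concrete, here is a minimal sketch of checking that a proposed ordering is topological. This is not Kaspa's code; the dictionary mapping block ids to parent lists is a hypothetical representation chosen purely for illustration.

```python
# A minimal sketch (not Kaspa's implementation): checks property 1,
# that a proposed ordering is topological, i.e. no block appears
# before any of its parents. `parents` maps a block id to the list
# of its parents (a hypothetical toy representation).

def is_topological(order, parents):
    """Return True iff every block appears after all of its parents."""
    position = {block: i for i, block in enumerate(order)}
    for block in order:
        for parent in parents[block]:
            if position[parent] >= position[block]:
                return False
    return True

# Toy DAG: B and C are parallel children of the genesis block A,
# and D merges them by pointing at both.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(is_topological(["A", "B", "C", "D"], parents))  # True
print(is_topological(["A", "D", "B", "C"], parents))  # False: D precedes its parents
```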

PHANTOM — GHOSTDAG In an Ideal World

(This picture actually has a mistake in it; see if you can spot it 🙂 I will replace it when I have the time.)
The red set forms a maximal 3-cluster in the DAG above
  1. Choose k so that most of the time the honest network does not create more than k+1 parallel blocks
  2. Find a maximal k-cluster (choose some arbitrary tie-breaking rule if there are several maximal k-clusters); a brute-force sketch of this step follows the list
  3. Order the blocks in the maximal k-cluster via an arbitrary topological order with the following property: a block outside the chosen k-cluster appears as late as possible, that is, either after all blocks inside the k-cluster have appeared, or when the ordering reaches a block in the k-cluster that has this block in its past (this leaves open to interpretation what should be done with the blocks outside the k-cluster, or whether they should appear at all)
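The following is a brute-force sketch of step 2 under the same toy representation as above (a dictionary from block ids to parent lists; this is not the actual PHANTOM code). It uses the fact that a set of blocks is a k-cluster when every block in the set has at most k blocks of the set in its anticone, and simply enumerates subsets from largest to smallest. Its exponential running time already hints at the problem discussed in the next section.

```python
# A brute-force sketch of PHANTOM's selection rule (step 2), not an
# actual implementation: enumerate every subset of blocks, keep those
# that are k-clusters (every block sees at most k blocks of the set in
# its anticone), and return a largest one. The exponential running time
# is exactly why this step is impractical as stated.

from itertools import combinations

def ancestors(block, parents):
    """All blocks reachable from `block` by following parent links."""
    seen, stack = set(), list(parents[block])
    while stack:
        b = stack.pop()
        if b not in seen:
            seen.add(b)
            stack.extend(parents[b])
    return seen

def is_k_cluster(subset, parents, k):
    """A set S is a k-cluster if each block in S has at most k blocks
    of S in its anticone (neither ancestor nor descendant)."""
    past = {b: ancestors(b, parents) for b in subset}
    for b in subset:
        anticone = {c for c in subset
                    if c != b and c not in past[b] and b not in past[c]}
        if len(anticone) > k:
            return False
    return True

def max_k_cluster(blocks, parents, k):
    """Exhaustive search for a largest k-cluster (exponential time)."""
    for size in range(len(blocks), 0, -1):
        for subset in combinations(blocks, size):
            if is_k_cluster(subset, parents, k):
                return set(subset)
    return set()

# Toy DAG: B, C, D are parallel children of genesis A; E merges B and C.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["A"], "E": ["B", "C"]}
print(max_k_cluster(list(parents), parents, k=1))  # the 1-cluster {A, B, C, E}
```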

From PHANTOM to GHOSTDAG

  1. It cannot be implemented efficiently: indeed, the problem of finding a maximal k-cluster in a given DAG is NP-complete.
  2. It is not incremental: every time the DAG updates, the entire computation must be restarted. In particular, it requires storing the entire DAG structure. (A simplified greedy sketch of the incremental alternative follows this list.)
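For contrast, here is a simplified greedy sketch in the spirit of GHOSTDAG, not the actual protocol: blocks are processed one by one in topological order, and a new block is colored blue only if the blue set remains a k-cluster. The real GHOSTDAG makes this choice per block relative to a selected parent and keeps additional bookkeeping, but the contrast with PHANTOM is the point: one cheap local check per new block instead of a global exhaustive search, and nothing is recomputed when the DAG grows.

```python
# A simplified greedy sketch in the spirit of GHOSTDAG (not the actual
# protocol): process blocks incrementally and color a block blue only
# if the blue set stays a k-cluster.

def greedy_blue_set(order, parents, k):
    """`order` is any topological order of the DAG; `parents` maps each
    block id to its parent list. Returns the greedily built blue set."""
    past = {}          # block -> set of ancestors, built incrementally
    blues = set()
    for block in order:
        # Incremental ancestor computation: parents plus their pasts.
        past[block] = set(parents[block])
        for p in parents[block]:
            past[block] |= past[p]
        # Blue blocks that are parallel to (in the anticone of) this block.
        anticone_blues = {b for b in blues if b not in past[block]}
        # Accept the block if it, and every blue it is parallel to,
        # still sees at most k parallel blues.
        if len(anticone_blues) <= k and all(
            sum(1 for c in blues | {block}
                if c != b and c not in past[b] and b not in past[c]) <= k
            for b in anticone_blues
        ):
            blues.add(block)
    return blues

# Same toy DAG as the brute-force sketch; with k=1 the "off to the side"
# block D is left out and the result matches the exhaustive search.
parents = {"A": [], "B": ["A"], "C": ["A"], "D": ["A"], "E": ["B", "C"]}
print(greedy_blue_set(["A", "B", "C", "D", "E"], parents, k=1))
```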

But Why Should GHOSTDAG Be Secure?

From GHOSTDAG To Kaspa — Concluding Remarks