
Could Block Proposals Be Fair and Efficient Without Proof of Work (PoW)?

by Steven Pu June 23rd, 2020


PoW is a simple and elegant consensus algorithm. Every node solves a simple cryptographic puzzle whose solution is arrived at by making rapid guesses, and whoever guesses the right answer first is chosen to produce the next block on the network. That’s it.

This simple algorithm simultaneously provides true randomness, which makes block proposals fair and decentralized; a delay to ensure sufficient time for propagation and minimize forking; and an economic stake in the form of hardware and electric power investments, so miners have a vested interest in behaving honestly.

So what’s wrong with PoW? Why do we need to build something different?

Parallelization is the Culprit

Early assembly line working in parallel

A critical problem with the simple and elegant PoW consensus is that the puzzle can be highly parallelized. The puzzle is typically built on a hashing function: nodes just keep generating random strings, hashing them, and checking whether the resultant hash value meets a certain criterion. If you’re just one player (e.g., one machine, one thread), let’s say you can on average guess the right answer after N seconds. But if you’re 100 players, then on average you can guess the right answer after N/100 seconds, since you can simply divide up the work. For example, if there are a total of M possible guesses, player 1 can be put in charge of testing guesses 1 to M/100, player 2 is in charge of testing guesses M/100 to 2M/100, and so on.
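Here is a minimal sketch of that division of labor. The prefix-based target stands in for the real difficulty check, and the worker count and guess space are chosen purely for illustration:

```
import hashlib

def scan_nonce_range(block_header: bytes, start: int, end: int, target_prefix: str):
    # One worker scans only its assigned slice [start, end) of the guess space.
    for nonce in range(start, end):
        digest = hashlib.sha256(block_header + nonce.to_bytes(8, "big")).hexdigest()
        if digest.startswith(target_prefix):
            return nonce  # found a hash meeting the criterion
    return None

# 100 workers split M guesses into equal slices, so the expected time to find
# a valid guess drops roughly 100-fold, which is exactly the parallelism above.
M, workers = 10_000_000, 100
slices = [(i * M // workers, (i + 1) * M // workers) for i in range(workers)]
```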

Miners working on PoW blockchain systems typically purchase a huge number of specialized computers, or application-specific integrated circuits (ASICs), and then use programs to coordinate these ASICs to divide up the guesswork so that on average they can guess the right answer faster. As time goes on, different miners decide to team up and divide up the work between their big clusters of ASICs, creating mining pools.

For networks such as BTC, there’s an internal algorithm that makes guessing the right number more difficult if the network detects that miners are guessing the right answers too quickly, in order to maintain an average delay of roughly 10 minutes between blocks. So the faster the miners guess, the harder guessing becomes, creating incentives for miners to keep going faster and faster by building faster ASICs and, more importantly, building MORE ASICs.
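A simplified sketch of that kind of retargeting rule might look like the following. The 2016-block window and factor-of-4 clamp mirror Bitcoin's well-known rule, but treat the exact constants here as illustrative:

```
def retarget(old_target: int, actual_timespan: float, expected_timespan: float) -> int:
    # If the last window of blocks arrived faster than expected, the target
    # shrinks (guessing gets harder); if slower, it grows. Bitcoin applies a
    # rule of this shape every 2016 blocks with a factor-of-4 clamp.
    ratio = actual_timespan / expected_timespan
    ratio = max(0.25, min(4.0, ratio))  # clamp extreme swings
    return int(old_target * ratio)

# Example: blocks arrived twice as fast as the ~10-minute average over a
# 2016-block window, so the target halves and guessing becomes twice as hard.
new_target = retarget(1 << 220, actual_timespan=2016 * 300, expected_timespan=2016 * 600)
```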

Faster machines, and more of them, eat up ever more energy, and on and on it goes until the amount of energy used to maintain the network becomes absurdly high.

Hence, the parallelizability of the hashing functions used in PoW consensus is the primary culprit: it creates adverse economic incentives that drive a hardware arms race among miners and eat up unsustainably large amounts of energy.

Design Goal: Random Delay

So, if we want to design a system that is not as wasteful as PoW but still produces a random delay, we’ll need to hit the following design goals:

1/ True randomness, to ensure fairness and decentralization.

2/ Delay that cannot be reduced by parallelization, to minimize energy use.

Let’s see how we can do better.

Randomness from Verifiable Random Function (VRF)

White noise is a naturally occurring source of randomness

What true randomness is, is more of a philosophical question. What we’re really looking for when we say randomness is unpredictability. If we have a mechanism to produce output that cannot be predicted by any of the participants on the network, then that is considered random, and consequently fair.

There are many cryptographic functions that seem to generate unpredictable outputs, such as hashing functions and signature schemes. But they were not designed specifically to produce unpredictable output, and an observer may be able to derive patterns given a large enough sample size.

In 1999, Micali, Rabin, and Vadhan published a paper describing the verifiable random function (VRF), a construction specifically designed to generate highly unpredictable output. Professor Micali went on to found the Algorand project, and a more detailed but easier-to-digest article was written by Algorand core team member Sergey Gorbunov. If you’re interested in a more technical treatment of VRFs, please refer to the aforementioned article and paper.

Let’s look at how a VRF provides the randomness behind the random delay in block DAG construction. The inputs to the VRF are:

1/ Block DAG Level (L) at the time the proposer is packing the block. The level here is defined as the length of the anchor chain plus one: as a proposer, you calculate the current anchor chain length L, find the terminating block on the frontier upon which you will be constructing your ghost pointer, and your proposed block’s level is then L+1. Please note that this is not the same as the conventional “depth” definition for graphs in general.

2/ Block hash of the latest Period block (P). The Period block is the latest finalized block within the block DAG and helps achieve true finalization via a parallel PBFT process, described in our next article. To account for boundary conditions in which a proposer has yet to hear of the latest confirmed Period block, the protocol tolerates the block hash of the Period block immediately before the latest one as well.

3/ Secret VRF key of the block proposer (SK), which is required to compute the VRF. This key is distinct from the transaction signature scheme and is generated for each node specifically for use with the VRF.

The output of the VRF function is in two pieces:

v, a pseudo-random value, which is used to determine the length of the delay.

p, a proof, which is used by other nodes to validate that the VRF has been honestly and correctly executed. Think of it as a signature: with the proposer’s VRF public key, any other node can easily determine the correctness of the calculation.

In the end, we can write this simple equation,

VRF(L, P, SK) → (v, p)
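As a rough sketch of how a node might use this, consider the following. Here vrf_prove and vrf_verify are hypothetical stand-ins for whatever VRF library the node actually runs (for example, an ECVRF implementation); they are not a specific API:

```
def vrf_input(level: int, period_hash: bytes) -> bytes:
    # Serialize the two public inputs: block DAG level L and Period block hash P.
    return level.to_bytes(8, "big") + period_hash

def propose_randomness(level, period_hash, secret_key, vrf_prove):
    # VRF(L, P, SK) -> (v, p)
    msg = vrf_input(level, period_hash)
    v, p = vrf_prove(secret_key, msg)  # v: pseudo-random value, p: proof
    return v, p

def check_randomness(level, period_hash, public_key, v, p, vrf_verify):
    # Any other node recomputes the message and checks the proof using only
    # the proposer's public VRF key.
    msg = vrf_input(level, period_hash)
    return vrf_verify(public_key, msg, v, p)
```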

Delay Difficulty Shaping Function

Before the output of the VRF can be translated into a delay difficulty (and from there into a delay length), we want to shape it into a certain distribution. The characteristics of that distribution are roughly as follows:

1/ There needs to be a minimum delay, since we can’t have blocks being generated instantly, as this leaves no time for proper network propagation.

2/ There needs to be a maximum delay, since we don’t want the entire network stalling and not producing blocks for extended periods of time.

3/ A few proposers need to be quick while the rest are slow. How many is related to the number of eligible proposers and the overall network diameter (the time it takes for a block to reach most proposing nodes).

So the resulting shaping function, with the input being the VRF’s output (x-axis) and the output being the shaped delay difficulty factor (y-axis), might look something like this:


Let the shaping function be S, and we can write this equation,

S(v) = d, where d is the difficulty factor for the next stage.
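A minimal sketch of such a shaping function could look like the following (the clamp range, power curve, and constants are illustrative assumptions, not the protocol's actual parameters):

```
def shape(v: int, v_bits: int = 256, d_min: float = 1.0,
          d_max: float = 10.0, exponent: float = 0.25) -> float:
    # S(v) = d: map the VRF output onto a delay difficulty factor.
    # The output is clamped to [d_min, d_max] by construction, and the power
    # curve makes only a small fraction of proposers draw a low (fast)
    # difficulty while most land near the maximum (slow).
    u = v / float(2 ** v_bits)  # normalize a v_bits-wide v into [0, 1)
    return d_min + (d_max - d_min) * (u ** exponent)
```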

Delay from Verifiable Delay Function (VDF)

As we discussed at the beginning, the prime culprit in PoW, an otherwise elegant algorithm, is that it can be parallel processed. A specific class of functions called verifiable delay functions (VDFs) was designed specifically to simulate a delay that cannot be accelerated by parallel processing.

A function strictly qualifies as a VDF if it meets the following two simple criteria (sketched as a minimal interface below):

1/ It must be sequential: one cannot use multiple parallel processors to accelerate the computation of a VDF. This is unlike PoW.
2/ It must be easily verifiable: an observer can easily verify (i.e., the time it takes to verify must be exponentially smaller than the time it took to compute the function) that the VDF has been correctly computed and hence the right amount of delay incurred. This is like PoW.
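In code form, the two criteria amount to something like the interface below. This is only an illustrative sketch of the shape of a VDF, not any library's actual API:

```
from typing import Protocol, Tuple

class VDF(Protocol):
    def evaluate(self, seed: bytes, difficulty: int) -> Tuple[bytes, bytes]:
        # Sequential: must take roughly `difficulty` inherently serial steps.
        # Returns (output, proof).
        ...

    def verify(self, seed: bytes, difficulty: int,
               output: bytes, proof: bytes) -> bool:
        # Fast: checking the proof must be far cheaper than re-running evaluate().
        ...
```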

Several candidate VDF constructions that meet these criteria have been proposed, by Boneh et al., Pietrzak, and Wesolowski. In particular, Pietrzak and Wesolowski independently arrived at highly similar approaches that are strongly resistant to parallel processing, based on the principle of repeated squaring in groups of unknown order.

Let’s examine this at a very high level, as the math is pretty complex.

These VDF constructions conduct a repeated squaring computation that cannot be parallel processed, because each iteration requires the output of the previous iteration, and no information about future iterations is revealed by any given iteration. In other words, you don’t get the answer until you finish all the iterations, step by step. This makes it sequential.

What makes these VDF constructions easily verifiable is that you can construct a proof with a random linear combination involving an intermediary output and the final output of the VDF. Computing these linear combinations is far simpler because they involve far fewer steps than the entire VDF. A very crude analogy is to imagine having to calculate values along the entire curve (computing the entire VDF) vs. picking a point and pushing it forward by a few steps (proving the validity of a VDF); the few steps forward clearly take far less time to compute. This makes it easily verifiable.

In our network design, we feed the following inputs into the VDF:

Parent hash (gP): the hash of the parent block to which your newly constructed block points with a ghost pointer.
Hash of all the transactions (Tx) you plan to place into the block, so you can’t pre-compute the VDF.
d, the difficulty factor from the previous step.

So for a node, the VDF computation prior to proposing a block would look like:

VDF(gP, Tx, d) = z

In practice, to make sure that verification of the output z is not interactive, the proposing node will need to insert intermediary proofs along with the final output into the proposed block.
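To make the flow concrete, here is a toy repeated-squaring evaluation seeded from (gP, Tx) and scaled by d. It is only a sketch: a real deployment squares in a group of unknown order (for example, an RSA modulus nobody can factor) and produces a compact Pietrzak- or Wesolowski-style proof rather than raw checkpoints, and the iteration scaling and checkpoint spacing here are assumptions for illustration:

```
import hashlib

def vdf_evaluate(parent_hash: bytes, tx_hash: bytes, d: float, modulus: int,
                 iters_per_unit: int = 100_000, num_checkpoints: int = 10):
    # Seed the computation from (gP, Tx) so the delay cannot be pre-computed
    # before the block's contents are fixed.
    seed = int.from_bytes(hashlib.sha256(parent_hash + tx_hash).digest(), "big") % modulus
    iterations = max(1, int(d * iters_per_unit))  # the difficulty factor d scales the delay
    step = max(1, iterations // num_checkpoints)
    y, checkpoints = seed, []
    for i in range(1, iterations + 1):
        y = (y * y) % modulus  # each step needs the previous one: strictly sequential
        if i % step == 0:
            checkpoints.append(y)  # intermediary values to embed in the proposed block
    return y, checkpoints  # z plus material for non-interactive verification
```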

So a node computing these VDFs may experience delays like the ones below (this is a highly simplified view, discounting all the other delays caused by doing useful work such as packing blocks and processing transactions).

For illustrative purposes, this was generated from a uniform distribution.

As of this writing, VDFs are still considered highly experimental technology and are being actively researched and tested. We continue to collaborate with and learn from the best in the open source and academic communities. Reach out if you have thoughts on the matter!

Originally posted here