All About OlaVM and What Lies Ahead!

by Sin7Y, November 16th, 2022

TL;DR

  1. We are building the first ZKVM based on a parallel execution architecture, achieving higher TPS through ZK-friendly design and improved ZK algorithms. Its technical features are as follows:
    • Fast proof generation

      • ZK-friendly: smaller circuit scale and simplified bottom constraint units
      • Fast ZK: further optimization on Plonky2
    • Fast execution: Utilizing parallel execution to significantly shorten the proof generation time


  2. Current progress:
    • In July 2022, we released the OlaVM Whitepaper.
    • In November 2022, we completed the instruction set design and development and implemented the OlaVM execution module; you can view our code at https://github.com/Sin7Y/olavm, which is continuously updated.
    • In pursuit of the most efficient ZK algorithm, we have completed the circuit design and algorithm research for Plonky2. See https://github.com/Sin7Y/plonky2/tree/main/plonky2/designs to learn more about the Plonky2 designs; we will optimize and improve them in the next step. Please stay tuned.

What are we up to?

OlaVM is the first ZKVM to introduce parallel VM execution. It combines the technical features of the two approaches — ZK(E)VMs and parallel-execution VMs — to obtain both faster execution and faster proof generation, and thus the highest possible system TPS.

There are two main reasons why Ethereum has a low transactional throughput:


  1. Consensus process: every node re-executes all transactions to verify their validity.


  2. Transaction execution: transaction execution is single-threaded.

To solve the first problem while preserving programmability, many projects have pursued ZK(E)VM research: transactions are executed off chain, and only state verification is left on chain (there are other scaling approaches, but we won’t go into them in this post). To improve the system’s throughput, proofs must be generated as fast as possible. To solve the second problem, Aptos, Solana, Sui, and other new public chains introduced virtual machines with parallel execution (PE-VMs), along with faster consensus mechanisms, to improve overall TPS.

At this stage, for a ZK(E)VM, the bottleneck that limits the TPS of the entire system is proof generation. When parallel proving is used to increase throughput, the faster a block is produced, the earlier its proof generation can start; and as ZK algorithms evolve and acceleration techniques improve, proof generation times shrink and the benefit of this parallelism becomes more significant.

How Do You Improve the System’s Throughput?

Increasing the speed of proof generation is the single most important factor in raising the overall throughput of the system, and there are two ways to accelerate it: keeping the circuit scale to a minimum and using the most efficient ZK algorithm. The meaning of “efficient algorithm” can be broken down further: first, tuning the algorithm’s parameters, such as selecting a smaller field; and second, improving the external execution environment, for example by using specialized hardware to optimize and scale the solution.


  1. Keeping your circuit scale to a minimum

As described above, the cost of proof generation is strongly related to the overall constraint size n; hence, if you can greatly reduce the number of constraints, the generation time will be significantly reduced as well. This is achievable by cleverly combining different design schemes to keep the circuit as small as possible.


  • We’re introducing a module we’ll be referring to as “Prophet”
    There are many different definitions of a prophet, but ours focuses on “predict” and then “verify”. The purpose of this module is that, given some complex calculation, we do not have to use the VM’s instruction set to compute it; doing so could consume a lot of instructions, enlarging the VM’s execution trace and the final constraint scale. Instead, the Prophet, a built-in module, performs the calculation for us and sends the result to the VM, which performs a legitimacy check to verify it. Prophets are a set of built-in functions with specific computing purposes, such as division, square root, cube root, etc. We will gradually enrich the Prophet library based on real scenarios to maximize the constraint-reduction effect for the most common complex computing scenarios. (A minimal sketch of the “predict, then verify” pattern follows after this list.)


  • ZK-friendly

    The Prophet module helps us shrink the virtual machine’s execution trace for complex calculations, but it is even better if the underlying operations themselves are ZK-friendly. Therefore, our architecture is designed around ZK-friendly operations (choice of hash algorithms and so on); some of these optimizations are present in other ZK(E)VMs as well. Beyond the computational logic the VM itself executes, other operations also need to be proven, such as RAM accesses. In a stack-based VM, POP and PUSH operations must be executed on every access; at the verification level, these operations form independent tables whose validity must also be constrained. A register-based VM executing exactly the same logic produces a smaller execution trace and therefore a smaller constraint scale (see the second sketch after this list).
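
To make the Prophet idea concrete, here is a minimal sketch of the “predict, then verify” pattern for an integer square root. The names (`sqrt_prophet`, `verify_sqrt`) are illustrative assumptions, not OlaVM’s actual API: the expensive computation runs outside the constrained instruction stream, and only the cheap check r·r = x happens inside the VM.

```rust
// A minimal sketch of "predict, then verify"; names are illustrative,
// not OlaVM's actual Prophet API.

/// "Predict": computed outside the VM's instruction set, so it adds
/// no rows to the execution trace.
fn sqrt_prophet(x: u64) -> u64 {
    (x as f64).sqrt().round() as u64
}

/// "Verify": the only part the VM constrains — one multiplication and
/// one equality instead of a full in-circuit square-root routine.
fn verify_sqrt(x: u64, r: u64) -> bool {
    r.checked_mul(r) == Some(x)
}

fn main() {
    let x = 144u64;
    let r = sqrt_prophet(x);    // result injected into the VM as a witness
    assert!(verify_sqrt(x, r)); // cheap legitimacy check inside the VM
    println!("sqrt({x}) = {r}, verified");
}
```

The saving comes from the asymmetry: computing a square root in-circuit takes many instructions, while checking one takes a single multiplication and comparison.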
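
And here is a toy illustration, with made-up mnemonics rather than OlaVM’s real instruction set, of why a register-based design yields a smaller trace: the same statement c = a + b costs several stack rows (each of which must also appear in the stack/memory table being constrained) but only one register row.

```rust
// Toy comparison of trace rows for `c = a + b`; the mnemonics are invented
// for illustration and are not OlaVM's instruction set.
fn main() {
    // Stack-based VM: every operand access goes through PUSH/POP,
    // and each row must also be constrained in the stack table.
    let stack_trace = ["PUSH a", "PUSH b", "ADD", "POP c"];

    // Register-based VM: operands are addressed directly, one row suffices.
    let register_trace = ["ADD r3 r1 r2"];

    println!("stack rows:    {}", stack_trace.len());    // 4
    println!("register rows: {}", register_trace.len()); // 1
}
```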


  2. ZK Algorithms & Efficiency

So far, ZK algorithms have made amazing progress in engineering feasibility. Constraint systems are becoming more general and efficient, moving from R1CS to Plonkish; fields are shrinking, from the large 252-bit prime field used by the Cairo VM (p = 2^251 + 17·2^192 + 1) to the small 64-bit Goldilocks field used by Plonky2 (p = 2^64 − 2^32 + 1); and implementations are being accelerated from CPU to GPU/FPGA/ASIC, such as Ingonyama FPGA-accelerated designs and the Semisand ASIC design.
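
To illustrate why a smaller field helps, here is a naive sketch of Goldilocks arithmetic (the reduction is a plain `%`, an assumption for clarity; Plonky2 uses a much faster reduction that exploits 2^64 ≡ 2^32 − 1 (mod p)). Every element fits in one machine word and every product fits in a u128, so field operations stay close to native 64-bit arithmetic.

```rust
// Goldilocks prime p = 2^64 - 2^32 + 1, as used by Plonky2.
// Naive reduction for illustration only, not Plonky2's optimized implementation.
const P: u128 = 0xFFFF_FFFF_0000_0001;

/// Field addition: the sum of two elements always fits in a u128.
fn add(a: u64, b: u64) -> u64 {
    ((a as u128 + b as u128) % P) as u64
}

/// Field multiplication: the 128-bit product is reduced mod p.
fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P) as u64
}

fn main() {
    let a: u64 = 0xFFFF_FFFF_0000_0000; // p - 1, the largest field element
    let b: u64 = 123_456_789;
    println!("a + b mod p = {}", add(a, b));
    println!("a * b mod p = {}", mul(a, b));
}
```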

Due to the amazing performance of Plonky2, we are temporarily using Plonky2 as the ZK backend of OlaVM. We’ve conducted an in-depth analysis of Plonky2’s gate design, gadget design, and core protocol principles, and identified areas where we can contribute and further improve efficiency. Check out our GitHub repo, Plonky2 designs, for more information.


Faster Transaction Execution (not a bottleneck at this stage)

In OlaVM’s design, the Prover is permissionless and anyone can run one; therefore, with many Provers, proofs for multiple blocks can be generated in parallel, then aggregated and submitted to the chain for verification. Since proving runs in parallel, the faster blocks are produced (that is, the faster the transactions in each block are executed), the earlier the corresponding proofs can be generated, significantly reducing the time until final on-chain verification.
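
As a hedged sketch of this flow (the types and functions below are placeholders, not OlaVM’s prover interface): each block is proved on its own thread, and the resulting proofs are aggregated into a single object that goes on chain.

```rust
// Placeholder types and functions sketching "prove blocks in parallel,
// then aggregate"; this is not OlaVM's actual prover interface.
use std::thread;

struct Block { id: u64 }
struct Proof { covers_block: u64 }

// Hypothetical per-block prover: in reality this is the expensive ZK step.
fn prove_block(block: &Block) -> Proof {
    Proof { covers_block: block.id }
}

// Hypothetical aggregation of many proofs into one succinct proof.
fn aggregate(proofs: &[Proof]) -> Proof {
    Proof { covers_block: proofs.iter().map(|p| p.covers_block).max().unwrap_or(0) }
}

fn main() {
    let blocks: Vec<Block> = (1..=4).map(|id| Block { id }).collect();

    // One prover (thread) per block: proofs are produced concurrently,
    // so proving can start as soon as each block is executed.
    let handles: Vec<_> = blocks
        .into_iter()
        .map(|b| thread::spawn(move || prove_block(&b)))
        .collect();
    let proofs: Vec<Proof> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    // A single aggregated proof is submitted for on-chain verification.
    let final_proof = aggregate(&proofs);
    println!("aggregated proof covers blocks up to #{}", final_proof.covers_block);
}
```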


When proof generation is very slow, e.g. several hours, the efficiency gain from parallel execution is not obvious. Two things amplify the benefit of parallelism: the number of aggregated blocks growing large enough that quantitative change becomes qualitative change, and the proof time itself being greatly reduced. Combined, these can greatly increase efficiency.

What about Compatibility?

In the context of ZKVMs, achieving compatibility means connecting to the development efforts already invested in certain public blockchains. After all, many applications have already been built on today’s existing ecosystems, e.g. the Ethereum ecosystem. If we can tap these abundant resources by being compatible with those ecosystems, enabling projects to migrate seamlessly, it will greatly accelerate the adoption of ZKVMs and help scale those ecosystems.


OlaVM’s main objective right now is to build the most efficient ZKVM with the highest transactional throughput. If our initial development goes well, our next goal will be compatibility with additional blockchain ecosystems beyond Ethereum; Ethereum compatibility, via Solidity support at the compiler level, is already on our roadmap.


All together, with all of the above modules integrated, the data flow of the whole system is shown in the figure below.

What’s next?