Exploring Lookup Arguments

by Sin7Y, December 22nd, 2022


TL;DR

As mentioned in the previous article Hello, OlaVM!, OlaVM’s vision is to build a high-performance ZKVM, and this article will focus on one of the tools that make OlaVM high-performance, namely, Lookup Arguments. Lookup Arguments play an important role in reducing the size of the circuit, thereby improving Zero Knowledge efficiency, and they are widely used in the circuit design of ZKVMs. Throughout this article you’ll learn more about the following:


  1. What role do Lookup Arguments play in a ZKVM?
  2. The principles of the Plookup protocol
  3. The principles of Halo2’s Lookup Argument protocol
  4. The connection between the two Lookup Argument algorithms

The Role of Lookup Arguments in a ZKVM

The ZKVM utilizes Zero Knowledge to constrain the entire execution process of the VM, which can generally be divided into instruction execution, memory access, and built-in function execution. Constraining all of these operations in a single trace is somewhat impractical. First, in a trace each row represents one operation type, each operation type corresponds to multiple constraints, and different constraints require different numbers of columns, so rows end up with different widths. If one row becomes too wide because its constraint needs many columns, the width of the entire trace grows with it, and the extra columns are wasted on the rows whose constraints do not need them. Second, if a single trace contains too many operation types, more selectors have to be introduced, which increases not only the number of polynomials but also the degree of the constraints. Finally, due to the order limitation of the group, the number of rows in the trace cannot exceed the order of the group, so the number of trace rows occupied by any one operation type should be minimized.


So, for simplicity, we need to:


  1. Split the different operation types into multiple sub-traces and prove them separately, with data consistency between the main trace and the sub-traces ensured by Lookup Arguments (see the sketch after this list).


  2. For some ZK-unfriendly computations, such as bitwise operations, we can reduce the size of the trace with Lookup Argument techniques.
    Of course, there are other technical means to reduce the size of the trace, and those will be explained further down in this article.
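As a rough illustration of what the Lookup Argument in point 1 guarantees, the sketch below (using a hypothetical column layout, not OlaVM’s actual trace format) checks that a bitwise sub-trace is a sub-multiset of the corresponding rows of the main trace:

```python
# Illustrative sketch only: the property a Lookup Argument enforces between a
# main trace and a sub-trace is multiset inclusion -- every row of the
# sub-trace must also appear in the (projected) main trace.
from collections import Counter

# Main trace: one row per executed instruction (opcode, op0, op1, dst).
main_trace = [
    ("ADD", 1, 2, 3),
    ("AND", 0b1100, 0b1010, 0b1000),
    ("ADD", 3, 4, 7),
]

# Bitwise sub-trace: only the rows the bitwise constraints care about.
bitwise_subtrace = [(0b1100, 0b1010, 0b1000)]

def subtrace_consistent(main, sub) -> bool:
    """Naive stand-in for a Lookup Argument: a multiset-inclusion check."""
    projected = Counter((a, b, d) for op, a, b, d in main if op == "AND")
    return all(projected[row] >= cnt for row, cnt in Counter(sub).items())

print(subtrace_consistent(main_trace, bitwise_subtrace))  # True
```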

Lookup Between Trace Tables

All the execution processes of the VM form a complete trace, called “the main trace”. “Complete” means that it contains all the states of the VM execution, but not auxiliary states, such as extended information that only exists to make ZK verification convenient. As mentioned earlier, including this auxiliary information in the main trace would make it complex and difficult to constrain. Therefore, for the convenience of constraints, several sub-traces are usually established and constrained separately, while the main trace is mainly used to constrain correct program execution and the Context.

By creating different sub-traces, we separate the different operations performed by the VM and use Lookup Argument techniques to ensure that the data in each sub-trace is derived from the main trace. To prove the validity of the data inside a sub-trace, you generate a separate trace for each specific operation type and then prove its validity with the corresponding constraints. This applies in particular to ZK-unfriendly operations such as Bitwise and Rangecheck.

Lookup for ZK-unfriendly Operations

As mentioned earlier, the proofs of the sub-traces are independent of each other, so keeping each trace as small as possible improves the efficiency of the prover. Take Bitwise as an example: Bitwise operations include AND, XOR, and NOT. If you implement the constraints of Bitwise operations directly as a circuit, you need to split each OP into multiple binary limbs, and if the OPs are 32 bits wide, each is split into 32 limbs. Then you need to constrain that:
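The constraints referred to here are not reproduced above; a plausible reconstruction, assuming 32-bit operands a and b with result c and binary limbs a_i, b_i, c_i, is:

$$ a=\sum_{i=0}^{31}2^i a_i,\qquad b=\sum_{i=0}^{31}2^i b_i,\qquad c=\sum_{i=0}^{31}2^i c_i \qquad (3\ \text{sumcheck constraints}) $$

$$ \text{AND: } c_i = a_i b_i,\qquad \text{XOR: } c_i = a_i + b_i - 2a_i b_i,\qquad \text{NOT: } c_i = 1 - a_i,\qquad 0\le i<32 \qquad (32\ \text{bitwise constraints}) $$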

A total of 3 + 32 × 3 = 99 trace cells are used, and the number of constraints is 3 sumcheck + 32 bitwise = 35.


If truth tables are available, then for AND, XOR, and NOT you can define three tables containing the bitwise results for operands of a specified bit width, such as 8 bits. For 32-bit OPs, you only need to split them into four 8-bit limbs; the bitwise relationship between these OP limbs then no longer needs its own constraints and only requires a Lookup into the fixed table. In this case, a total of 3 + 4 × 3 = 15 trace cells are taken up, with 3 sumcheck constraints and one Lookup Argument (batch Lookup is supported).
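The sketch below illustrates this table-based approach for AND (with hypothetical helper names; OlaVM’s actual trace layout and batching differ): a fixed 8-bit table, four 8-bit limbs per 32-bit operand, a sumcheck per operand, and a lookup per limb triple.

```python
# Illustrative sketch (not OlaVM code): replacing per-bit AND constraints with
# an 8-bit lookup table and four 8-bit limbs per 32-bit operand.

# Fixed table: every (x, y, x & y) triple for 8-bit x and y.
AND_TABLE = {(x, y, x & y) for x in range(256) for y in range(256)}

def limbs_8bit(v: int) -> list[int]:
    """Split a 32-bit value into four 8-bit limbs, little-endian."""
    return [(v >> (8 * i)) & 0xFF for i in range(4)]

def check_and_via_lookup(a: int, b: int, c: int) -> bool:
    a_l, b_l, c_l = limbs_8bit(a), limbs_8bit(b), limbs_8bit(c)
    # 3 sumcheck constraints: each operand equals the weighted sum of its limbs.
    sumcheck = all(v == sum(l << (8 * i) for i, l in enumerate(ls))
                   for v, ls in ((a, a_l), (b, b_l), (c, c_l)))
    # One (batched) lookup: every limb triple must appear in the fixed table.
    lookup = all(triple in AND_TABLE for triple in zip(a_l, b_l, c_l))
    return sumcheck and lookup

assert check_and_via_lookup(0xDEADBEEF, 0x0F0F0F0F, 0xDEADBEEF & 0x0F0F0F0F)
```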


Utilizing Lookup Arguments is a huge boost not only for proofs of bitwise operations but also for Rangecheck operations. For a 32-bit OP, you only need to split it into two 16-bit limbs. There are two good design points here: one is that Rangecheck takes up fewer trace cells, and the other is that our customized ADD-MUL constraint can be reused for Rangecheck’s sum constraint. For different calculation types, being able to reuse the same constraint is of great help to overall efficiency. As shown in the figure above, the customized ADD-MUL gate can support constraint reuse across five calculation types: ADD, MUL, ADD-MUL, EQ, and RANGECHECK.
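As a concrete illustration (the exact form of OlaVM’s ADD-MUL gate is not given in this article, so the a + b · c shape below is an assumption), the Rangecheck of a 32-bit op reduces to one sum constraint over two 16-bit limbs, each of which is checked against a 16-bit table via Lookup:

$$ \mathsf{op} = \mathsf{limb}_{lo} + 2^{16}\cdot \mathsf{limb}_{hi},\qquad \mathsf{limb}_{lo},\ \mathsf{limb}_{hi}\in[0,2^{16}) \ \ \text{(enforced by Lookup)} $$

The sum constraint has the same a + b · c form as an ADD-MUL gate, which is why that gate can be reused here.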



Plookup Protocol

Plookup is a protocol used to check that the values taken by a polynomial f of degree less than n on a multiplicative subgroup H of order n are all contained in a table t ∈ F^d with d entries. A typical scenario is a rangecheck in a zk-SNARK: verifying that the values of all polynomials over H lie in [0, m].

Symbol Description

Preprocessing

Protocol Process

Protocol Understanding

  1. Define two polynomials F and G:

where F ≡ G, with d = n + 1, if and only if:

  • f ⊂ t
  • s is (f, t) sorted with respect to t


This reduces the proof of f ⊂ t to the proof of F ≡ G, i.e., to proving that s is the correctly sorted permutation of (f, t).
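The definitions of F and G are not reproduced above; as given in the Plookup paper (Claim 3.1), with t ∈ F^(n+1), s ∈ F^(2n+1), and random challenges β and γ, they are:

$$ F(\beta,\gamma) = (1+\beta)^n \prod_{i\in[n]}(\gamma+f_i)\ \prod_{i\in[n]}\big(\gamma(1+\beta)+t_i+\beta t_{i+1}\big) $$

$$ G(\beta,\gamma) = \prod_{i\in[2n]}\big(\gamma(1+\beta)+s_i+\beta s_{i+1}\big) $$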


Proof that the two conditions above are the prerequisites for F ≡ G to hold:

  2. How the prover P computes the accumulator polynomial Z:
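The formula is not reproduced above; as a reconstruction from the Plookup paper, the prover accumulates the factors of F/G over H = {g, g², …, g^(n+1)}:

$$ Z(g)=1,\qquad Z(g^{n+1})=1, $$

$$ Z(g^{i}) = \frac{(1+\beta)^{\,i-1}\prod_{j<i}(\gamma+f_j)\ \prod_{j<i}\big(\gamma(1+\beta)+t_j+\beta t_{j+1}\big)}{\prod_{j<i}\big(\gamma(1+\beta)+s_j+\beta s_{j+1}\big)\big(\gamma(1+\beta)+s_{n+j}+\beta s_{n+j+1}\big)},\qquad 2\le i\le n. $$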



  3. What the verifier V checks:
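The checks are not reproduced above; following the Plookup paper’s notation (h1 and h2 are the polynomials encoding the lower and upper halves of s), the verifier checks that for all x ∈ H:

$$ L_1(x)\,\big(Z(x)-1\big)=0 $$

$$ (x-g^{n+1})\,Z(x)\,(1+\beta)\,\big(\gamma+f(x)\big)\,\big(\gamma(1+\beta)+t(x)+\beta t(gx)\big) = (x-g^{n+1})\,Z(gx)\,\big(\gamma(1+\beta)+h_1(x)+\beta h_1(gx)\big)\big(\gamma(1+\beta)+h_2(x)+\beta h_2(gx)\big) $$

$$ L_{n+1}(x)\,\big(h_1(x)-h_2(gx)\big)=0,\qquad L_{n+1}(x)\,\big(Z(x)-1\big)=0 $$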

Halo2 Lookup Protocol


Introduction


The lookup protocol of Halo2 works as follows: given two columns of data A and S, each of length 2^k, it proves that every cell in A is contained in S, while some cells in S may not appear in A, where:

  • S can be fixed or variable.
    • The case where S is variable means S consists of columns in the trace whose values are not fixed.
  • A and S can contain duplicate data, and if A or S is smaller than 2^k, it needs to be padded to 2^k:
    • Pad A with values taken from S.
    • Pad S by repeating its last value.

Protocol Process
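The original diagrams for this section are not reproduced here; the sketch below is a rough Python illustration of the process as described in the halo2 book: the prover builds permuted columns A′ and S′ and the circuit checks a per-row rule on them. In real halo2, “A′ is a permutation of A” and “S′ is a permutation of S” are enforced with product arguments over random challenges; the sketch simply asserts them directly.

```python
# Illustrative sketch (not the halo2 implementation) of the lookup argument:
# build permuted columns A', S' and check the per-row rules on them.
from collections import Counter

def permute_for_lookup(A: list[int], S: list[int]) -> tuple[list[int], list[int]]:
    A_p = sorted(A)                          # group equal values of A together
    leftovers = list(S)
    S_p: list[int] = []
    for i, a in enumerate(A_p):
        if i == 0 or a != A_p[i - 1]:        # first row of each group: S' must equal A'
            leftovers.remove(a)              # raises ValueError if a is not in S
            S_p.append(a)
        else:                                # repeated rows: fill with leftover S values
            S_p.append(leftovers.pop())
    return A_p, S_p

def check_lookup(A: list[int], S: list[int]) -> bool:
    try:
        A_p, S_p = permute_for_lookup(A, S)
    except ValueError:                       # some value of A does not appear in S
        return False
    # Permutation checks (enforced by product arguments in halo2).
    assert Counter(A_p) == Counter(A) and Counter(S_p) == Counter(S)
    ok = A_p[0] == S_p[0]                    # first-row rule
    for i in range(1, len(A_p)):
        # per-row rule: (A'[i] - S'[i]) * (A'[i] - A'[i-1]) == 0
        ok &= (A_p[i] == S_p[i]) or (A_p[i] == A_p[i - 1])
    return ok

# Both columns already padded to length 2^k = 4; A is padded with values from S.
print(check_lookup(A=[2, 1, 1, 3], S=[1, 2, 3, 4]))   # True: every cell of A is in S
print(check_lookup(A=[2, 1, 5, 3], S=[1, 2, 3, 4]))   # False: 5 is not in S
```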


The protocol only checks the relationship between the expanded A and S. How is it guaranteed that the expansion of S is valid?


Suppose A = {1,2} and S = {3,4}, which do not satisfy the subset argument. If they are expanded to A = {1,2,3,4} and S = {1,2,3,4}, the expanded columns do satisfy the subset argument.


This is clearly unsound, so how could such a proof be allowed to pass?

Support ZK

Extend - 1 : Vector Lookup

Extend - 2 : Multi-tables

Comparison between Plookup and Lookup

