Zero-knowledge proof paradigm: What is a zkVM?
“In the next 5 years, we will be talking about the adoption of zero-knowledge protocols as much as we are about the adoption of blockchain protocols. The potential unlocked by the breakthroughs of the past few years will sweep the crypto mainstream.”
— Jill, CSO of Espresso Systems, May 2021
Since 2021, the zero-knowledge proof (ZK) landscape has evolved into a diverse ecosystem of primitives, networks, and applications across multiple domains. Yet while ZK is gradually gaining momentum, with ZK-powered rollups like Starknet and zkSync Era marking the latest advances in the space, much of ZK remains a mystery to its users and to the crypto space as a whole.
But times are changing. We believe that zero-knowledge crypto is a powerful, pervasive tool for scaling and securing software. Simply put, ZK is the bridge to crypto mass adoption. To quote Jill again, anything involving zero-knowledge proofs (ZKPs) will create tremendous value (both fundamental and speculative) in both web2 and web3. The best minds in crypto are working hard to iterate and make ZK economically viable and production-ready. Even so, there is still much that needs to be done before the model we envision becomes a reality.
Compare ZK adoption to Bitcoin adoption: one reason Bitcoin evolved from an internet currency on fringe enthusiast forums into "digital gold" approved by BlackRock was the proliferation of developer- and community-generated content that fostered interest. For now, ZK exists in a bubble within a bubble. Information is fragmented and polarized: articles are either filled with arcane terms or so layman-oriented that they convey nothing beyond repetitive examples. It seems everyone, expert and layman alike, knows what zero-knowledge proofs are, but no one can describe how they actually work.
As one of the teams contributing to the zero-knowledge paradigm, we hope to demystify our work and help a wider audience establish a canonical foundation for understanding and analyzing ZK systems and applications, in order to promote education and discussion among relevant parties and enable the spread of relevant information.
In this article, we will introduce the basics of zero-knowledge proofs and zero-knowledge virtual machines, provide a high-level summary of the operation process of zkVM, and finally analyze the evaluation criteria of zkVM.
1. Zero-knowledge proof basics
What is a zero-knowledge proof (ZKP)?
In short, a ZKP enables one party (the prover) to prove to another party (the verifier) that they know something, without revealing what that something is or any other information. More specifically, a ZKP proves knowledge of a piece of data, or of the result of a computation, without revealing the data or the inputs. Creating a zero-knowledge proof involves a series of mathematical transformations that convert the result of a computation into a piece of otherwise meaningless information that nonetheless proves the code executed successfully, and that can be verified later.
In some cases, the amount of work required to verify a proof that has been constructed through multiple rounds of algebraic transformations and cryptography is less than the amount of work required to run the calculation. It is this unique combination of security and scalability that makes zero-knowledge cryptography such a powerful tool.
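To make the prover/verifier interaction concrete, here is a toy interactive zero-knowledge proof: Schnorr's sigma protocol for knowledge of a discrete logarithm. The group parameters below are tiny illustrative values chosen for readability, not secure for any real use; this is a sketch of the idea, not a production protocol.

```python
import secrets

# Toy group: p = 2q + 1 is prime, g = 4 generates the order-q subgroup of Z_p*.
# These parameters are far too small to be secure; they only illustrate the math.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q)   # the prover's secret (the "witness")
y = pow(g, x, p)           # public statement: "I know x such that g^x = y mod p"

# One round of the interactive protocol:
r = secrets.randbelow(q)   # prover: fresh random nonce
t = pow(g, r, p)           # prover -> verifier: commitment
c = secrets.randbelow(q)   # verifier -> prover: random challenge
s = (r + c * x) % q        # prover -> verifier: response (x stays hidden by r)

# Verifier's check: g^s == t * y^c, since g^(r + c*x) = g^r * (g^x)^c.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The response `s` is a random value shifted by the secret, so it reveals nothing about `x` on its own, yet the algebraic check convinces the verifier the prover knows `x`.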
zkSNARK: Zero-Knowledge Succinct Non-Interactive Argument of Knowledge
· Relies on an initial (trusted or untrusted) setup process to establish parameters for verification
· Non-interactive: after setup, the prover sends a single proof with no further interaction with the verifier
· Proofs are small and easy to verify
· Rollups like zkSync, Scroll, and Linea use SNARK-based proofs
zkSTARK: Zero-Knowledge Scalable Transparent Argument of Knowledge
· No trusted setup required
· Provides high transparency by using publicly verifiable randomness to create a trustless verifiable system, i.e. generating provable random parameters for proofs and verification.
· Highly scalable: they can generate and verify proofs quickly (though not always), even when the underlying witness (data) is large.
· No interaction is required between the prover and verifier
· The trade-off is that STARKs generate larger proofs, which are typically costlier to verify than most zkSNARK proofs, though cheaper than some.
· Starknet and zkVMs such as Lita, Risc Zero, and Succinct Labs all use STARKs.
(Note: Succinct bridge uses SNARKs, but SP1 is a STARK-based protocol)
It is worth noting that all STARKs are SNARKs, but not all SNARKs are STARKs.
2. What is zkVM?
A virtual machine (VM) is a program that runs programs. In this context, a zkVM is a virtual computer implemented as a system, general circuit, or tool for generating zero-knowledge proofs: it can produce a ZKP for the execution of any program or computation.
A zkVM does not require learning complex mathematics and cryptography to design and code ZK circuits; it lets any developer execute programs written in their favorite language and generate ZKPs (zero-knowledge proofs), making zero-knowledge far easier to integrate and interact with. Broadly speaking, most references to a zkVM mean the virtual machine together with the compiler toolchain and proof system attached to it, not just the virtual machine itself. Below, we summarize the main components of a zkVM and their functions.
The design and implementation of each component is governed by the choice of proofs (SNARKs or STARKs) and instruction set architecture (ISA) for the zkVM. Traditionally, an ISA specifies the capabilities of a CPU (data types, registers, memory, etc.) and the order of operations that the CPU performs when executing a program. In this context, an ISA determines the machine code that can be interpreted and executed by the VM. The choice of ISA can make a fundamental difference in the accessibility and usability of a zkVM, as well as in the speed and efficiency of proof generation, and it underpins the construction of any zkVM.
Below are some examples of zkVMs and their components for reference only.
For now, we will focus on the high-level interactions between each component to provide a framework for understanding the algebraic and cryptographic processes and design trade-offs of zkVM in later articles.
3. Abstract zkVM flow
The following figure is an abstract and generalized zkVM flow chart, splitting and classifying the format (input/output) as the program moves between zkVM components.
The general process of zkVM is as follows:
(1) Compilation phase
The compiler first compiles the program written in traditional languages (C, C++, Rust, Solidity) into machine code. The format of the machine code is determined by the selected ISA.
(2) VM phase
The VM executes the machine code and generates an execution trace: a step-by-step record of the underlying program's execution. Its format is determined by the chosen arithmetization and set of polynomial constraints. Common arithmetization schemes include R1CS (as in Groth16), PLONKish arithmetization (as in halo2), and AIR (as in plonky2 and plonky3).
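As an illustration, here is a minimal sketch of the VM phase using a hypothetical three-instruction ISA (the `MOV`/`ADD`/`MUL` opcodes and two-register machine are invented for this example): the VM executes machine code and records one row of machine state per step, which together form the execution trace.

```python
# A toy VM: executes machine code for a hypothetical ISA and records
# the execution trace (program counter + register snapshot per step).
def run(program, x):
    regs, trace = {"a": x, "b": 0}, []
    for pc, (op, dst, src) in enumerate(program):
        if op == "MOV":
            regs[dst] = regs[src]
        elif op == "ADD":
            regs[dst] = regs[dst] + regs[src]
        elif op == "MUL":
            regs[dst] = regs[dst] * regs[src]
        trace.append((pc, dict(regs)))   # one trace row per executed step
    return regs["a"], trace

# Machine code computing f(x) = x^2 + x:
program = [("MOV", "b", "a"), ("MUL", "a", "b"), ("ADD", "a", "b")]
out, trace = run(program, 3)   # out == 12; trace has one row per instruction
```

In a real zkVM the trace rows are what get encoded as polynomials in the next phase; the constraints assert that each row follows from the previous one according to the ISA's semantics.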
(3) Prover phase
The prover receives the trace and represents it as a set of polynomials subject to a set of constraints, essentially converting the computation into algebra: facts about the execution become facts about polynomials.
The prover commits to these polynomials using a polynomial commitment scheme (PCS). A commitment scheme is a protocol that allows the prover to create a fingerprint of some data X, called a commitment to X, and later use that commitment to prove facts about X without revealing X's content. The commitment is a concise, "preprocessed" version of the computational constraints. This allows the prover to use the random values the verifier proposes in the next step to prove facts about the computation, now represented by polynomial equations.
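One simple instance of a commitment scheme can be sketched with a Merkle tree over a polynomial's evaluations: the root is the fingerprint, and an individual evaluation can be "opened" with a short hash path. Real zkVMs use schemes such as FRI or KZG, and this toy omits domain separation and other hardening; it only shows the commit/open/verify shape.

```python
import hashlib

def H(*xs):
    """Toy hash combiner (no domain separation; illustration only)."""
    return hashlib.sha256(b"|".join(str(x).encode() for x in xs)).hexdigest()

def commit(evals):
    """Merkle-commit to a list of evaluations (length a power of two)."""
    layer, tree = [H(v) for v in evals], []
    tree.append(layer)
    while len(layer) > 1:
        layer = [H(layer[i], layer[i + 1]) for i in range(0, len(layer), 2)]
        tree.append(layer)
    return tree[-1][0], tree          # (root = the commitment, full tree)

def open_at(tree, i):
    """Opening proof for index i: the sibling hash at each level."""
    path = []
    for layer in tree[:-1]:
        path.append(layer[i ^ 1])
        i //= 2
    return path

def verify(root, i, value, path):
    """Recompute the root from the claimed value and the sibling path."""
    node = H(value)
    for sib in path:
        node = H(node, sib) if i % 2 == 0 else H(sib, node)
        i //= 2
    return node == root

evals = [x**3 + 2 * x + 1 for x in range(8)]   # p(x) = x^3 + 2x + 1 on 0..7
root, tree = commit(evals)
assert verify(root, 5, evals[5], open_at(tree, 5))       # honest opening passes
assert not verify(root, 5, evals[5] + 1, open_at(tree, 5))  # tampered value fails
```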
The prover runs a Polynomial Interactive Oracle Proof (PIOP) to prove that the submitted polynomial represents an execution trace that satisfies the given constraints. PIOP is an interactive proof protocol where the prover sends a commitment to a polynomial, the verifier responds with random field values, and the prover provides an evaluation of the polynomial, similar to "solving" a polynomial equation using random values to convince the verifier in a probabilistic manner.
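The probabilistic core of this "solve at a random point" step rests on the Schwartz-Zippel lemma: two distinct polynomials of degree at most d agree at a uniformly random field point with probability at most d/|F|, so one random evaluation is convincing. A minimal sketch, with a toy field size and hypothetical polynomials:

```python
import secrets

P = 2**31 - 1                 # a prime field (toy size; real systems use larger fields)

def ev(coeffs, x):
    """Evaluate a polynomial given low-to-high coefficients, via Horner's rule mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [1, 2, 0, 5]              # f(X) = 1 + 2X + 5X^3 (the "true" trace polynomial)
g = [1, 2, 0, 5]              # the prover's claimed-equal polynomial

r = secrets.randbelow(P)      # verifier's random challenge point
assert ev(f, r) == ev(g, r)   # equal polynomials agree everywhere, so also at r
# A cheating prover with g != f would pass this check with probability <= 3/P,
# since f - g is a nonzero polynomial of degree <= 3 with at most 3 roots.
```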
Applying the Fiat-Shamir heuristic, the prover runs the PIOP in non-interactive mode: the verifier's random challenge points are replaced by hashes of the transcript so far. In cryptography, the Fiat-Shamir heuristic converts an interactive proof of knowledge into a non-interactive one (the same transformation that turns interactive identification schemes into digital signatures). This step makes the proof non-interactive, yielding a self-contained zero-knowledge proof.
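A minimal sketch of the heuristic, assuming a toy Schnorr-style discrete-log setting (illustrative, insecure parameters): the verifier's random challenge is replaced by a hash of the transcript, so the prover can produce the entire proof alone and anyone can verify it later.

```python
import hashlib
import secrets

# Toy group (p = 2q + 1 prime, g of order q); far too small to be secure.
p, q, g = 2039, 1019, 4
x = secrets.randbelow(q)                       # witness
y = pow(g, x, p)                               # public statement: y = g^x mod p

def challenge(g, y, t):
    """Fiat-Shamir: the challenge is a hash of the transcript, not a verifier message."""
    return int(hashlib.sha256(f"{g}|{y}|{t}".encode()).hexdigest(), 16) % q

# Prover, with no interaction:
r = secrets.randbelow(q)
t = pow(g, r, p)                               # commitment
c = challenge(g, y, t)                         # derived challenge
s = (r + c * x) % q
proof = (t, s)

# Verifier recomputes the same challenge from the transcript and checks the equation:
t, s = proof
assert pow(g, s, p) == (t * pow(y, challenge(g, y, t), p)) % p
```

Because the challenge is bound to the commitment by the hash, the prover cannot pick `t` after seeing `c`, which is what preserved soundness in the interactive version.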
The prover must convince the verifier that the polynomial evaluations it sends are correct with respect to the polynomial commitments it sent earlier. To do this, the prover produces an "evaluation" or "opening" proof, supplied by the polynomial commitment scheme (the fingerprint).
(4) Verifier Phase
The verifier checks the proof by following the verification protocol of the proof system, either using constraints or commitments. The verifier accepts or rejects the result based on the validity of the proof.
In summary, a zkVM proof can prove that for a given program, a given result, and a given initial condition, there exists some input that causes the program to produce the given result when executed from the given initial condition. We can combine this statement with the flow to get the following description of zkVM.
A zkVM proof will prove that for a given VM program and a given output, there exists some input that causes the given program to produce the given output when executed on the VM.
4. Evaluating zkVM
What is the criterion for evaluating zkVM? In other words, under what circumstances should we say that one zkVM is better than another? In practice, the answer depends on the use case.
Lita's market research shows that for most commercial use cases, between speed, efficiency, and simplicity, the most important attribute is either speed or kernel time efficiency, depending on the application. Some applications are price-sensitive and want to optimize the proof process to be low-energy and low-cost. For these applications, kernel time efficiency may be the most important optimization metric. Other applications, especially those related to finance or trading, are very sensitive to latency and need to optimize for speed.
Most public performance comparisons focus only on speed, which is certainly important, but is not a comprehensive measure of performance. There are also several important properties that measure the reliability of zkVM, most of which are not up to production standards, even for market-leading incumbents.
We recommend evaluating zkVMs on the following criteria, divided into two subcategories:
Baseline: used to measure the reliability of zkVM
· Correctness
· Security
· Trust assumptions
Performance: used to measure the capabilities of zkVM
· Efficiency
· Speed
· Simplicity
(1) Baseline: Correctness, Security, and Trust Assumptions
Correctness and security should be used as baselines when evaluating zkVM for mission-critical applications. There needs to be sufficient reason to be confident in the correctness, and the security claims need to be strong enough. In addition, the trust assumptions need to be weak enough for the application.
Without these properties, zkVM may be worse than useless for the application, as it may not perform as specified and expose users to hacker attacks and exploits.
Correctness
· The VM must perform the computation as expected
· The proof system must satisfy the security properties it claims
Correctness contains three major properties:
· Soundness: The proof system is truthful, so everything it proves is true. The verifier rejects proofs of false statements; it accepts a computational result only if the inputs actually produce that result.
· Completeness: The proof system is complete, able to prove all true statements. If the prover claims that it can prove the result of a computation, it must be able to produce a proof acceptable to the verifier.
· Zero-knowledge: Possessing a proof reveals no more about the computation's inputs than knowing the result itself.
You can have completeness without soundness: a proof system that proves everything, including false statements, is obviously complete but not sound. And you can have soundness without completeness: a proof system that cannot prove some true statements (or proves nothing at all) is trivially sound, having never proved a false statement, but not complete.
Security
· Concerns the tolerances on soundness, completeness, and zero knowledge
In practice, all three correctness properties have non-zero tolerances. This means every proof gives a statistical probability of correctness rather than absolute certainty. A tolerance is the maximum acceptable probability that a property fails. Zero tolerance is of course ideal, but in practice zkVMs do not achieve zero tolerance on all of these properties: perfect soundness and completeness appear incompatible with succinctness, and there is no known way to achieve perfect zero knowledge. A common way to measure security is in bits of security, where a tolerance of 1/(2^n) is called n bits of security. More bits means better security.
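The arithmetic behind "bits of security" is simple: a failure tolerance of 1/(2^n) corresponds to n bits, i.e., the negative base-2 logarithm of the tolerance. A small worked sketch (the `bits_of_security` helper is invented for this example):

```python
import math

def bits_of_security(tolerance):
    """Bits of security for a given failure tolerance: n where tolerance = 1/2^n."""
    return -math.log2(tolerance)

assert bits_of_security(1 / 2**128) == 128.0   # halving the tolerance adds one bit
assert bits_of_security(1 / 2**80) == 80.0

# Repeating an independent check with soundness error e, k times, drives the
# combined error to e^k, multiplying the bits of security by k:
assert bits_of_security((1 / 2**20) ** 4) == 80.0
```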
A zkVM being completely correct does not necessarily mean it is reliable. Correctness only means the zkVM satisfies its claimed security properties within its claimed tolerances; it does not mean those tolerances are low enough to be market-ready. Conversely, a zkVM being sufficiently secure does not mean it is correct: security refers to the claimed tolerances, not the tolerances actually achieved. Only when a zkVM is both correct and sufficiently secure can we say it is reliable within its claimed tolerances.
Trust Assumptions
· Assumptions about the honest behavior of certain parties that must hold for us to conclude that the zkVM operates reliably.
When a zkVM has trust assumptions, they usually take the form of a trusted setup process. The setup process of a ZK proof system is run once, before the first proof is generated, to produce some information called "setup data". In a trusted setup, one or more individuals generate randomness that is incorporated into the setup data, and it is necessary to assume that at least one of these individuals discarded, and cannot recover, the randomness they contributed.
There are two common trust assumption models in practice.
The "honest majority" trust assumption states that more than half of a group of N people behave honestly in certain specific interactions with the system, which is a trust assumption commonly used in blockchains.
The "1/N" trust assumption states that at least one of a group of N people behaves honestly in certain specific interactions with the system, which is a trust assumption commonly used by MPC-based tools and applications.
It is generally believed that zkVM without trust assumptions is more secure than zkVM with trust assumptions, all other things being equal.
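A sketch of why the 1/N assumption suffices for a trusted setup, assuming a simplified ceremony where the setup randomness is the XOR of all participants' contributions (real ceremonies combine contributions with group operations, but the principle is the same): as long as one honest participant discards their share, the rest cannot reconstruct the secret.

```python
import secrets

# Five participants each contribute 128 bits of randomness; the setup value
# is the XOR of all contributions.
contributions = [secrets.randbits(128) for _ in range(5)]

setup_randomness = 0
for c in contributions:
    setup_randomness ^= c

# Suppose the first N-1 participants collude. They can XOR their own shares:
known = 0
for c in contributions[:-1]:
    known ^= c

# The setup value is known XOR (honest share). Since the honest share is
# uniformly random and discarded, the setup value stays uniformly random
# from the colluders' point of view.
assert setup_randomness == known ^ contributions[-1]
```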
(2) zkVM trilemma: the balance between speed, efficiency, and simplicity in zkVM
Speed, efficiency, and simplicity are all matters of degree, and all of them contribute to the end-user cost of a zkVM. How to weigh them in an evaluation depends on the application. In general, the fastest solution is not the most efficient or the most concise, the most concise is not the fastest or the most efficient, and so on. Before explaining how they interact, let's define each property.
Speed
· How fast the prover can generate a proof
· Measured in wall-clock time, i.e., the time it takes to compute from start to finish
Speed should be defined and measured based on the specific test program, input, and system to ensure that it can be quantitatively evaluated. This metric is critical for latency-sensitive applications where timely availability of proofs is essential, but it also comes with higher overhead and larger proofs.
Efficiency
· The resources consumed by the prover; less is better.
· Approximated by user time, i.e., the CPU time consumed by the program's code.
The prover consumes two resources: kernel time and space. Therefore, efficiency can be broken down into kernel time efficiency and space efficiency.
Kernel time efficiency: the average time the prover runs across all cores multiplied by the number of cores running the prover.
For a single-core prover, kernel time consumption and speed are the same thing. For a multi-core-capable prover running in multi-core mode on a multi-core system, they are not. If a program fully utilizes 5 cores or threads for 5 seconds, that is 25 seconds of user time but only 5 seconds of wall-clock time.
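The relationship can be stated as a one-line formula, paired with a single-core measurement for comparison (the `user_time` helper is invented for this example):

```python
import time

def user_time(busy_cores, wall_seconds):
    """User (CPU) time = average busy cores x wall-clock time."""
    return busy_cores * wall_seconds

assert user_time(5, 5) == 25   # the 5-core example: 25 s user time, 5 s wall-clock
assert user_time(1, 7) == 7    # single-core: user time equals wall-clock time

# Measuring both clocks for a single-core, CPU-bound task:
t_wall, t_cpu = time.perf_counter(), time.process_time()
total = sum(i * i for i in range(500_000))
wall = time.perf_counter() - t_wall
cpu = time.process_time() - t_cpu
# Here cpu is roughly equal to wall; a prover saturating 5 cores would instead
# show cpu roughly equal to 5 * wall.
```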
Space efficiency: refers to the amount of storage capacity used, such as RAM.
User time is an interesting proxy for the energy consumed by a computation. When almost all cores are fully utilized, a CPU's power draw stays relatively constant; in that case, the user time spent by CPU-bound, mostly user-mode code should be roughly linearly proportional to the watt-hours (i.e., energy) consumed by the execution.
For any proving operation at sufficient scale, reducing energy consumption and compute resource usage matters, because the energy bill (or cloud computing bill) for proving is a significant operating cost. For these reasons, user time is an interesting metric; lower proving costs let service providers pass lower prices on to cost-sensitive customers.
Both kinds of efficiency relate to the energy the proving process consumes, and hence to its financial cost. For a definition of efficiency to be operational, it must be stated relative to one or more test programs, one or more test inputs per program, and one or more test systems.
Simplicity
· Size of the proofs generated and the complexity of verifying them
Simplicity combines three metrics, with the complexity of proof verification broken down further:
· Proof size: The physical size of the proof, typically in kilobytes.
· Proof verification time: The time required to verify the proof.
· Proof verification space: The memory usage during proof verification.
Verification is typically a single core operation, so speed and core time efficiency are often the same thing in this context. As with speed and efficiency, a definition of simplicity requires specifying the test program, test inputs, and test system.
Once each performance attribute is defined, we will demonstrate the impact of optimizing one attribute over the others.
· Speed: Fast proof generation results in larger proofs that are slower to verify, and the more resources proof generation consumes, the less efficient it is.
· Simplicity: The prover needs extra time to compress the proof, but verification becomes fast; the more concise the proof, the higher the proving overhead.
· Efficiency: Minimizing resource usage slows proof generation and reduces proof simplicity.
Generally, optimizing for one aspect means not optimizing for another, so a multi-dimensional analysis is needed to select the best solution on a case-by-case basis.
A good way to weigh these attributes in an evaluation might be to define an acceptable level for each attribute and then determine which attributes are the most important. The most important attributes should be optimized while maintaining a good enough level on all other attributes.
To summarize: evaluate a zkVM first on its baseline properties (correctness, security, and trust assumptions), then weigh speed, efficiency, and simplicity according to the needs of the application at hand.