- Zero-knowledge proof paradigm: What is zkVM
“In the next 5 years, we will be talking about the adoption of zero-knowledge protocols as much as we are about the adoption of blockchain protocols. The potential unlocked by the breakthroughs of the past few years will sweep the crypto mainstream.”
— Jill, CSO of Espresso Systems, May 2021
Since 2021, the zero-knowledge proof (ZK) landscape has evolved into a diverse ecosystem of primitives, networks, and applications across multiple domains. Yet while ZK is gradually gaining momentum, with the launch of ZK-powered rollups like Starknet and zkSync Era marking the latest advances in the space, much of ZK remains a mystery to its users and to the crypto space as a whole.
But times are changing. We believe that zero-knowledge crypto is a powerful, pervasive tool for scaling and securing software. Simply put, ZK is the bridge to crypto mass adoption. To quote Jill again, anything involving zero-knowledge proofs (ZKPs) will create tremendous value (both fundamental and speculative) in both web2 and web3. The best minds in crypto are working hard to iterate and make ZK economically viable and production-ready. Even so, there is still much that needs to be done before the model we envision becomes a reality.
Compare ZK adoption to Bitcoin adoption: one reason Bitcoin evolved from an internet currency on fringe enthusiast forums into “digital gold” approved by BlackRock was the proliferation of developer- and community-generated content that fostered interest. For now, ZK exists in a bubble within a bubble. Information is fragmented and polarized, with articles either filled with arcane terms or too layman-like to convey anything meaningful beyond repetitive examples. It seems that everyone (experts and laymen alike) knows what zero-knowledge proofs are, but no one can describe how they actually work.
As one of the teams contributing to the zero-knowledge paradigm, we hope to demystify our work and help a wider audience establish a canonical foundation for understanding and analyzing ZK systems and applications, in order to promote education and discussion among relevant parties and enable the spread of relevant information.
In this article, we will introduce the basics of zero-knowledge proofs and zero-knowledge virtual machines, provide a high-level summary of the operation process of zkVM, and finally analyze the evaluation criteria of zkVM.
1. Zero-knowledge proof basics
What is a zero-knowledge proof (ZKP)?
In short, a ZKP enables one party (the prover) to prove to another party (the verifier) that they know something without revealing the specific content of that thing or any other information. More specifically, a ZKP proves knowledge of a piece of data or the result of a calculation without revealing that data or the input. Creating a zero-knowledge proof involves a series of mathematical transformations that convert the result of a computation into otherwise meaningless data which nonetheless proves that the code executed successfully, and which can be verified later.
In some cases, the amount of work required to verify a proof that has been constructed through multiple rounds of algebraic transformations and cryptography is less than the amount of work required to run the calculation. It is this unique combination of security and scalability that makes zero-knowledge cryptography such a powerful tool.
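As a concrete toy illustration of the prove/verify flow, the sketch below implements a Schnorr-style proof of knowledge of a discrete logarithm, made non-interactive with the Fiat-Shamir heuristic. The parameters are deliberately tiny and insecure; they only show the mechanics, not a production scheme.

```python
import hashlib
import random

# Toy Schnorr proof of knowledge of a discrete log x with y = G^x mod P,
# made non-interactive via Fiat-Shamir. Parameters are tiny and NOT secure.
P = 2039          # safe prime, P = 2*Q + 1
Q = 1019          # prime order of the subgroup generated by G
G = 4             # generator of the order-Q subgroup

def challenge(*vals):
    """Derive the verifier's challenge by hashing the transcript."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    y = pow(G, x, P)
    r = random.randrange(Q)       # fresh randomness hides x
    t = pow(G, r, P)              # commitment
    c = challenge(G, y, t)        # Fiat-Shamir challenge
    s = (r + c * x) % Q           # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = challenge(G, y, t)
    # G^s == t * y^c  holds iff s = r + c*x, i.e. the prover knew x
    return pow(G, s, P) == (t * pow(y, c, P)) % P

y, proof = prove(123)
assert verify(y, proof)                       # honest proof accepted
t, s = proof
assert not verify(y, (t, (s + 1) % Q))        # tampered response rejected
```

Note how the verifier learns that the prover knows `x` without ever seeing it: the response `s` is masked by the one-time randomness `r`.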
zkSNARK: Zero-Knowledge Succinct Non-Interactive Argument of Knowledge
· Relies on an initial (trusted or untrusted) setup process to establish parameters for verification
· Non-interactive: beyond the setup, the prover sends a single proof with no further back-and-forth with the verifier
· Proofs are small and easy to verify
· Rollups like zkSync, Scroll, and Linea use SNARK-based proofs
zkSTARK: Zero-Knowledge Scalable Transparent Argument of Knowledge
· No trusted setup required
· Provides high transparency by using publicly verifiable randomness to create a trustless system, i.e., the parameters for proving and verification are provably random.
· Highly scalable: proofs can be generated and verified relatively quickly (though not always), even when the underlying witness (data) is large.
· No interaction is required between the prover and verifier
· The trade-off is that STARKs generate larger proofs, which are harder to verify than some zkSNARK proofs but easier to verify than others.
· Starknet and zkVMs such as Lita, Risc Zero, and Succinct Labs all use STARKs.
(Note: Succinct bridge uses SNARKs, but SP1 is a STARK-based protocol)
It is worth noting that all STARKs are SNARKs, but not all SNARKs are STARKs.
2. What is zkVM?
A virtual machine (VM) is a program that runs programs. In this context, a zkVM is a virtual computer implemented as a system of circuits and tools for generating zero-knowledge proofs: it can produce a ZKP for any program or computation.
A zkVM spares developers from learning the complex mathematics and cryptography needed to design and code ZK circuits, letting any developer execute programs written in their favorite language and generate ZKPs (zero-knowledge proofs), and making it far easier to integrate and interact with zero knowledge. Broadly speaking, most references to a zkVM mean the compiler toolchain and proof system attached to the virtual machine that executes the program, not just the virtual machine itself. Below, we summarize the main components of a zkVM and their functions.
The design and implementation of each component is governed by the choice of proof system (SNARKs or STARKs) and instruction set architecture (ISA) for the zkVM. Traditionally, an ISA specifies the capabilities of a CPU (data types, registers, memory, etc.) and the operations the CPU performs when executing a program. In this context, the ISA determines the machine code that the VM can interpret and execute. The choice of ISA makes a fundamental difference in the accessibility and usability of a zkVM, as well as the speed and efficiency of proof generation, and underpins the construction of any zkVM.
Below are some examples of zkVMs and their components for reference only.
For now, we will focus on the high-level interactions between each component to provide a framework for understanding the algebraic and cryptographic processes and design trade-offs of zkVM in later articles.
3. Abstract zkVM flow
The following figure is an abstract, generalized zkVM flow chart, showing the format (input/output) of the program as it moves between zkVM components.
The general process of zkVM is as follows:
(1) Compilation phase
The compiler first compiles the program written in traditional languages (C, C++, Rust, Solidity) into machine code. The format of the machine code is determined by the selected ISA.
(2) VM phase
The VM executes the machine code and generates an execution trace, a step-by-step record of the underlying program's execution. Its format is determined by the choice of arithmetization scheme and the associated set of polynomial constraints. Common arithmetization schemes include R1CS as in Groth16, PLONKish arithmetization as in halo2, and AIR as in plonky2 and plonky3.
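To make the notion of an execution trace concrete, here is a minimal sketch (our own illustration, not any particular zkVM's trace format) that records a Fibonacci computation as rows of machine state and checks AIR-style boundary and transition constraints over a toy field:

```python
# Toy execution trace, AIR-style: each row holds the machine state (a, b),
# and a transition constraint relates consecutive rows. Real zkVMs trace
# full CPU state (program counter, registers, memory) the same way.
MOD = 97  # tiny prime field for illustration only

def run_fib(steps):
    trace = [(1, 1)]
    for _ in range(steps):
        a, b = trace[-1]
        trace.append((b, (a + b) % MOD))
    return trace

def check_constraints(trace):
    # Boundary constraint: the program starts from the fixed initial state.
    if trace[0] != (1, 1):
        return False
    # Transition constraint: every step follows the VM's update rule.
    return all(nxt == (b, (a + b) % MOD)
               for (a, b), nxt in zip(trace, trace[1:]))

trace = run_fib(8)
assert check_constraints(trace)
# A single corrupted row violates the transition constraints:
bad = trace[:4] + [(0, 0)] + trace[5:]
assert not check_constraints(bad)
```

In a real system these row-wise constraints are what get encoded as the polynomial constraints mentioned above.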
(3) Prover phase
The prover receives the trace and represents it as a set of polynomials subject to a set of constraints, essentially translating the computation into algebra by encoding facts about it as mathematical relations.
The prover commits to these polynomials using a polynomial commitment scheme (PCS). A commitment scheme is a protocol that allows the prover to create a fingerprint of some data X, called a commitment to X, and later use it to prove facts about X without revealing X's content. The commitment is a concise, "preprocessed" version of the computational constraints. This lets the prover use random values that the verifier proposes in the next step to prove facts about the computation, now represented by polynomial equations.
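A minimal sketch of the "fingerprint" idea, using a Merkle tree over a polynomial's evaluations in the style of STARK commitments (real schemes add low-degree tests and batching; the field, polynomial, and domain here are illustrative):

```python
import hashlib

# Toy polynomial commitment: Merkle-hash the polynomial's evaluations over a
# small power-of-two domain, then open one evaluation with an authentication
# path. The Merkle root is the "fingerprint" (commitment).
def h(data):
    return hashlib.sha256(data).digest()

def merkle_layers(leaves):
    layers = [[h(str(v).encode()) for v in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def commit(evals):
    return merkle_layers(evals)[-1][0]        # root = commitment

def open_at(evals, i):
    layers = merkle_layers(evals)
    path, idx = [], i
    for layer in layers[:-1]:
        path.append(layer[idx ^ 1])           # sibling hash at each level
        idx //= 2
    return evals[i], path

def verify_opening(root, i, value, path):
    node = h(str(value).encode())
    for sib in path:
        node = h(node + sib) if i % 2 == 0 else h(sib + node)
        i //= 2
    return node == root

poly = lambda x: (3 * x**2 + 2 * x + 1) % 97  # illustrative polynomial
evals = [poly(x) for x in range(8)]
root = commit(evals)
val, path = open_at(evals, 5)
assert verify_opening(root, 5, val, path)      # honest opening accepted
assert not verify_opening(root, 5, val + 1, path)  # forged value rejected
```

The verifier only ever holds the 32-byte root, yet can check individual evaluations against it.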
The prover runs a Polynomial Interactive Oracle Proof (PIOP) to show that the committed polynomials represent an execution trace satisfying the given constraints. A PIOP is an interactive proof protocol in which the prover sends commitments to polynomials, the verifier responds with random field values, and the prover provides evaluations of the polynomials at those values, "solving" the polynomial equations at random points to convince the verifier probabilistically.
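The probabilistic "solving at random points" rests on the Schwartz-Zippel lemma: distinct polynomials of degree d agree on at most d points, so a single random evaluation exposes a mismatch with overwhelming probability. A sketch over a toy field:

```python
import random

# Probabilistic polynomial identity check: instead of comparing two
# polynomials coefficient by coefficient, sample one random field point and
# compare evaluations. Unequal polynomials of degree d agree on at most d
# points, so the check errs with probability <= d / field_size.
P = 2**31 - 1  # Mersenne prime field (illustrative choice)

def evaluate(coeffs, x):
    acc = 0
    for c in reversed(coeffs):   # Horner's rule; coeffs[i] multiplies x^i
        acc = (acc * x + c) % P
    return acc

def probabilistic_equal(f, g, trials=1):
    return all(evaluate(f, r) == evaluate(g, r)
               for r in (random.randrange(P) for _ in range(trials)))

f = [1, 2, 3]            # 1 + 2x + 3x^2
g = [1, 2, 3]
h = [1, 2, 4]
assert probabilistic_equal(f, g)
assert not probabilistic_equal(f, h)   # fails except with prob ~ 2 / 2^31
```

This is why the verifier's random challenges are enough to pin down an enormous computation with a handful of evaluations.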
Applying the Fiat-Shamir heuristic, the prover runs the PIOP non-interactively, with the verifier's messages replaced by pseudorandom challenge points derived by hashing the transcript. In cryptography, the Fiat-Shamir heuristic converts an interactive proof of knowledge into a digital signature for verification. This step removes the interaction, making the proof non-interactive.
The prover must convince the verifier that the polynomial evaluations it sends are correct with respect to the polynomial commitments it sent earlier. To do this, it produces an "evaluation" or "opening" proof, provided by the polynomial commitment scheme (fingerprint).
(4) Verifier phase
The verifier checks the proof by following the proof system's verification protocol, using the constraints or the commitments, and accepts or rejects the result based on the proof's validity.
In summary, a zkVM proof can prove that for a given program, a given result, and a given initial condition, there exists some input that causes the program to produce the given result when executed from the given initial condition. We can combine this statement with the flow to get the following description of zkVM.
A zkVM proof will prove that for a given VM program and a given output, there exists some input that causes the given program to produce the given output when executed on the VM.
4. Evaluating zkVM
What is the criterion for evaluating zkVM? In other words, under what circumstances should we say that one zkVM is better than another? In practice, the answer depends on the use case.
Lita's market research shows that for most commercial use cases, among speed, efficiency, and simplicity, the most important attribute is either speed or core time efficiency, depending on the application. Some applications are price-sensitive and want to optimize the proving process for low energy use and low cost; for these, core time efficiency may be the most important metric. Other applications, especially those related to finance or trading, are latency-sensitive and need to optimize for speed.
Most public performance comparisons focus only on speed, which is certainly important, but is not a comprehensive measure of performance. There are also several important properties that measure the reliability of zkVM, most of which are not up to production standards, even for market-leading incumbents.
We recommend evaluating zkVMs on the following criteria, divided into two subcategories:
Baseline: used to measure the reliability of zkVM
· Correctness
· Security
· Trust assumptions
Performance: used to measure the capabilities of zkVM
· Efficiency
· Speed
· Simplicity
(1) Baseline: Correctness, Security, and Trust Assumptions
Correctness and security should be used as baselines when evaluating zkVM for mission-critical applications. There needs to be sufficient reason to be confident in the correctness, and the security claims need to be strong enough. In addition, the trust assumptions need to be weak enough for the application.
Without these properties, zkVM may be worse than useless for the application, as it may not perform as specified and expose users to hacker attacks and exploits.
Correctness
· The VM must perform the computation as expected
· The proof system must satisfy the security properties it claims
Correctness contains three major properties:
· Soundness: The proof system is truthful, so everything it proves is true. The verifier rejects proofs of false statements and accepts a computational result only if the inputs actually produce that result.
· Completeness: The proof system is complete, able to prove all true statements. If the prover claims that it can prove the result of a computation, it must be able to produce a proof acceptable to the verifier.
· Zero knowledge: Seeing a proof reveals no more about the computation's inputs than knowing the result itself.
You can have completeness without soundness: if a proof system proves everything, including false statements, it is obviously complete but not sound. And you can have soundness without completeness: if a proof system proves that a program exists but cannot prove its computations, it is obviously sound (after all, it never proves a false statement) but not complete.
Security
· Concerns the tolerances on soundness, completeness, and zero knowledge
In practice, all three correctness properties have non-zero tolerances. All proofs therefore carry statistical probabilities of correctness rather than absolute certainty. A tolerance is the maximum tolerable probability that a property fails. Zero tolerance is of course ideal, but in practice zkVMs do not achieve it on any of these properties: perfect soundness and completeness appear incompatible with simplicity, and there is no known way to achieve perfect zero knowledge. A common way to measure security is in bits of security, where a tolerance of 1/(2^n) is called n bits of security; more bits is better.
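The bits-of-security arithmetic is simple to sketch; for instance, repeating a check whose per-round soundness error is 1/2 for n independent rounds drives the failure probability to 1/2^n, i.e. n bits (production systems typically target 100 or more):

```python
import math

# Convert a failure probability (tolerance) into bits of security:
# a tolerance of 1/2^n corresponds to n bits.
def bits_of_security(failure_probability):
    return -math.log2(failure_probability)

assert bits_of_security(1 / 2**80) == 80.0
# 30 independent rounds, each catching a cheater with probability 1/2:
assert bits_of_security((1 / 2) ** 30) == 30.0
```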
A fully correct zkVM is not necessarily reliable: correctness only means the zkVM satisfies its claimed security properties within some tolerance, not that the claimed tolerance is low enough for the market. Conversely, a sufficiently secure zkVM is not necessarily correct, since security refers to the claimed tolerance, not the tolerance actually achieved. Only when a zkVM is both fully correct and sufficiently secure can it be called reliable at the claimed tolerance.
Trust Assumptions
· Assumptions about the honesty of certain parties that must hold to conclude that the zkVM operates reliably.
When a zkVM has trust assumptions, they usually take the form of a trusted setup process. The setup process of a ZK proof system is run once, before the first proof is generated, to produce information called "setup data". In a trusted setup, one or more individuals each generate some randomness that is incorporated into the setup data, and one must assume that at least one of them deleted the randomness they contributed.
There are two common trust assumption models in practice.
The "honest majority" trust assumption states that more than half of a group of N people behave honestly in certain specific interactions with the system, which is a trust assumption commonly used in blockchains.
The "1/N" trust assumption states that at least one of a group of N people behaves honestly in certain specific interactions with the system, which is a trust assumption commonly used by MPC-based tools and applications.
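The 1/N assumption behind a trusted setup can be sketched in a toy model: each participant contributes secret randomness, and the contributions are combined (here multiplicatively, loosely mimicking powers-of-tau ceremonies) into a secret that parameterizes the setup data. The field size and combining rule are illustrative only.

```python
import random

# Toy "1-of-N" trusted setup: reconstructing the combined secret ("toxic
# waste") requires every participant's share, so the setup is safe as long
# as at least one participant deletes theirs.
P = 2**61 - 1  # prime modulus for the toy field

def combine(shares):
    acc = 1
    for s in shares:
        acc = acc * s % P
    return acc

shares = [random.randrange(2, P) for _ in range(5)]  # 5 participants
secret = combine(shares)
# An attacker who compromises all but one participant still cannot
# reconstruct the secret:
assert combine(shares[:-1]) != secret
```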
It is generally believed that zkVM without trust assumptions is more secure than zkVM with trust assumptions, all other things being equal.
(2) zkVM trilemma: the balance between speed, efficiency, and simplicity in zkVM
Speed, efficiency, and simplicity are all sliding-scale properties, and all of them contribute to the end-user cost of a zkVM. How to weigh them in an evaluation depends on the application. In general, the fastest solution is not the most efficient or the most concise, the most concise solution is not the fastest or most efficient, and so on. Before explaining how they relate, let's define each property.
Speed
· How fast the prover can generate a proof
· Measured in wall-clock time, i.e., the time it takes to compute from start to finish
Speed should be defined and measured against specific test programs, inputs, and systems so that it can be quantitatively evaluated. This metric is critical for latency-sensitive applications where timely availability of proofs is essential, but optimizing for it typically brings higher resource overhead and larger proofs.
Efficiency
· The resources consumed by the prover, the less the better.
· Approximated by user time, i.e., the CPU time the program's code consumes.
The prover consumes two resources: core time and space. Efficiency can therefore be broken down into core time efficiency and space efficiency.
Core time efficiency: measured by core time consumption, the average time the prover runs on each core multiplied by the number of cores running the prover (less is better).
For a single-core prover, core time consumption and speed are the same thing. For a prover running in multi-core mode on a multi-core system, they are not: if a program fully utilizes 5 cores or threads for 5 seconds, that is 25 seconds of user time but only 5 seconds of wall-clock time.
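The accounting above can be captured in one helper (the `utilization` parameter is our own addition, for cores that sit partially idle):

```python
# Core-time vs wall-clock accounting for a parallel prover: a job that keeps
# 5 cores busy for 5 seconds costs 25 core-seconds even though only 5 seconds
# elapse. Efficiency optimizes the former, speed the latter.
def core_seconds(num_cores, wall_clock_s, utilization=1.0):
    return num_cores * wall_clock_s * utilization

assert core_seconds(5, 5) == 25          # the example from the text
assert core_seconds(1, 5) == 5           # single-core: core time == wall time
assert core_seconds(8, 10, 0.5) == 40.0  # half-idle cores still accrue core time
```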
Space efficiency: refers to the amount of storage capacity used, such as RAM.
User time is an interesting proxy for the energy consumed by a computation. When almost all cores are fully utilized, a CPU's power draw stays relatively constant, so the user time of a CPU-bound, mostly user-mode execution should be roughly linearly proportional to the watt-hours (i.e., energy) it consumes.
For any proving operation at sufficient scale, saving energy and computing resources matters, because the energy bill (or cloud computing bill) for proving is a significant operating cost. For these reasons, user time is an interesting metric: lower proving costs let service providers pass lower prices on to cost-sensitive customers.
Both kinds of efficiency relate to the energy and hardware consumed by the proving process, and hence to its financial cost. For a definition of efficiency to be operational, it must be stated relative to one or more test programs, one or more test inputs per program, and one or more test systems.
Simplicity
· Size of the proofs generated and the complexity of verifying them
Simplicity is a combination of three metrics, with the complexity of proof verification broken down into time and space:
· Proof size: The physical size of the proof, typically in kilobytes.
· Proof verification time: The time required to verify the proof.
· Proof verification space: The memory usage during proof verification.
Verification is typically a single-core operation, so speed and core time efficiency are usually the same thing in this context. As with speed and efficiency, defining simplicity requires specifying the test programs, test inputs, and test systems.
With each performance attribute defined, we can show the effect of optimizing one attribute at the expense of the others:
· Speed: Fast proof generation tends to produce larger proofs that are slower to verify, and consumes more resources, reducing efficiency.
· Simplicity: Compressing proofs costs the prover extra time, but verification becomes fast. The more concise the proof, the more computational overhead proving incurs.
· Efficiency: Minimizing resource usage slows proof generation and reduces proof simplicity.
Generally, optimizing for one aspect means not optimizing for another, so a multi-dimensional analysis is needed to select the best solution on a case-by-case basis.
A good way to weigh these attributes in an evaluation might be to define an acceptable level for each attribute and then determine which attributes are the most important. The most important attributes should be optimized while maintaining a good enough level on all other attributes.
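That evaluation rule can be sketched as a simple filter-then-optimize procedure (the candidate names, metrics, and numbers below are purely illustrative, not real benchmarks):

```python
# Discard any zkVM that misses an acceptable level on some attribute, then
# pick the best remaining candidate on the attribute that matters most to
# the application. All metrics here are lower-is-better.
candidates = {
    "vm_a": {"speed_s": 12, "core_seconds": 90,  "proof_kb": 400},
    "vm_b": {"speed_s": 30, "core_seconds": 40,  "proof_kb": 50},
    "vm_c": {"speed_s": 8,  "core_seconds": 300, "proof_kb": 2000},
}
acceptable = {"speed_s": 35, "core_seconds": 320, "proof_kb": 1500}

def pick(candidates, acceptable, optimize_for):
    ok = {name: m for name, m in candidates.items()
          if all(m[k] <= limit for k, limit in acceptable.items())}
    return min(ok, key=lambda name: ok[name][optimize_for])

# vm_c fails the proof-size threshold and is filtered out first.
assert pick(candidates, acceptable, "speed_s") == "vm_a"        # latency-sensitive
assert pick(candidates, acceptable, "core_seconds") == "vm_b"   # cost-sensitive
```

The same skeleton extends naturally to weighted scoring once hard thresholds are met.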
Below we summarize the attributes and their key considerations: