Roundtable Discussion: How can zk move towards true mainstream, and how will zk integrate with AI?
Author: ChainCatcher
As the narrative around zk continues to heat up, last week ChainCatcher held the second session of its "zk Master Class" series, inviting six guests from Ola, Cysic, Hyper Oracle, FOX, Opside, and CatcherVC to discuss the theme "With the zk narrative heating up, how to capture the long-term value of zk," exploring zk's current development bottlenecks and potential breakthrough directions for the future.

The following is a summary of the event:
1. Host LonersLiu: Why did each guest choose to start a business in the zk direction? Have you discovered any limitations of zk during the entrepreneurial process?
NanFeng: Opside has been researching technology selection since 2018. We found that many projects providing Rollups services, such as AltLayer and recently popular projects, are likely based on the OP Stack framework, modifying the code from Optimism.
In contrast, we believe zk is a more long-term solution that can be faster, safer, and trustless. At the same time, zk can bring some other features, such as our cross-Rollup communication, which cannot be achieved by the Optimism solution.
The biggest limitation of zk lies in supplying enough computational power. The Optimism approach only requires running a single node, similar to running an application chain, whereas zk solutions must generate zk proofs to make the data verifiable. Moreover, because ZKP computation is highly specialized, game or social network developers may not be able to provide it themselves, so more miners need to be attracted to supply that computational power rather than leaving developers to maintain it on their own.
Based on a hybrid consensus of PoS and PoW, Opside can solve this problem, making zk Rollup a viable service. We have researched many open-source solutions, such as the recently launched Polygon zkEVM and the upcoming Scroll and Taiko solutions. We will make certain modifications to their consensus layers to adapt to the Opside platform. This way, we can benefit from Opside's hybrid consensus of PoS and PoW, allowing other miners to join in providing a decentralized computational power network. For users, they can enjoy the compatibility of zkEVM and seamlessly migrate from BNB Chain or Polygon and other EVM-compatible chains without worrying about underlying computational power issues.
Leo Fan: The founding of Cysic grew out of my experience at Algo, which has its own zk proof system. At the time we tried to shorten proof generation to under a minute, but even with extensive algorithmic and software optimization we could not get there, which led to the idea of building zk acceleration hardware.
From experience, the threshold for ZKP computation is considerably higher: not only do you need a deep understanding of the entire ZKP algorithm, but designing ZKP chips is also far more complex than designing Bitcoin mining chips. The algorithm modules are intricate and the ZKP algorithms themselves are still evolving, so the design must balance generality and efficiency.
Currently, no ZKP hardware approach on the market is completely bound to a specific algorithm; instead, people break these algorithms down into more basic operators and try to accelerate those operators. Even if the algorithms change later, only a recompilation at the software level is required. So everyone is currently testing for the most efficient combinations; for example, our MSM (multi-scalar multiplication) implementation already achieves very high efficiency.
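The operator decomposition Leo Fan describes can be illustrated with MSM, the dominant cost in many proof systems. The toy sketch below (my own illustration, not Cysic's implementation) computes a sum of scalar-times-point terms two ways: naively, one multiplication per term, and with Pippenger-style bucketing, the windowed technique hardware accelerators typically target. Real systems operate over elliptic-curve groups; here integers mod a prime under addition stand in, which preserves the algebraic structure without curve arithmetic.

```python
# Toy MSM (multi-scalar multiplication): acc = sum(scalar_i * point_i).
# The "group" is integers mod a prime under addition, a stand-in for an
# elliptic-curve group; P is an arbitrary illustrative modulus.

P = 2**61 - 1

def msm_naive(scalars, points):
    """One scalar multiplication per term."""
    acc = 0
    for s, g in zip(scalars, points):
        acc = (acc + s * g) % P
    return acc

def msm_buckets(scalars, points, window=4):
    """Pippenger-style bucketing: process scalars `window` bits at a time,
    grouping points that share a digit so each bucket is summed once."""
    acc = 0
    max_bits = max(s.bit_length() for s in scalars)
    for shift in range(0, max_bits, window):
        buckets = [0] * (1 << window)
        for s, g in zip(scalars, points):
            digit = (s >> shift) & ((1 << window) - 1)
            if digit:
                buckets[digit] = (buckets[digit] + g) % P
        # weight each bucket by its digit value and the window offset
        partial = sum(d * b for d, b in enumerate(buckets)) % P
        acc = (acc + (partial << shift)) % P
    return acc

scalars = [123456789, 987654321, 42, 7]
points = [11, 22, 33, 44]
assert msm_naive(scalars, points) == msm_buckets(scalars, points)
```

Because the accelerator only needs to speed up operators like this one, a change in the overlying proof system can indeed be absorbed by recompiling the software that schedules work onto the hardware.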
White Young: When I entered the crypto industry in 2018, I was working on consortium chains, specifically zk-based privacy transactions, as many financial institutions were concerned about privacy. With the development of Ethereum, I encountered the concept of Rollup in 2019 and discovered zk's role in scalability. In 2021 I moved from industrial blockchain to the crypto industry, hoping to bring some real-world applications to the zk direction.
After that, we began researching zkEVM solutions and gradually realized that for a scalability project, achieving the final scalability effect is more important. There are still many performance improvement points in existing projects.
Thus, we founded Ola VM, a zkVM focused on high performance and compatibility. We designed the entire stack from scratch, from the virtual machine itself up through the compiler and the programming language; alongside Ola VM we also developed the programming language Ola Lang.
In January of this year, we adjusted Ola VM's direction: in addition to scalability, we are also expanding into privacy. Judging from Aztec Network's recent financing, there is still real narrative demand for privacy in the crypto industry.
Msfew: The reason we chose the zk direction is that we found that middleware networks like The Graph have very poor security, either relying on a centralized mechanism similar to Optimistic or binding nodes with legal documents to constrain the validity of their computations. However, for all Dapp developers, middleware networks are essential infrastructure, and we saw an opportunity there. zk ensures the validity of data for us, including indexing on-chain data from Ethereum or off-chain computations, etc.
Additionally, we can support arbitrary off-chain computations, which breaks through one of zk's limitations. zkWASM (WebAssembly's zkVM) is fully customizable and programmable; any program running in zkWASM has zk's powerful capabilities, including verifiability, trustlessness, decentralization, and computational integrity.
zkWASM and zkEVM exist for analogous purposes: zkEVM is designed so that all Solidity smart contracts can be rapidly deployed to a zkEVM network, while zkWASM aims to support The Graph ecosystem, which currently has over 800 smart contract Subgraphs deployed on mainnet. zkWASM can run all code inside the virtual machine, executing it and generating proofs.
Sputnik: The reason we chose this track is that we see two important properties of zk: on one hand, it can achieve zero knowledge and provide privacy protection. At the same time, Rollup development is also a technical specialty of our team.
Some challenges we currently face relate to the performance of the algorithms themselves, so improving existing algorithms is a focus for us. In zk implementation there are two further issues: on one hand, there is not much infrastructure supporting zk circuits yet; on the other, circuits lack standardization, so the same code can be hard to implement across different circuit systems, and there is currently no single optimal way to express them.
2. Host LonersLiu: Although zk replaces previous economic games with mathematics, whether for censorship resistance, avoiding single point failure systems, or incentivizing more participation, a clever token design is needed to incentivize different participants. However, current computational resources may choose to focus on POW or AI computing; why should one choose to participate in a new ZKP project? What are your thoughts on the token economic model for ZKP products?
Msfew: On security, I want to add that we didn't strictly have to use zkWASM. But we wanted to inherit Ethereum's security in full, rather than inheriting only part of it by going through StarkNet, so we ultimately chose zkWASM, which allows proofs to be verified directly on Ethereum.
Regarding the economic incentives of the entire system, the key advantage of zk is that it can replace many complex components with pure cryptography. If zk is not used, some current middleware or oracle networks may need to make many economic designs and assumptions on a macro level, designing various very complex curves.
In terms of designing the incentive mechanism, we will combine some designs from The Graph and remove some Ponzi components. The primary purpose of token design is not merely to increase its price but to create a successful product or network in conjunction with the token.
3. Host LonersLiu: In past security models, we would trust the participants in the game to act honestly, but in zk, we need to trust the security of the circuits, constraints, or compilers, but behind this is still human. How do you ensure security, through audits or other means?
Msfew: This point is actually related to the definition of Trustless. Trustless means no trust is required. However, if we trace it back, nothing is truly trustless. To evaluate a zk project, one still needs to trust that its circuits are correct and that the code is correct. Just as we trust that the code written by Bitcoin core developers is correct and not buggy. At a deeper level, it requires faith in cryptography and mathematics that their rules will not fail.
The security of the entire zk system is a relatively new issue. Currently, there are very few security tools, testing tools, or auditing tools related to zk development; most rely on manual audits. In future developments, many zk security-related tools may emerge, which would be beneficial for all zkEVM or other zk networks.
4. Host LonersLiu: When Opside is doing zk-RaaS, assuming there is a lot of computational power, how would you allocate that power? Or how would you incentivize more participants to serve the network?
NanFeng: This is a core issue we need to solve, but the premise is that the supply of computational power in the market far exceeds demand. After Ethereum's transition to proof of stake, miners displaced from Ethereum have had nowhere to go, releasing a large amount of computational power. Under this premise, that power needs a target and a better platform to migrate to. Opside provides unified market pricing for this ZKP computational power.
Currently, many zk Rollups have been deployed on Ethereum or BSC, but they are all fragmented. For example, the already launched Polygon zkEVM and zkSync Era have no relation to each other. Assets on Polygon and zkSync are completely different accounts, and the underlying computational power and algorithms are also entirely different.
But within Opside, for example, a unified specification of Polygon zkEVM can be adopted, and zkSync, Scroll, etc., can also follow a standard, generating unified pricing. For miners, this is no longer a fragmented zkEVM but a unified batch market, allowing them to seek the highest price for generating ZKP proofs to earn profits.
5. Host LonersLiu: If the computational power on the Opside platform is highly concentrated, will it affect security? For example, in POW, highly concentrated computational power may lead to hard forks; will concentrated computational power in ZKP pose risks to the network?
NanFeng: This is actually the biggest difference between traditional PoW and zkEVM's PoW. Traditional PoW, as in Bitcoin, performs meaningless computation that only calculates hashes. In zkEVM, however, the computational work is ZKP generation, and it plays no role in forking the system, because consensus is still determined by the upper layer, Opside. The only role of a generated ZKP is to verify whether a given sequencer's output is correct; whether to fork is decided by Opside.
6. Host LonersLiu: Sputnik, could you share how FOX thinks about token incentives?
Sputnik: The core issue here is whether the token design for L2 is simple and convenient.
Some L2s choose to issue their own tokens, while others continue to use L1 tokens. If a separate token is issued, it means that deploying contracts and executing transactions on the Layer requires separate purchases of GAS, which not only inconveniences users but is also not good for the project. Regardless of the mechanism, the design principle regarding fees should serve the user and not trap them in complex exchanges.
Additionally, the token mechanism also involves whether it can incentivize nodes that generate proofs while ensuring security. Regarding security, there are mainly two aspects: one is algorithmic security, and the other is system-level, consensus-level security.
FOX has also been researching this mechanism. Let me give a simple attack example: if a node that submits a proof receives corresponding token incentives, then when it submits a valid computation proof, can other nodes quickly copy that proof and submit it themselves, leading to a race? This requires thinking about how to make nodes generate distinct proofs, which means incorporating some unique information into their circuits.
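The "unique information" idea above can be sketched concretely. In the toy below (my own illustration, not FOX's design), a hash-based commitment stands in for a real zk proof, and the prover's address is mixed into the proof as a public input, so a proof copied by another node fails verification under the copier's address. Note that a real verifier never sees the witness; it is included here only to keep the stand-in self-contained.

```python
# Toy sketch: binding the prover's identity into a proof so that a copied
# proof cannot be re-submitted by a different node. A SHA-256 commitment
# stands in for a real zk proof; make_proof/verify_proof are illustrative
# names, not a real proving-system API.
import hashlib

def make_proof(statement: bytes, witness: bytes, prover_addr: bytes) -> bytes:
    # In a real circuit, prover_addr would be a public input constrained
    # inside the proof, making the proof valid only for that address.
    return hashlib.sha256(statement + witness + prover_addr).digest()

def verify_proof(statement: bytes, witness: bytes,
                 proof: bytes, claimed_addr: bytes) -> bool:
    # Recompute the commitment under the claimed address; a proof bound to
    # a different prover will not match. (A real zk verifier would not
    # need the witness - this is only a structural stand-in.)
    return proof == hashlib.sha256(statement + witness + claimed_addr).digest()

stmt, wit = b"batch #42 state root", b"execution trace"
proof = make_proof(stmt, wit, b"alice")
assert verify_proof(stmt, wit, proof, b"alice")       # original prover: accepted
assert not verify_proof(stmt, wit, proof, b"bob")     # copied proof: rejected
```

Binding the reward recipient into the proof in this way removes the incentive to copy: a stolen proof pays out to nobody but its original creator.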
7. Host LonersLiu: In a privacy environment, when the zk process is outsourced to a third party, is there a risk of data leakage?
White Young: Ideally, programmable privacy would have privacy transactions generated on the user side. However, given the current performance limitations of privacy-oriented zero-knowledge proofs, they cannot be completed on machines with relatively weak computational power, so proof generation is generally delegated to a third-party proxy. In that case, there is a risk of leaking transaction-related information, such as the sending address, the sender, and the transaction content.
However, although this private information may be leaked, it does not lead to the forgery of this transaction or subsequent transactions. At the same time, the scope of leakage is limited to exposing the transaction's privacy to this node only.
8. Host LonersLiu: If GPT 3.5 represents a significant event in AI going mainstream, driven by data training to a certain extent, what do you think will make zk truly go mainstream? Recently, there has been frequent discussion about the combination of zk and AI; what are your thoughts on this direction?
Msfew: When a universal zk virtual machine like zkWASM can be combined with traditional computing programs, zk can truly go mainstream.
zkEVM, while a significant innovation in the Web3 and blockchain fields, only addresses issues specific to those areas. zkWASM, zkVM, and the like address the validity and privacy of traditional Web2 computation; only when zkEVM and zkWASM complement each other can they cover almost all computing problems. At the same time, we also need to improve zk proving speed so that both Web2 and Web3 computations can confidently use these systems.
Regarding the combination of zk and AI, zk and AI are heading in two different directions. zk is about cryptographic personal sovereignty, addressing computational effectiveness and privacy issues, while AI brings about the liberation of productivity.
zk can ensure that AI model computations are valid and can verify the identity of the supplier behind an AI model, such as confirming that a given AI supplier is ChatGPT rather than another service provider like Wenxin Yiyan.
In terms of privacy, the combination of zk and AI may have two models: one protects personal sensitive data from leakage; the other ensures that the parameters of a certain computational model are not leaked while proving the accuracy and performance of the model itself, such as protecting a trading strategy so that its code cannot be seen while still proving the strategy's effectiveness.
9. Host LonersLiu: In terms of privacy, many traditional internet companies use MPC to solve privacy issues, which is cheaper than zk. When AI requires a lot of computation, what advantages does zk have over MPC?
Msfew: The advantage of zk lies in its simplicity; its verification speed is very fast and can be implemented in any computational environment, such as browsers, mobile phones, or on-chain contracts, etc. In the blockchain field, if a project wants to persuade the entire blockchain network to accept its data, it must provide a proof, and this proof cannot be excessively large or complex. At this point, zk's simpler proof method is indeed more needed. This is also why zk is closely integrated with blockchain.
10. Host LonersLiu: In the AI field, with ChatGPT making general models very powerful, many explorations of vertical "ChatGPTs" have emerged. Analogously, in the zk field, when zkEVM is well perfected, some teams are also starting to create application-specific chains. What are NanFeng's thoughts on specific user needs?
NanFeng: Specific needs require specific circuits. A typical example is that dYdX originally relied on Starkware to provide its exclusive circuits. The advantage is high circuit efficiency, but the downside is that it incurs additional development costs and higher development thresholds. Most importantly, it leaves sovereignty with Starkware rather than dYdX itself, preventing dYdX from autonomously changing many economic models.
Thus, the future may be a long-term coexistence of multi-chain and multi-Rollup states. Some projects need to adopt a multi-chain model, while others will follow the Rollup route.
11. Host LonersLiu: Returning to the previous question, when will zk truly go mainstream, and how will zk and AI combine? What are White Young's thoughts?
White Young: To go mainstream, we first need to attract mainstream applications to gain mainstream users. What zk needs to do is to help the industry meet high-performance demand scenarios when a large number of mainstream users flood into the blockchain.
Regarding the combination of zk and AI: we envision that with a large user base and high-performance demands, sticking with the native gas model would make the gas cost of a single AI call very high, and a block might not even be able to fit one AI transaction or its result. So first, we can use zk to compress an AI circuit's transaction into as efficient an expression as possible, allowing a block to contain as many transactions as possible. Second, we may also need new gas billing models to address gas consumption.
When zk can effectively solve scalability issues, and a significant number of mainstream users genuinely enter the blockchain, privacy will naturally begin to receive more attention. At that point, zk can once again play its role in privacy, which is a gradual process.
12. Host LonersLiu: What innovations and breakthroughs in the zk field are worth paying attention to? How do these innovations impact the development of Web3? In what aspects does zk technology demonstrate its long-term value in the Web3 field?
Sputnik: First, we need to consider what zk brings. We believe zk provides a penetrative capability that helps us penetrate some information gaps. For example, when parents say they will keep our New Year's money safe for us to use when we grow up, how can we trust that they are genuinely saving it? There is an information gap, and what zk can do is prove the authenticity of the information without directly showing it to the other party.
Abstractly, there are many scenarios with information gaps, such as proof of reserves for exchanges. zk can eliminate these information gaps, allowing us to trust not only based on human nature but also based on trust in mathematics.
Msfew: From a technical perspective, we will see many technical talents joining zk and creating very novel proof systems, which is a trend. In terms of applications, we can pay attention to zk oracles like Hyper Oracle that support arbitrary off-chain computations. We can utilize zk to truly achieve end-to-end decentralization in decentralized applications while ensuring security.
Our zk components can help Ethereum's consensus be verified in any scenario, such as mobile browsers or even smart contracts. Our zk oracle can be seen as a browser-like application for Ethereum, while the zk components supplement Ethereum's consensus.
In terms of zk's long-term value: zk's essence is to solve trust through hardcore cryptographic computation off-chain, while blockchain's essence is to solve trust by having many participants redundantly re-execute the same computations. I believe that in the future zk may be a stronger cryptographic application than blockchain, and a more important concept, but for now the two complement each other.
13. Host LonersLiu: Vitalik mentioned that in the future, Ethereum Layer 1 will also need to be zk-enabled. In an era where hardware acceleration is mature, is it possible to do zk proofs through mobile phones and computers? What might the scale of zk hardware mining be in the future?
Leo Fan: Verification should not be a problem, as the entire computational load for verification is very small. The generation depends on the size of the circuit and the steps used, which is highly variable. Recently, we have also been discussing cooperation with the Ethereum Foundation regarding the standardization of the entire hardware acceleration interface.
Additionally, we are developing a ZKP chip, which is expected to be launched around Q2 of next year. The performance of this chip is approximately 64 times that of a 3090 graphics card, allowing many complex ZKP computations to be completed in 10 to 20 seconds.
In the future, we will also develop a developer version of this chip, allowing developers to connect it directly to laptops via USB for local computation processing.
The scale should not be small, given the number of ZKP projects. We also plan to establish a zk prover DAO in the second half of this year. Layer 2 projects alone have a demand for our chips of around 2,000 units, and with Layer 1 and other applications the rough total demand exceeds 10,000 units.
14. Host LonersLiu: What directions in the zk field for future innovations and breakthroughs are worth paying attention to? Do you have any additional comments?
NanFeng: The zkEVM field remains worth looking forward to. With the emergence of projects like Scroll, it will become a more competitive market. Points worth watching include ZKP compatibility, modifiability, proof-generation efficiency, and resource consumption, which also highlight a project's competitiveness. At Opside we will weigh these factors to select solutions and incorporate them into our platform to give users the best experience.
Sputnik: Cross-chain bridges are also a hot application of zk. FOX has also developed corresponding solutions using zk technology.
Some challenges in cross-chain involve how to control assets and balance liquidity and trustworthiness. In terms of trustworthiness, the conventional approach has been to rely on a trusted center; for example, after locking an asset on one side, it needs to be unlocked on another chain. The emergence of zk can provide a good solution for trustworthiness.
White Young: First, it should be clarified that Ola is a platform that supports privacy, not just a platform that can only do privacy. This means that when we conduct some public transactions on our platform, we are also a zkEVM.
Scalability is definitely the first issue that zk or the entire blockchain needs to solve; without solving it, it is difficult to introduce mainstream applications. However, scalability involves a choice issue: do you start from the perspective of compatibility or efficiency? Different choices lead to different technical routes. For example, Scroll started from the perspective of compatibility, initially considering migrating the Ethereum ecosystem directly and then making some efficiency improvements. However, Ola first considered efficiency, thus creating a completely zk-friendly virtual machine at the base layer, and then working on compatibility. Overall, zk's development is to first solve scalability, and only after that will other opportunities arise.