
Vitalik talks to Chinese developers: Extreme idealism and casinos are both unhealthy

Summary: In a recent interview, Vitalik said that Ethereum's PoS upgrade is unrelated to price fluctuations; what matters is whether ecosystem applications can create long-term value for ETH. On the issue of L2 liquidity fragmentation, he proposed improving the cross-chain experience through technologies such as the Open Intents Framework, and stressed that L1 needs to take on more roles to underpin the value of ETH. He also suggested that developers focus on high-level applications of cryptographic technologies such as ZK, and noted that AI will lower the development threshold and foster the emergence of "super individuals." Finally, he called on developers to build applications that combine real utility with sustainable business models to promote the healthy development of the Ethereum ecosystem.
2025-06-17 09:43:21

Editor: DappLearning

On April 7, 2025, Vitalik and Xiao Wei appeared together at the Pop-X HK Research House event co-hosted by DappLearning, ETHDimsum, Panta Rhei, and UETH.

During a break in the event, Yan, the initiator of the DappLearning community, interviewed Vitalik. The interview covered various topics including ETH POS, Layer2, cryptography, and AI. The conversation was in Chinese, and Vitalik's Chinese was very fluent.

Here is the content of the interview (original content has been reorganized for readability):

1. Views on the PoS Upgrade

Yan: Hello, Vitalik, I am Yan from the DappLearning community. It is a great honor to interview you here.

I started learning about Ethereum in 2017. I remember that in 2018 and 2019 there was a heated debate about PoW versus PoS, and this topic is still discussed today.

Looking back now, (ETH) PoS has been running stably for over four years, with millions of validators in the consensus network. At the same time, however, the ETH/BTC exchange rate has been declining, which presents both positives and challenges.

So, from this point in time, what do you think of Ethereum's PoS upgrade?

Vitalik: I think the prices of BTC and ETH have nothing to do with PoW versus PoS at all.

There are many different voices in the BTC and ETH communities, and what these two communities are doing is completely different; their ways of thinking are also completely different.

Regarding the price of ETH, I think there is a problem: ETH has many possible futures, and in these futures, there will be many successful applications on Ethereum, but these successful applications may not bring enough value to ETH.

This is a concern for many people in the community, but it is actually a very normal issue. For example, Google, as a company, builds many products and interesting things, yet over 90% of its revenue still comes from its search business.

The relationship between Ethereum's ecosystem applications and ETH (the price) is similar. Some applications pay a lot of transaction fees and burn a lot of ETH, while many others may be relatively successful without bringing correspondingly much success to ETH.

So this is a problem we need to think about and continue to optimize. We need to support more applications that have long-term value for Ethereum holders and ETH.

Therefore, I think the future success of ETH may appear in these areas. I don't think there is much correlation with improvements in consensus algorithms.

2. PBS Architecture and Centralization Concerns

Yan: Yes, the prosperity of the ETH ecosystem is also an important reason that attracts us developers to build it.

OK, what do you think about the ETH2.0 PBS (Proposer-Builder Separation) architecture? This is a good direction: in the future, everyone could run a light node on a mobile phone to verify (ZK) proofs, and anyone could stake 1 ETH to become a validator.

However, builders may become more centralized, as they need to do MEV resistance and generate ZK proofs. If based rollups are adopted, the tasks of builders may increase, such as acting as sequencers.

In this case, will builders become too centralized? Although validators are already sufficiently decentralized, this is a chain. If one link in the middle has a problem, it will also affect the operation of the entire system. So how do we solve the censorship resistance issue in this area?

Vitalik: Yes, I think this is a very important philosophical question.

In the early days of Bitcoin and Ethereum, there was a subconscious assumption:

Building a block and validating a block is one operation.

Suppose you are building a block that contains 100 transactions; your own node then needs to process the gas for all 100 of them. When you finish building the block and broadcast it to the world, every node in the world also needs to do that much work (consume the same gas). So if we set the gas limit such that every laptop, or a server of a certain size, can build a block, then we need correspondingly configured node servers to validate those blocks.

That was the old technology; now we have ZK, DAS (data availability sampling), and many other new technologies, including statelessness (stateless validation).

Before using these technologies, building a block and validating a block needed to be symmetrical, but now it can become asymmetrical. So the difficulty of building a block may become very high, but the difficulty of validating a block may become very low.

Using a stateless client as an example: If we use this technology and increase the gas limit tenfold, the computational requirements for building a block will become enormous, and an ordinary computer may no longer be able to handle it. At this point, we may need to use a particularly high-performance Mac Studio or a more powerful server.

But the cost of validation will become lower, because validation requires no storage at all, relying only on bandwidth and CPU computing resources. If we add ZK technology, the CPU cost of validation can also be eliminated. If we add DAS, the cost of validation will be extremely low. If the cost of building a block becomes higher, the cost of validation will become very low.

So is this better compared to the current situation?

This question is quite complex. I think about it this way: if there are some super nodes in the Ethereum network, that is, some nodes have higher computational power, we need them to perform high-performance computing.

How do we prevent them from acting maliciously? For example, there are several types of attacks:

First: Creating a 51% attack.

Second: Censorship attack. If they refuse to accept some users' transactions, how can we reduce this type of risk?

Third: MEV-related operations; how can we reduce these risks?

Regarding the 51% attack: since the validation process is done by attesters, and those attester nodes validate using DAS, ZK proofs, and stateless clients, the cost of validation will be very low, so the threshold for becoming a consensus node remains relatively low.

For example, suppose some super nodes build blocks, and 90% of blocks are built by you, 5% by one party, and 5% by others. Even if you completely refuse to accept any transactions, it is not necessarily a disaster. Why? Because you cannot interfere with the consensus process itself.

So you cannot perform a 51% attack; the only thing you can do is to refuse certain users' transactions.

Users may only need to wait for ten or twenty blocks for another person to include their transaction in a block, which is the first point.
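The "wait ten or twenty blocks" intuition can be made concrete with a tiny model (my own illustration, not from the interview): if a censoring builder wins 90% of blocks and honest builders win the remaining 10%, the wait for inclusion follows a geometric distribution.

```python
# Expected censorship delay when a minority of builders remains honest:
# a censored transaction waits, on average, 1/p blocks, where p is the
# share of blocks built by honest (non-censoring) builders.

def expected_wait_blocks(honest_share: float) -> float:
    """Mean number of blocks until an honest builder includes the tx."""
    return 1.0 / honest_share

SLOT_SECONDS = 12  # Ethereum's current slot time

wait_blocks = expected_wait_blocks(0.10)   # 10% honest builders
wait_seconds = wait_blocks * SLOT_SECONDS

print(wait_blocks)   # 10.0 blocks on average
```

So even a builder producing 90% of blocks only delays a censored transaction by about ten blocks (roughly two minutes), rather than blocking it outright.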

The second point is that we have the concept of FOCIL (fork-choice enforced inclusion lists). What is FOCIL for?

FOCIL separates the role of selecting transactions from the role of executing them. This way, the role of choosing which transactions go into the next block can be more decentralized: through FOCIL, smaller nodes gain the ability to independently choose transactions to include in the next block, and even if you are a larger node, your power is actually very limited.

This approach is more complex than before; previously, we thought of each node as a personal laptop. But actually, if you look at Bitcoin, it also has a fairly hybrid architecture now, because Bitcoin miners are mining data centers.

So in PoS it works like this: some nodes need more computational power and resources, but the rights of those nodes are limited, while the other nodes can be very decentralized, ensuring the security and decentralization of the network. However, this approach is more complex, so it is also a challenge for us.

Yan: Very good thinking. Centralization is not necessarily a bad thing, as long as we can limit malicious actions.

Vitalik: Yes.

3. Issues Between Layer1 and Layer2, and Future Directions

Yan: Thank you for resolving my long-standing confusion. Now to the second part of my questions. As a witness to Ethereum's journey, I would say Layer2 has actually been very successful, and the TPS issue has largely been resolved; it is no longer congested like it was during the ICO craze.

I personally feel that Layer2 is quite usable now. However, there are currently many proposals addressing Layer2 liquidity fragmentation. What do you think of the relationship between Layer1 and Layer2? Is the Ethereum mainnet currently too laid-back and too hands-off, imposing no constraints on Layer2? Should Layer1 establish rules with Layer2, create profit-sharing models, or adopt solutions like Based Rollups? Justin Drake recently proposed such a solution on Bankless, and I agree with it. What do you think, and when might the corresponding solutions go live?

Vitalik: I think there are several issues with our Layer2 now.

First, their progress in security is not fast enough. So I have been pushing for Layer2 to upgrade to Stage 1, and I hope they can upgrade to Stage 2 this year. I have been urging them to do this and have been supporting L2BEAT to do more transparency work in this area.

Second, there is the issue of L2 interoperability. That is, cross-chain transactions and communication between two L2s; if two L2s are in the same ecosystem, interoperability needs to be simpler, faster, and cheaper than it is now.

Last year, we started this work, now called the Open Intents Framework, along with chain-specific addresses; this is mostly UX work.

In fact, I think the cross-chain issue of L2 is probably 80% a UX problem.

Although the process of solving UX issues may be painful, as long as the direction is correct, we can simplify complex problems. This is also the direction we are working towards.

Some things need to go further; for example, the withdrawal time for Optimistic Rollup is one week. If you have a token on Optimism or Arbitrum, transferring that token to Layer1 or another Layer2 requires waiting a week.

You can have market makers front the transfer and wait the week themselves (for which you pay them a fee). For small transactions, ordinary users can move from one Layer2 to another through systems like the Open Intents Framework or Across Protocol. For larger transactions, however, market makers have limited liquidity, so the fees they require are relatively high. Last week, I published an article in which I support a 2-of-3 verification method: OP + ZK + TEE.

Because if we do that 2 of 3, we can meet three requirements simultaneously.

The first requirement is being completely trustless, with no need for a Security Council; TEE technology serves only an auxiliary role, so it does not need to be fully trusted.

Second, we can start using ZK technology, but ZK technology is still relatively early, so we cannot fully rely on it yet.

Third, we can reduce the withdrawal time from one week to one hour.
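A minimal sketch of how a 2-of-3 arrangement like the one described here could work (my own illustration, not actual rollup code; the verifier names and interface are assumptions):

```python
# Hypothetical 2-of-3 finalization rule (OP + ZK + TEE): a withdrawal claim
# is accepted once any two of the three independent verification systems
# agree, so no single system needs to be fully trusted.
from typing import Dict

def finalize_withdrawal(verdicts: Dict[str, bool], quorum: int = 2) -> bool:
    """Accept the claim iff at least `quorum` of the verifiers approve it."""
    approvals = sum(1 for ok in verdicts.values() if ok)
    return approvals >= quorum

# Example: the ZK prover is still immature and fails to produce a proof,
# but the optimistic (fraud-proof) path and the TEE attestation both pass.
verdicts = {"optimistic": True, "zk": False, "tee": True}
print(finalize_withdrawal(verdicts))  # True
```

This captures the point made above: the TEE path is auxiliary, because any two of the three paths suffice and the system survives the failure (or compromise) of any single one.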

You can imagine that if users use the Open Intents Framework, the liquidity cost for market makers will decrease by a factor of 168, because the time market makers need to wait (before they can rebalance) drops from one week to one hour. In the long term, we plan to reduce the withdrawal time from one hour to 12 seconds (the current block time), and if we adopt SSF (single-slot finality), it can be reduced to 4 seconds.
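The 168x figure follows directly from the units involved: a market maker's capital is locked for the whole withdrawal window, so shrinking the window shrinks the required float proportionally. Illustrative arithmetic (not protocol code):

```python
# Capital lockup scales with the withdrawal window, so the reduction factor
# is just the ratio of the old window to the new one.
HOURS_PER_WEEK = 7 * 24                 # 168 hours in a week

week_to_hour = HOURS_PER_WEEK / 1       # 1 week -> 1 hour: 168x less capital
hour_to_slot = 3600 / 12                # 1 hour -> one 12-second slot: a
                                        # further 300x on top of that
print(week_to_hour)  # 168.0
```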

Currently, we will also adopt techniques such as zk-SNARK aggregation to parallelize the ZK proving process and reduce latency somewhat. Of course, if users withdraw via ZK directly, they do not need to go through intents; but if they go through intents, the cost will be very low. This is all part of interoperability.

Regarding the role of Layer1, in the early stages of the Layer2 Roadmap, many people thought we could completely replicate Bitcoin's Roadmap, where Layer1 would have very few uses, only doing proofs (doing minimal work), while Layer2 could do everything else.

However, we found that if Layer1 does not play any role at all, it is dangerous for ETH.

One of our biggest concerns, which we discussed earlier, is that the success of Ethereum applications may not translate into the success of ETH.

If ETH is not successful, it will lead to our community having no money and being unable to support the next round of applications. So if Layer1 does not play a role at all, the user experience and the entire architecture will be controlled by Layer2 and some applications. There will be no one to represent ETH. So if we can allocate more roles to Layer1 in some applications, it will be better for ETH.

Next, we need to answer the question: What will Layer1 do? What will Layer2 do?

In February, I published an article arguing that even in a Layer2-centric world, there are many important things that need Layer1. For example, Layer2s need to send proofs to Layer1; if a Layer2 has issues, users will need to cross over to another Layer2 through Layer1. Additionally, keystore wallets and oracle data can be placed on Layer1, and so on. Many such mechanisms depend on Layer1.

There are also some high-value applications, such as DeFi, which are actually more suitable for Layer1. One important reason why some DeFi applications are more suitable for Layer1 is their time horizon; users need to wait a long time, such as one year, two years, or three years.

This is especially evident in prediction markets, where sometimes questions are asked about what will happen in 2028.

Here lies a problem: if the governance of a Layer2 has issues, theoretically, all users there can exit; they can move to Layer1 or another Layer2. However, if there is an application in this Layer2 whose assets are locked in long-term smart contracts, users will not be able to exit. So many theoretically safe DeFi applications are not very safe in practice.

For these reasons, some applications should still be built on Layer1, so we are starting to pay more attention to the scalability of Layer1.

We now have a roadmap to improve Layer1's scalability with about four to five methods by 2026.

The first is Delayed Execution (separating block validation from execution): blocks are validated in one slot and executed in the next. The advantage is that the maximum acceptable execution time can increase from about 200 milliseconds to 3 or 6 seconds, allowing more processing time.

The second is the Block-Level Access List: each block must specify up front which accounts' state and which storage slots need to be read. This is somewhat like statelessness without witnesses, and its advantage is that EVM execution and IO can be processed in parallel; it is a relatively simple way to implement parallelism.
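A toy sketch of why declaring state access up front enables the IO/execution parallelism mentioned here: if a block lists every (account, slot) pair it will touch, a node can fetch all of that state concurrently before executing a single transaction. The names and the storage interface below are my own assumptions, not Ethereum client code.

```python
# Prefetch all state declared in a block-level access list in parallel,
# so that execution afterwards runs against warm in-memory state.
from concurrent.futures import ThreadPoolExecutor

def prefetch_state(access_list, read_slot):
    """Fetch every declared (account, slot) pair concurrently."""
    with ThreadPoolExecutor() as pool:
        pairs = list(pool.map(lambda key: (key, read_slot(key)), access_list))
    return dict(pairs)

# Toy backing store standing in for slow disk reads.
DISK = {("0xA", 0): 42, ("0xB", 1): 7}

state = prefetch_state(DISK.keys(), DISK.get)
# EVM execution could now proceed with no blocking disk IO.
print(state[("0xA", 0)])  # 42
```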

The third is Multidimensional Gas Pricing, which can set a separate maximum capacity for each resource within a block; this is very important for security.

Another is historical data handling (EIP-4444), which no longer requires every node to permanently store all history. For example, each node might store only 1%, using a p2p approach: your node stores one part and another node stores another part. This way, the information is stored in a more decentralized manner.
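One way to picture this distributed-history idea: deterministically map each historical chunk to a small subset of nodes, so every chunk is replicated a few times but no node stores everything. The hash-based assignment below is an illustrative assumption, not the actual EIP-4444 or Portal Network scheme.

```python
# Assign each history chunk to the nodes whose hashed distance to it is
# smallest, giving a deterministic, evenly spread replication scheme.
import hashlib

def assigned_nodes(chunk_id: int, node_ids: list, replicas: int = 3) -> list:
    """Rank nodes by hash distance to the chunk; the closest few store it."""
    def distance(node: str) -> int:
        digest = hashlib.sha256(f"{chunk_id}:{node}".encode()).hexdigest()
        return int(digest, 16)
    return sorted(node_ids, key=distance)[:replicas]

nodes = [f"node-{i}" for i in range(100)]
keepers = assigned_nodes(chunk_id=12345, node_ids=nodes)
print(len(keepers))  # 3 of the 100 nodes hold this chunk
```

Because the assignment is a pure function of the chunk and node identifiers, any node can compute which peers should hold a given chunk without coordination.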

So if we can combine these four solutions, we believe we can potentially increase Layer1's gas limit by ten times, and all our applications will have the opportunity to start relying more on Layer1 and doing more on Layer1, which will benefit Layer1 and ETH.

Yan: Okay, the next question, are we likely to welcome the Pectra upgrade this month?

Vitalik: Actually, we hope to do two things: approximately at the end of this month, we will conduct the Pectra upgrade, and then we will conduct the Fusaka upgrade in Q3 or Q4.

Yan: Wow, so soon?

Vitalik: Hopefully.

Yan: My next question is also related to this. As someone who has watched Ethereum grow, we know that Ethereum has about five or six clients (consensus clients and execution clients) being developed simultaneously to ensure security, which involves a lot of coordination work, leading to longer development cycles.

This has its pros and cons; compared to other Layer1s, it may indeed be slow, but it is also safer.

However, what kind of solutions can allow us to not wait a year and a half for an upgrade? I have seen you propose some solutions; could you elaborate on them?

Vitalik: Yes, there is a solution where we can improve coordination efficiency. We are now starting to have more people who can move between different teams to ensure more efficient communication between teams.

If a client team has a problem, they can raise the issue and let the research team know. Actually, one advantage of Thomas becoming one of our new executive directors (EDs) is that he comes from a client team and is now also at the EF, so he can facilitate this coordination; that is the first point.

The second point is that we can be stricter with client teams. Our current approach is that if there are five teams, we need all five teams to be fully prepared before we announce the next hard fork (network upgrade). We are now considering that we can start the upgrade as long as four teams are ready, so we do not need to wait for the slowest one, and we can also motivate everyone more.

4. Views on Cryptography and AI

Yan: So appropriate competition is still necessary. It’s great; I really look forward to every upgrade, but let’s not keep everyone waiting too long.

Next, I want to ask some questions related to cryptography, which are somewhat scattered.

In 2021, when our community was just established, we gathered developers from major exchanges and researchers from venture firms to discuss DeFi. 2021 was indeed a stage of broad participation in understanding, learning, and designing DeFi.

Looking back at the development of ZK, whether for the public or for developers, learning ZK systems such as Groth16, Plonk, and Halo2 has made it increasingly difficult for newcomers to catch up, and the pace of technological advancement is very fast.

Additionally, we now see rapid development of zkVMs, while the zkEVM direction is not as popular as before. Once zkVMs mature, developers may not need to pay much attention to the underlying ZK at all.

What are your suggestions and views on this?

Vitalik: I think for the ZK ecosystem, the best direction is for most ZK developers to work in a high-level language, an HLL. They can write their application code in the HLL, while those researching proof systems continue to improve and optimize the underlying algorithms. Development needs to be layered; developers should not need to know what happens at the layer below.

Currently, there is a problem: Circom and Groth16 have a very developed ecosystem, but this significantly limits applications in the ZK ecosystem, because Groth16 has many drawbacks, such as each application needing its own trusted setup, and its efficiency is not very high. So we are considering allocating more resources here to help some modern HLLs succeed.

Another good route is ZK RISC-V. Because RISC-V can also become an HLL, many applications, including EVM and some others, can be written on RISC-V.

Yan: Good, so developers only need to learn Rust. I attended Devcon in Bangkok last year and also heard about the development of applied cryptography, which was quite enlightening.

Regarding applied cryptography, how do you view the combination of ZKP with MPC and FHE, and what advice do you have for developers?

Vitalik: Yes, this is very interesting. I think FHE has good prospects now, but there is a concern: MPC and FHE always require a committee, meaning seven or more selected nodes. If 51%, or in some designs 33%, of those nodes are compromised, your system has a problem. This is equivalent to having a Security Council, and actually more serious than a Security Council, because for a Stage 1 Layer2, 75% of the Security Council must be compromised before problems arise.

The second point is that a reliable Security Council keeps most of its keys in cold wallets, meaning they are mostly offline. In most MPC and FHE designs, however, the committee must stay online constantly to keep the system running, so their keys may sit on a VPS or other always-on server, making them easier to attack.

This worries me a bit; I think many applications can still be developed, which have advantages but are not perfect.

Yan: Finally, I want to ask a relatively light question. I see you have recently been paying attention to AI, and I want to list some viewpoints.

For example, Elon Musk said that humanity may just be a guiding program for silicon-based civilization.

From our experience in the crypto space, the premise of decentralization is that everyone will abide by the rules, will check and balance each other, and will understand the risks. This ultimately leads to elite politics. So what do you think of these viewpoints? Just share your thoughts.

Vitalik: Yes, I am thinking about where to start answering.

Because the field of AI is very complex. For example, five years ago, no one would have predicted that the US would have the best closed-source AI in the world, while China would have the best open-source AI. AI can enhance everyone's abilities, and sometimes it can also enhance the power of certain countries.

However, AI can also have a somewhat democratizing effect. When I use AI myself, I find that in areas where I am already among the top thousand in the world, such as some ZK development work, AI actually helps me very little; I still need to write most of the code myself. But in areas where I am a novice, AI helps me a lot. For example, I had never developed an Android app before. Ten years ago I created an app using a framework, writing it in JavaScript and converting it into an app; apart from that, I had never written a native Android app.

Earlier this year, I conducted an experiment to see if I could write an app using GPT, and it was completed within an hour. This shows that the gap between experts and novices has been significantly reduced with the help of AI, and AI can also provide many new opportunities.

Yan: To add a point, I really appreciate the new perspective you provided. I previously thought that with AI, experienced programmers would learn faster, while it would be unfriendly to novice programmers. However, in some ways, it does enhance the abilities of novices. It may be a form of equality rather than division, right?

Vitalik: Yes, but now a very important question that also needs to be considered is what effects the combination of some technologies we are developing, including blockchain, AI, cryptography, and other technologies, will have on society.

Yan: So you still hope that humanity will not just be under elite rule, right? You also hope to achieve a Pareto optimality for the entire society. Ordinary people become super individuals through the empowerment of AI and blockchain.

Vitalik: Yes, yes, super individuals, super communities, super humans.

5. Expectations for the Ethereum Ecosystem and Advice for Developers

Yan: OK, then we come to the last question. What are your expectations and messages for the developer community? What would you like to say to the Ethereum community developers?

Vitalik: For these Ethereum application developers, it’s time to think.

Now there are many opportunities to develop applications in Ethereum, and many things that were previously impossible can now be done.

There are many reasons for this, such as:

First: Previously, Layer1's TPS was completely insufficient, but now this problem is gone;

Second: Previously, privacy issues could not be solved, but now they can;

Third: Because of AI, the difficulty of developing anything has decreased. It can be said that although the complexity of the Ethereum ecosystem has increased somewhat, through AI, everyone can better understand Ethereum.

So I think many things that failed before, including ten years ago or five years ago, may now succeed.

In the current blockchain application ecosystem, I think the biggest problem is that we have two types of applications.

The first type is very open, decentralized, secure, and particularly idealistic, but such applications only have 42 users. The second type is, frankly, casinos. The problem is that both extremes are unhealthy.

So we hope to create applications that meet two criteria.

First, users genuinely like using them and they deliver real value; such applications are better for the world.

Second, they have real business models, meaning they are economically sustainable and do not need to rely on limited funds from foundations or other organizations. This is also a challenge.

But now I think everyone has more resources than before, so if you can find a good idea and execute it well, your chances of success are very high.

Yan: Looking back, I think Ethereum has actually been quite successful, continuously leading the industry and working hard to solve the problems encountered in the industry under the premise of decentralization.

Another point that resonates deeply is that our community has always been non-profit. Through Gitcoin Grants in the Ethereum ecosystem, as well as OP's retroactive rewards and airdrop rewards from other projects, we have found that building in the Ethereum community can receive a lot of support. We are also thinking about how to ensure the community can operate sustainably and stably.

Building on Ethereum is truly exciting, and we hope to see the true realization of the world computer soon. Thank you for your valuable time.

ChainCatcher: Building the Web3 world with innovations.