Vitalik's Latest Long-Form Article: The Next Chapter of Ethereum's Evolution, and Four Key Improvements for L2
Original Title: “Ethereum has blobs. Where do we go from here?”
Author: Vitalik Buterin
Translation: jk, Odaily Planet Daily
On March 13, the Dencun hard fork was activated, enabling one of Ethereum's long-awaited features: proto-danksharding (also known as EIP-4844, also known as blobs). Initially, the fork reduced rollup transaction fees by a factor of more than 100, as blobs were practically free. Over the past day, we have finally seen a surge in blob usage, with the blobscriptions protocol starting to use them, activating the fee market as well. Blobs are not free, but they remain far cheaper than calldata.

Left image: Thanks to Blobscriptions, blob usage has finally reached the target of 3 per block. Right image: This was followed by blob fees "entering price discovery mode." Source: https://dune.com/0xRob/blobs.
This milestone represents a key shift in Ethereum's long-term roadmap: with blobs, Ethereum scaling is no longer a "zero to one" problem but a "one to many" problem. From here, important scaling work, whether increasing the number of blobs or improving rollups' ability to use each blob, will continue, but it will be more incremental. The fundamental changes to how Ethereum operates as an ecosystem in order to scale are increasingly behind us. Meanwhile, the focus has been shifting, and will continue to shift, from L1 issues such as proof of stake and scalability toward issues closer to the application layer. The key question this article explores is: where does Ethereum go from here?
The Future of Ethereum Scalability
In recent years, we have witnessed Ethereum gradually transform into an L2-centric ecosystem. Major applications have begun to shift from L1 to L2, payments have started to default to L2, and wallets have begun to build their user experience around the new multi-L2 environment.
From the beginning, a key part of the rollup-centric roadmap has been the concept of an independent data availability space: a dedicated portion of space in a block, inaccessible to the EVM, that can store data for layer-2 projects such as rollups. Because this data space is not accessible to the EVM, it can be broadcast and verified separately from a block. Ultimately, it can be verified through a technique called data availability sampling, which lets each node verify that the data was correctly published by randomly checking a few small samples. Once this is implemented, blob space can be greatly expanded; the eventual goal is 16 MB of data space per slot (approximately 1.33 MB per second).

Data availability sampling: Each node only needs to download a small portion of data to verify the overall data availability.
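To make the sampling intuition concrete, here is a minimal simulation sketch (the parameters are illustrative, not the actual consensus-spec values). Because the data is erasure-coded, an attacker who wants to make it unreconstructible must withhold more than half of the extended chunks, so each random sample a node takes detects the withholding with probability of roughly one half or more:

```python
import random

# Illustrative parameters, not the real consensus-spec values.
NUM_CHUNKS = 512        # erasure-extended chunks; any 50% suffices to reconstruct
SAMPLES_PER_NODE = 10   # real deployments use more samples for ~2^-k security

def node_accepts(available: set[int]) -> bool:
    """A node checks random chunk indices and accepts only if all are served."""
    queries = random.sample(range(NUM_CHUNKS), SAMPLES_PER_NODE)
    return all(i in available for i in queries)

# Worst case for the attacker: withhold just over 50% of chunks, so the data is
# NOT reconstructible while each individual sample still succeeds ~half the time.
withheld = set(random.sample(range(NUM_CHUNKS), NUM_CHUNKS // 2 + 1))
available = set(range(NUM_CHUNKS)) - withheld

trials = 100_000
fooled = sum(node_accepts(available) for _ in range(trials))
print(f"fraction of nodes fooled: {fooled / trials:.1e}")  # about (1/2)^10 ~ 1e-3
```

Raising the sample count to 30 pushes the failure probability to roughly one in a billion, which is why each node only ever needs to download a tiny fraction of the data.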
EIP-4844 (i.e., blobs) does not provide us with data availability sampling. However, it does establish a basic framework from which data availability sampling can be introduced and the number of blobs can be increased behind the scenes, all without any involvement from users or applications. In fact, the only required "hard fork" is simply a straightforward parameter change.
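To see why scaling up needs only a parameter change, here is the blob fee mechanism condensed from the EIP-4844 specification (the constants are as given in the EIP; the surrounding scaffolding is simplified here). The target and maximum blob counts are just two constants, and raising them later leaves the pricing rule untouched:

```python
# Blob fee market, condensed from the EIP-4844 spec (constants as in the EIP).
# Scaling blobs later means changing TARGET/MAX; the pricing rule stays the same.
GAS_PER_BLOB = 2**17                           # 131072: one blob is 128 KiB
TARGET_BLOB_GAS_PER_BLOCK = 3 * GAS_PER_BLOB   # target: 3 blobs per block
MAX_BLOB_GAS_PER_BLOCK = 6 * GAS_PER_BLOB      # max: 6 blobs per block
MIN_BASE_FEE_PER_BLOB_GAS = 1
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator)."""
    i, output, accum = 1, 0, factor * denominator
    while accum > 0:
        output += accum
        accum = accum * numerator // (denominator * i)
        i += 1
    return output // denominator

def next_excess_blob_gas(parent_excess: int, parent_blob_gas_used: int) -> int:
    """Excess blob gas accumulates whenever usage exceeds the target."""
    return max(parent_excess + parent_blob_gas_used - TARGET_BLOB_GAS_PER_BLOCK, 0)

def blob_base_fee(excess_blob_gas: int) -> int:
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# Sustained full blocks (6 blobs against a target of 3) raise the fee
# exponentially, by roughly 12.5% per block under these constants.
excess = 0
for _ in range(100):
    excess = next_excess_blob_gas(excess, MAX_BLOB_GAS_PER_BLOCK)
print(blob_base_fee(excess))
```

That exponential response to above-target usage is exactly the "price discovery mode" visible in the chart above.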
From here, two directions that will need to continue developing are:
- Gradually increasing blob capacity, eventually reaching full data availability sampling with 16 MB of data space per slot;
- Improving L2 to better utilize the data space we have.
Bringing DAS to Reality
The next phase could be a simplified version of DAS, called PeerDAS. In PeerDAS, each node stores a significant portion of the entire blob data (e.g., 1/8), and nodes maintain connections with many peers in a p2p network. When a node needs to sample a specific data fragment, it will ask one of the known peers responsible for storing that data fragment.

If each node needs to download and store 1/8 of all data, then PeerDAS theoretically lets us scale blobs by 8x (in practice 4x, because erasure coding introduces a 2x redundancy overhead). PeerDAS can be rolled out over time: there can be a phase where professional stakers continue to download full blobs while solo stakers download only 1/8 of the data.
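Here is a toy sketch of the custody-and-sampling idea (illustrative only, not the actual PeerDAS specification; all names and parameters are made up for the example):

```python
import hashlib
import random

# Toy PeerDAS-flavored sketch; not the real spec.
NUM_COLUMNS = 128        # erasure-extended blob data, split into columns
CUSTODY_FRACTION = 8     # each node custodies 1/8 of all columns

def custody_columns(node_id: bytes) -> set[int]:
    """Deterministically derive a node's 1/8 custody set from its id."""
    seed = int.from_bytes(hashlib.sha256(node_id).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(range(NUM_COLUMNS), NUM_COLUMNS // CUSTODY_FRACTION))

class Node:
    def __init__(self, node_id: bytes):
        self.columns = custody_columns(node_id)

def sample(peers: list[Node], column: int) -> bool:
    """To sample a column, ask any peer whose custody set covers it."""
    return any(column in p.columns for p in peers)

peers = [Node(bytes([i])) for i in range(64)]
print(all(sample(peers, c) for c in random.sample(range(NUM_COLUMNS), 16)))
```

With 64 peers each custodying 1/8 of the columns, the chance that some column is held by no peer at all is about e^-8, so in practice nearly every sample can be answered from the existing peer set.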
Additionally, EIP-7623 (or an alternative like 2D pricing) can be used to set stricter limits on the maximum size of execution blocks (i.e., "regular transactions" in a block), making it safer to simultaneously increase blob targets and L1 gas limits. In the long run, more complex 2D DAS protocols will allow us to comprehensively enhance and further increase blob space.
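A sketch of the EIP-7623 mechanism as proposed (the EIP was still a draft at the time of writing, so treat the parameter values below as illustrative): transactions that are mostly calldata pay a higher per-byte floor, which caps the worst-case size of an execution block without raising costs for ordinary transactions:

```python
# Sketch of EIP-7623-style calldata floor pricing. Parameter values are as
# proposed in the draft EIP and could change; treat them as illustrative.
INTRINSIC_GAS = 21000
STANDARD_TOKEN_COST = 4          # today's effective cost per calldata "token"
TOTAL_COST_FLOOR_PER_TOKEN = 10  # higher floor for data-dominated transactions

def calldata_tokens(data: bytes) -> int:
    zero = data.count(0)
    return zero + 4 * (len(data) - zero)

def tx_gas(data: bytes, execution_gas: int) -> int:
    """Data-heavy txs pay the floor; ordinary txs are priced as before."""
    tokens = calldata_tokens(data)
    return INTRINSIC_GAS + max(STANDARD_TOKEN_COST * tokens + execution_gas,
                               TOTAL_COST_FLOOR_PER_TOKEN * tokens)

# A pure data-carrier tx (little execution) hits the floor:
print(tx_gas(b"\xff" * 100_000, execution_gas=0))
# A typical DeFi tx with modest calldata is unaffected by the floor:
print(tx_gas(b"\xff" * 200, execution_gas=150_000))
```

By bounding how much calldata a block full of data-carrier transactions can contain, such a rule makes it safer to raise blob targets and the L1 gas limit at the same time.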
Improving L2 Performance
Today, layer two (L2) protocols can be improved in four key areas.
1. More efficient use of bytes through data compression

My data compression overview diagram can still be viewed here.
Naively, a transaction takes up about 180 bytes of data. However, a range of compression techniques can shrink this in several stages; with optimal compression, we may eventually get below 25 bytes per transaction.
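As a rough illustration of where those savings come from (the per-field byte counts below are my own ballpark estimates, not figures from the original diagram): the nonce can be recovered from state, gas fields can be priced at the batch level, addresses can be replaced by indices into an address table, values can be encoded compactly, and BLS aggregation can replace per-transaction signatures with a single signature for the whole batch:

```python
# Illustrative per-field byte tallies (rough estimates, not exact figures)
# showing how a naive ~180-byte rollup transaction compresses toward ~25 bytes.
naive = {
    "nonce": 3, "gasprice": 8, "gas": 3,
    "to": 21, "value": 9, "data": 68, "signature": 65,
}
compressed = {
    "nonce": 0,        # recoverable from account state, so omit entirely
    "gasprice": 0.5,   # batch-level pricing instead of per-tx fields
    "gas": 0.5,
    "to": 4,           # address replaced by an index into an address table
    "value": 2.5,      # scientific-notation / dictionary encoding
    "data": 17,        # domain-specific compression of common calldata
    "signature": 0.5,  # BLS aggregation: one signature for the whole batch
}
print(sum(naive.values()), "->", sum(compressed.values()))  # 177 -> 25.0
```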
2. Optimistic data techniques that secure L2 while putting data on L1 only in exceptional cases

Plasma is a class of techniques that lets you keep data on L2 in the normal case while providing rollup-equivalent security guarantees for some assets. For the EVM, Plasma cannot protect all coins, but Plasma-inspired constructions can protect most of them. And constructions much simpler than Plasma can greatly improve on today's validiums. L2s that are reluctant to put all their data on-chain should explore such techniques.
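The core Plasma idea, in a toy sketch (illustrative only, not a production design): the operator posts only a Merkle root on L1, each user keeps the branch proving their own coin's state, and if the operator ever withholds data, a user can still exit on L1 using the last branch they hold:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Assumes a power-of-two number of leaves, for simplicity.
    layer = [h(l) for l in leaves]
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def branch(leaves: list[bytes], index: int) -> list[bytes]:
    """Sibling hashes from a leaf up to the root; the user's exit ticket."""
    layer, proof = [h(l) for l in leaves], []
    while len(layer) > 1:
        proof.append(layer[index ^ 1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        index //= 2
    return proof

def verify_exit(root: bytes, leaf: bytes, index: int, proof: list[bytes]) -> bool:
    """What an L1 exit contract would check before releasing funds."""
    node = h(leaf)
    for sibling in proof:
        node = h(sibling + node) if index % 2 else h(node + sibling)
        index //= 2
    return node == root

coins = [b"alice:10", b"bob:5", b"carol:7", b"dave:3"]
root = merkle_root(coins)  # only this 32-byte commitment goes to L1
print(verify_exit(root, b"alice:10", 0, branch(coins, 0)))  # True
```

The real difficulty, and the reason full EVM Plasma is hard, lies in handling contested exits and state that is not cleanly owned by a single user; but for simple asset ownership, this data-off-chain pattern already gives strong guarantees.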
3. Continue to improve execution-related limits
As soon as the Dencun hard fork activated, rollups that had switched over to the blobs it introduced saw their costs drop by a factor of 100. Usage of the Base rollup immediately surged:

This, in turn, led Base to hit its internal gas limit, causing fees to unexpectedly surge. This led to a broader realization that Ethereum's data space is not the only thing that needs to be expanded: rollups themselves also need to scale.
Part of this is parallelization; rollups can achieve something similar to EIP-648. But equally important is storage, as well as the interaction effects between computation and storage. This presents a significant engineering challenge for rollups.
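A toy scheduler in the spirit of EIP-648-style parallelization (illustrative; real designs must also handle mis-declared access lists and state-dependent access patterns): transactions declare which state they touch, and transactions with pairwise-disjoint access sets can safely run in parallel:

```python
# Toy parallel-execution scheduler; illustrative only.
def schedule(txs: list[tuple[str, set[str]]]) -> list[list[str]]:
    """Greedily pack txs into batches whose access sets are pairwise disjoint."""
    batches: list[tuple[list[str], set[str]]] = []
    for name, touched in txs:
        for batch, used in batches:
            if not (touched & used):   # no conflict: run in this batch
                batch.append(name)
                used |= touched
                break
        else:                          # conflicts with every batch: open a new one
            batches.append(([name], set(touched)))
    return [batch for batch, _ in batches]

txs = [
    ("swap1", {"poolA", "alice"}),
    ("swap2", {"poolB", "bob"}),    # disjoint from swap1: runs in parallel
    ("swap3", {"poolA", "carol"}),  # touches poolA: must wait for swap1
    ("mint",  {"nft", "dave"}),
]
print(schedule(txs))  # [['swap1', 'swap2', 'mint'], ['swap3']]
```

Parallelism only helps as far as storage can keep up, which is why the interaction between computation and storage is the harder half of the engineering problem.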
4. Continue to improve security
We are still far from a world where rollups are truly protected by code. In fact, according to L2Beat, only five projects have even reached what I call "stage one," and of those five, only Arbitrum is fully EVM-compatible.

This needs to be addressed head-on. While we cannot yet be sufficiently confident in the code of a complex optimistic or SNARK-based EVM verifier, we can absolutely get halfway there: a security council that can override the code's behavior only at high thresholds (for example, I proposed 6-of-8; Arbitrum is implementing 9-of-12).
The standards of the ecosystem need to become stricter: so far, we have been lenient and accepted any project that claims to be "on the road to decentralization." By the end of the year, I believe our standards should be raised, and we should only consider those projects that at least reach stage one as rollups.
After that, we can cautiously move toward stage two: a world where rollups are truly backed by code, and the security council can intervene only if the code provably contradicts itself (for example, by accepting two incompatible state roots, or by two different implementations giving different answers). One path to getting there safely is to implement multiple provers.
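A toy sketch of that multi-prover idea (illustrative, not any specific rollup's design): a claimed state root finalizes only when independent provers, say a fraud-proof game and a ZK verifier, agree, and the security council becomes reachable only when the provers provably contradict each other:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    finalized: bool
    needs_council: bool

def resolve(claimed_root: bytes,
            provers: list[Callable[[bytes], bool]]) -> Verdict:
    """Code decides whenever the provers agree; the council only arbitrates
    the one case where independent provers contradict each other."""
    votes = [p(claimed_root) for p in provers]
    if all(votes):
        return Verdict(finalized=True, needs_council=False)   # code decides
    if not any(votes):
        return Verdict(finalized=False, needs_council=False)  # code decides
    return Verdict(finalized=False, needs_council=True)       # provable conflict

# Hypothetical provers for the example: a ZK verifier and a fraud-proof game.
zk_prover = lambda root: root == b"good"
fraud_proof_game = lambda root: root == b"good"
print(resolve(b"good", [zk_prover, fraud_proof_game]))
# Verdict(finalized=True, needs_council=False)
```

The appeal of this design is that a bug in any single prover no longer lets an invalid root finalize; it merely escalates to the council, whose power is confined to exactly that case.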
What Does This Mean for the Development of Ethereum?
At ETHCC in the summer of 2022, I gave a talk describing the state of Ethereum development as an S-curve: we were entering a period of very rapid transition, after which development would slow down again as L1 solidifies and work refocuses on users and the application layer.

Today, I would say we are clearly in the deceleration phase on the right side of this S-curve. As of two weeks ago, the two biggest transitions of the Ethereum blockchain - the switch to proof of stake and the re-architecting around blobs - are behind us. Future changes will still be significant (e.g., Verkle trees, single-slot finality, in-protocol account abstraction), but they will not be as dramatic as proof of stake and sharding. In 2022, Ethereum was like a plane swapping out its engines mid-flight. In 2023, it was swapping out its wings. The Verkle tree transition is the main truly significant change remaining (and we already have a testnet for it); the others are more like swapping out a tail fin.
The goal of EIP-4844 is to make a significant one-time change to set long-term stability for rollups. Now that blobs have been launched, future upgrades to full danksharding with 16 MB blobs, and even the transition of cryptographic technology to STARKs on 64-bit goldilocks fields, can occur without requiring rollups or users to take any further action. It also reinforces an important precedent: the development process of Ethereum is executed according to a long-standing, well-known roadmap, and applications built with the vision of a "new Ethereum" (including L2) are provided with a long-term stable environment.
What Does This Mean for Applications and Users?
The first decade of Ethereum has largely been a training phase: the goal was to get Ethereum L1 off the ground, with applications mostly used by a small group of enthusiasts. Many argue that the lack of large-scale applications over the past decade proves that cryptocurrencies are useless. I have always opposed this view: almost every crypto application that is not financial speculation depends on low fees, so when fees are high, we should not be surprised that financial speculation dominates what we see.
Now that we have blobs, this key constraint that has been holding us back is starting to dissolve. Fees have finally dropped significantly; my statement from seven years ago that the internet of money should not cost more than five cents per transaction has finally come true. We are not completely out of the woods: if usage grows too quickly, fees may rise again, and we will need to keep working to scale blobs (and to scale rollups, separately) over the coming years. But we can see the light at the end of the tunnel… uh… dark forest.

For developers, this means one simple thing: we have no more excuses. Until a few years ago, we set a low standard for ourselves, building applications that were clearly not capable of large-scale use, as long as they worked as prototypes and were reasonably decentralized. Today, we have all the tools we need, and indeed most of the tools we will ever have, to build applications that are simultaneously cypherpunk and user-friendly. So we should go out and do it.
Many are rising to this challenge. The Daimo wallet explicitly describes itself as Venmo on Ethereum, aiming to combine the convenience of Venmo with the decentralization of Ethereum. In the decentralized social space, Farcaster is doing well in combining true decentralization (for example, check out this guide on how to build your own alternative client) with excellent user experience. Unlike the previous "social finance" craze, the average Farcaster user is not here to gamble—passing the key test for the sustainable development of crypto applications.

This post was sent via the main Farcaster client Warpcast, and this screenshot is from the alternative Farcaster + Lens client Firefly.
These successes are what we need to build upon and expand into other application areas, including identity, reputation, and governance.
Applications Being Built or Maintained Today Should Use 2020s Ethereum as a Blueprint
The Ethereum ecosystem still has a large number of applications operating around a workflow that fundamentally belongs to "2010s Ethereum." Most ENS activity still occurs on layer one (L1). Most token issuances also happen on layer one, without seriously considering ensuring that bridged tokens are available on layer two (L2) (for example, see this ZELENSKYY memecoin fan's appreciation for the coin's continued donations to Ukraine, but complaints that L1 fees make it too expensive). Beyond scalability, we are also lagging in privacy protection: POAPs are all publicly on-chain, which may be the right choice for certain use cases but is very suboptimal for others. Most DAOs and Gitcoin Grants still use fully transparent on-chain voting, making them highly susceptible to bribery (including post-hoc airdrops), which has been shown to severely distort contribution patterns. Today, ZK-SNARKs have been around for years, yet many applications still have not started using them correctly.
These are hardworking teams that must deal with a large existing user base, so I do not blame them for not simultaneously upgrading to the latest technological wave. But soon, this upgrade will need to happen. Here are some key differences between "a workflow that fundamentally belongs to 2010s Ethereum" and "a workflow that fundamentally belongs to 2020s Ethereum":

Essentially, Ethereum is no longer just a financial ecosystem. It is a full-stack alternative to much of the realm of "centralized technology," even providing some things that centralized technology cannot offer (e.g., applications related to governance). We need to build with this broader ecosystem in mind.
Conclusion
Ethereum is undergoing a decisive transformation, transitioning from an era of "rapid progress on L1" to an era where L1 progress will still be very significant but somewhat more moderate, with less disruption to applications.
We still need to finish the job on scalability. This work will happen more in the background, but it remains important.
Application developers are no longer just building prototypes; we are building tools for millions of users. Throughout the ecosystem, we need to adjust our mindset accordingly.
Ethereum has upgraded from being "just" a financial ecosystem to a much more thorough independent decentralized tech stack. Across the ecosystem, we need to fully adjust our mindset to this as well.