Exploring potential opportunities in the modular narrative


Source: Ethereum

Written by: LBank Labs research team FF

TL;DR

In recent years, modular blockchains have become a hot trend in the infrastructure space, with many protocols joining and investing in this wave. As the leading smart contract platform, Ethereum has long advocated the modular narrative and has pursued a Rollup-centric roadmap to address challenges in scalability and efficiency. However, it is worth reconsidering the overall direction of the modular narrative and the reasons behind it, because modular blockchains also bring new concerns and challenges. On the positive side, more challenges mean more opportunities.

This article provides an in-depth analysis of modular narrative and lays out the potential opportunities arising from its evolution.

  • The first part reflects the shift from traditional monolithic blockchain architecture to Ethereum-led modular design and discusses the choice between monolithic and modular approaches.
  • Part II highlights the key components of modular blockchain and provides an in-depth analysis of each layer. Importantly, we also address underlying issues that are often overlooked or unmentioned when advocating the modular narrative, which leaves room for innovation and the development of new protocols.

The inevitable choice of Ethereum’s modular narrative

Monolithic vs. Modular Blockchains

Narratives are often carefully packaged collections of technical terms, and "modularity" is no exception. In the early days of smart contract platforms, validators (formerly miners) operated nodes to maintain the blockchain network. Each node actually consists of multiple modules performing different tasks: collecting user transactions, executing them, updating state, proposing blocks, voting on proposals, and so on. This simple yet efficient setup is what we now call a monolithic blockchain.

What happens when a single node cannot handle all these tasks? In traditional IT architectures, the workload is typically distributed across different groups of machines, and blockchains take two analogous approaches. The first is horizontal scaling: introduce more machines to share the workload, with each one handling only a small part of the tasks. In blockchain this is called "sharding." The second is vertical scaling: different groups of machines are responsible for different types of tasks, each specializing in one. In blockchain this is called "layering."

In a modular blockchain, the modules that were previously contained within a single node are separated into different layers. The diagrams provided by Celestia show that the monolithic approach is more general-purpose, while the modular approach is more specialized.

The road to scaling Ethereum

As mentioned earlier, the need for scaling arises when a node cannot handle all the tasks on the blockchain. During DeFi summer, Ethereum hit its capacity limit and high fees became an obstacle to attracting new users, earning it the nickname "noble chain." This was partly due to the large user base on Ethereum, and partly due to an aging architecture and design that could not meet the needs of crypto users. It is also important to note that crypto users make up only a small portion of internet users, so without scaling, large-scale adoption of Ethereum is out of reach.

Currently, Ethereum produces a block every 12 seconds with 30M gas of block space. Assuming every transaction in a block is a plain transfer at the minimum gas limit of 21,000, the theoretical maximum TPS (transactions per second) is approximately 119. In practice, since blocks consist mostly of contract calls, actual TPS is much lower, averaging around 15. In comparison, newer alternative Layer 1 chains achieve thousands of TPS, which is why modular designs are rarely discussed in their ecosystems: they do not yet need to scale.
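The back-of-the-envelope arithmetic above can be checked directly (figures as stated in this section; the mainnet gas limit and block time are subject to change):

```python
# Theoretical maximum TPS for plain ETH transfers, using the figures above.
GAS_LIMIT_PER_BLOCK = 30_000_000   # 30M gas of block space
BLOCK_TIME_SECONDS = 12            # one block every 12 seconds
TRANSFER_GAS = 21_000              # minimum gas for a simple transfer

transfers_per_block = GAS_LIMIT_PER_BLOCK // TRANSFER_GAS   # 1428 transfers
max_tps = transfers_per_block / BLOCK_TIME_SECONDS          # ~119 TPS
print(f"max theoretical TPS: {max_tps:.0f}")
```

Real blocks are dominated by contract calls costing far more than 21,000 gas each, which is why observed TPS sits around 15.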

Clearly, then, Ethereum needs to scale, and this is where the modular narrative begins.

Rollup-centric roadmap

As stated above, we conclude that scaling is not a “to do or not to do” question for Ethereum, but an inevitable choice. By taking a modular approach, Ethereum can integrate multiple layers, each with a specific purpose, improving scalability, efficiency, and overall performance.

Although there are two different scaling methods, Ethereum mainly chooses vertical scaling. However, Ethereum has been hesitant between sharding and layering. This can be seen from Vitalik's blog:

  • 2020/10/2 (Rollup): A rollup-centric ethereum roadmap
  • 2021/4/7 (Sharding): Why sharding is great: demystifying the technical properties
  • 2021/5/23 (Sharding): The Limits to Blockchain Scalability
  • 2021/12/06 (Rollup and Sharding): Endgame

Although Ethereum settled on the layering-oriented, Rollup-centric roadmap in late 2020, Vitalik has continued to revisit sharding in subsequent articles. That is because, under the Rollup-centric roadmap, the ultimate goal is to combine sharding and layering into a hybrid scaling solution. Layering is the more straightforward part: Rollups act as execution layers to relieve pressure on the Ethereum mainnet. Sharding, including data sharding and transaction sharding, is the end goal of blockchain scaling, but technical debt and historical baggage make it difficult for Ethereum to implement fully. Ethereum has therefore chosen a pragmatic middle path: it positions itself as the settlement layer and data availability layer for Rollups, with data sharding as the eventual destination. This narrative carries the fashionable name "modularity."

In fact, Ethereum acknowledged the challenge and deliberately eschewed sharding in favor of Rollup-centric scaling. Reaching the ultimate goal will be a long journey, so Ethereum decided to keep users engaged and meet short-term needs. There are also many architectural trends that could fit into this roadmap, such as adapting the Ethereum Virtual Machine to better support fraud proof verification.

Short-term goal: Embrace Rollup

In the short term, Ethereum's primary focus is to serve Rollups as a reliable and neutral infrastructure. Rollups are layer-two solutions and Ethereum's main scaling path: they improve performance and efficiency and save costs by processing transactions off-chain. A Rollup aggregates many transactions into a single transaction settled on the Ethereum mainnet, significantly increasing the network's throughput and enabling large numbers of low-cost transactions.

By migrating users and applications to Rollups, Ethereum's transactions per second (TPS) is expected to increase significantly, to roughly 3,000, a major improvement in scalability over the current state.
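As a rough sanity check on that figure, consider how many compressed transactions fit in a block's calldata. The byte and gas figures below are illustrative assumptions, and the bound ignores proof verification and batch overhead, so realistic estimates land well below it:

```python
# Upper bound on rollup TPS from calldata capacity alone (illustrative).
GAS_LIMIT_PER_BLOCK = 30_000_000
BLOCK_TIME_SECONDS = 12
CALLDATA_GAS_PER_BYTE = 16     # cost of a non-zero calldata byte
COMPRESSED_TX_BYTES = 12       # assumed size of one compressed transfer

txs_per_block = GAS_LIMIT_PER_BLOCK // (CALLDATA_GAS_PER_BYTE * COMPRESSED_TX_BYTES)
upper_bound_tps = txs_per_block / BLOCK_TIME_SECONDS
print(f"calldata-only upper bound: ~{upper_bound_tps:,.0f} TPS")
```

Once proving costs, batch headers, and less compressible transactions are accounted for, the commonly cited estimate of around 3,000 TPS is consistent with this bound.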

At the same time, Ethereum aims to retain scaling potential within its ecosystem while ensuring a seamless user experience. Rollups significantly improve performance and cost-effectiveness, making them a key component of Ethereum's modular roadmap. As Vitalik has argued on his blog, whether we like it or not, everyone has already adapted to a Rollup-centric world, and continuing down this path is easier than trying to move everyone back to the base chain, which would offer no obvious benefit while cutting scalability by a factor of 20 to 100. The goal behind the modular narrative is to keep users within the Ethereum ecosystem, which is a major reason why legitimacy becomes paramount. We explain this further in the sections below.

Long-term goal: data sharding

In the long term, Ethereum aims to enhance scalability, efficiency, and overall network performance through a multi-phase roadmap. This includes leveraging sharded Ethereum 2.0 for data storage, optimizing Rollups, and exploring innovative solutions to challenges in the blockchain ecosystem. These efforts should unlock Ethereum's full scalability potential: once Rollups transition to a sharded ETH2.0 chain for data storage, the theoretical maximum TPS could reach approximately 100,000.

However, it is important to note that these projections only become reality if all of the assumptions are implemented. Vitalik admitted on his blog: "In my opinion, when phase 2 finally arrives, basically no one will care about it." This is also why Danksharding, the data sharding plan, is still in its early stages and not yet fully specified. In the meantime, Ethereum has put forward an initial version called Proto-Danksharding, which involves no actual sharding.

Throughout Ethereum's vision, the transition from "world computer" to global settlement layer reflects the reality that computation and storage on Ethereum are limited and expensive. As a result, Ethereum has chosen to focus base-layer scaling primarily on increasing the amount of data it can accommodate, rather than optimizing on-chain computation or IO operations.

Modularity Layer: Components and Opportunities

While the concept of data availability is discussed mainly within the Ethereum ecosystem, it was first proposed by Mustafa Al-Bassam, co-founder and CEO of Celestia, in the paper "Fraud and Data Availability Proofs," co-authored with Alberto Sonnino, a research scientist at Mysten Labs, and Vitalik. Since then, researchers have discussed modularity and layering extensively in various forums.

According to Celestia, a modular stack consists of an execution layer, a settlement layer, a data availability layer, and other components, each of which helps improve scalability and efficiency. In this narrative, Celestia's goal is to act as the data availability layer.

From a high-level perspective, a traditional monolithic blockchain can be divided into four layers: the smart contract layer, the execution layer, the settlement layer, and the data availability layer. Each plays a vital role in the modular narrative. The consensus layer, which reaches agreement on the order of transactions, is usually folded into either the settlement layer or the data availability layer.

Splitting these layers out of a monolithic blockchain leaves each one free to develop and experiment on its own. In the following sections, we explore each layer individually, analyze potential directions, enumerate the opportunities we observe, and explain our findings.

Smart contract layer

The smart contract layer consists of programmable and self-executing contracts that run on top of the blockchain. These contracts enable the automation, verification, and execution of agreements without the need for intermediaries. They are encoded according to predefined rules and conditions, ensuring transparency, security and trust in digital transactions.

However, the modular narrative sacrifices the soul of smart contracts: composability, the very thing that drove DeFi's prosperity. Today, smart contracts are deployed and run on many different execution layers, which burdens developers and users alike: developers must deploy their contracts repeatedly, while users must connect to different execution layers.

Although we are still in an era of fierce competition, composability is an issue that can be neither ignored nor avoided, and it presents an opportunity each for developers and for users.

For developers, an aggregation layer of smart contracts across different execution layers can provide the necessary tools, frameworks, and development environments to seamlessly build applications on different execution layers. Standardized smart contract templates and libraries can simplify the development process and promote innovation. This enables cross-layer compatibility and enhances the developer experience.

For users, the smart contract layer is their interface to the blockchain. Users rarely care about execution engines, consensus mechanisms, or data storage; they just want a good product and experience, regardless of form or implementation. Two approaches are currently being explored. The first is the omni-layer approach, which combines the liquidity or functionality of different execution layers into a single product. The second is intent-centric: understand what the user wants, process the complex logic behind it, and return the result. Though their starting points differ, both methods share the same end goal.

  • Opportunity #1: The aggregation layer of smart contracts involves development tools and new layers that can help developers build applications on top of all these execution layers.
  • Opportunity #2: Omni-layer protocols and intent-centric approaches involving AA extensions help users experience the product seamlessly.

Execution layer

The execution layer is responsible for executing transactions and updating the state of the blockchain. Its main task is to ensure that only valid transactions are executed, i.e. transactions that result in valid state machine transitions. Currently the most widely used execution environment is the Ethereum Virtual Machine (EVM), adopted across EVM-compatible chains and zkEVMs. The motivation is to attract traffic from Ethereum by simply copying and pasting its ecosystem; over time, however, this appeal has waned.

At the same time, we can see that virtual machines have made significant progress. Generally speaking, these advances can be divided into two categories: creating more efficient and innovative virtual machines, and modifying the EVM.

In the first category, the idea is straightforward: the EVM is an aging virtual machine that is difficult, and arguably unnecessary, to modify, and once modified, its compatibility is lost anyway. Many protocols have therefore made the extreme trade-off of replacing the EVM with a new virtual machine to unleash the full potential of the smart contract platform.

One approach is to design a specialized virtual machine for a specific programming language, such as the Cairo VM in Starknet or the Move VM in Sui and Aptos. Dedicated virtual machines offer optimized architectures and improved performance; the trade-off is having to build a developer community from scratch to encourage people to build on top of them.

Another approach is to adopt a general-purpose virtual machine such as WebAssembly (WASM) or RISC-V, which supports multiple languages and is more familiar to traditional developers. WASM is known for its performance and security and is used in popular protocols such as Polkadot, Solana, and Near, so applying it at the execution layer is a natural choice. Examples include the zkWASM developed by Fluent, Eclipse migrating the Solana VM to Ethereum, and Nitro's SVM in the Cosmos ecosystem. Risc0 is a practical RISC-V VM that has gained attention and considerable development momentum.

In the second category, the goal is to modify the existing EVM without sacrificing compatibility. There have been three notable approaches, all aimed at parallelizing the EVM. The earliest attempts integrated DAGs into the EVM through projects such as Fantom, but this approach has recently fallen out of favor. The second wave of parallelization came with the launch of Aptos, which open-sourced Block-STM, a parallel execution engine for smart contracts. In short, Block-STM optimistically assumes all transactions are conflict-free, executes them in parallel, then identifies and re-executes the conflicting ones. Several alternative Layer 1s, such as Avalanche, have upgraded their execution engines to integrate this approach, and it would be interesting to see similar attempts on Ethereum. Finally, some protocols are building parallelized EVMs from scratch, such as Monad, which is gaining increasing attention in major markets.
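The optimistic execute-then-validate loop described above can be sketched as follows. This is a toy illustration of the idea, not the real Block-STM scheduler (which re-runs conflicting transactions in parallel over multiple rounds); all names and data are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def apply_tx(tx, state):
    """Execute one transfer against `state`; return (reads, writes)."""
    reads = {k: state.get(k, 0) for k in (tx["frm"], tx["to"])}
    writes = {
        tx["frm"]: reads[tx["frm"]] - tx["amt"],
        tx["to"]: reads[tx["to"]] + tx["amt"],
    }
    return reads, writes

def execute_block(txs, state):
    # Phase 1: run every tx in parallel against the pre-block snapshot,
    # optimistically assuming no conflicts.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda tx: apply_tx(tx, state), txs))
    # Phase 2: commit in block order; if a tx read a value that an earlier
    # tx has since changed, its result is stale, so re-execute it serially.
    committed = dict(state)
    for tx, (reads, writes) in zip(txs, results):
        if any(committed.get(k, 0) != v for k, v in reads.items()):
            _, writes = apply_tx(tx, committed)
        committed.update(writes)
    return committed

state = {"alice": 100, "bob": 50, "carol": 0}
txs = [
    {"frm": "alice", "to": "bob", "amt": 10},
    {"frm": "bob", "to": "carol", "amt": 20},  # reads bob: conflicts with tx 0
]
print(execute_block(txs, state))
```

When most transactions touch disjoint state, phase 2 commits everything from the parallel run and throughput scales with the number of cores; only the conflicting tail pays the serial cost.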

Overall, we are excited about these bold ideas and innovations at the execution layer. After all, technological advancement is crucial to pushing the boundaries of blockchain.

  • Opportunity #3: More efficient and innovative virtual machines

3.1 Language-specific virtual machines

- Cairo VM, e.g. Starknet

- Move VM, e.g. Movement Labs

3.2 General-purpose virtual machines: WASM, RISC-V

- Ewasm

- zkWASM, e.g. Fluent

- Risc0

- Solana VM, e.g. Eclipse, Nitro

  • Opportunity #4: Modify the current virtual machine for parallelization.

4.1 DAG, such as Fantom

4.2 Optimistic parallelization: Block-STM

4.3 Parallelizing EVM, such as Monad

Settlement layer

The settlement layer provides an environment where execution layers can verify proofs, resolve fraud disputes, and bridge between one another. Simply put, the settlement layer is the proof system on which security relies. There are currently two main types of Rollup: Optimistic Rollups, which rely on fraud proofs to ensure the validity of transactions, and zk-Rollups, which use zero-knowledge proofs for efficient and secure transaction verification.

Although the OP and ZK camps clashed early on, there is no need to revisit those historical entanglements; let's focus on the current situation.

Arbitrum is the leading fraud-proof protocol, with the highest Total Value Locked (TVL) on the market. It has completed a fraud proof system, but the system has never been exercised on mainnet, so its behavior in a real dispute remains untested. If a dispute has to be resolved on L1, the Rollup's state is essentially suspended, meaning the chain could be unavailable for up to 7 days; even in the traditional internet industry, a 7-day outage is unacceptable. Arbitrum cannot risk losing users, so proof submission is not permissionless: only whitelisted participants may submit fraud proofs.

Optimism, the second-largest Rollup, explicitly admits in its documentation that it does not currently support fraud proofs, because its team knows the average user does not prioritize security. It is now clear that fraud proofs are only a stopgap for Optimistic Rollups, while zero-knowledge proofs are the end goal.

It seems likely that zero-knowledge proofs will dominate the settlement layer in the future. As the technology matures and more zkRollups launch on mainnet, Optimistic Rollups will inevitably transition to zero-knowledge proof systems; Optimism itself is actively seeking help from zk protocols to build one.

Based on this roadmap, we can identify two opportunities.

First, progress in standardizing Rollup proof systems and exploring ZKP technology offers significant room for innovation in the settlement layer. The standard will emerge from community consensus and broad adoption. Currently, OP Stack leads the market, attracting well-known entities such as Base and Binance. In previous articles we highlighted OP Stack's advantages and first-mover position; now that it is transitioning to zk, the proof system it chooses is likely to become the market standard. Two protocols, Mina and Risc0, are building proof systems for the OP Stack, and one of them can be expected to capture the majority of OP Stack's market share. The other main contenders are the existing zkRollups, whose degree of open source will determine their acceptance. Two are especially noteworthy: Polygon zkEVM, the first fully open-source zkEVM, which also offers a more customizable SDK called Polygon CDK for launching custom zkRollups; and Scroll, whose zkEVM originates from a repository shared with PSE, the Ethereum Foundation's internal zk team. Both zkRollups have their own audiences and community recognition, and who ultimately wins will be an interesting question.

The second opportunity arises from the broader ZK field. Once a standard gradually gains social consensus, the projects around it will attract traffic and capital inflows. We will not delve into the details here (a future article will), but a few examples provide inspiration. Hardware acceleration is crucial because ZK proof generation remains the bottleneck for most protocols; specialized acceleration algorithms and hardware can speed up proving and lower the barrier to entry. Additionally, in the context of Ethereum's modular narrative, coprocessors may be needed to handle Ethereum's complex computations.

  • Opportunity #5: Rollup proof system standards

- 5.1 Optimism Foundation candidates: Mina, Risc0

- 5.2 Open-source zkEVMs: Polygon zkEVM, Scroll and PSE

  • Opportunity #6: Adjacent players in the ZKP space

- 6.1 Hardware acceleration, e.g. Ingonyama, Cysic

- 6.2 Coprocessors, e.g. zkVMs

Data availability layer

The data availability layer is responsible for ensuring the availability and accessibility of transaction data on the Ethereum blockchain. It plays a key role in the security and transparency of the blockchain by allowing anyone to inspect and verify the transaction ledger, as well as reconstruct the Rollup chain. As such, it is an important battleground for Ethereum to establish its place in the modular narrative.

The so-called legitimacy

Having clarified Ethereum's strategic position in the modular stack, it is easier to understand why it repeatedly emphasizes the importance of legitimacy. The concept was first raised in Vitalik's 2021 blog post "The Most Important Scarce Resource Is Legitimacy," and he discusses it further in the Ethereum Research post "Phase One and Done: eth2 as a data availability engine."

In short, using Ethereum as the data availability layer confers legitimacy, while not using Ethereum does not. The trends and marketing influence of the Ethereum community certainly play a role here. Consider the Rollups listed on L2beat, a site that tracks Ethereum-based Rollups: although the Stage indicator (security level: Stage 0 < Stage 1 < Stage 2) shows that most are not yet very secure, they still draw plenty of attention. The most extreme case is Fuel, which chose Celestia as its data availability layer and attracted little attention or capital inflow despite building one of the most secure Rollups. The truth behind the so-called legitimacy, then, is that Ethereum is trying to block competitors at the data availability layer in order to maintain its position.

Overtaking on a curve

Regardless of the influence of the Ethereum Foundation, is it possible for other competitors to surpass Ethereum? Is it possible that Ethereum also made mistakes in its upgrade?

Of course. As mentioned earlier, Celestia is a serious competitor to Ethereum at the data availability (DA) layer.

From a technical perspective, Celestia combines Data Availability Sampling (DAS) with Namespaced Merkle Trees (NMTs), building on the Cosmos technology stack by adapting Tendermint. First, it applies a two-dimensional Reed-Solomon encoding scheme to erasure-code block data, which forms the basis of DAS: light nodes with limited resources need only sample a small portion of the block data, lowering the barrier to participation. Second, Celestia replaces the regular Merkle tree Tendermint uses to store block data with a Namespaced Merkle Tree, which allows the execution and settlement layers to download only the data they need. An NMT is a Merkle tree ordered by namespace identifier, with the hash function modified so that each node in the tree covers the namespace range of all its descendants.
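The power of DAS comes from simple probability. With 2D Reed-Solomon coding, a block can only be unrecoverable if a large fraction of the extended data is withheld; we use 25% below as a simplifying assumption, so each uniform random sample a light node draws has at least that chance of hitting a missing share:

```python
# Probability that a light node detects an unrecoverable (withheld) block
# after k independent uniform samples, assuming at least a fraction f of
# the erasure-coded data must be missing for the block to be unrecoverable.
def detection_probability(k: int, f: float = 0.25) -> float:
    return 1 - (1 - f) ** k

for k in (5, 15, 30):
    print(k, round(detection_probability(k), 4))
```

A handful of samples already gives high confidence, and confidence compounds across many independent light nodes, which is what lets resource-limited clients secure data availability without downloading whole blocks.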

In the case of Ethereum, the Data Availability (DA) roadmap is progressing in incremental steps. Currently, Rollups on the execution layer submit their data through the calldata mechanism, which stores the data of external function calls; on L1, these data submissions are indistinguishable from regular transactions.

In the long term, there is no firm timetable for Danksharding, the ultimate shared-DA goal, and Ethereum upgrades are often delayed; no full specification for Danksharding exists yet. To address the urgent problem of expensive L2 transaction fees and meet Rollups' short-term needs, Ethereum proposed Proto-Danksharding, also known as EIP-4844.

Despite the similarity in name, Proto-Danksharding involves no sharding technology at all. In summary, the solution stores compressed Rollup data in additional block space at a lower cost.

The data commitment is based on KZG (Kate-Zaverucha-Goldberg), a polynomial commitment scheme that treats the data as evaluations of a polynomial. With KZG, verifying correctness no longer requires handing the raw data or data diffs to the verifier: a fixed-size KZG commitment suffices and is far smaller. Because KZG relies on a trusted setup built from secret randomness, the EIP-4844 KZG ceremony was opened to the public, and tens of thousands of people participated and contributed.

Ethereum introduces an extra data space called the blob, dedicated to storing Rollup transaction data. Blob space is priced more cheaply than ordinary calldata, with a dynamic fee adjustment mechanism modeled on EIP-1559. To put capacity in perspective, the figure cited here allows a block to contain up to 16 blobs, each holding 4,096 field elements of 32 bytes; that is 128 KB per blob, or up to 2 MB of blob data per block.
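The capacity arithmetic works out as follows (using the 16-blobs-per-block figure cited in this article; the parameters actually shipped with EIP-4844 differ):

```python
# Blob capacity arithmetic for the figures cited above.
FIELDS_PER_BLOB = 4096        # field elements per blob
BYTES_PER_FIELD = 32          # bytes per field element
BLOBS_PER_BLOCK = 16          # per this article; live parameters differ

bytes_per_blob = FIELDS_PER_BLOB * BYTES_PER_FIELD        # 128 KiB per blob
blob_bytes_per_block = bytes_per_blob * BLOBS_PER_BLOCK   # 2 MiB per block
print(bytes_per_blob // 1024, "KiB per blob")
print(blob_bytes_per_block // (1024 * 1024), "MiB per block")
```

So the often-quoted 2 MB figure is the per-block total across all blobs, not the size of a single blob.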

To use a popular analogy, Ethereum equipped with blobs can be pictured as a motorcycle with a sidecar, with two key characteristics. First, data stored in blobs cannot be accessed by the EVM. Second, after a certain period, the data is deleted. Ethereum itself is the constantly running motorcycle, and the blob is a detachable sidecar. Under this mechanism, Ethereum acts as a temporary storage layer, which is why transactions after Proto-Danksharding are expected to be much cheaper.

Pruning expired blob data reduces the size of the blockchain and improves performance, making Ethereum more lightweight and scalable while maintaining security and decentralization. However, execution layers still need to store their global state somewhere. Some rely on their own DA committees, such as zkSync, which proposed zkPorter early on; Polygon has its own DA layer called Avail; others may seek a dedicated DA layer.

So if the modular narrative continues, we are optimistic about the DA layer. Although Ethereum uses "legitimacy" to attract most Rollups, it cannot, and does not intend to, host the state of every execution layer. In addition, the repeated postponement of Ethereum's Cancun upgrade creates a favorable window for other DA layers to enter the market.

It is no wonder that Celestia plans to launch its mainnet at the end of this month. We will be watching closely to see whether it can break through the legitimacy blockade; once it does, a much larger market opens up.

As a guide to investment opportunities, we will first focus on layers being built for the Ethereum ecosystem, since these will initially be led by Ethereum. Otherwise, lacking legitimacy, they may not be recognized by Ethereum and, like alternative L1s, will struggle to attract developers and users. Among all these layers, DA is the most challenging part.

Next, we will evaluate whether the modular approach is strictly limited to Ethereum and whether Celestia can lead a universal modular narrative wave. Since Celestia leverages the Cosmos stack, it would also bring an influx of funds into the Cosmos ecosystem, especially projects building execution and settlement layers on Celestia, such as Fuel at the execution layer.

Another area that stands to benefit is RaaS (Rollup as a Service): a broad modular narrative will encourage more protocols to adopt Rollups, much as SaaS (Software as a Service) transformed traditional internet services. The RaaS business model is clear: charge service fees from the protocols served, and win market share through cheaper prices, better service, and stronger business development. Because their success is closely tied to the ecosystems in which they operate, RaaS providers are likely to expand into multiple ecosystems.

  • Opportunity #7: Modular layer

- 7.1 Layers built primarily for the Ethereum ecosystem.

- Execution layer: Rollup

- Settlement layer: Risc0, Mina

- Data availability layer: Celestia, EthStorage, Zero Gravity

- 7.2 Layers built for the general modular narrative.

- Execution layer: Fuel

- Settlement layer

  • Opportunity #8: RaaS tools tightly tied to the ecosystem.

To be continued

So far, we have discussed at length the Ethereum-driven modular narrative and explored the realities behind this fashionable name. Given Ethereum's status as the largest smart contract platform, following its lead has become the market norm. However, the industry should not confine itself to Ethereum's narrative, because the internet represents a far larger market than crypto; if the industry aims for widespread adoption, other players will inevitably emerge. In our upcoming articles, we will delve into the wider world of smart contract platforms.
