Imagining the world after parallel EVM: Reshaping the landscape of dApps and user experience


Benjamin Franklin famously said: "In this world, nothing can be said to be certain, except death and taxes."

The original title of this article was Death, Taxes, and Parallel EVMs.

When parallel EVM becomes an inevitable trend in the crypto world, what will a crypto world that uses parallel EVM look like?

Reforge Research discussed this concept from both a technology and an application perspective. The following is a translation of the full text.

Introduction

In today's computer systems, making things faster and more efficient often means completing tasks in parallel rather than sequentially. This is called parallelization, and it was catalyzed by the advent of multi-core processor architectures in modern computers. Tasks that were traditionally performed step by step are now approached through the lens of simultaneity, maximizing the capabilities of the processors. Similarly, blockchain networks apply this principle of executing multiple operations at once at the transaction level, although rather than leveraging multiple processors, they leverage the collective validation power of the network's many validators. Some early implementation examples include the following (a minimal sketch of the sequential-versus-parallel contrast follows the list):

  • In 2015, Nano (XNO) implemented a block-lattice structure where each account has its own blockchain, allowing for parallel processing and eliminating the need for network-wide transaction confirmations.
  • In 2018, research applying software transactional memory (STM) to parallel blockchain execution, the approach that later underpinned the Block-STM parallel execution engine, was published; Polkadot approached parallelization through its multi-chain architecture; and EOS launched a multi-threaded processing engine.
  • In 2020, Avalanche introduced parallel processing for its consensus (rather than a serialized EVM c-chain), and Solana introduced a similar innovation called Sealevel.
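
To make the contrast concrete, here is a minimal Rust sketch (our illustration, not from the original article) that runs the same batch of independent tasks first sequentially and then in parallel, one thread per task; the parallel run's wall time approaches that of the slowest single task rather than the sum of all of them.

```rust
use std::thread;
use std::time::Instant;

// Stand-in for a CPU-bound unit of work (e.g., verifying one transaction).
fn expensive_task(n: u64) -> u64 {
    (0..1_000_000u64).fold(n, |acc, x| acc.wrapping_add(x))
}

fn main() {
    let inputs: Vec<u64> = (0..8).collect();

    // Sequential: each task waits for the previous one to finish.
    let t0 = Instant::now();
    let _seq: Vec<u64> = inputs.iter().map(|&n| expensive_task(n)).collect();
    println!("sequential: {:?}", t0.elapsed());

    // Parallel: one thread per task; total time approaches the slowest task.
    let t1 = Instant::now();
    let _par: Vec<u64> = thread::scope(|s| {
        let handles: Vec<_> = inputs
            .iter()
            .map(|&n| s.spawn(move || expensive_task(n)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    println!("parallel:   {:?}", t1.elapsed());
}
```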

For the EVM, transactions and smart contract execution have always been sequential since its inception. This single-threaded execution design limits the throughput and scalability of the entire system, especially during peak network demand. As network validators face increased workloads, the network inevitably slows down, and users face higher costs, competing to bid for their transactions to be prioritized in a crowded network environment.

The Ethereum community has long explored parallel processing as a solution, beginning with Vitalik's EIP in 2017. The original intent was to achieve parallelization through traditional shard chains, or sharding. However, the rapid development and adoption of L2 rollups, which are simpler and offer more immediate scalability benefits, shifted Ethereum's focus away from execution sharding and toward what is now known as danksharding. With danksharding, shards are used primarily as a data availability layer rather than for parallel execution of transactions. With a full implementation of danksharding yet to be achieved, attention has turned to several alternative parallelized L1 networks that stand out for EVM compatibility, notably Monad, Neon EVM, and Sei.

Given the legacy of software systems engineering and the scalability success of other networks, parallel execution of the EVM is inevitable. While we are confident in this transition, the future beyond it remains uncertain but highly promising. The implications for the world's largest smart contract developer ecosystem, currently boasting over $80 billion in total value locked, are significant. What happens when gas prices plummet to fractions of a cent thanks to optimized state access? How expansive will the design space become for application layer developers? Here is our take on what a post-parallel EVM world might look like.

Parallelization is a means, not an end

Scaling blockchains is a multi-dimensional problem, and parallel execution paves the way for more critical infrastructure developments, such as blockchain state storage.

For projects working on a parallel EVM, the main challenge is not only to enable computations to run concurrently, but to ensure that state access and modification are optimized in a parallelized environment. At the heart of the problem lie two main issues:

  • Ethereum clients and Ethereum itself use different data structures for storage (B-tree/LSM-tree vs. Merkle Patricia Trie), which results in poor performance when embedding one data structure inside another.
  • With parallel execution, the ability to do asynchronous input/output (async I/O) for transactional reads and updates is critical; otherwise, processes sit blocked waiting on each other, erasing any speed gains (a minimal sketch of asynchronous prefetching follows this list).
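
The toy program below (our sketch, not how any production client is written) issues storage reads on worker threads so that no single slow read blocks the rest:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

type Key = u64;
type Value = u64;

// Stand-in for a slow disk/database read.
fn disk_read(key: Key) -> Value {
    thread::sleep(Duration::from_millis(5));
    key * 2
}

fn main() {
    let keys: Vec<Key> = (0..32).collect();
    let (tx, rx) = mpsc::channel();

    // Issue all reads concurrently instead of one after another; no read
    // ever waits behind an unrelated read.
    for &key in &keys {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send((key, disk_read(key))).unwrap();
        });
    }
    drop(tx); // main thread keeps no sender, so the receiver can finish

    // Collect results as they arrive; total latency approaches one read
    // rather than the sum of all reads.
    let state: HashMap<Key, Value> = rx.iter().collect();
    println!("prefetched {} slots", state.len());
}
```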

Any additional computational tasks, such as a large number of extra SHA-3 hashes or other calculations, are minor compared to the cost of retrieving or setting stored values. To reduce transaction processing time and gas prices, the infrastructure of the database itself must be improved. This goes beyond simply adopting traditional database architectures (i.e. SQL databases) as an alternative to raw key-value storage. Implementing the EVM state using a relational model adds unnecessary complexity and overhead, resulting in higher costs for 'sload' and 'sstore' operations compared to using a basic key-value store. The EVM state does not require features like sorting, range scans, or transactional semantics, because it only performs point reads and writes, with writes applied in a batch at the end of each block. Therefore, the requirements for these improvements should focus on major considerations such as scalability, low-latency reads and writes, efficient concurrency control, state pruning and archiving, and seamless integration with the EVM. For example, Monad is building a custom state database from scratch, called MonadDB, which will leverage the latest kernel support for asynchronous operations while implementing the Merkle Patricia Trie data structure natively both on disk and in memory.
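
To ground the point about point reads and writes, here is a hedged Rust sketch, with names of our own invention rather than MonadDB's actual interface, of a state store that serves 'sload' from memory and buffers 'sstore' writes until the block is sealed:

```rust
use std::collections::HashMap;

type Slot = [u8; 32];

struct StateStore {
    committed: HashMap<Slot, Slot>, // durable state
    pending: HashMap<Slot, Slot>,   // writes buffered within the block
}

impl StateStore {
    fn new() -> Self {
        Self { committed: HashMap::new(), pending: HashMap::new() }
    }

    // Point read: check the block's pending writes first, then committed
    // state; untouched slots read as zero, as in the EVM.
    fn sload(&self, key: &Slot) -> Slot {
        self.pending
            .get(key)
            .or_else(|| self.committed.get(key))
            .copied()
            .unwrap_or([0u8; 32])
    }

    // Point write: buffered until the block is sealed.
    fn sstore(&mut self, key: Slot, value: Slot) {
        self.pending.insert(key, value);
    }

    // Apply all buffered writes in one batch at the end of the block.
    fn commit_block(&mut self) {
        let pending = std::mem::take(&mut self.pending);
        self.committed.extend(pending);
    }
}

fn main() {
    let mut state = StateStore::new();
    state.sstore([1u8; 32], [7u8; 32]);
    assert_eq!(state.sload(&[1u8; 32]), [7u8; 32]); // visible within the block
    state.commit_block();
    assert_eq!(state.sload(&[1u8; 32]), [7u8; 32]); // durable after commit
}
```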

We anticipate further reinvention of the underlying key-value database and significant improvements to the third-party infrastructure that supports much of blockchain storage capabilities.

Making pCLOB Great Again

As DeFi moves toward a higher-fidelity state, CLOBs will become the dominant design approach.

Since their debut in 2017, automated market makers (AMMs) have become a cornerstone of DeFi, offering simplicity and a unique ability to bootstrap liquidity. By leveraging liquidity pools and pricing algorithms, AMMs revolutionized DeFi and became the best alternative to traditional trading systems such as order books. Although central limit order books (CLOBs) are a fundamental building block in traditional finance, they ran into blockchain scalability limitations when introduced to Ethereum. They require a large number of transactions, as each order submission, execution, cancellation, or modification requires a new on-chain transaction. Given the immaturity of Ethereum's scalability efforts at the time, the costs this imposed made CLOBs unsuitable in the early days of DeFi and led to the failure of early versions such as EtherDelta. However, even as AMMs gained popularity, they faced their own inherent limitations, and as DeFi attracted more sophisticated traders and institutions over the years, these limitations became increasingly apparent.

Recognizing the superiority of CLOBs, efforts to incorporate CLOB-based exchanges into DeFi began to increase on other, more scalable blockchain networks. Protocols such as Kujira, Serum (RIP), Demex, dYdX, Dexalot, and more recently Aori and Hyperliquid aim to provide a better on-chain trading experience relative to their AMM counterparts. However, with the exception of projects targeting specific areas (such as dYdX and Hyperliquid for perpetual contracts), CLOBs on these alternative networks face their own set of challenges in addition to scalability:

  • Liquidity fragmentation: The network effects achieved by highly composable and seamlessly integrated DeFi protocols on Ethereum make it difficult for CLOBs on other chains to attract sufficient liquidity and trading volume, hindering their adoption and usability.
  • Meme coins: Bootstrapping liquidity for an on-chain CLOB requires limit orders, a chicken-and-egg problem that is even harder for new and lesser-known assets such as meme coins.

CLOB with blob


But what about L2s? The existing Ethereum L2 stack has seen significant improvements in both transaction throughput and gas costs relative to mainnet, especially after the recent Dencun hard fork. By replacing gas-intensive calldata with lightweight binary large objects (blobs), fees have been significantly reduced. According to growthepie, as of April 1, Arbitrum and OP Mainnet had fees of $0.028 and $0.064, respectively, while Mantle was the cheapest at $0.015. This is a big difference from before the Dencun upgrade, when calldata accounted for 70%-90% of costs. Unfortunately, this is still not cheap enough: a roughly $0.01 fee per order update or cancellation is still considered expensive. Institutional traders and market makers, for example, often have a high order-to-trade ratio, meaning they place a large number of orders relative to the number of trades actually executed. Even at today's L2 fee levels, paying to submit orders and then modify or cancel them across multiple order books can meaningfully affect the profitability and strategic decisions of institutional players. Consider the following example:

Company A: the standard benchmark per hour is 10,000 order submissions, 1,000 trades, and 9,000 cancellations or modifications. If the company operates across 100 order books in a single day, the total activity could easily exceed $150,000 in fees, even if each action costs less than $0.01.
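
Working through the arithmetic makes the claim concrete. The per-action fee below is our assumption; any fee above roughly $0.003 per action clears the $150,000 mark:

```rust
// Back-of-the-envelope check of the Company A example, under our assumptions.
fn main() {
    let billable_actions_per_hour: u64 = 10_000 + 9_000; // submissions + cancels/modifications
    let books: u64 = 100; // order books traded in a day
    let hours: u64 = 24;
    let fee_per_action: f64 = 0.005; // hypothetical, "less than $0.01"

    let total_actions = billable_actions_per_hour * hours * books;
    let total_fees = total_actions as f64 * fee_per_action;
    println!("{total_actions} actions -> ${total_fees:.0} in fees");
    // Prints: 45600000 actions -> $228000 in fees, comfortably "over $150,000".
}
```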

pCLOB


With the advent of the parallel EVM, we expect a surge in DeFi activity, sparked primarily by the viability of on-chain CLOBs. But not just any CLOBs: programmable central limit order books (pCLOBs). Given that DeFi is inherently composable and can interact with an unlimited number of protocols, a vast number of trading permutations can be created. Leveraging this, pCLOBs can embed custom logic into the order submission process, invoked before or after an order is submitted. For example, a pCLOB smart contract can include custom logic to:

  • Validate order parameters (such as price and quantity) based on predefined rules or market conditions
  • Perform real-time risk checks (e.g., ensuring sufficient margin or collateral for leveraged trades)
  • Apply dynamic fee calculations based on any parameter (e.g. order type, volume, market volatility, etc.)
  • Execute conditional orders based on specified trigger conditions

All of this at a fraction of the cost of existing transaction designs. A minimal sketch of such hooks follows.
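
In the sketch below, the trait, struct, and thresholds are hypothetical names of our own, not any live pCLOB's API:

```rust
struct Order {
    price: u64,    // limit price in ticks
    quantity: u64, // size in base units
    margin: u64,   // posted collateral
}

trait SubmissionHook {
    // Return Err to reject the order before it reaches the book.
    fn before_submit(&self, order: &Order) -> Result<(), String>;
    // Runs after acceptance; here, a dynamic fee calculation.
    fn after_submit(&self, order: &Order) -> u64;
}

struct RiskAndFeeHook {
    min_margin_bps: u64, // required margin, in basis points of notional
    base_fee: u64,
}

impl SubmissionHook for RiskAndFeeHook {
    fn before_submit(&self, order: &Order) -> Result<(), String> {
        // Validate order parameters against predefined rules.
        if order.price == 0 || order.quantity == 0 {
            return Err("invalid order parameters".to_string());
        }
        // Real-time risk check: enough margin for the notional?
        let notional = order.price * order.quantity;
        if order.margin * 10_000 < notional * self.min_margin_bps {
            return Err("insufficient margin".to_string());
        }
        Ok(())
    }

    fn after_submit(&self, order: &Order) -> u64 {
        // Toy dynamic fee: larger orders pay proportionally more.
        self.base_fee + order.quantity / 1_000
    }
}

fn main() {
    let hook = RiskAndFeeHook { min_margin_bps: 500, base_fee: 10 };
    let order = Order { price: 100, quantity: 50_000, margin: 300_000 };
    match hook.before_submit(&order) {
        Ok(()) => println!("accepted, fee = {}", hook.after_submit(&order)),
        Err(e) => println!("rejected: {e}"),
    }
}
```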

The concept of just-in-time (JIT) liquidity illustrates this point well. Liquidity does not sit idle on any single exchange; it generates yield elsewhere until the moment an order is matched and the liquidity is pulled onto the underlying platform. Who wouldn't want to harvest every bit of yield on MakerDAO before sourcing liquidity for a trade? Mangrove Exchange's innovative "offer-is-code" approach hints at the potential: when a quote in an order is matched, the code embedded within it has the sole mission of finding the liquidity requested by the order taker. That said, challenges remain in terms of L2 scalability and cost.
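
A minimal sketch of the offer-is-code idea, in Rust rather than Mangrove's actual contracts, with all names ours:

```rust
// An offer carrying code: the maker's liquidity-sourcing logic runs only
// at the moment the offer is taken.
struct Offer {
    price: u64,
    source_liquidity: Box<dyn Fn(u64) -> u64>,
}

fn take(offer: &Offer, wanted: u64) -> u64 {
    // Until this call, the maker's capital sits elsewhere earning yield.
    let delivered = (offer.source_liquidity)(wanted);
    delivered.min(wanted)
}

fn main() {
    let offer = Offer {
        price: 1_000,
        source_liquidity: Box::new(|wanted| {
            // e.g., withdraw just-in-time from a lending market (stubbed).
            println!("withdrawing {wanted} from the yield venue");
            wanted
        }),
    };
    let filled = take(&offer, 250);
    println!("filled {filled} at price {}", offer.price);
}
```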

The parallel EVM also significantly enhances the pCLOB matching engine. A pCLOB can implement a parallel matching engine that uses multiple "channels" to simultaneously ingest incoming orders and perform matching calculations. Each channel handles a subset of the order book, executing only when a match is found and without violating price-time priority. The reduced latency between order submission, execution, and modification keeps the order book optimally up to date.
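
Below is a hedged Rust sketch of the channel idea: orders are routed by market to independent channels, each matched on its own thread. The per-channel "book" is deliberately tiny (a single best ask); a real engine would maintain full price-time priority queues:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

struct Order { market: u32, price: u64, qty: u64, is_bid: bool }

// One matching "channel". A real engine would keep full price-time
// priority queues per price level.
fn match_loop(rx: mpsc::Receiver<Order>) -> u64 {
    let mut best_ask: Option<u64> = None;
    let mut filled = 0u64;
    for order in rx {
        if order.is_bid {
            // Execute only when a match is found at the current best ask.
            if best_ask.map_or(false, |ask| order.price >= ask) {
                filled += order.qty;
            }
        } else {
            best_ask = Some(best_ask.map_or(order.price, |a| a.min(order.price)));
        }
    }
    filled
}

fn main() {
    let shards = 4u32;
    let mut senders = HashMap::new();
    let mut handles = Vec::new();
    for shard in 0..shards {
        let (tx, rx) = mpsc::channel();
        senders.insert(shard, tx);
        handles.push(thread::spawn(move || match_loop(rx)));
    }

    // Route each order to the channel owning its market; channels never
    // touch each other's books, so they can match fully in parallel.
    for i in 0..10_000u32 {
        let order = Order {
            market: i % 16,
            price: 100 + (i % 5) as u64,
            qty: 1,
            is_bid: i % 2 == 0,
        };
        senders[&(order.market % shards)].send(order).unwrap();
    }
    drop(senders); // close channels so the matchers drain and exit

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("total filled quantity: {total}");
}
```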

Keone Hon, co-founder and CEO of Monad, said: "Because AMMs can continuously make markets even in the absence of deep liquidity, they are expected to remain widely used for long-tail assets; for 'blue chip' assets, however, pCLOBs will dominate."

In our discussion, Keone also argued that we can expect multiple pCLOBs to gain traction across different high-throughput ecosystems, and stressed that, thanks to lower fees, these pCLOBs will have a significant impact on the larger DeFi ecosystem.

Even with just a handful of these improvements, we expect pCLOBs to meaningfully improve capital efficiency and unlock new categories within DeFi.

We need more apps, but first...

Existing and new applications need to be architected in a way that fully exploits the underlying parallelism.

pCLOBs aside, none of today's decentralized applications are parallel; their interactions with the blockchain are sequential. However, history has shown that technology and applications naturally evolve to take advantage of new advances, even when they were not originally designed with those advances in mind.

Steven Landers, blockchain architect at Sei, said: "When the first iPhone came out, the apps designed for it looked a lot like bad computer apps. It's a similar situation here. We're adding multi-core to the blockchain, which will lead to better apps."

The evolution of e-commerce from magazine catalogs displayed on the internet to robust two-sided marketplaces is a classic example. With the advent of the parallel EVM, we will witness a similar transformation of decentralized applications. This highlights a key limitation: applications that do not take parallelism into account will not benefit from the efficiency gains of the parallel EVM. It is not enough to have parallelism at the infrastructure layer without redesigning the application layer; the two must be architecturally aligned.

State Contention

Even without any changes to the applications themselves, we still expect a modest 2-4x performance improvement. But why stop there when far more is on the table? This shift introduces a key challenge: applications need to be fundamentally redesigned to accommodate the nuances of parallel processing.

Steven Landers, blockchain architect at Sei, said: "If you want to take advantage of throughput, you need to limit contention between transactions."

More specifically, conflicts arise when multiple transactions from decentralized applications attempt to modify the same state at the same time. Resolving such conflicts requires serializing the conflicting transactions, which negates the benefits of parallelization.

There are many conflict-resolution methods that we will not discuss here, but the number of potential conflicts encountered during execution is largely in the hands of the application developer. Within the scope of decentralized applications, even the most popular protocols, such as Uniswap, do not consider or implement such limits. 0xTaker, co-founder of Aori, discussed with us in depth the state contention that will arise in a parallel world. For an AMM, because of its peer-to-pool model, many participants may target a single pool at the same time. Anywhere from a handful to more than a hundred transactions may compete for the same state, so AMM designers will have to carefully consider how liquidity is distributed and managed across state to maximize the benefits of pooling. A minimal sketch of this scheduling problem follows.
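
The sketch below (our illustration, not any engine's actual scheduler) packs transactions into parallel batches by write-set disjointness; every transaction hitting the same hot pool key lands in a different batch and is effectively serialized:

```rust
use std::collections::HashSet;

struct Tx { id: u32, writes: HashSet<u64> }

// Greedily pack transactions into parallel batches: a tx joins a batch only
// if its write set is disjoint from every write already in that batch.
fn schedule(txs: &[Tx]) -> Vec<Vec<u32>> {
    let mut batches: Vec<(HashSet<u64>, Vec<u32>)> = Vec::new();
    for tx in txs {
        match batches.iter_mut().find(|(keys, _)| keys.is_disjoint(&tx.writes)) {
            Some((keys, ids)) => { keys.extend(&tx.writes); ids.push(tx.id); }
            None => batches.push((tx.writes.clone(), vec![tx.id])),
        }
    }
    batches.into_iter().map(|(_, ids)| ids).collect()
}

fn main() {
    let hot_pool = 42u64; // one popular AMM pool's state key
    let txs: Vec<Tx> = (0u32..6)
        .map(|id| Tx {
            id,
            // Even ids all hit the hot pool; odd ids touch their own key.
            writes: if id % 2 == 0 {
                HashSet::from([hot_pool])
            } else {
                HashSet::from([1_000 + id as u64])
            },
        })
        .collect();

    // Hot-pool txs land in separate batches (serialized); the rest can all
    // run in parallel alongside them.
    for (i, batch) in schedule(&txs).iter().enumerate() {
        println!("batch {i}: txs {batch:?}");
    }
}
```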

Steven of Sei also stressed the importance of considering contention in multithreaded development, noting that Sei is actively researching the implications of parallelization and how to ensure that resource utilization is fully captured.

Performance predictability

Yilong, co-founder and CEO of MegaETH, also stressed to us the importance of decentralized applications seeking performance predictability. Performance predictability refers to the ability of decentralized applications to execute transactions consistently over time, regardless of network congestion or other factors. One way to achieve this is through application-specific chains; however, while application-specific chains provide predictable performance, they sacrifice composability.

Aori co-founder 0xTaker said: "Parallelization provides a way to experiment with local fee markets to minimize state contention."

Advanced parallelism and multi-dimensional fee mechanisms can enable a single blockchain to provide more deterministic performance for each application while maintaining overall composability.

Solana has an elegant, localized fee market system: if multiple users access the same state, they pay more (surge pricing) rather than bidding against each other in one global fee market. This approach is particularly beneficial for loosely coupled protocols that require both performance predictability and composability. To illustrate the concept, consider a highway system with multiple lanes and dynamic tolling. During rush hour, the highway can allocate dedicated express lanes to vehicles willing to pay higher tolls. These express lanes ensure predictable and faster travel times for those who prioritize speed and will pay a premium, while the regular lanes remain open to all vehicles, maintaining the overall connectivity of the highway system.
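
The following Rust sketch captures the spirit of such a localized fee market; the doubling rule and parameters are illustrative assumptions, not Solana's actual implementation:

```rust
use std::collections::HashMap;

struct LocalFeeMarket {
    base_fee: u64,
    accesses_this_block: HashMap<u64, u64>, // state key -> access count
}

impl LocalFeeMarket {
    fn new(base_fee: u64) -> Self {
        Self { base_fee, accesses_this_block: HashMap::new() }
    }

    // Fee for touching `key`: doubles with each prior access in the block,
    // so a hot key gets surge-priced without taxing unrelated users.
    fn quote_and_record(&mut self, key: u64) -> u64 {
        let count = self.accesses_this_block.entry(key).or_insert(0);
        let fee = self.base_fee << (*count).min(16); // cap the shift
        *count += 1;
        fee
    }
}

fn main() {
    let mut market = LocalFeeMarket::new(10);
    // Four users hammer the same hot key: each pays more than the last.
    let hot: Vec<u64> = (0..4).map(|_| market.quote_and_record(42)).collect();
    // Four users touch four unrelated keys: everyone pays the base fee.
    let cold: Vec<u64> = (100u64..104).map(|k| market.quote_and_record(k)).collect();
    println!("hot key fees:  {hot:?}");  // [10, 20, 40, 80]
    println!("cold key fees: {cold:?}"); // [10, 10, 10, 10]
}
```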

Think of all the possibilities

While redesigning protocols to align with underlying parallelization may seem challenging, the achievable design space in DeFi and other verticals has expanded significantly. We can expect to see a new generation of applications that are more sophisticated, efficient, and focused on use cases that were previously impractical due to performance limitations.


Keone Hon, co-founder and CEO of Monad, said: "Back in 1995, the only internet plan was $0.10 per 1MB of data downloaded, and you carefully chose the sites you visited. Imagine going from that time to unlimited, and notice what people do and what becomes possible."

It's possible that we'll return to a scenario similar to the early days of centralized exchanges: a war for user acquisition in which DeFi applications, especially decentralized exchanges, wield referral programs (i.e. points, airdrops) and superior user experience as weapons. We see a world where any reasonable amount of on-chain interaction with games could actually become a thing. Hybrid orderbook-AMMs already exist, but instead of running CLOB sequencers as standalone nodes decentralized through governance, they can be moved on-chain, allowing for improved decentralization, lower latency, and enhanced composability. Fully on-chain social interactions become possible as well. Frankly, anything that involves a large number of people or agents acting at the same time is now on the table.

Beyond humans, intelligent agents will likely dominate on-chain transaction flow even more than they do today. AI has been part of the game for a while, with arbitrage bots autonomously executing trades, but its participation will increase exponentially. Our theory is that every form of on-chain participation will be augmented by AI in some way, and the latency requirements for agents conducting transactions will matter more than we envision today.

Ultimately, technological advancement is just a foundational enabling factor. The ultimate winner will be determined by those who can attract users and initiate trading volume/liquidity better than their peers. The difference is that now developers have more resources at their disposal.

Crypto UX sucks; soon it won't suck so much

User Experience Unification (UXU) is not only possible, but necessary, and the industry is certainly working towards achieving it.

Today’s blockchain user experience is fragmented and cumbersome, with users having to navigate between multiple blockchains, wallets, and protocols, waiting for transactions to complete, and running the risk of security breaches or hacker attacks. The ideal future is one where users can seamlessly interact with their assets securely without having to worry about the underlying blockchain infrastructure. This process, which we call User Experience Unification (UXU), is about transitioning from the current fragmented user experience to a unified, simplified experience.

Fundamentally, improving blockchain performance, especially by reducing latency and fees, can significantly help solve user experience problems. Historically, performance improvements have tended to positively impact all aspects of our digital user experience. For example, faster internet speeds have not only enabled seamless online interactions, but have also driven demand for richer, more immersive digital content. The advent of broadband and fiber optic technologies has facilitated low-latency streaming of high-definition video and real-time online gaming, raising user expectations for digital platforms. This constant pursuit of depth and quality has spawned companies’ continued innovation in developing the next big, compelling innovation - from advanced interactive web content to sophisticated cloud-based services to virtual/augmented reality experiences. Improving internet speeds has not only improved the online experience itself, but has also expanded the scope of user demands.

Similarly, improvements in blockchain performance will not only enhance the user experience directly by reducing latency, but will also enhance it indirectly by enabling the rise of protocols that unify and improve the overall experience, protocols whose very existence depends on performance. In particular, the higher performance and lower gas fees of these networks, especially parallel EVMs, mean that moving in and out will be more frictionless for end users, in turn attracting more developers. In a conversation with Sergey, co-founder of the Axelar interoperability network, he envisioned a world that is not only truly interoperable, but more symbiotic.

Sergey said: "If you have complex logic on a high-throughput chain (e.g., a parallel EVM), and the chain itself can 'absorb' the complexity and throughput requirements of that logic thanks to its high performance, then you can use interoperability solutions to export that functionality to other chains in an efficient way."

Felix Madutsa, co-founder of Orb Labs, said: "As scalability issues are resolved and interoperability between ecosystems increases, we will witness the emergence of protocols that bridge the Web3 user experience with Web2. Examples include the second generation of intent-based protocols, advanced RPC infrastructure, chain abstraction capabilities, and open computing infrastructure enhanced by artificial intelligence."

Other aspects

As performance requirements increase, the oracle market will heat up.

The parallel EVM means that performance demands on oracles will increase, a vertical that has lagged badly over the past few years. Growing demand at the application layer will shake up a market saddled with subpar performance and security, improving the DeFi stack that depends on it. Market depth and trading volume, for example, are two strong inputs for many DeFi primitives, such as money markets. We expect large incumbents like Chainlink and Pyth to adapt relatively quickly as new players challenge their market share in this new era. After speaking with senior members of Chainlink, our thoughts are aligned: "The consensus at Chainlink is that if the parallel EVM becomes dominant, we may want to reshape our contracts to capture value from it (e.g. reduce inter-contract dependencies so that transactions/calls are not unnecessarily dependent and therefore not MEVed), but because the parallel EVM is designed to improve the transparency and throughput of applications already running on the EVM, it should not affect network stability."

This shows that Chainlink understands the impact of parallel execution on its product and, as highlighted above, that it will have to reshape its contracts to take advantage of parallelization.

It's not just an L1 party; the parallel EVM L2 wants to join in the fun too.

From a technical perspective, creating a high-performance parallel EVM L2 is easier than developing an L1. This is because in an L2, the sequencer setup is simpler than the consensus-based mechanisms used in traditional L1 systems (such as Tendermint and its variants). The simplicity stems from the fact that the sequencer in a parallel EVM L2 only needs to maintain the order of transactions, as opposed to consensus-based L1 systems where many nodes must agree on the order. A minimal sketch follows.
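
As a minimal illustration (ours, not any production sequencer), a sequencer can be as simple as a single thread stamping arrivals with monotonically increasing sequence numbers:

```rust
use std::sync::mpsc;
use std::thread;

struct Tx { payload: String }

fn main() {
    let (tx_in, rx_in) = mpsc::channel::<Tx>();

    // The sequencer: stamp arrivals with a monotonically increasing
    // sequence number, which fully determines execution order downstream.
    let sequencer = thread::spawn(move || {
        let mut ordered = Vec::new();
        for (seq, tx) in rx_in.into_iter().enumerate() {
            ordered.push((seq as u64, tx.payload));
        }
        ordered
    });

    for p in ["swap", "transfer", "mint"] {
        tx_in.send(Tx { payload: p.to_string() }).unwrap();
    }
    drop(tx_in); // no more transactions; the sequencer drains and exits

    for (seq, payload) in sequencer.join().unwrap() {
        println!("#{seq}: {payload}");
    }
}
```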

More specifically, we expect optimistic-rollup-based parallel EVM L2s to dominate their zero-knowledge counterparts in the near term. Ultimately, we expect it to be only a matter of time before OP-based rollups transition to zk-rollups via a general-purpose zero-knowledge framework such as RISC0, rather than the bespoke approaches used in other zk-rollups.

So, is Rust better at this point?

The choice of programming language will play an important role in the development of these systems. We prefer Reth, the Rust implementation of the Ethereum execution client, over the alternatives. This preference is not arbitrary: Rust has many advantages over other languages, including memory safety without garbage collection, zero-cost abstractions, and a rich type system.

In our opinion, the competition between Rust and C++ is becoming an important competition in the new generation of blockchain development languages. Although this competition is often overlooked, it should not be ignored. The choice of language is crucial because it affects the efficiency, security, and versatility of the systems built by developers.


Developers are the ones who make these systems a reality, and their preferences and expertise are critical to the direction of the industry. We firmly believe that Rust will ultimately succeed. However, migrating from one implementation to another is far from easy. It requires significant resources, time, and expertise, which further emphasizes the importance of choosing the right language from the beginning.

In the context of parallel execution, we would be remiss not to mention Move. While Rust and C++ often take the spotlight, Move has several features that make it equally suitable:

  • Resources: Move introduces the concept of a "resource", a type that can only be created, moved, or destroyed, never copied. This ensures that resources are always uniquely owned, preventing common problems in parallel execution such as race conditions and data races (see the Rust analogy after this list).
  • Formal Verification and Static Types: Move is a statically typed language with an emphasis on safety. It includes features such as type inference, ownership tracking, and overflow checking that help prevent common programming errors and vulnerabilities. These safety features are especially important in the context of parallel execution, where errors can be more difficult to detect and reproduce. The language's semantics and type system are based on linear logic, similar to Rust and Haskell, which makes it easier to reason about the correctness of Move programs, so formal verification can ensure that concurrent operations are safe and correct.
  • Modularity: Move advocates a modular design approach, where smart contracts are composed of smaller, reusable modules. This structure makes it easier to reason about the behavior of individual components and can facilitate parallel execution by allowing different modules to execute simultaneously.
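
Move itself is outside the single language we use for sketches here, but Rust's ownership model makes the resource idea concrete: a type that is deliberately neither Clone nor Copy can be created, moved, and destroyed, but never duplicated:

```rust
// A "resource" in the Move sense: deliberately neither Clone nor Copy,
// so it can be created, moved, and destroyed, but never duplicated.
struct Coin { value: u64 }

fn create(value: u64) -> Coin {
    Coin { value }
}

// Taking `Coin` by value consumes it; ownership ends here.
fn destroy(coin: Coin) -> u64 {
    coin.value
}

fn main() {
    let coin = create(100);
    let spent = destroy(coin);
    println!("spent {spent}");
    // destroy(coin); // compile error: use of moved value `coin`
}
```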

Future considerations: EVM is not secure and needs improvement

While we paint a very optimistic picture of the on-chain universe after the parallel EVM, it all means nothing if the issues around EVM and smart contract security are not addressed.


Network economics and consensus security aside, hackers exploited smart contract vulnerabilities in Ethereum's DeFi protocols to steal more than $1.3 billion in 2023. As a result, users prefer walled-garden CEXs or hybrid "decentralized" protocols with centralized validator sets, sacrificing decentralization in exchange for a perceived more secure (and better-performing) on-chain experience.


The inherent lack of security features in the EVM design is the root cause of these vulnerabilities.

Analogizing to the aerospace industry, where strict safety standards make air travel very safe, we see a stark contrast in blockchain’s approach to security. Just as people value their lives above all else, the security of their financial assets is also paramount. Key practices such as thorough testing, redundancy, fault tolerance, and strict development standards underpin the aviation industry’s safety record. These key features are currently missing from the EVM, and in most cases, other VMs as well.

One potential solution is to adopt a dual-VM setup, where a separate VM, such as CosmWasm, monitors the real-time execution of EVM smart contracts, much like antivirus software running inside an operating system. This structure enables advanced inspections, such as call stack inspection, aimed specifically at reducing hacking incidents. However, this approach requires significant upgrades to existing blockchain systems. We expect newer, better-positioned stacks like Arbitrum Stylus and Artela to implement this architecture successfully from the start. A sketch of the idea follows.
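
In the sketch below, the structure and names are entirely ours rather than the actual CosmWasm, Stylus, or Artela design: a supervising layer inspects the call stack on every nested call and aborts on a reentrancy pattern. Blanket reentrancy blocking is a simplification; real systems would use finer-grained policies:

```rust
struct Monitor {
    call_stack: Vec<u64>, // contract addresses currently on the stack
}

impl Monitor {
    fn new() -> Self {
        Self { call_stack: Vec::new() }
    }

    // Called by the supervising VM before the EVM enters a contract.
    fn on_call(&mut self, contract: u64) -> Result<(), String> {
        if self.call_stack.contains(&contract) {
            // The same contract is already on the stack: a reentrant call.
            return Err(format!("reentrancy into contract {contract:#x} blocked"));
        }
        self.call_stack.push(contract);
        Ok(())
    }

    fn on_return(&mut self) {
        self.call_stack.pop();
    }
}

fn main() {
    let mut monitor = Monitor::new();
    monitor.on_call(0xA).unwrap();      // user -> vault
    monitor.on_call(0xB).unwrap();      // vault -> token (external call)
    let verdict = monitor.on_call(0xA); // token calls back into the vault
    println!("{verdict:?}");            // Err("reentrancy ... blocked")
    monitor.on_return();
    monitor.on_return();
}
```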

Existing security primitives in the market tend to be reactive to imminent or already attempted threats, such as checking the memory pool or smart contract code audits/reviews. While these mechanisms help, they fail to address underlying vulnerabilities in VM design. A more productive and proactive approach must be taken to reshape and enhance the security of blockchain networks and their application layers.

We advocate for a complete overhaul of the blockchain VM architecture to embed real-time protection and other critical security features, possibly through a dual VM setup to align with industries that have already successfully used this practice (e.g. aerospace). Going forward, we are eager to support infrastructure enhancements that emphasize a preventative approach, ensuring that advances in security match industry advances in performance (i.e. parallel EVMs).

Conclusion

The advent of the Parallel EVM marks an important turning point in the evolution of blockchain technology. By enabling concurrent execution of transactions and optimizing state access, the Parallel EVM opens up a new era of possibilities for decentralized applications. From the resurgence of programmable CLOBs to the emergence of more complex and high-performance applications, the Parallel EVM lays the foundation for a more unified and user-friendly blockchain ecosystem. As the industry embraces this paradigm shift, we can expect to see a wave of innovation that pushes the boundaries of what is possible in decentralized technology. Ultimately, the success of this shift will depend on whether developers, infrastructure providers, and the broader community can adapt and align with the principles of parallel execution, leading the technology to seamlessly integrate into our daily lives.

The advent of the parallel EVM has the potential to reshape the landscape of decentralized applications and user experiences. By addressing the scalability and performance limitations that have long hindered growth in key verticals such as DeFi, it opens the door for complex, high-throughput applications to flourish without sacrificing any corner of the blockchain trilemma.

Achieving this vision requires more than just infrastructure advances. Developers must fundamentally rethink the architecture of their applications to align with the principles of parallel processing, minimize state contention, and maximize performance predictability. Even so, as we look ahead to a bright future, we must emphasize that security is a critical priority, in addition to scalability.
