Multicoin Capital partner Kyle Samani lays out seven reasons why modular blockchains are overvalued.
Original: The Hidden Costs of Modular Systems
By Kyle Samani, Partner at Multicoin Capital
Compiled by: Luffy, Foresight News
Cover: Photo by Nathan Watson on Unsplash
Over the past two years, the blockchain scalability debate has centered on one question: modular versus integrated architectures.
Note that crypto discussions often conflate "monolithic" and "integrated" systems. The technical debate between integrated and modular systems spans roughly 40 years of computing history; far from being new, the conversation in crypto should be viewed through that same historical lens.
When weighing modularity against integration, the most important design decision a blockchain makes is how much of the stack's complexity it exposes to application developers. Application developers are a blockchain's customers, so design decisions should be evaluated from their perspective.
Today, modularity is largely hailed as the primary way blockchains scale. In this post, I will question this assumption from first principles, unravel the cultural myths and hidden costs of modular systems, and share my conclusions from thinking about this debate over the past six years.
Modular systems increase development complexity
By far the biggest hidden cost of modular systems is the added complexity of the development process.
Modular systems greatly increase the complexity that application developers must manage, both within the context of their own applications (technical complexity) and in the context of interactions with other applications (social complexity).
In the context of cryptocurrencies, modular blockchains theoretically allow for greater specialization, but at the cost of creating new complexities. This complexity (both technical and social in nature) is being passed on to application developers, ultimately making it harder to build applications.
For example, consider the OP Stack, which currently appears to be the most popular modular framework. The OP Stack forces developers either to adopt the Law of Chains (which introduces a great deal of social complexity) or to fork it and manage it on their own. Both options create significant downstream complexity for builders. If you fork, will you get technical support from other ecosystem players (CEXs, fiat on-ramps, etc.) who must bear the cost of supporting your new standard? If you follow the Law of Chains, what rules and constraints will be imposed on you, today and in the future?

Modern operating systems (OSes) are large, complex systems containing hundreds of subsystems; they handle layers 2-6 of the OSI model. This is a classic example of integrating modular components to manage the complexity exposed to application developers. Application developers don't want to deal with anything below layer 7, which is precisely why operating systems exist: the OS manages the complexity of the layers beneath so that application developers can focus on layer 7. Modularity, therefore, should not be an end in itself but a means to an end.
Every major software system in the world today—cloud backends, operating systems, database engines, game engines, etc.—is highly integrated and simultaneously composed of many modular subsystems. Software systems tend to be highly integrated to maximize performance and reduce development complexity. The same is true for blockchain.
Incidentally, Ethereum itself reduced the complexity created during the 2011-2014 era of Bitcoin forks. Modularity proponents often point to the Open Systems Interconnection (OSI) model to argue that data availability (DA) and execution should be separated; however, this argument is widely misapplied. Properly understood, it leads to the opposite conclusion: OSI is an argument for integrated systems, not modular ones.
Modular chains cannot execute code faster
By construction, the common definition of a "modular chain" separates data availability (DA) from execution: one set of nodes is responsible for DA, while another set (or sets) is responsible for execution. The node sets need not overlap, though they may.
In practice, separating DA and execution does not inherently improve the performance of either: some hardware somewhere in the world still has to provide DA, and some hardware somewhere still has to perform execution. Separating these functions onto different machines does not make either one faster. Separation may reduce the cost of computation, but only by centralizing execution.
To reiterate: whether the architecture is modular or integrated, some hardware somewhere has to do the work, and moving DA and execution onto separate hardware does not inherently speed up the system or increase its total capacity.
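A trivial toy model (my own illustration, not from the original post) makes this concrete: however the node sets are partitioned, the sum of DA work and execution work is unchanged. The workload and cost figures below are arbitrary assumptions used only to make the accounting explicit.

```python
# Toy model: total hardware work is the same whether DA and execution run on
# one node set (integrated) or on disjoint node sets (modular). Workload and
# cost numbers are arbitrary and exist only to make the accounting explicit.

def total_work(da_bytes_per_sec: float, exec_ops_per_sec: float,
               da_cost_per_byte: float, exec_cost_per_op: float) -> float:
    """Work the system performs per second, in arbitrary cost units."""
    return da_bytes_per_sec * da_cost_per_byte + exec_ops_per_sec * exec_cost_per_op

workload = dict(da_bytes_per_sec=1_000_000, exec_ops_per_sec=5_000,
                da_cost_per_byte=1e-6, exec_cost_per_op=1e-3)

integrated = total_work(**workload)                             # one node set does both jobs
da_only    = total_work(**{**workload, "exec_ops_per_sec": 0})  # DA-only nodes
exec_only  = total_work(**{**workload, "da_bytes_per_sec": 0})  # execution-only nodes

print(integrated)             # 6.0
print(da_only + exec_only)    # 6.0 -- the same total work, just split across machines
```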
Some argue that modularity allows multiple EVMs to run in parallel as rollups, so execution can scale horizontally. While this is theoretically correct, the argument really highlights the limitation of the EVM as a single-threaded processor; it does not show that separating DA from execution is fundamental to scaling total system throughput.
Modularity alone does not improve throughput.
Modularity increases transaction costs for users
By definition, each L1 and L2 is a separate asset ledger with its own state. These separate pieces of state can communicate via cross-chain bridges such as LayerZero and Wormhole, but at the cost of higher latency and added complexity for developers and users.
The more asset ledgers there are, the more fragmented the global state of all accounts becomes. This is bad for developers and users who span multiple chains. State fragmentation has a number of consequences:
- Reduced liquidity, leading to higher transaction slippage;
- More total Gas consumption (cross-chain transactions require at least two transactions on at least two asset ledgers);
- More duplicated computation across asset ledgers (reducing total system throughput): when the ETH-USDC price moves on Binance or Coinbase, arbitrage opportunities appear in every ETH-USDC pool on every asset ledger. One can easily imagine a world in which each price move on Binance or Coinbase triggers 10+ transactions across various asset ledgers; keeping prices consistent across fragmented state is an extremely inefficient use of block space (see the back-of-the-envelope sketch below).
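Here is that back-of-the-envelope sketch. Every parameter (number of price moves, gas per swap, gas per bridge leg) is a made-up assumption for illustration only; the point is simply that the block space burned on keeping prices consistent grows with the number of asset ledgers.

```python
# Back-of-the-envelope: block space spent keeping N fragmented ETH-USDC pools
# in line with a CEX price versus one pool on a single integrated ledger.
# Every number here is an illustrative assumption, not a measurement, and
# gas is summed across ledgers purely as a rough proxy for block space used.

price_moves_per_day = 10_000    # assumed meaningful ETH-USDC moves on Binance/Coinbase
gas_per_arb_swap = 150_000      # assumed gas for one arbitrage swap against a pool
gas_per_bridge_leg = 250_000    # assumed gas for one leg of a cross-chain bridge transfer

def daily_arb_gas(num_ledgers: int) -> int:
    # One arbitrage swap per pool per price move, plus inventory rebalancing:
    # assume one bridge transfer (two legs) per extra ledger per price move.
    swaps = price_moves_per_day * num_ledgers * gas_per_arb_swap
    bridges = price_moves_per_day * (num_ledgers - 1) * 2 * gas_per_bridge_leg
    return swaps + bridges

print(f"{daily_arb_gas(1):,}")   # 1 ledger:   1,500,000,000 gas/day
print(f"{daily_arb_gas(10):,}")  # 10 ledgers: 60,000,000,000 gas/day
```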
It’s important to realize that creating more asset ledgers significantly increases costs across all of these dimensions, especially as it relates to DeFi.
The primary input to DeFi is on-chain state (i.e., who owns which assets). When teams launch app chains/rollups, they inherently fragment state, which hurts DeFi both for developers, who must manage the added complexity (bridges, wallets, latency, cross-chain MEV, etc.), and for users, who face slippage and settlement delays.
DeFi works best when assets are issued on a single asset ledger and trade within a single state machine. The more asset ledgers there are, the more complexity application developers must manage and the higher the costs users must bear.
App rollups won't create new monetization opportunities for developers
App-chain/rollup proponents argue that incentives will push application developers to launch rollups rather than build on an L1 or L2, so that apps can capture MEV for themselves. However, this thinking is flawed: running an application rollup is not the only way to capture MEV back to application-layer tokens, and in most cases it is not the best way. Applications only need to encode the right logic in their smart contracts on a general-purpose chain to capture MEV back to their own tokens. Consider a few examples:
- Liquidations: if the Compound or Aave DAOs want to capture a portion of the MEV that currently flows to liquidation bots, they can simply update their contracts so that part of the fee now paid to liquidators goes to the DAO instead; no new chain or rollup is required.
- Oracles: oracle tokens can capture MEV by selling back-running rights. Alongside a price update, an oracle can bundle any arbitrary on-chain transaction that is guaranteed to execute immediately after the update. Oracles can therefore capture MEV by offering this back-running service to searchers, block builders, and others.
- NFT mints: NFT mints are rife with scalping bots. This can be mitigated simply by encoding a decaying redistribution of resale profits. For example, if someone resells an NFT within two weeks of mint, 100% of the proceeds go back to the issuer or DAO, with the percentage declining over time (a rough sketch of such a schedule follows this list).
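Here is that rough sketch. The schedule (100% for two weeks, linearly decaying to 0% by day 90) is a hypothetical illustration, not a rule proposed in the original article.

```python
# Minimal sketch of a decaying resale-profit redistribution for NFT mints.
# The schedule below (100% for two weeks, linearly decaying to 0% by day 90)
# is a hypothetical example, not a rule from the original article.

SECONDS_PER_DAY = 86_400

def issuer_share(seconds_since_mint: int) -> float:
    """Fraction of resale proceeds routed back to the issuer or DAO."""
    days = seconds_since_mint / SECONDS_PER_DAY
    if days <= 14:
        return 1.0                          # 100% within two weeks of mint
    if days >= 90:
        return 0.0                          # no redistribution after ~3 months
    return 1.0 - (days - 14) / (90 - 14)    # linear decay in between

def split_resale(proceeds: float, seconds_since_mint: int) -> tuple[float, float]:
    to_issuer = proceeds * issuer_share(seconds_since_mint)
    return to_issuer, proceeds - to_issuer

print(split_resale(5.0, 7 * SECONDS_PER_DAY))    # (5.0, 0.0): flipped one week after mint
print(split_resale(5.0, 52 * SECONDS_PER_DAY))   # (2.5, 2.5): half goes back at day 52
```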
There is no universal recipe for capturing MEV back to application-layer tokens. But with a little thought, application developers can easily capture MEV back to their own tokens on general-purpose chains. Launching an entirely new chain is simply unnecessary: it adds technical and social complexity for developers and creates more wallet and liquidity headaches for users.
App rollups cannot solve cross-application congestion
Many believe that app chains/rollups insulate applications from gas spikes caused by other on-chain activity, such as a popular NFT mint. This view is partly right but mostly wrong.
The root of this problem is historical: it stems from the single-threaded nature of the EVM, not from a failure to separate DA and execution. All L2s pay fees to the L1, and L1 fees can spike at any time. During the memecoin craze earlier this year, transaction fees on Arbitrum and Optimism briefly exceeded $10, and Optimism fees spiked again after the Worldcoin launch.
The only ways to deal with fee spikes are to 1) maximize L1 DA and 2) make fee markets as granular as possible.
If the L1's resources are constrained, usage spikes on individual L2s spill over to the L1, which in turn imposes higher costs on every other L2. App chains/rollups are therefore not immune to gas spikes.
Running many EVM L2s side by side is just a crude way of localizing the fee market. It is better than a single fee market on one EVM L1, but it does not solve the core problem. Once you recognize that the solution is localized fee markets, the logical endpoint is a fee market per piece of state, not per L2.
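To make the distinction concrete, here is a toy simulation of a single global fee market versus per-state fee markets. It is my own sketch under simplified assumptions and does not reflect how Solana or Aptos actually price fees; it only illustrates why localizing fees to the contended state isolates unrelated activity.

```python
# Toy comparison of one global fee market versus per-state (per-account) fee
# markets. This is an illustrative sketch, not how Solana or Aptos price fees.

from collections import Counter

# Each transaction is tagged with the piece of state it touches.
txs = ["hot_nft_mint"] * 900 + ["dex_swap"] * 50 + ["payments"] * 50

BASE_FEE = 1
CAPACITY = 100   # assumed capacity per fee market, in transactions per block

def fee(demand: int) -> int:
    # Simple congestion rule: the fee doubles for each multiple of capacity.
    return BASE_FEE * 2 ** (demand // CAPACITY)

# Global fee market: every transaction competes in one queue.
print({"global": fee(len(txs))})                # {'global': 1024} -- everyone pays up

# Per-state fee markets: demand is counted separately per piece of state.
print({state: fee(n) for state, n in Counter(txs).items()})
# {'hot_nft_mint': 512, 'dex_swap': 1, 'payments': 1} -- only the hot state pays up
```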
Other chains have already reached this conclusion: Solana and Aptos localize fee markets natively. This took years of engineering work on their respective execution environments, and most modularity proponents grossly underestimate how important and how difficult that engineering is.

Localized fee markets; source: https://blog.labeleven.dev/why-solana
Launching more chains does not unlock real performance gains for developers: whenever one application drives significant transaction volume, costs rise across all L2s.
Flexibility is overrated
Proponents of modular chains argue that modular architectures are more flexible. This statement is obviously true, but does it really matter?
For six years I have been trying to find application developers who need flexibility that a general-purpose L1 cannot provide. So far, beyond three very specific use cases, no one has been able to articulate why flexibility matters or how it directly helps scaling. The three use cases where I have found flexibility to matter are:
Applications that take advantage of "hot" state. Hot state is the state needed to coordinate some set of operations in real time; it is only committed to a chain ephemerally and is not stored forever (a minimal sketch of this pattern follows the three use cases below). A few examples of hot state:
- Limit orders in DEXs such as dYdX and Sei (many limit orders end up being canceled).
- Real-time coordination and matching of order flow in dFlow (a protocol that facilitates a decentralized order-flow marketplace between market makers and wallets).
- Oracles such as Pyth, a low-latency oracle that runs as a standalone SVM chain. Pyth produces so much data that the core team decided it was best to publish high-frequency price updates on a standalone chain and use Wormhole to bridge prices to other chains as needed.
Chains that modify consensus. The best examples are Osmosis (where all transactions are encrypted before being sent to validators) and Thorchain (where transactions within a block are prioritized by the fees paid).
Infrastructure that needs to leverage threshold signature schemes (TSS) in some way. Examples include Sommelier, Thorchain, Osmosis, Wormhole, and Web3Auth.
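Here is the minimal sketch of the hot-state pattern referenced above. It is a generic illustration under assumed semantics, not the actual design of dYdX, Sei, dFlow, or Pyth: live orders exist only in ephemeral working state, and only fills are ever committed to an asset ledger.

```python
# Rough sketch of "hot state": limit orders live only in a chain's working state
# and are never committed back to a base asset ledger; only fills settle. This is
# a generic illustration, not the actual dYdX, Sei, dFlow, or Pyth design.

hot_orders: dict[int, tuple[str, float, float]] = {}   # order_id -> (side, price, size)
committed_fills: list[tuple[int, float, float]] = []   # the only state that settles

def place(order_id: int, side: str, price: float, size: float) -> None:
    hot_orders[order_id] = (side, price, size)          # ephemeral: cheap to add

def cancel(order_id: int) -> None:
    hot_orders.pop(order_id, None)                      # leaves no trace on the ledger

def fill(order_id: int, size: float) -> None:
    _side, price, remaining = hot_orders.pop(order_id)
    committed_fills.append((order_id, price, min(size, remaining)))  # only fills settle

place(1, "buy", 2000.0, 1.0)
place(2, "buy", 1990.0, 2.0)
cancel(2)                  # a canceled order never touches the asset ledger
fill(1, 1.0)
print(committed_fills)     # [(1, 2000.0, 1.0)] -- the only state worth settling
```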
With the exception of Pyth and Wormhole, all of the examples listed above are built with the Cosmos SDK and run as standalone chains. This speaks volumes about the applicability and extensibility of the Cosmos SDK for all three use cases: hot state, modified consensus, and TSS systems.
However, most of the items in the three use cases above are not applications; they are infrastructure.
Pyth and dFlow are not applications; they are infrastructure. Sommelier, Wormhole, Sei, and Web3Auth are not applications; they are infrastructure. Among them, only one type of user-facing application appears: DEXs (dYdX, Osmosis, Thorchain).
For six years, I have been asking Cosmos and Polkadot proponents about the use cases that result from the flexibility they provide. I think there is enough data to make some inferences:
First, the infrastructure examples should not exist as rollups, either because they generate too much low-value data (hot state, whose whole point is that the data is never committed back to an L1) or because they perform functions that are intentionally unrelated to updating state on an asset ledger (for example, all of the TSS use cases).
Second, the only type of application I have seen that benefits from changing core system design is the DEX. DEXs are flooded with MEV, and general-purpose chains cannot match the latency of CEXs. Because consensus underpins both execution quality and MEV, consensus-level changes naturally open up many opportunities for DEX innovation. However, as noted earlier, the main input to a spot DEX is the assets being traded: DEXs compete for assets, and therefore for asset issuers. Under this framing, a standalone spot DEX chain is unlikely to succeed, because an asset issuer's primary consideration is not DEX-related MEV but general smart contract functionality and how that functionality integrates into the issuer's own applications.
Derivatives DEXs, however, do not need to compete for asset issuers. They mainly rely on collateral such as USDC and on oracles for price feeds, and they must lock user assets to collateralize derivatives positions anyway. So, to the extent standalone DEX chains make sense, they are most likely to make sense for derivatives-focused DEXs such as dYdX and Sei.
Now consider the applications that live on general-purpose, integrated L1s today: games, DeSoc systems (such as Farcaster and Lens), DePIN protocols (such as Helium, Hivemapper, Render Network, DIMO, and Daylight), Sound, NFT exchanges, and more. None of these benefit much from the flexibility of modified consensus, and they have a fairly simple, obvious, and common set of requirements for their asset ledger: low fees, low latency, access to spot DEXs, access to stablecoins, and access to fiat on- and off-ramps such as CEXs.
I believe we now have enough data to say with some confidence that the vast majority of user-facing applications share the general requirements listed above. While some applications can optimize other variables at the margin by customizing the stack, the trade-offs those customizations entail are usually not worth it (more bridging, less wallet support, less indexer and query support, fewer fiat on-ramps, etc.).
Rolling out a new asset ledger is one way to achieve flexibility, but it rarely adds value and almost always introduces technical and social complexity with little ultimate benefit to application developers.
Scaling DA does not require restaking
You will also hear modularity proponents talk about restaking in the context of scaling. This is the most speculative argument modular-chain proponents make, but it is worth addressing.
The argument roughly goes: through restaking (e.g., via systems like EigenLayer), the crypto ecosystem can restake ETH an unlimited number of times to power an unlimited number of DA layers (e.g., EigenDA) and execution layers. This supposedly solves scalability in every dimension while also accruing value to ETH.
Despite the enormous uncertainty between today's status quo and that theoretical future, let's assume all of the restaking assumptions work as advertised.
Ethereum's DA today is roughly 83 KB/s. With EIP-4844 later this year, that roughly doubles to about 166 KB/s. EigenDA can add roughly another 10 MB/s, but under a different set of security assumptions (not all ETH will be restaked to EigenDA).
By comparison, Solana today offers about 125 MB/s of DA (32,000 shreds per block, 1,280 bytes per shred, 2.5 blocks per second). Solana's DA far exceeds that of Ethereum and EigenDA, and it grows over time in line with Nielsen's Law.
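For a rough sense of scale, the figures cited above compare as follows (simple ratios of the article's own numbers, not fresh measurements):

```python
# Simple ratios of the DA throughput figures cited above (the article's numbers,
# not fresh measurements).
KB, MB = 1_000, 1_000_000

ethereum_today = 83 * KB      # ~83 KB/s
ethereum_4844  = 166 * KB     # ~166 KB/s after EIP-4844
eigenda_extra  = 10 * MB      # ~10 MB/s, under different security assumptions
solana_today   = 125 * MB     # ~125 MB/s per the parameters above

print(round(solana_today / ethereum_today))                    # ~1506x Ethereum today
print(round(solana_today / ethereum_4844))                     # ~753x Ethereum after EIP-4844
print(round(solana_today / (ethereum_4844 + eigenda_extra)))   # ~12x Ethereum plus EigenDA
```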
There are many ways to scale DA via restaking and modularity, but these mechanisms simply are not necessary today, and they introduce significant technical and social complexity.
Built for application developers
After many years of thinking, I've come to the conclusion that modularity should not be a goal in itself.
A blockchain must serve its customers, i.e., application developers. It should therefore abstract away infrastructure-level complexity so that developers can focus on building world-class applications.
Modularity is great, but the key to building winning technology is figuring out which parts of the stack to integrate and which to leave to others. As it stands, chains that integrate DA and execution offer an inherently simpler developer and end-user experience, and will ultimately provide a better foundation for best-in-class applications.
