Author: Haotian
Some friends say the sustained downtrend in web3 AI Agent tokens such as #ai16z and $arc was triggered by the recent buzz around the MCP protocol. At first glance I was confused: WTF is the connection? But after thinking it through, there is a real logic here: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and path to product delivery urgently need adjustment!
1) MCP (Model Context Protocol) is an open-source, standardized protocol designed to seamlessly connect AI LLMs/Agents to all kinds of data sources and tools. It works like a plug-and-play "universal" USB interface, replacing the bespoke end-to-end integrations of the past.
Put simply, AI applications used to be obvious data silos: for Agents/LLMs to interoperate, each had to build its own API integrations, which were complex to operate, lacked bidirectional interaction, and usually came with limited model access and strict permission restrictions.
MCP provides a unified framework that frees AI applications from these silos, enabling "dynamic" access to external data and tools. It significantly reduces development complexity and improves integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
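To see what this "universal interface" looks like in practice, here is a minimal server sketch following the documented quickstart shape of the official TypeScript SDK (@modelcontextprotocol/sdk); the get_token_price tool and its stub output are hypothetical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// One server, one declared tool: any MCP-capable client (Claude, an
// Agent framework, etc.) can discover and call it, with no bespoke,
// per-application API integration work.
const server = new McpServer({ name: "price-feed", version: "0.1.0" });

server.tool(
  "get_token_price",          // hypothetical tool name
  { symbol: z.string() },     // typed input schema
  async ({ symbol }) => ({
    // A real server would query an exchange or oracle here.
    content: [{ type: "text", text: `${symbol}: 42.0 USD (stub)` }],
  })
);

// stdio transport: the client launches this process and speaks
// JSON-RPC over stdin/stdout.
await server.connect(new StdioServerTransport());
```

The point of the standard is that the tool declaration above is all a data source has to publish; discovery, invocation, and permissioning then follow one shared protocol rather than N custom APIs.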
At this point, many people's first thought is: if Manus, the integration framework pushing multi-Agent collaboration, adopts the open-source MCP framework, wouldn't it be unstoppable?
Exactly. Manus + MCP is the real source of this round of impact on web3 AI Agents.
2) Here's the thing, though: Manus and MCP are both frameworks and protocol standards aimed at web2 LLMs/Agents. They solve data interaction and collaboration between centralized servers, and their permission and access control still depends on each server node choosing to open up "actively". In other words, they are merely open-source tooling.
Logically, this runs completely counter to the "distributed servers, distributed collaboration, distributed incentives" core that web3 AI Agents pursue. How could a centralized howitzer blow up a decentralized fortress?
The answer is that the first phase of web3 AI Agents was too "web2-ized". Partly this is because many teams come from web2 backgrounds and lack a full understanding of web3-native needs. Take the ElizaOS framework: it began as an encapsulation framework to help developers ship AI Agent applications quickly. It simply integrated the API interfaces of platforms like Twitter and Discord and models like OpenAI, Claude, and DeepSeek, and wrapped up common Memory and Character components so developers could build and launch AI Agent apps fast. But strictly speaking, how does that service framework differ from web2 open-source tooling? What is its differentiated advantage?
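To make the "just a wrapper" point concrete, here is roughly what an Agent definition in such an encapsulation framework looks like. Field names loosely follow ElizaOS's character-file convention but should be read as illustrative, not exact:

```typescript
// An ElizaOS-style "Character" is mostly configuration glued to
// web2 APIs; nothing here touches a chain. All values are made up.
const tradingAgent = {
  name: "AlphaScout",                  // hypothetical agent
  clients: ["twitter", "discord"],     // web2 platform integrations
  modelProvider: "anthropic",          // OpenAI / Claude / DeepSeek ...
  bio: ["Tracks narratives and posts market color."],
  style: { post: ["concise", "data-driven"] },
};
// Deploying this is indistinguishable from running any web2 bot:
// the framework supplies memory and prompt plumbing, the APIs do the rest.
```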
Uh, is the advantage just a Tokenomics incentive scheme? Using a framework web2 could replace wholesale to incentivize a batch of AI Agents that exist purely to issue new tokens? Terrifying... Follow this logic and you can roughly see why Manus + MCP hits web3 AI Agents so hard.
Because existing web3 AI Agent frameworks and services only address the same quick-development-and-deployment needs as web2 AI Agents, while in technical services, standards, and differentiated advantages they lag web2's pace of innovation, the market/capital has re-evaluated and repriced the last batch of web3 AI Agents.
3) So we've found the crux of the problem. But how to break out? There is only one path: focus on building web3-native solutions, because distributed systems and incentive architectures are web3's absolute differentiated advantage.
Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, a service model that aggregates idle resources seems unable to meet short-term demands for engineering-scale delivery and innovation, and while large AI LLMs are racing for centralized compute to achieve performance breakthroughs, a model whose selling point is "idle resources, low cost" will naturally be looked down on by web2 developers and VC teams.
But once web2 AI Agents move past the stage of competing on raw performance, they will inevitably pursue expansion into vertical application scenarios and fine-tuned model optimization. That is when the advantages of web3 AI resource services will truly show.
In fact, once the web2 AI players that climbed to giant status through resource monopoly reach a certain stage, it will be hard for them to turn back and apply "encircling the cities from the countryside" tactics, breaking into vertical scenarios one by one. That is when the time will be ripe for surplus web2 AI developers and web3 AI resources to join forces.
Indeed, beyond web2-style quick deployment + multi-Agent collaboration frameworks and Tokenomics issuance narratives, web3 AI Agents have many web3-native directions worth exploring:
For example, a distributed consensus collaboration framework: given the characteristics of LLM off-chain computation + on-chain state storage, many adapted components are needed (a combined interface sketch follows the list below).
1. A decentralized DID identity system, so that Agents carry verifiable on-chain identities, analogous to the unique addresses the virtual machine assigns to smart contracts, mainly so their subsequent state can be continuously tracked and recorded;
2. A decentralized Oracle system, responsible for the trusted acquisition and verification of off-chain data. Unlike traditional oracles, an oracle adapted to AI Agents will likely need a multi-Agent composite architecture spanning a data collection layer, a decision consensus layer, and an execution feedback layer, so that the on-chain data an Agent needs and its off-chain computation and decision-making stay synchronized in near real time;
3. A decentralized storage (DA) system: because an Agent's knowledge-base state is uncertain while it runs and its reasoning process is ephemeral, a system is needed to record the key state stores and reasoning paths behind the LLM into distributed storage, with a cost-controlled data-proof mechanism that guarantees data availability for public-chain verification;
4. A zero-knowledge-proof (ZKP) privacy computation layer, linkable to privacy-computing solutions such as TEE and FHE, enabling real-time private computation plus data-proof verification. This gives Agents access to a wider range of vertical data sources (medical, financial), on top of which more specialized, customized service Agents can emerge;
5. A cross-chain interoperability protocol, somewhat like the framework the open-source MCP protocol defines, except this interoperability layer needs relay and communication-scheduling mechanisms adapted to Agent operation, transmission, and verification, so that Agents can transfer assets and synchronize state across different chains, including complex state such as Agent context, Prompts, knowledge bases, and Memory.
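As promised above, here is a minimal TypeScript sketch of how these five components might surface as interfaces. Every name is hypothetical, a thought experiment under the list's assumptions rather than any existing protocol's API:

```typescript
// Hypothetical interfaces only -- not an existing protocol.

interface AgentDID {               // component 1: on-chain identity
  did: string;                     // e.g. "did:agent:0xabc..." (illustrative)
  controller: string;              // key or contract controlling the Agent
}

interface OracleReport {           // component 2: multi-Agent oracle
  collected: unknown;              // data collection layer output
  consensusSig: string;            // decision consensus layer attestation
  feedbackTx?: string;             // execution feedback layer anchor
}

interface StateAnchor {            // component 3: DA for reasoning state
  stateRoot: string;               // commitment to knowledge base / Memory
  reasoningTraceURI: string;       // pointer into distributed storage
  daProof: string;                 // cost-controlled availability proof
}

interface PrivateInference {       // component 4: ZKP/TEE/FHE layer
  outputCommitment: string;        // result without revealing inputs
  proof: string;                   // verifiable on-chain
}

interface AgentPacket {            // component 5: cross-chain interop
  identity: AgentDID;
  context: StateAnchor;            // context, Prompt, Memory travel as state
  destChainId: number;
}

// A verifier only ever sees commitments and proofs (the "trust
// verification flow"), while the heavy LLM workflow stays off-chain.
function verifyHop(pkt: AgentPacket, inf: PrivateInference): boolean {
  return pkt.context.daProof.length > 0 && inf.proof.length > 0; // stub check
}
```

The design point the sketch tries to capture: the chain never re-executes the Agent, it only verifies identity, availability, and proofs attached to state transitions.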
In my view, the real focus for web3 AI Agents should be on fitting the AI Agent's "complex workflow" and the blockchain's "trust verification flow" together as tightly as possible. These incremental solutions could come from existing old-narrative projects upgrading and iterating, or from newly formed projects on the AI Agent narrative track; either works.
This is the direction web3 AI Agents should strive to Build toward, and it fits the fundamental landscape of the AI + Crypto macro narrative. Without such innovation and differentiated competitive moats, every gust of wind in the web2 AI track could toss web3 AI upside down.