Original author: Haotian (X: @tmel0211)
Some friends have suggested that the persistent downtrend in web3 AI Agent tokens such as #ai16z and $arc was triggered by the recently popular MCP protocol. At first glance I was confused: WTF is the connection? But after thinking it over, I found there is a certain logic to it: the valuation and pricing logic of existing web3 AI Agents has changed, and their narrative direction and product delivery path urgently need adjustment. Here are my personal views:
1) MCP (Model Context Protocol) is an open-source, standardized protocol aimed at seamlessly connecting AI LLMs/Agents to all kinds of data sources and tools. It works like a pluggable "universal" USB interface, replacing the past practice of building "bespoke" end-to-end integrations for each connection.
In plain terms: AI applications used to be obvious data silos. For Agents/LLMs to interoperate, each had to develop its own API integrations, which not only made the process complex but also lacked bidirectional interaction, and usually came with limited model access and strict permission restrictions.
The emergence of MCP provides a unified framework that lets AI applications escape that data-silo state, making "dynamic" access to external data and tools possible. It can significantly reduce development complexity and improve integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration.
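To make the "universal interface" point concrete, here is a minimal sketch of an MCP tool server, following the quickstart shape of the official modelcontextprotocol Python SDK (the server name and the get_token_price tool are illustrative assumptions, not part of MCP itself):

```python
# Minimal MCP tool server: any MCP-compatible host (Claude Desktop, Cursor,
# etc.) can discover and call this tool over stdio, with no bespoke API glue.
# Assumes the official `mcp` Python SDK is installed (pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-data-source")

@mcp.tool()
def get_token_price(symbol: str) -> float:
    """Return the latest price for a token symbol (stubbed for illustration)."""
    prices = {"AI16Z": 0.21, "ARC": 0.05}  # hypothetical fixture data
    return prices.get(symbol.upper(), 0.0)

if __name__ == "__main__":
    mcp.run()  # default stdio transport: plug in and out like a USB device
```

The point of the standard is that the host side needs zero knowledge of this server's internals; it just enumerates the tools the server advertises.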
At this point, many people immediately wonder: if you combine Manus, the integration framework that showcased multi-Agent collaboration, with the open-source MCP framework, wouldn't that be invincible?
Exactly. Manus + MCP is the key to the recent hit that web3 AI Agents have taken.
2) The puzzling part is that both Manus and MCP are frameworks and protocol standards aimed at web2 LLMs/Agents. They solve the problem of data interaction and collaboration between centralized servers, and their permission and access control still depends on each server node "voluntarily" opening up. In other words, they are merely open-source tooling.
Logically, that runs completely counter to the core ideas web3 AI Agents pursue, such as "distributed servers, distributed collaboration, distributed incentives." How can a centralized artillery shell knock down a decentralized fortress?
The reason is that the first wave of web3 AI Agents was too "web2-ized." On the one hand, many teams came from web2 backgrounds and lacked a full understanding of web3-native needs. Take the ElizaOS framework: it started as a toolkit to help developers quickly deploy AI Agent applications. It integrated platforms like Twitter and Discord, wrapped "API interfaces" such as OpenAI, Claude, and DeepSeek, and encapsulated common components like Memory and Character so developers could quickly build and ship AI Agent applications. But strictly speaking, what is the difference between this kind of service framework and web2 open-source tools? What are its differentiated advantages?
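To see why I call this "just a wrapper," here is roughly what such a quick-deployment framework reduces to. This is a deliberate caricature with invented names, not ElizaOS's actual code (ElizaOS itself is written in TypeScript):

```python
# Caricature of what a web2-style quick-deployment Agent framework boils
# down to: an LLM API wrapper plus Character/Memory encapsulation plus
# social-platform hooks. All names are invented; this is NOT ElizaOS's API.
class QuickAgent:
    def __init__(self, character: str, llm_client):
        self.character = character   # the "Character" encapsulation
        self.memory: list[str] = []  # the "Memory" encapsulation
        self.llm = llm_client        # OpenAI / Claude / DeepSeek API wrapper

    def reply(self, message: str) -> str:
        self.memory.append(f"user: {message}")
        prompt = f"{self.character}\n" + "\n".join(self.memory[-10:])
        answer = self.llm(prompt)    # one centralized API call does the real work
        self.memory.append(f"agent: {answer}")
        return answer                # then post to Twitter/Discord via their APIs
```

Notice that nothing in this sketch touches a chain; that is precisely the critique.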
Uh, is the advantage just a Tokenomics incentive mechanism on top? Using a framework that web2 can fully replace to incentivize a batch of AI Agents that exist mainly to issue new tokens? Scary... Following this logic, you can roughly see why Manus + MCP can hit web3 AI Agents so hard.
Because web3 AI Agent frameworks and services only address the same quick-development-and-application needs as their web2 counterparts, while in technical services, standards, and differentiated advantages they still lag behind web2's pace of innovation, the market/capital has re-evaluated and repriced the previous batch of web3 AI Agents.
3) Now that the crux of the problem is clear, how do we break the deadlock? There is only one way: focus on building web3-native solutions, because distributed systems and incentive architectures are web3's absolute differentiated advantages.
Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, a service model that aggregates idle resources seems unable to meet short-term engineering and innovation needs. And while a large number of AI LLMs are racing for centralized compute to achieve performance breakthroughs, a service model whose "selling point" is "idle resources, low cost" will naturally be looked down on by web2 developers and VC teams.
But once web2 AI Agents move past the stage of competing on raw performance, they will inevitably pursue vertical application scenarios and fine-tuned model optimization. That is when the advantages of web3 AI resource services will truly show.
In fact, once web2 AI players that climbed to giant status through resource monopoly reach a certain stage, it will be hard for them to step back and apply the "encircle the cities from the countryside" playbook, conquering segmented scenarios one by one. That is when surplus web2 AI developers and the web3 AI resource alliance will be ripe to join forces.
In fact, beyond the web2-style quick deployment + multi-Agent collaboration/communication framework + Tokenomics token-issuance narrative, there are many web3-native directions worth exploring for web3 AI Agents:
For example, an Agent stack equipped with a distributed consensus collaboration framework. Given the combination of off-chain LLM computation and on-chain state storage, many adaptive components are needed (minimal sketches of each follow after this list):
1. A decentralized DID identity system that gives Agents verifiable on-chain identities, similar to the unique addresses a virtual machine generates for smart contracts, mainly so that subsequent state can be continuously tracked and recorded;
2. A decentralized Oracle system responsible for the trusted acquisition and verification of off-chain data. Unlike traditional oracles, an oracle adapted to AI Agents may need a combined Agent architecture of a data-collection layer, a decision-consensus layer, and an execution-feedback layer, so that the on-chain data an Agent needs and its off-chain computation and decisions can be connected in near real time;
3. A decentralized storage (DA) system. Because an Agent's knowledge-base state is uncertain while it runs and its reasoning process is ephemeral, a system is needed to record the key state libraries and reasoning paths behind the LLM into distributed storage, with a cost-controllable data-proof mechanism that ensures data availability during public-chain verification;
4. A zero-knowledge proof (ZKP) privacy computing layer that can link up with privacy computing solutions such as TEE and FHE to achieve real-time private computation plus data-proof verification, letting Agents access a wider range of vertical data sources (medical, financial) and enabling more professionally customized service Agents on top;
5. A cross-chain interoperability protocol, similar in spirit to the framework the open-source MCP protocol defines, except this interoperability solution needs relay and communication-scheduling mechanisms adapted to Agent operation, transmission, and verification, capable of completing cross-chain asset transfer and state synchronization for Agents, including complex state such as Agent context, Prompt, knowledge base, and Memory.
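To ground point 1: one way to give an Agent a verifiable identity is to derive it deterministically, the way the EVM derives contract addresses from deployer and nonce. A minimal sketch with my own field names (the real EVM uses RLP encoding plus keccak256; the stdlib sha3_256 below is a stand-in):

```python
import hashlib
import json

def derive_agent_did(deployer: str, nonce: int) -> str:
    """Derive a deterministic Agent identifier from deployer + nonce,
    analogous to EVM contract addresses. (The real EVM uses RLP encoding
    and keccak256; stdlib sha3_256 is used here as a stand-in.)"""
    preimage = json.dumps({"deployer": deployer, "nonce": nonce}, sort_keys=True)
    digest = hashlib.sha3_256(preimage.encode()).hexdigest()
    return "did:agent:0x" + digest[:40]  # 20-byte, address-style identifier

# The same (deployer, nonce) always yields the same DID, so every later
# state update can be tracked against one stable on-chain identity.
print(derive_agent_did("0xDEPLOYER", 0))
```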
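For point 2, the three-layer oracle could be skeletonized as below. The layer names follow the article; the classes, methods, and stubbed feeds are all my assumptions:

```python
# Skeleton of the three-layer, AI-adapted oracle described above.
# Class names, methods, and the stubbed feeds are all illustrative.
from dataclasses import dataclass
from statistics import median

@dataclass
class DataPoint:
    source: str
    value: float

class CollectionLayer:
    """Data-collection layer: fetch the same off-chain fact from many sources."""
    def collect(self) -> list[DataPoint]:
        # stand-ins for real API calls
        return [DataPoint("feed_a", 100.1), DataPoint("feed_b", 99.8),
                DataPoint("feed_c", 100.0)]

class ConsensusLayer:
    """Decision-consensus layer: reduce noisy reports to one agreed value."""
    def decide(self, points: list[DataPoint]) -> float:
        return median(p.value for p in points)

class FeedbackLayer:
    """Execution-feedback layer: write the agreed value on-chain and report back."""
    def execute(self, value: float) -> str:
        return f"tx: wrote {value} on-chain"  # placeholder for a real transaction

if __name__ == "__main__":
    points = CollectionLayer().collect()
    agreed = ConsensusLayer().decide(points)
    print(FeedbackLayer().execute(agreed))
```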
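For point 3, a cost-controllable "data proof" can be as light as anchoring one hash commitment of the reasoning path on-chain while the full trace lives in the DA layer. A minimal hash-chain sketch (hash choice and record format are assumptions):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def commit_reasoning_path(steps: list[str]) -> str:
    """Fold each reasoning step into a running hash chain. Only the final
    32-byte commitment needs to go on-chain; the full trace is stored in
    the distributed DA layer and can be re-hashed to verify availability."""
    acc = b"\x00" * 32  # genesis value
    for step in steps:
        acc = sha256(acc + sha256(step.encode()))
    return acc.hex()

trace = ["retrieve: market data", "reason: downtrend detected", "act: rebalance"]
print("on-chain commitment:", commit_reasoning_path(trace))
```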
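Point 4 is the hardest to sketch honestly, because a real ZKP stack needs a proving system (Groth16, Plonk, and the like). What can be shown compactly is the interface such a privacy layer might expose to Agents, with ZKP/TEE/FHE as pluggable backends. This is an assumed design, not an existing library:

```python
from abc import ABC, abstractmethod

class PrivacyBackend(ABC):
    """Pluggable privacy-computing backend: a ZKP circuit, a TEE enclave,
    or an FHE scheme would each implement this same interface."""

    @abstractmethod
    def compute(self, private_input: bytes) -> bytes:
        """Run the computation without exposing private_input."""

    @abstractmethod
    def attest(self, result: bytes) -> bytes:
        """Return a proof/attestation that result was computed correctly,
        verifiable later on-chain, without revealing the input."""

class VerticalAgent:
    """An Agent that queries sensitive (medical, financial) data sources
    through whichever privacy backend it is configured with."""
    def __init__(self, backend: PrivacyBackend):
        self.backend = backend

    def query_private_source(self, record: bytes) -> tuple[bytes, bytes]:
        result = self.backend.compute(record)
        proof = self.backend.attest(result)
        return result, proof
```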
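And for point 5: before any relay can move an Agent across chains, its complex state (context, Prompt, knowledge base, Memory) has to be serialized canonically and digested so the destination chain can check integrity. A minimal sketch with assumed field names:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass
class AgentState:
    context: str
    prompt: str
    knowledge_base_cid: str  # content address of the KB in distributed storage
    memory: list[str]

def state_digest(state: AgentState) -> str:
    """Canonical JSON serialization + hash, so the source and destination
    chains can agree that the relayed state arrived intact."""
    canonical = json.dumps(asdict(state), sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

state = AgentState(context="trading session",
                   prompt="you are a risk-averse agent",
                   knowledge_base_cid="bafy-example",
                   memory=["bought ETH", "set stop loss"])
print("transfer-message digest:", state_digest(state))
```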
In my view, the key for a truly web3-native AI Agent is making the "complex workflow" of AI Agents and the "trust verification flow" of blockchain fit together as closely as possible. These incremental solutions could come from upgrades and iterations of existing old-narrative projects, or from newly formed projects on the AI Agent narrative track; both paths are possible.
This is the direction web3 AI Agents should Build toward, and it fits the fundamental logic of the AI + Crypto macro-narrative ecosystem. Without relevant innovation and differentiated competitive moats, every gust of wind in the web2 AI track could flip web3 AI upside down.