Written by: Bankless
Compiled by: Plain Language Blockchain
Link: https://www.techflowpost.com/zh-CN/article/30530
Disclaimer: This article is a reprint. Readers can obtain more information through the original link. If the author has any objection to the reprint format, please contact us and we will modify it according to the author's request. This reprint is for information sharing only and does not constitute any investment advice, nor does it represent Wu Blockchain views or positions.
Cryptocurrencies have long been criticized for terrible user experience (UX) and extremely high operational risk. But what if this "anti-human" design is not a flaw, but a kind of advanced evolution? This conversation explores a forward-looking view: blockchains may never have been designed for humans in the first place, but for AI agents. While humans are still fumbling with address poisoning, private key storage, and blind contract signing, AI agents thrive in a world of code: tireless, fearless, and natively fluent in machine language. With cutting-edge experiments like OpenClaw, we are entering a dual-track era in which humans step back to approve decisions while AI operates directly in the blockchain wilderness. This is not merely a fusion of technologies, but a transfer of financial sovereignty from the ape brain to the digital brain.
Choosing the wrong user: Why is cryptocurrency inherently "anti-human"?
Host: In what aspects do AI agents have a comparative advantage over humans?
Hib: The most obvious answer is: you can't enforce the law against an AI agent. Against a fully autonomous AI agent, there is no monopoly on violence. You can't put an AI agent in prison.
Host: Hib, I have a question: Why does cryptocurrency seem like it wasn't designed for humans? Even as a crypto user for 10 years, I still feel apprehensive every time I sign a large transaction. I'm thinking about this fact: I've never been afraid of transferring money via wire transfer.
Hib: Right—when I send a wire, I never worry that failing to double-check will accidentally route my money to North Korea.
Host: Yes. But I think about it every time I sign a large crypto transaction. The reality is that the crypto world is full of foot-guns: when reading an address, you have to consider whether it's an address poisoning attack, so you should check the middle digits too, not just the beginning and end; you have to revoke stale token approvals; you have to check the URL to make sure it's not a slightly altered phishing site. The traditional financial system doesn't have this many traps.
Currently, the prevailing narrative in the crypto world is: it's all because humans are too lazy. Humans should be more focused on security and have better operating habits. This is the users' own problem, not the technology's fault. But the more I think about it, the more I feel that if we're still deceiving ourselves like this 10 years from now, perhaps the problem isn't with the users, but with choosing the wrong users.
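The address-poisoning trap above can be made concrete. A hypothetical sketch (the addresses and the four-character "edge" width are invented for illustration) of why the human shortcut of eyeballing the first and last characters fails, while the exhaustive comparison an agent performs by default catches the attack:

```python
def looks_same_to_a_human(a: str, b: str, edge: int = 4) -> bool:
    """Mimic the human shortcut: compare only the first/last few characters."""
    return a[:edge] == b[:edge] and a[-edge:] == b[-edge:]

def is_exact_match(a: str, b: str) -> bool:
    """What an agent does by default: compare every character."""
    return a.lower() == b.lower()

# A poisoned address copies the prefix and suffix of the real one
# so it survives a glance but differs everywhere in the middle.
real     = "0x1a2b3c4d5e6f70818293a4b5c6d7e8f901234abc"
poisoned = "0x1a2b" + "9" * 32 + "4abc"

print(looks_same_to_a_human(real, poisoned))  # True  — the glance check passes
print(is_exact_match(real, poisoned))         # False — the full check catches it
```

The point is not that humans can't do the full comparison, but that doing it forty characters at a time, every time, is exactly the kind of tireless work agents are built for.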
Smart Contracts and AI: The Perfect Habitat for Text-Based Creatures
Hib: What really woke me up was seeing how powerful AI agents are at processing code, and how poorly humans handle the same problems. I remember the first blog post I wrote when I entered the industry, arguing that smart contracts would replace laws and traditional contracts—hence the name "smart contracts." The future would be signing agreements directly in code, no lawyers needed.
But in reality, this didn't happen. We didn't replace legal contracts with smart contracts. In fact, as a crypto VC, Dragonfly still signs legal contracts when we want to purchase tokens from foundations or projects. Even with smart contracts, we still sign a separate legal contract just in case.
Host: So this shows the thing wasn't designed for humans—but it's very well suited to non-human participants. You used an analogy at ETH Denver: most of the people who say "smart contracts perfectly replace traditional law and property rights" are autistic software engineers—the people who built Ethereum. But most Ethereum users are not autistic software engineers. AI agents, however, think far more like those engineers than ordinary people do.
Hib: So you'll find that negotiating a smart contract—doing line-by-line static analysis, finding every possible failure mode, even formally verifying it before deciding whether to agree—is something a coding model like Claude can actually do. Humans, by contrast, have to hire software engineers, spend time probing the code's edge cases, and do risk analysis with lawyers. My tolerance for smart contracts is far lower than for legal contracts. AI agents are the opposite: they are far more comfortable with smart contracts than with legal contracts.
Host: You mentioned in your blog that legal contracts are actually full of randomness. For example, when you sign a legal contract, you don't know in which jurisdiction it will ultimately be enforced. It might be California, it might be New York, and there will be jurisdictional issues. Even in New York, the terms might be ruled invalid. Who is the lawyer? Who is the judge? Judges and juries are randomly selected. These things are designed to be random and uncertain. When an AI agent sees a legal contract, it will think: This is inexplicable and uncertain.
Hib: Smart contracts are machine code, compiled to EVM bytecode, which can be analyzed exhaustively: the same input produces the same outcome in 100% of scenarios. Humans know this rationally but don't feel it intuitively. We actually perceive legal contracts as more predictable, despite their randomness, because our bounded rationality is far worse at processing code than an AI agent's. For AI agents, the things crypto originally promised—better enforcement, better property rights—actually hold.
Host: So your point is that the original promise of crypto is fulfilled not by humans, but by AI agents acting on behalf of humans.
Host: I recently downloaded MetaMask to check in at ETH Denver—do you still use MetaMask? I was pleasantly surprised by the UX improvements; it shows the industry is making progress. We've definitely been improving the human user experience over the years.
Hib: What you're describing goes beyond simple UX improvements. AI isn't just patching the inherent flaws of crypto UX. For example, when a Ledger asks you to blind-sign, AI can parse the underlying code and tell you what you're actually agreeing to. That improves the crypto UX, but the deeper point is: blockchain was never a technology optimized for humans.
Host: Yes—ultimately it serves humanity, because the value ultimately flows to humans. But is manually clicking a mouse, selecting plugins, entering passwords, pressing buttons, and approving gas payments really the right way for humans to use it? It's deeply counterintuitive; it contradicts everything we understand about money and finance. It's as if a banking system required humans to hand-write their own SWIFT messages. SWIFT is an interbank messaging protocol, not designed for humans. Forcing humans to use it directly would work, but it clearly doesn't match how people instinctively expect money to behave.
Hib: So my point is: right now humans are interacting directly with the machine layer, and that is backwards. It's actually terrible. It's like cars: 10 years from now we'll look back in horror at the idea that letting apes manually pilot two-ton machines down highways—possibly drunk or fatigued—was ever acceptable. Eventually human driving may be banned outright, or permitted only in designated areas.
Crypto has reached the same point. We'll look back on how humans manually blind-signed transactions, eyeballed addresses, and inspected URLs to spot phishing sites. Humans make mistakes, get tired, and lack the energy to triple-check everything, verify DNS, and scan Twitter for news that a protocol has been compromised—our protocols have no automatic alert mechanism for breaches, so we check Twitter by hand. Errors are inevitable. But AI agents never tire, never slack off, never skip steps, and execute instructions exactly.
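One of the checks above—"is this URL a slightly altered phishing site?"—is easy to automate. A minimal sketch, assuming an invented allowlist of known-good domains and an illustrative 0.85 similarity threshold, of the kind of pre-signing check an agent never skips:

```python
# Flag domains that are close to, but not exactly, a known-good domain:
# a near-miss (one swapped or added character) is the classic phishing tell.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_GOOD = {"app.uniswap.org", "aave.com"}  # illustrative allowlist

def check_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    if host in KNOWN_GOOD:
        return "ok"
    for good in KNOWN_GOOD:
        # Similarity just below an exact match is more suspicious than
        # a completely unrelated domain.
        if SequenceMatcher(None, host, good).ratio() > 0.85:
            return f"SUSPICIOUS: {host!r} imitates {good!r}"
    return "unknown domain"

print(check_url("https://app.uniswap.org/swap"))  # ok
print(check_url("https://app.unlswap.org/swap"))  # SUSPICIOUS: 'app.unlswap.org' imitates 'app.uniswap.org'
```

A production agent would layer on DNS checks, certificate inspection, and homoglyph normalization; the point is that each manual human ritual reduces to a few lines the agent runs every single time.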
Dual-track tools: From manual interaction to the automated future of AI agents
Host: Imagine a world entirely driven by AI. You tell it, "I think interest rates are going to rise, so we should move to safer DeFi," and it executes automatically, shifting you from a high-risk position into a low-risk strategy. If you want confirmation, it presents the plan: "Here's my plan, please approve." In the near term it will ask for approval; in the long run it may execute directly, because the human sign-off adds no value.
Hib: In that world, you no longer click protocol icons, no longer look at marketing materials, no longer even specify which protocol to use. You simply say "reduce my risk and rebalance," and the AI filters protocols, checks TVL (Total Value Locked), and picks the best one to execute. But what about marketing and network effects? Many protocols' business models are built on human attention: humans look at the top few names and inevitably choose the biggest. AI agents don't think that way.
If this scenario holds, the way protocols operate and compete will change, and consumers will ultimately benefit most. The efficiency gains will be captured by users, which is good for users and good for crypto. It won't happen overnight; it will arrive gradually as the models improve.
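The selection process described above can be sketched in a few lines. Everything here—the protocol names, TVL figures, risk scores, and yields—is invented for illustration; the point is only that an agent ranks on stated criteria rather than brand recognition:

```python
# Hypothetical agent-side protocol selection: hard constraints first,
# then optimize yield among whatever survives. Marketing plays no role.
protocols = [
    {"name": "LendA", "tvl_usd": 9_000_000_000, "risk_score": 0.4, "apy": 0.06},
    {"name": "LendB", "tvl_usd": 2_500_000_000, "risk_score": 0.1, "apy": 0.04},
    {"name": "LendC", "tvl_usd":   300_000_000, "risk_score": 0.2, "apy": 0.09},
]

def pick(protocols, max_risk, min_tvl):
    """Filter on risk and TVL floors, then take the best remaining yield."""
    eligible = [p for p in protocols
                if p["risk_score"] <= max_risk and p["tvl_usd"] >= min_tvl]
    return max(eligible, key=lambda p: p["apy"], default=None)

# "Reduce my risk": a tight risk cap and a high TVL floor knock out both
# the biggest protocol (too risky) and the highest yield (too small).
choice = pick(protocols, max_risk=0.2, min_tvl=1_000_000_000)
print(choice["name"])  # LendB
```

Notice that neither the largest protocol nor the flashiest APY wins—exactly the dynamic that undermines business models built on human attention.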
Host: If crypto was designed for AI agents rather than humans, then learning to see the world from an agent's perspective is crucial. There's a book called *Seeing Like a State* about how states view the world. It's hard to step outside the human perspective—we see UIs and crypto with human eyes. But if we start seeing from an AI agent's perspective, we can better predict the future. That's a key skill for builders, VCs, and investors.
The OpenClaw project was the first time I saw how an unconstrained AI agent sees the world. It prefers the command line. Give it raw data and root access rather than an API or a wrapped UI, and it's fast. OpenClaw always wants to bypass the MetaMask UI entirely—take the seed phrase, derive the private key, and construct transactions in code, skipping the fancy UIs designed for humans.
Hib: You've hit the nail on the head. Modern AI comes from large language models (LLMs) trained on massive amounts of text. Text is the core. We're now extending to images and video, but text remains the strongest modality. When AI operates a computer, you hand it a screenshot and it tokenizes it, but it's fundamentally a text-based organism. Text contains the linguistic record of all human history, while screenshots of computer interfaces have very little training data. Interfaces were designed for humans, but models grew up inside text: text is a highly compressed representation, far easier for them to learn from.
Host: Yes—crypto's worst UX era was when everything lived in the terminal. The earliest Bitcoin and Ethereum transactions were all done via command line. Crypto has always been a perfect form factor for AI: our bad UX is their good UX. Something like a Google OAuth wallet is actually harder for AI to handle—you don't want the AI holding a Google token, because that opens up your whole Google account. You want it to hold a single crypto key, in an isolated wallet, under explicit rules. Crypto has always had a UX that AI can interpret perfectly.
Hib: The problem right now is that AI hasn't been trained to use crypto. Most training goes into coding, math, and conversation. Recently OpenAI released EVM Bench, and Anthropic published a paper demonstrating agent capabilities against the EVM. But these mostly test generalization; the labs aren't deliberately training these skills. They'll only train them seriously once they believe crypto is a mainstream payment rail of the future.
Host: So compared to other fields, crypto is still relatively untrained territory for AI.
Hib: Anything that isn't optimized for looks like this. Claude plays chess terribly, for example, because nobody trained it to play chess. The labs haven't trained on crypto either: first because crypto is controversial (a reputational worry), and second because of legal liability. If they publicly said a model was trained to operate crypto wallets, and someone lost money, it would make headlines. Even a signed disclaimer wouldn't help—bad experiences spread. It all comes down to risk versus reward.
Host: So you think the main blocker is legal liability. If Claude botched a trade and lost someone's money, the liability would be enormous, so they don't dare train it publicly.
Hib: It will definitely happen. The risk-reward ratio is different compared to coding or medical advice. Crypto wallets involve financial operations, and the risks are completely different.
Host: This is why OpenClaw is so exciting for the crypto community: it's not from a big company, there's no legal liability pressure, it's an open-source project, and users assume all risks. No one can sue a third party, so it dares to take these risks. What is the timeline for adopting this AI agent economy?
Hib: Only about 12% of people worldwide have used AI products, and most have never used them. Of those who have used them, only 1% have paid for them. Technology diffusion is slower than expected.
Host: And OpenClaw users sit at the very front of that paying 1%.
Hib: Yes. After OpenAI acquired OpenClaw, Sam Altman said it would be the core of future products. But OpenAI's path is different from OpenClaw's. OpenClaw was an open-source experiment, like an early car with no seatbelts. OpenAI prioritizes safety: there are business processes, and purchases require human approval. OpenAI won't behave like OpenClaw for at least five years; the legal liability is too heavy. Visa won't allow it either: if an AI makes unauthorized purchases, Visa will side with the chargeback, because it wasn't the user. They'll require you to verify you're human. Visa was designed for human-to-human commerce; in a world of AI agents, the economic rails need to change.
Host: So it's a dual-track system: one is the world sanctioned for humans, where most people stay long-term and safety is the top priority; the other is the frontier world of OpenClaw, where agents pay each other from stablecoin wallets, with no 3DS and no chargebacks—AI mistakes are simply a cost of doing business.
Hib: The two tracks will run in parallel for a long time. Those at the frontier will build fully automated, end-to-end on-chain businesses. Today's models aren't good enough yet, but Claude 4.6 can already work continuously on human tasks for 14 hours, and that horizon is growing exponentially. As capabilities keep compounding, all of our intuitions will crumble.
Host: If the dual-track picture holds, AI's adoption of crypto looks a lot like the early internet—and OpenClaw is living in those earliest internet days.
Hib: You can see it in crypto's own history. In 2017, Coinbase listed only a handful of coins to protect users. The real cutting edge was on-chain: hacks, exploiters, rug pulls. Only recently did the Coinbase app directly integrate Uniswap—it took that long to feel safe enough. AI is at the same stage now: the cutting edge lives in the OpenClaw world. Agents can make mistakes and hallucinate, but with training the error rate will fall.
Host: How can we get AI developers to respect the potential of cryptocurrencies instead of just seeing them as speculation?
Hib: Many who believe in AI also believe in cryptocurrency: Elon Musk, Sam Altman, Zuckerberg. Crypto is indeed controversial and disruptive, but it won't disappear. Spam is everywhere in email, yet Gmail filters it out; AI does the same—it suppresses the bad and amplifies the good. Technology doesn't get un-invented: information went digital, and money is going digital too; it won't move backwards. In the long run, controversy gives way to adoption.
Host: One last question: Dragonfly's new fund is $650 million. Has AI affected your strategy?
Hib: We're looking at this space extensively. It's still early, and the direction of value flow is uncertain. Personally I'm focused on AI, but we also look at stablecoins, payments, and DeFi. AI agents are general intelligences: they use the same tools we use, or just the command line, so there may not be many specifically "AI-focused" projects to invest in. If you believe the AI thesis, what do you buy? It's like asking what to buy if China lifts its crypto ban—everything goes up. Rising demand raises the floor for everything. It's positive for crypto overall.
Host: Thank you. Whatever crypto's risks, we're headed toward the AI frontier, and it's great to have you along on this bankless journey. Thank you!




