Written by: Cuy Sheffield, Vice President and Head of Crypto at Visa
Compiled by: Saoirse, Foresight News
As crypto and AI mature, the most important shift in both fields is from "theoretically feasible" to "reliably workable in practice." Both technologies have cleared key hurdles and posted significant performance gains, yet adoption remains uneven. The core development trend of 2026 stems precisely from this gap between performance and adoption.
Below are the core themes I have followed for a long time, along with my initial thoughts on where the technology is heading, where value will accumulate, and why the eventual winners may look very different from the industry's pioneers.
Theme 1: Crypto is transitioning from a speculative asset class to a superior technology
Crypto's first decade was defined by a speculative advantage: its markets are global, always-on, and highly open, and extreme volatility made crypto trading more dynamic and attractive than traditional financial markets.
At the same time, however, the underlying technology was not ready for mainstream use: early blockchains were slow, expensive, and unstable. Outside of speculative scenarios, crypto almost never beat existing systems on cost, speed, or convenience.
That imbalance is now starting to reverse. Blockchains have become faster, cheaper, and more reliable, and crypto's most compelling application is no longer speculation but infrastructure, especially settlement and payments. As crypto matures into a superior technology, speculation will gradually lose its central role: it won't disappear entirely, but it will no longer be the primary source of value.
Theme 2: Stablecoins are crypto's clearest pure-utility success
Unlike previous crypto narratives, stablecoins succeed on concrete, objective criteria: in specific scenarios they are faster, cheaper, and reach more places than traditional payment rails, while integrating cleanly into modern software systems.
Stablecoins don't require users to believe in crypto as an ideology, and their adoption often happens invisibly inside existing products and workflows. That has finally let institutions and companies that once dismissed the crypto ecosystem as too volatile and too opaque see its value clearly.
In effect, stablecoins have re-anchored crypto to utility rather than speculation and set a clear benchmark for what successful crypto adoption looks like.
Theme 3: As crypto becomes infrastructure, distribution matters more than technological novelty
When crypto served mainly as a speculative instrument, its distribution was endogenous: a new token only had to exist to attract liquidity and attention.
As crypto becomes infrastructure, its applications are moving from the market level to the product level: it is embedded in payment flows, platforms, and enterprise systems, and end users are often unaware it is there.
This shift strongly favors two kinds of players: businesses with existing distribution channels and trusted customer relationships, and institutions with regulatory approvals, compliance programs, and risk-control infrastructure. Protocol novelty alone is no longer enough to drive large-scale adoption.
Theme 4: AI agents have real utility, and their impact is spreading beyond coding
The practical value of AI agents is becoming clear, but their role is often misunderstood: the most successful agents are not autonomous decision-makers but tools that reduce coordination costs inside workflows.
So far this has been most visible in software development, where agent tools have accelerated coding, debugging, refactoring, and environment setup. Recently, though, that tool value has been spreading to many more fields.
Take a tool like Claude Code. Although it is positioned as a developer tool, its rapid adoption reflects a deeper trend: agent systems are becoming an interface for knowledge work rather than being confined to programming. Users are applying agent-driven workflows to research, analysis, writing, planning, data processing, and operational tasks, work that looks more like general professional work than traditional programming.
The real key is not "vibe coding" itself, but the core pattern behind it:
- The user delegates a goal, not specific steps;
- The agent manages context across files, tools, and tasks;
- The mode of work shifts from linear progression to iterative, dialogic collaboration.
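This pattern can be sketched as a toy loop. Everything here is a stand-in invented for illustration (the model, the single tool, and the stopping rule are not any real agent framework or API); the point is only the shape: a goal comes in, the agent manages context across iterative tool calls, and the step budget is bounded.

```python
# Toy sketch of the agent pattern: a goal in, iterative tool calls with
# agent-managed context, and a bounded step budget. The "model" and "tool"
# below are stand-in functions, not any real API.

def stub_model(goal, context):
    # Stand-in for an LLM call: pick the next action from goal + context.
    if "data.csv loaded" not in context:
        return ("call_tool", "data.csv")       # needs more context first
    return ("finish", f"summary for: {goal}")  # enough context to answer

def stub_tool(arg):
    # Stand-in for a file/tool call the agent can execute.
    return f"{arg} loaded"

def run_agent(goal, max_steps=5):
    context = []  # the agent, not the user, tracks state across steps
    for _ in range(max_steps):
        action, payload = stub_model(goal, " | ".join(context))
        if action == "finish":
            return payload
        context.append(stub_tool(payload))  # bounded tool call, result recorded
    return None  # step budget exhausted: stop rather than loop forever
```

Note the division of labor: the user states intent once, while the loop, the context string, and the tool results are all handled by the agent.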
Across knowledge work, agents excel at gathering context, executing bounded tasks, reducing handoffs, and speeding up iteration, but they still fall short on open-ended judgment, accountability, and error correction.
That is why most agents in production today are scoped, supervised, and embedded in larger systems rather than fully independent. Their real value lies in restructuring knowledge workflows, not in replacing labor or achieving full autonomy.
Theme 5: AI's bottleneck has shifted from intelligence to trust
Model intelligence has improved rapidly. The limiting factor today is no longer raw fluency or reasoning ability but reliability inside real systems.
Production environments have zero tolerance for three kinds of problems: hallucinations (fabricated information), inconsistent outputs, and opaque failure modes. Once AI touches customer service, financial transactions, or compliance processes, "mostly correct" is no longer acceptable.
Building trust requires four foundations: traceable results, memory, verifiability, and the ability to proactively surface uncertainty. Until these capabilities mature, AI autonomy must remain limited.
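One way to make the traceability, verifiability, and exposed-uncertainty foundations concrete in code: every answer carries its sources and a confidence score, and anything without a cited source or below a confidence bar is escalated to a human instead of returned. The names and threshold below are invented for illustration, not a real system's API.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    text: str
    sources: list = field(default_factory=list)  # traceability: where it came from
    confidence: float = 0.0                      # uncertainty, exposed explicitly

def route(answer, threshold=0.8):
    # Verifiability plus an uncertainty gate: an answer with no cited source,
    # or with confidence below the bar, goes to a human instead of the user.
    if answer.sources and answer.confidence >= threshold:
        return "auto"
    return "human_review"
```

The design choice is that "I'm not sure" is a first-class output, not a failure: limited autonomy means the system decides what it is allowed to answer on its own.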
Theme 6: Systems engineering determines whether AI works in production
Successful AI products treat the model as a component, not the finished product; their reliability comes from architectural design, not prompt optimization.
Architecture here means state management, control flow, evaluation and monitoring, and fault handling and recovery. This is why building with AI today increasingly resembles traditional software engineering rather than frontier research.
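As a small sketch of "model as component," here is a toy example assuming a model call that sometimes returns malformed output (flaky_model is a stand-in, not a real API): the surrounding control flow validates, retries, and fails explicitly instead of trusting the raw reply.

```python
import json

def flaky_model(prompt, attempt):
    # Stand-in for a model call: the first reply is malformed, the retry is valid.
    return "not json" if attempt == 0 else '{"label": "ok"}'

def classify(prompt, retries=2):
    # The model is one component: validate its output, retry on failure,
    # and fall back to an explicit safe default instead of crashing.
    for attempt in range(retries):
        raw = flaky_model(prompt, attempt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue              # a real system would log and alert here
        if "label" in parsed:     # minimal schema check on the output
            return parsed["label"]
    return "needs_review"         # explicit failure mode, not an exception
```

All of the reliability lives outside the model call: the retry loop, the schema check, and the safe default are ordinary software engineering.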
Long-term value will accrue to two kinds of players: system builders, and platform owners who control workflows and distribution channels.
As agent tools expand from coding into research, writing, analysis, and operations, systems engineering will matter even more: knowledge work is complex, stateful, and context-intensive, so agents that can reliably manage memory, tools, and iteration, rather than merely generate output, will be worth more.
Theme 7: The tension between open models and centralized control raises unresolved governance questions
As AI systems grow more powerful and more deeply woven into the economy, the question of who owns and controls the most capable models is becoming a core conflict.
On one hand, frontier AI research remains capital-intensive and is concentrating further under the pressures of compute access, regulation, and geopolitics. On the other, open-source models and tools keep improving, driven by broad experimentation and easy deployment.
This coexistence of centralization and openness leaves a series of questions unresolved: dependency risk, auditability, transparency, long-term bargaining power, and control over critical infrastructure. The most likely outcome is a hybrid: frontier models drive technological breakthroughs, while open or semi-open systems carry those capabilities into widely distributed software.
Theme 8: Programmable money enables new agent payment flows
As AI systems take on roles in workflows, their need for economic interaction grows: paying for services, calling APIs, paying other agents, or settling usage-based fees.
That demand has put stablecoins back in the spotlight as a machine-native currency: programmable, auditable, and transferable without human intervention.
Take x402, a developer-oriented protocol. It is still an early experiment, but its direction is clear: payment flows will run as APIs rather than checkout pages, enabling continuous, fine-grained transactions between software agents.
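The general pattern can be sketched as follows. To be clear, the field names and flow below are invented for illustration and are not the actual x402 wire format: a server answers a request with an HTTP 402-style "payment required" quote, and the agent settles programmatically and retries.

```python
# Invented sketch of the "payments as APIs" pattern: a 402 response carries
# a price, the agent pays and retries with proof. Not the real x402 format.

def api_server(request):
    if "payment_proof" not in request:
        return {"status": 402, "price": "0.01", "currency": "USDC"}
    return {"status": 200, "body": "premium data"}

def pay(price, currency):
    # Stand-in for an on-chain stablecoin transfer; returns a receipt id.
    return f"tx:{currency}:{price}"

def agent_fetch(request):
    resp = api_server(request)
    if resp["status"] == 402:
        # No checkout page: the agent settles the quoted price and retries.
        request["payment_proof"] = pay(resp["price"], resp["currency"])
        resp = api_server(request)
    return resp
```

Because the whole exchange is machine-readable, per-request micropayments like this can run continuously between agents with no human in the loop.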
The field is still in its infancy: transaction volumes are small, the user experience is rough, and security and permission systems are immature. But infrastructure innovation usually starts with exactly this kind of early exploration.
Notably, the point is not autonomy for its own sake, but that when software can transact programmatically, new economic behaviors become possible.
Conclusion
In their early stages, both crypto and AI favored eye-catching concepts and technological novelty; in the next phase, reliability, governance, and distribution will become the more important competitive dimensions.
Today, technology itself is no longer the main limiting factor; the key is embedding it into real systems.
In my view, the defining feature of 2026 will not be a single breakthrough technology but the steady accumulation of infrastructure: infrastructure that operates quietly while reshaping how value moves and how work gets done.




