On February 23, 2026, what should have been a quiet Monday turned into IBM's worst single-day plunge since October 2000. The stock closed down 13.2%, erasing roughly $40 billion in market capitalization within hours. The trigger was not a disastrous earnings report or a regulatory crackdown but a product announcement: AI startup Anthropic said its Claude Code tool could modernize the COBOL codebases running on IBM mainframes, the maintenance of which has long been one of IBM's most profitable "moat" businesses.

Three days later, a similar scenario played out in the opposite direction. On February 26, Jack Dorsey's fintech company Block announced layoffs of roughly 4,000 employees, nearly 50% of its workforce, also citing AI-driven efficiency gains. The market reaction, however, was the mirror image: Block's stock jumped more than 24% in after-hours trading. In his letter to shareholders, Dorsey stated frankly, "I believe that most companies will come to the same conclusion and make similar structural adjustments within the next year."

Two events, the same driving factor (AI), and two opposite market reactions: one a sharp drop, the other a surge. What happened behind the scenes? The answer points to a deeper proposition: AI is redefining what counts as a valuable asset. For executives of listed companies, investors, and decision-makers at traditional enterprises, understanding this revaluation logic is no longer a forward-looking strategic exercise but a matter of immediate survival.

I. Same AI, Different Market Judgments
To understand the contrast between these two events, it helps to examine each company's asset structure. IBM's plunge, ostensibly triggered by the technological threat of Anthropic's Claude Code tool, was in reality the market repricing IBM's core asset model. COBOL, a programming language born in the late 1950s, still powers roughly 95% of ATM transactions worldwide and numerous core systems in finance, aviation, and government. Anthropic wrote in its blog post: "Hundreds of billions of lines of COBOL code run in production environments every day, powering critical systems. Despite this, the number of people who understand COBOL is decreasing year by year." Modernizing COBOL systems has long been a complex and costly undertaking, and that difficulty has been a key competitive advantage for IBM. Anthropic, however, claimed that "with the power of AI, teams can modernize the COBOL codebase in just a few quarters, without spending years." The subtext the market heard: IBM's labor-intensive system-maintenance and mainframe-service revenue was being eroded by AI. Interestingly, IBM's stock rebounded 2.68% the following day. Wall Street analysts at Wedbush and Evercore ISI quickly stepped in to steady the market, calling the plunge an "unfounded overreaction." Their reasoning went straight to the point: enterprise customers will not abandon their mainframe systems simply because a new AI tool can translate legacy code. There is a huge gap between translating code syntax and modernizing systems with deep hardware-software integration. IBM responded the same day, arguing that the real challenge of modernization is not the COBOL language but the IBM Z platform: translated code barely captures the actual complexity, and the platform's value comes from decades of hardware-software integration that code translation cannot transfer. Now consider the Block case.
Block's announcement also involved large-scale, AI-driven layoffs, yet the market responded with a 24% surge. The key lies in Block's changing asset structure. Since 2024, Block has been restructuring its business model and staffing while investing heavily in AI tools to improve operational efficiency, including building its own tool, Goose. Block's CFO, Amrita Ahuja, emphasized in explaining the layoffs: "We are taking bold and decisive action, but it is based on our strength." That strength is backed by data: gross profit for full-year 2025 reached $10.36 billion, up 17% year over year. This strong financial performance gives the company a buffer to carry out large-scale restructuring now. The market's interpretation is clear: Block is not passively shrinking under the impact of AI but actively optimizing its asset structure, trading less "human capital" for higher output per unit of "technology capital." Cutting nearly half the workforce while raising full-year guidance signals that the value of each unit of human output is being amplified by AI.
II. Four Types of Assets Being Repriced in the AI Era
These two cases reveal an emerging trend: AI is becoming a "repricing machine" for asset value. Different types of assets show drastically different value curves under AI's evaluation framework. The first category is human-capital-intensive assets. The value of "information processors" such as IBM's COBOL maintenance teams, traditional analysts, and programmers is being diluted by AI. Anthropic, in introducing Claude Code, noted that the tool can identify "risks that would take human analysts months to discover." This does not mean humans no longer matter; rather, the value of jobs that rely on information asymmetry and procedural knowledge is being compressed by technology. An important caveat: AI is replacing "information processing," not "value creation." Futurum Group analyst Mitch Ashley pointed out in a research report that successful COBOL modernization projects span multiple dimensions, including business scope definition, technology assessment, data migration planning, behavioral equivalence verification, observability, and organizational change management; code translation is only one part. The human abilities to navigate complex systems, grasp the essence of a business, and make strategic judgments remain scarce. The second category is data assets, which are becoming the high ground of value in the AI era. With the rapid development of generative AI, the value attributes of data are being reshaped. A study by Tang et al. published in *PLOS One* argues that generative AI is changing how data is acquired, processed, and utilized: the value of a data asset depends not only on its intrinsic quality and relevance but also on its application scenarios, transformation capabilities, and market demand within the generative-AI framework. This means that the uniqueness, continuity, and governability of data are becoming its core value dimensions.
A dataset may be extremely valuable in one scenario and useless in another. Companies that can supply exclusive, continuous, high-quality data for AI model training are gaining new pricing power. The third category is algorithm and model assets. EVMbench, a benchmark launched by OpenAI in collaboration with Paradigm to evaluate AI's ability to detect, patch, and exploit smart-contract vulnerabilities, itself demonstrates that algorithms are becoming quantifiable assets. Model weights, algorithmic frameworks, and training methodologies are turning into identifiable, controllable, and monetizable intangible assets. The fourth category is traditional tangible assets, which are diverging. Physical assets that rely on "information asymmetry" and "human intermediaries" face depreciation pressure, while physical assets with "AI-resistant" attributes, such as energy facilities, scarce resources, and core infrastructure, remain relatively stable in value. The reason is simple: AI can analyze and optimize how these assets operate, but it cannot replace their physical existence and value-carrying function.
III. From "Asset Revaluation" to "AI Immunity"
Based on the above analysis, enterprises need a systematic framework to judge whether their assets will appreciate or depreciate in the AI era. The RWA Research Institute has proposed an "AI-immune" asset identification framework built on three core characteristics. The first characteristic is non-encodability: value elements that AI finds difficult to fully learn or replicate. While COBOL code itself can be translated by AI, the transaction-processing capabilities, quantum-secure encryption, and eight-nines (99.999999%) reliability of the Z-series mainframes running those COBOL systems, built in at the chip level, are things AI tools cannot replicate. Research by Futurum Group makes the same point: "code translation cannot capture actual complexity; platform value comes from decades of hardware and software integration." Likewise, control of offline scenarios, tacit industry knowledge, and complex relationship networks, all hard to "encode," constitute an asset's first line of defense. The second characteristic is a data moat. Does the enterprise possess exclusive, continuous, and governable data assets? Does it merely use publicly available data, or can it generate data that others cannot access? CITIC Bank has begun exploring the use of large models to assess the value of data assets and is attempting to "include data assets on its balance sheet." The underlying logic: in the AI era, data is not only a raw material of production but an asset in itself. Not all data forms a moat, however; publicly available online data is quickly "digested" by AI models, and only companies with exclusive data sources command a premium under the AI valuation framework. The third characteristic is AI-empowerment resilience: can the asset itself be enhanced, rather than replaced, by AI? This is the key to distinguishing an IBM-style shock from a Block-style transformation.
IBM's core business of maintaining legacy COBOL systems is what AI "replaces," while Block's business model of payments and financial services can be "empowered" by AI. In fact, IBM itself has developed Watsonx Code Assistant for Z, a dedicated tool that lets clients securely refactor and modernize legacy code directly on the platform while preserving enterprise-grade security. When assets can work in synergy with AI rather than in opposition to it, their value increases. Conversely, AI-vulnerable assets share three traits: reliance on "information processing" as their core value, substitutability through standardized processes, and a lack of data generation and accumulation capabilities. By checking their asset portfolios against these three characteristics, companies can run a "stress test" on them.
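To make the stress test concrete, here is a minimal sketch of how a portfolio could be scored against the three characteristics. The 0-1 scales, equal weighting, and the 0.7/0.4 thresholds are illustrative assumptions of mine, not figures from the framework itself; a real assessment would be qualitative and multi-dimensional.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    non_encodability: float  # 0-1: how hard the asset's value is for AI to learn or replicate
    data_moat: float         # 0-1: exclusivity, continuity, governability of its data
    ai_leverage: float       # 0-1: degree to which AI enhances rather than replaces it

def stress_test(asset: Asset) -> str:
    """Classify an asset with a simple equal-weight average (thresholds are illustrative)."""
    score = (asset.non_encodability + asset.data_moat + asset.ai_leverage) / 3
    if score >= 0.7:
        return "AI-immune"
    if score >= 0.4:
        return "transformable"
    return "AI-vulnerable"

# Hypothetical scores echoing the article's two cases
portfolio = [
    Asset("COBOL maintenance services", 0.2, 0.3, 0.1),  # information processing, replaceable
    Asset("payments platform", 0.6, 0.8, 0.9),           # exclusive data, AI-empowered
]
for a in portfolio:
    print(f"{a.name}: {stress_test(a)}")
```

The point of the sketch is the shape of the exercise, not the numbers: a unit that scores low on all three axes is the IBM-style exposure, while one that scores high on data moat and AI leverage is the Block-style opportunity.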
IV. New Opportunities for RWA: What Assets Are Worthy of Tokenization?
Extending this framework to the field of RWA (real-world asset tokenization) yields a clear conclusion: RWA is not about putting "any asset on-chain" but about selecting, amid the AI revaluation wave, hard assets that can weather the AI cycle. In March 2026, the total value of on-chain RWA exceeded $25 billion, nearly quadrupling from the previous year. However, the Hong Kong Web3.0 Standardization Association, in the RWA industry white paper it released in August 2025, stated explicitly that "the idea that everything can be RWA is a false proposition." Assets that achieve large-scale implementation must clear three major hurdles: value stability, clear legal ownership, and verifiable off-chain data. Combined with the "AI immunity" framework, this can be refined further: the assets worth tokenizing are primarily those whose value remains stable through AI revaluation. The first category is physical assets with "AI-immune" characteristics, including energy assets, infrastructure, and scarce resources. Their value depends not on information processing but on physical existence and actual utility. The new-energy RWA (such as charging piles and photovoltaic assets) and computing-power assets like GPUs mentioned in the white paper fall into this category. Among them, GPU computing-power assets, with "rigid demand" from the AI industry and trustworthy "digital genes," are becoming ideal anchor assets for RWA. The second category is programmable data assets. Assets with exclusive data sources and the ability to monetize automatically through smart contracts possess both a "data moat" and "AI-enabled flexibility." The white paper groups data with intellectual property and carbon credits as intangible assets.
It is important to note, however, that not all data can become an asset; only data that is continuously generated, verifiable, and authorizable has the foundation for tokenization. The third category is hybrid assets, which combine "non-encodable" physical control with "programmable" digital rights. For example, ownership of commercial real estate can be tokenized while the actual operation, maintenance, and leasing of the property, the control of those offline scenarios, stays in the hands of professional institutions. This dual-layer "physical + digital" structure exploits blockchain's liquidity advantages while retaining an "AI-immune" offline value anchor. Conversely, two types of assets call for a cautious approach to tokenization in the AI era: financial assets that rely heavily on human intermediaries, whose value AI easily compresses, and standardized assets without a data moat, which lack bargaining power under the AI valuation framework.
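The screening logic above can be sketched as a simple filter: an asset qualifies only if it clears all three hurdles from the white paper and also passes the "AI immunity" check. The field names and the two example assets are hypothetical illustrations, not data from the white paper.

```python
# The white paper's three hurdles, expressed as boolean criteria (names are my own)
HURDLES = ("value_stability", "clear_legal_ownership", "verifiable_offchain_data")

def tokenization_ready(asset: dict) -> bool:
    """An asset qualifies only if it clears every hurdle AND survives the AI-immunity screen."""
    return all(asset.get(h, False) for h in HURDLES) and asset.get("ai_immune", False)

gpu_compute = {
    "name": "GPU computing power",
    "value_stability": True,
    "clear_legal_ownership": True,
    "verifiable_offchain_data": True,  # utilization telemetry is measurable on-device
    "ai_immune": True,                 # rigid demand from the AI industry itself
}
advisory_book = {
    "name": "human-intermediary advisory business",
    "value_stability": True,
    "clear_legal_ownership": True,
    "verifiable_offchain_data": False,
    "ai_immune": False,                # core value is information processing
}

print(tokenization_ready(gpu_compute))   # a GPU compute asset passes the screen
print(tokenization_ready(advisory_book)) # an intermediary-dependent asset does not
```

The design choice worth noting is the conjunction: tokenization readiness is an all-or-nothing gate, matching the article's point that a failing underlying asset is not rescued by going on-chain.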
V. Action Guidelines: From Cognition to Decision Making
IBM's $40 billion loss marks one side of an era: assets that rely on information asymmetry and manpower are being revalued by AI. Block's counter-trend rise marks the other: companies that embrace AI and optimize their asset structures are winning a market repricing. For decision-makers at listed companies and traditional enterprises, this is not merely technological anxiety but a fundamental restructuring of the asset value system. CEOs must answer an unavoidable question: how much is my asset portfolio worth in the eyes of AI? Three actionable suggestions follow from this analysis. First, immediately run an "AI stress test" on your assets. Evaluate each core business unit against the three characteristics of the "AI immunity" framework: non-encodability, data moat, and AI-empowerment resilience. Identify which businesses are most exposed to value depreciation under AI and which may benefit from AI's amplifying effect. Second, establish a dynamic asset-portfolio management mechanism. Under AI revaluation, asset allocation is no longer a static "buy and hold" exercise. Companies need to consciously raise the proportion of "AI-immune" assets while developing transformation or divestiture plans for AI-vulnerable ones. This is not solely the finance department's responsibility; it requires collaboration among strategy, technology, and business functions. Third, re-examine the RWA strategy. Before considering tokenization, screen the underlying assets with the "AI immunity" framework. The core value of RWA is not going "on-chain" per se but achieving better liquidity and pricing efficiency for high-quality assets through tokenization. If the underlying assets themselves depreciate in the AI era, tokenization only accelerates the loss of value. Finally, it is important to note that according to Document No.
42 jointly issued by eight Chinese government departments, any form of token issuance and tokenized trading is strictly prohibited within mainland China. The RWA tokenization discussed in this article refers only to asset-digitization practice within overseas compliance frameworks. When exploring related business, companies must strictly observe the regulatory red line of "strictly prohibited domestically, registered overseas." When AI begins to price assets, the only sense of security comes from what AI cannot price: not code, not data, but the human capacity to judge value itself. (This article is based on publicly available information and data from sources including Nasdaq, Tencent News, Futurum Group, PLOS One, 21st Century Business Herald, and Commercial Times. The views expressed do not constitute investment advice.)





