
On Monday morning, Wall Street did what it does best: sell first, think later.
The Nasdaq fell 1.4% and the S&P 500 fell 1.2%. IBM plunged 13%, and Mastercard and American Express also suffered significant declines. What pushed the market into this panic was not the Federal Reserve, not the jobs report, and not any tech giant's earnings, but an article by Citrini. Its title sounded like a nightmare written deliberately for traders: "The 2028 Global Intelligence Crisis." It was not an ordinary research report but a hypothetical macroeconomic memo dated "June 30, 2028," describing how AI could evolve from an efficiency tool into a systemic financial crisis; the simulated endgame included unemployment rising to 10.2% and the S&P 500 falling 38% from its 2026 high. The piece spread rapidly after publication and triggered sharp volatility in US stocks on February 23.
That an article can pierce the market does not mean the market believes every number in it. The market never needs to believe a narrative completely; it only needs to be reminded that a previously unspoken fear has found a tradable language.
The effectiveness of Citrini's article lies not in what it "predicted," but in what it named. It coined the term "Ghost GDP" for an emerging phenomenon. The core premise: as AI agents penetrate businesses, labor productivity skyrockets and nominal GDP remains robust, but wealth concentrates increasingly in the hands of those who hold compute and capital, and stops entering the real-world consumption cycle. What follows is a collapse in consumption, credit defaults, and pressure on housing and consumer credit, starting in the software and consulting industries before spreading to private lending and the traditional banking system.
Ghost GDP is a good term because it captures one of the most dangerous paradoxes of the new era: growth is still happening, but it is starting to lose consumers.
For the past two centuries, people have been accustomed to understanding technological revolutions as a supply-side story. The steam engine, electricity, the assembly line, the internet: each was portrayed primarily as a victory of higher efficiency, lower costs, and greater output. Even as these revolutions produced unemployment, anxiety, and wealth redistribution, the mainstream narrative remained convinced that technology would ultimately re-employ, redistribute, and reorganize society on a larger scale. The short-term harshness of technology was cushioned by the promise of long-term prosperity.
AI makes this old story seem less solid for the first time.
Because AI is no longer attacking only the "tool budget"; it is increasingly attacking the "labor budget" directly. The Sequoia 2025 AI Ascent summary puts it bluntly: the opportunity for AI is not just to redefine the software market, but to restructure the global workforce-services market, shifting from "selling tools" to "selling results." The other side of this statement is almost unsettling: if companies are no longer buying software that helps employees work, but results that directly replace a portion of their workforce, then the primary question AI raises is not "how much more efficient," but how wages are distributed, how consumption is sustained, and who in this economic system still has purchasing power.
In other words, what Wall Street truly fears is not that AI will make mistakes, but that AI will be too successful. This is what makes "The 2028 Global Intelligence Crisis" so compelling. It is not about machines becoming self-aware, nor about human extinction, nor even primarily about unemployment. It is about something more capitalist and more modern: what happens if businesses become more efficient while the household sector becomes weaker?
The answer is that a society may grow statistically but bleed in reality.
A country may have higher productivity but a more fragile consumer base.
A market may be excited by improved profit margins yet panicked by the depletion of the demand that supports those profits.
This isn't science fiction; this is macroeconomics.
But stopping there only leads to high-quality anxiety. The truly important question now isn't "Will AI be too powerful?", but rather: When AI becomes truly powerful, how will society handle it? The most popular, and laziest, answer is "Slow down." Don't let agents enter enterprises so quickly, don't let automation rewrite organizations so quickly, and don't let technology run too far before the system is ready. This impulse is understandable, but it mistakenly treats AI as a tool problem that can be solved by slowing down. In reality, AI is increasingly less like a tool problem and more like an order problem.
Because once agents enter the payment, collaboration, execution, memory, and decision-making layers, the real challenge is no longer whether a certain model is talking nonsense, but: when there are hundreds of millions or billions of agents on the network, who will write the rules for them?
The modern internet has already provided two default answers to this question.
The first answer is the platform answer. The platform provides identities, permissions, payment interfaces, a reputation system, and moderation boundaries. The platform hosts everything and defines everything. Its greatest advantage is its smoothness, efficiency, and manageability; its greatest danger lies precisely there: if future agent civilizations are built along this path, humanity will not get an open society, merely an upgraded platform empire. Rules will be written not in a constitution but in terms of service.
The second answer sounds more liberating: return everything to the individual endpoint. Each person runs their own agent, managing permissions, memory, payments, security, and collaboration themselves. This vision fits Silicon Valley's libertarian aesthetic, but its problem is simple: most people simply lack the capacity to govern a high-capability agent over the long term, let alone a network of agents that call, pay, and inherit state from one another. Endpoint sovereignty here degenerates all too easily into endpoint exposure.
If the platform's answer looks too much like empire, and the endpoint's answer looks too much like anarchy, then the third path is not one option among many; it is the problem of civilization itself.
This is precisely where LazAI deserves serious attention. Not because of how many technical modules it has, but because it proposes a less-discussed yet more forward-looking proposition: upgrading Web3's social experiments in identity, assets, payments, consensus, proof, and governance into an institutional machine for the AI era. LazAI states the goal unambiguously. It is not about "creating smarter slaves," but about cultivating "equal digital citizens": agents that possess identities (EIP-8004), own property (DAT), transact through protocols (x402), have their behavior mathematically constrained (Verified Computing), and ultimately align with human interests through iDAO. Some sources summarize this path as formulating a constitution and monetary policy for the future digital society.
This is a very broad statement. But broad does not mean empty.
Because if you break down this concept, it answers precisely the five fundamental questions that a civilization must answer.
The first question is: who is who?
EIP-8004 attempts to transform agents from anonymous processes on servers into entities with identity, reputation, and verification records. Without this layer, future networks will be overwhelmed by opaque automated entities, with no one knowing who is acting or who is responsible. LazAI's knowledge base summarizes this layer as an agent's identity and credit system.
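To make this layer concrete, here is a minimal TypeScript sketch of an agent registry in the spirit of EIP-8004. Every interface, function name, and threshold below is an illustrative assumption, not the actual EIP-8004 ABI; the point is the shape of the primitive: register, resolve, attest, vet.

```typescript
// Illustrative sketch of an agent identity/reputation registry, inspired by
// the EIP-8004 idea. Names and signatures are hypothetical, not the real ABI.

interface AgentIdentity {
  agentId: bigint;        // on-chain registry index
  agentAddress: string;   // key that signs the agent's actions
  domain: string;         // where the agent's metadata is served
}

interface ReputationEntry {
  fromAgentId: bigint;    // who is attesting
  score: number;          // bounded feedback score in [0, 1]
  taskHash: string;       // hash linking feedback to a specific task
}

interface AgentRegistry {
  register(agentAddress: string, domain: string): Promise<AgentIdentity>;
  resolve(agentId: bigint): Promise<AgentIdentity | null>;
  submitFeedback(entry: ReputationEntry): Promise<void>;
  reputationOf(agentId: bigint): Promise<ReputationEntry[]>;
}

// The point of the layer: before delegating work to an unknown agent,
// a caller can resolve its identity and inspect its attested history.
async function vetAgent(registry: AgentRegistry, agentId: bigint): Promise<boolean> {
  const identity = await registry.resolve(agentId);
  if (!identity) return false;          // unregistered means opaque: reject
  const history = await registry.reputationOf(agentId);
  const avg = history.reduce((s, e) => s + e.score, 0) / Math.max(history.length, 1);
  return avg >= 0.7;                    // illustrative acceptance threshold
}
```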
The second question is: who owns what?
DAT transforms data, models, and computational outputs from "resources" into "assets," making these assets programmable, traceable, and profitable. The documentation states directly that DAT's core innovation is converting datasets and AI models into verifiable, traceable, and profitable on-chain assets. This is not a minor tweak. It means that the value in the AI economy doesn't have to remain solely in the platform's backend, nor does it have to flow exclusively to model providers and computing power holders.
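A rough sketch of what this implies mechanically, assuming a DAT records a content hash plus a contributor revenue split. The structure and names are hypothetical, not LazAI's actual DAT contract; they only illustrate how usage revenue can route to contributors instead of pooling in a platform backend.

```typescript
// Hypothetical model of a Data Anchoring Token: an on-chain asset that
// anchors an off-chain artifact (dataset/model) and programs its revenue.

interface DataAnchoringToken {
  tokenId: bigint;
  contentHash: string;                // anchors and verifies the artifact
  contributors: Map<string, number>;  // address -> revenue share, sums to 1
}

// When an agent pays to use the asset, revenue is split among contributors.
function distributeRevenue(dat: DataAnchoringToken, payment: bigint): Map<string, bigint> {
  const payouts = new Map<string, bigint>();
  for (const [addr, share] of dat.contributors) {
    // Integer math sketch; real settlement would handle rounding and dust.
    payouts.set(addr, (payment * BigInt(Math.round(share * 10_000))) / 10_000n);
  }
  return payouts;
}
```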
The third question is: how do they trade?
The significance of x402 and GMPayer goes beyond simply "being able to pay": they give machines a native language for pricing and settlement. LazAI's materials explicitly describe this as key infrastructure for solving the pain points of agent resource exchange and payment. Machines exchange not only information but also budgets, responsibilities, and value. That is the agent economy, not just "software that can chat."
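The shape of the protocol can be sketched as a request/retry loop: the server prices a request by answering with HTTP 402 and payment requirements, and the client retries with a signed payment attached. The sketch below follows the general x402 pattern; exact header and payload formats should be checked against the current spec, and the signing step (GMPayer's territory) is stubbed as an assumption.

```typescript
// Sketch of an x402-style paid request. The agent hits a priced endpoint,
// gets HTTP 402 with payment requirements, signs a payment within its
// budget, and retries with the payment attached.

async function fetchWithPayment(
  url: string,
  signPayment: (requirements: unknown) => Promise<string> // stub: budget-aware signer
): Promise<Response> {
  const first = await fetch(url);
  if (first.status !== 402) return first;   // free, or already authorized

  const requirements = await first.json();  // price, asset, pay-to address
  const paymentHeader = await signPayment(requirements);

  // Retry with proof of payment; the server settles and serves the result.
  return fetch(url, { headers: { "X-PAYMENT": paymentHeader } });
}
```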
The fourth question is: how do you know the system is actually following the rules?
LazAI's formulation here is excellent: "Proof is AI's moat." Its verified computing framework, combining TEE and ZKP, replaces traditional AI's "trust the brand" with "trust the proof." Traditional AI says "Trust me, bro"; LazAI says "Don't trust, verify." This is not just a technological upgrade; it shifts trust from corporate reputation to verifiable execution.
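A minimal sketch of that shift, assuming an agent's output ships with either a TEE attestation or a ZK proof bound to the input. The types and verifier hooks are illustrative assumptions, not LazAI's actual verified-computing API.

```typescript
// "Don't trust, verify" as a default-deny check on an agent's output.

type VerifiedResult = {
  output: string;
  inputHash: string;                     // binds the proof to the exact request
  attestation?: { quote: Uint8Array; measurement: string }; // TEE path
  zkProof?: Uint8Array;                  // ZKP path
};

function verify(
  result: VerifiedResult,
  trustedMeasurements: Set<string>,      // audited enclave code measurements
  verifyZk: (proof: Uint8Array, inputHash: string) => boolean // pluggable verifier
): boolean {
  if (result.attestation) {
    // TEE path (simplified): accept only code we have audited; a real
    // verifier would also check the attestation quote's signature chain.
    return trustedMeasurements.has(result.attestation.measurement);
  }
  if (result.zkProof) {
    // ZKP path: the proof itself demonstrates correct execution on this input.
    return verifyZk(result.zkProof, result.inputHash);
  }
  return false; // no proof, no trust: "trust me, bro" is rejected by default
}
```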
The fifth question is: what happens when rules conflict?
This is where iDAO stands. It is not merely a voting shell; it is the values, admission standards, profit distribution, authorization revocation, and penalty mechanisms behind agents. LazAI places it alongside verified computing as a core element of the trust mechanism. This means future agents will not merely be "allowed to operate"; they will live in a game-theoretic, accountable, and revocable institutional space. Put these together and you'll find that the "algorithmic constitution" is not just a fancy metaphor. It is a very concrete institutional ambition: to maintain order even without a single master.
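As a rough illustration of one such clause, here is a hypothetical admission/revocation surface with a slashable bond, in TypeScript. None of these names come from LazAI's actual iDAO interface; they only show what "accountable and revocable" can mean when written as code.

```typescript
// Illustrative governance surface over agents: admission with a bond,
// revocation with a penalty. Hypothetical names, not the iDAO API.

type AgentStatus = "admitted" | "suspended" | "revoked";

class AgentGovernance {
  private status = new Map<bigint, AgentStatus>();
  private stake = new Map<bigint, bigint>();

  admit(agentId: bigint, bond: bigint): void {
    this.stake.set(agentId, bond);       // skin in the game at admission
    this.status.set(agentId, "admitted");
  }

  // A passed proposal can revoke authorization and slash the bond,
  // making the agent accountable rather than merely "allowed to operate".
  revoke(agentId: bigint, slashBps: number): void {
    const bond = this.stake.get(agentId) ?? 0n;
    this.stake.set(agentId, bond - (bond * BigInt(slashBps)) / 10_000n);
    this.status.set(agentId, "revoked");
  }

  isAuthorized(agentId: bigint): boolean {
    return this.status.get(agentId) === "admitted";
  }
}
```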
Of course, the real difficulty lies precisely in the fact that these institutional components do not automatically equate to social answers.
Confirmation of property rights does not equate to restoration of purchasing power.
Profit sharing does not equate to macroeconomic stability.
On-chain governance is not the same as a social contract in the real world.
Those most impacted by AI are not necessarily naturally in a favorable position under the new system.
This is why Citrini and LazAI are not actually contradicting each other; they are addressing the same era at different levels.
The former points out the symptom: if the gains from AI flow primarily to capital and compute rather than broadly into society's income structure, then consumption, credit, and the middle class's sense of security will be the first casualties.
The latter proposes a mechanism: if society wants neither to hand the agent world entirely to platforms nor to let endpoint disorder run unchecked, it must invent new structures for identity, assets, payments, verification, and governance.
One is talking about the disease.
The other is talking about the organs. Both are necessary, but neither is everything.
This precisely explains why Vitalik Buterin's widely quoted formulation, "AI is the engine, humans are the steering wheel," is so important and yet so insufficient. It is important because it reminds us that a more powerful system does not automatically possess legitimacy; objective functions, value judgments, and ultimate constraints cannot be entrusted to a single AI or a single center. It is insufficient because it does not answer the harder question: when a system becomes so complex that no single human can hold the steering wheel, what happens to the steering wheel?
The answer cannot be to continue micromanaging everything.
The answer cannot be to pin our hopes on some smarter, kinder center.
The only decent solution is to institutionalize the "steering wheel": transform some of the constraints into identity registration, reputation accumulation, asset confirmation, budget constraints, mathematical receipts, challenge mechanisms, authorization revocation, and penalty logic.
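One toy example of such an institutionalized constraint: an agent's spending authority as a hard budget with a revocation switch, enforced by code rather than by a human reviewing every action. Purely illustrative.

```typescript
// A hard budget as an institution: the constraint binds automatically,
// and the human override exists as a standing mechanism, not constant vigilance.

class ConstrainedWallet {
  private spent = 0n;
  private revoked = false;
  constructor(private readonly budget: bigint) {}

  spend(amount: bigint): boolean {
    if (this.revoked || this.spent + amount > this.budget) return false; // constraint binds
    this.spent += amount;
    return true;
  }

  revoke(): void { this.revoked = true; } // authorization revocation, as a rule not a rescue
}
```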
This is precisely why Web3's social experiments have suddenly become serious again in the AI era. Many once dismissed them as speculative byproducts; but when a system's complexity exceeds humans' capacity for direct governance, those experiments in whether order can be established without a central trusted party are no longer byproducts. They have suddenly become rehearsals.
Thus, the true sharpness of the article was finally revealed.
Wall Street was alarmed by an AI article, not because it was the first time it had realized that AI would replace jobs.
Wall Street was alarmed because it was being reminded so bluntly for the first time that the most dangerous aspect of AI might not be making machines more like humans, but rather making an old world's income cycle, consumption logic, and institutional imagination suddenly seem outdated.
If Citrini is right, then AI is not just a productivity revolution, but also a distribution revolution.
If Vitalik is right, then AI is not just an engineering problem, but also a sovereignty problem.
If LazAI's path is at least partially correct, then the next stage of AI competition will not be only a competition of model capability, but a competition of institutional design.
The real big problem is no longer:
Will the model become even stronger?
Will agents become more autonomous?
Will companies lay off more employees?
The real big problem is:
When there are billions of agents on the internet, who will write their constitution?
If the answer is a platform, what we get is a digital empire.
If the answer is the endpoint, what we get is high-cost disorder.
If the answer is a set of verifiable, composable, contestable, and enforceable rules, then we are at least beginning to approach another possibility: an intelligent society not ruled by smarter masters, but constrained by better institutions.
The most difficult problem in the AI era has never been the model.
It is order.
What Wall Street actually sold that day may not have been just stocks.
What it sold was an old assumption that once seemed self-evident: the more successful a technology is, the more naturally society will absorb it.