Tokens per dollar is becoming the new benchmark for pricing cloud AI services.
Author: Akasha2049
Source: AkashaBot
From Ownership to Usage Rights: How Huang's Formula is Restructuring the Entire AI Industry
He walked onto the stage wearing a leather jacket.
On the screen behind him was a formula.
Revenue = Tokens per Watt × Available Gigawatts.
Applause from the audience.
I stared at both sides of the equals sign and felt something moving.
It's not a chip, not a product, not a market.
It is the coordinate system itself.
A new civilization has just chosen its unit of measurement.
Opening: Transactions in Three Eras
Thirty years ago, Bill Gates sold you a CD.
You took it home and put it on your bookshelf. It was yours forever: if Microsoft went bankrupt tomorrow, your Windows would still run. Ownership meant sovereignty. The asset was in your hands; no one could take it away.
Fifteen years ago, Marc Benioff told you something else. You don't need to own it, he said. Just pay monthly. The software is in the cloud; access it when you need it, and turn it off when you don't. Simpler, more flexible, and with lower upfront investment.
What Benioff didn't say was: You'll never finish paying. The meter keeps spinning. Ownership has been replaced by a permanent liability disguised as convenience. You've traded assets for a monthly bill.
Last week, Jensen Huang proposed something else.
He didn't sell you software. He didn't offer a subscription. He stood on a stage in San Jose and presented a formula:
Revenue = Tokens per Watt × Available Gigawatts
No products. No license. No number of seats.
There is only one production equation.
Efficiency multiplied by physical capacity. The output is tokens: the atoms of AI computation, the smallest units of machine-generated intelligence, the basic particles by which reasoning is measured, priced, and industrialized.
Note what's missing on the right side of the equals sign.
Ownership. The formula doesn't contain the word "ownership." There are no assets. There is no accumulation. Only production, consumption, and flow.
This is the transformation. It's not just from software to AI, or from on-premises deployment to the cloud. It's a deeper transformation: from an economy built on "ownership" to an economy built on "use".
The 20th century was built on ownership. The token economy ended it.
This will change everything priced in the old units, which, for now, is almost everything.
Part One: The Death of Ownership Economics
I. The Three Pillars Collapse, One by One
The economics of ownership is based on three premises, each so natural and so ancient that we have long since stopped noticing that they are premises.
The first pillar: You own your tools.
Software is a capital asset. You buy a license, depreciate it over three years, and own the productivity it represents. Enterprise software is a moat—not just because of migration costs, but because ownership itself is a permanent claim. "We have SAP" means something: investment, commitment, and infrastructure that outlives any employee.
In the token economy, this pillar is not bent, it is broken.
You don't buy an AI agent. You invoke it. You consume tokens to initiate its inference, complete tasks, and receive outputs. When the task ends, the relationship ends. Your balance sheet shows no assets, only a consumption record. The agent that completed ten thousand tasks for you last quarter is, in an accounting sense, completely identical to an agent you never used. The moment you stop paying, the capability disappears. Not because the contract expired—because nothing was yours anymore.
The tool doesn't belong to you. It never did. You rented a capability using tokens, and it's gone once you're done.
The second pillar: You own your data.
"Data is the new oil" was a defining metaphor of the 2010s. Companies spent billions accumulating proprietary datasets, training their own models, and building data moats that competitors could only replicate over years. The logic was impeccable: accumulate raw materials, and you control production.
But the era of reasoning has changed the value equation of existing data in a way that almost no one is clearly discussing.
In the training era, historical data is everything. The quantity and quality of the dataset determine the upper limit of the model's capabilities. Possessing data is a direct proxy for possessing intelligence.
The era of inference—the era Jensen Huang declared has decisively arrived—shifts where the value lies. Real-time inference over fresh context often beats pattern matching over stale historical data. An agent that can search, synthesize, and infer in real time often surpasses a model trained on last year's proprietary database. Accumulated advantages are eroding. Inference-efficiency advantages are taking over.
This doesn't mean data becomes worthless. It means the relationship between owning data and owning intelligence is no longer linear. You can hold terabytes of proprietary data and still lose to a competitor with better tokens-per-watt efficiency and a more accurate inference stack.
The moat is not data. The moat is the assumption that data accumulation is irreversible. That assumption is now being questioned.
The third pillar: You own your model.
For a time, training a cutting-edge model was the ultimate expression of ownership economics applied to AI. Spend hundreds of millions of dollars, assemble a world-class research team, collect proprietary data, run training on thousands of GPUs—and ultimately, you possess something no one else has. An asset. A competitive weapon. Yours.
The way this pillar collapsed was more subtle than the other two, and it's where most analysts fall short.
The argument is not that models are unimportant. Cutting-edge models—Claude, GPT-4, Gemini Ultra, top-tier inference systems—still represent real differences in capability and still underpin real pricing power. When you need a system that can infer within a 200,000-token context, maintain logical coherence across multi-hour agent workflows, and generate outputs that senior analysts would be willing to endorse, cutting-edge models are not commodities. You pay a premium because the cost of failure is too high, while cutting-edge models fail less frequently.
The conclusion is more specific:
The mid-tier model is dying.
Not the cutting-edge model. Not the open-source small model. The middle tier.
A model capable enough to feel like a real product, but not capable enough to command cutting-edge pricing. Too costly to run for high-volume commodity inference; too weak to win frontier contracts. Squeezed from both ends.
In the usage-rights era, "good enough" does not buy a tokens-per-watt advantage. It only invites a price squeeze from two directions at once.
Model capability has turned from a moat into an admission ticket. Those in the middle paid the entrance fee, only to find there were no seats for them inside the venue.
II. What is the formula really saying?
Let's return to Jensen Huang's equation, because it deserves a more careful reading than the media has given it.
Revenue = Tokens per Watt × Available Gigawatts
The financial media read it as a demand forecast, and that reading isn't wrong; it is Nvidia's argument: as global power capacity expands and AI factories are built, revenue grows in proportion to token production efficiency. More gigawatts, more tokens, more revenue. Clean industrial logic.
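One way to make the units concrete is a back-of-the-envelope sketch. Everything below is an illustrative assumption rather than a figure from the keynote, including the reading of "tokens per watt" as tokens per joule, which is what makes the dimensional analysis close: tokens per joule times watts gives tokens per second.

```python
# Back-of-the-envelope model of Revenue = Tokens per Watt x Available Gigawatts.
# All numbers are illustrative assumptions, not figures from the keynote.

SECONDS_PER_YEAR = 365 * 24 * 3600

def annual_revenue(tokens_per_joule: float,
                   gigawatts: float,
                   usd_per_million_tokens: float,
                   utilization: float = 0.7) -> float:
    """Yearly token revenue for a fleet of AI factories.

    Reading "tokens per watt" as tokens per joule makes the units work:
    tokens/joule * watts = tokens/second.
    """
    watts = gigawatts * 1e9
    tokens_per_second = tokens_per_joule * watts * utilization
    tokens_per_year = tokens_per_second * SECONDS_PER_YEAR
    return tokens_per_year / 1e6 * usd_per_million_tokens

# Hypothetical fleet: 5 GW of capacity, 0.5 tokens per joule, $0.50/M tokens.
print(f"${annual_revenue(0.5, 5, 0.50):,.0f} per year")  # ~$28 billion
```

At these made-up inputs, a 5-gigawatt fleet yields revenue on the order of $28 billion a year; the only real point is that revenue scales linearly in both efficiency and capacity.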
But this formula contains a philosophical statement that has been almost unexamined.
Jensen Huang chose to measure output in tokens. Not model calls, not API requests, not "AI interactions": tokens, the atomic units of generated intelligence. And he chose to measure efficiency against watts. Not cost per query, not latency: watts, the raw power consumed.
The implicit claim is that intelligence is a manufactured commodity. It is produced the way electricity and steel are produced. Raw material (energy) goes in; output (tokens) comes out. The ratio between the two—tokens per watt—is the fundamental measure of competitive advantage.
This marks the death of a software-age belief: that intelligence is primarily an information problem. It isn't. It's a manufacturing problem. The question isn't "Who has the best algorithm?" It's "Who can produce the most inference with the fewest joules?"
But what the formula doesn't say—and this omission is important—is: whose intention is being served?
Tokens are produced. Tokens are consumed. Revenue is generated. The equation is balanced. But it never asks anywhere: What do users really want? Is the intent behind token consumption clear? Is the output worth the electricity? Did the person at the other end of the inference chain get what they came looking for?
This formula describes the supply side of the intelligence economy with remarkable precision. It says nothing at all about the demand side.
This is the gap. And the gap is the real argument of this article.
We will return here.
Part Two: New Rules in the Economics of Use Rights

III. Three Rules to Replace the Old Logic
The economics of usage rights is not just a new pricing model. It is a different set of competitive rules, rewarding different capabilities, different moats, and different organizational structures than the economics of ownership did.
Rule 1: Pay for flow, not ownership.
In the economics of ownership, the relationship between buyers and sellers is fundamentally about transfer. Money flows in one direction, assets flow in another. Once the transaction is complete, the relationship ends in principle. You own that thing. The seller receives payment. End.
In the economics of usage rights, the relationship never ends. Every token consumed is a transaction. The meter keeps running. The more you use, the more you pay—the more value you extract, the more value the provider captures. This isn't buying and selling; it's a perpetual exchange.
This has a profound impact on how companies structure themselves. In the SaaS era, enterprise software companies were "transfer machines": they moved licenses from their own inventory onto their customers' balance sheets. In the token era, they become "flow machines": they must sustain and grow the rate at which tokens are consumed. Revenue is not a function of how many customers you have, but of how many tokens those customers consume.
In this model, growth doesn't look like signing a new contract. It looks like deepening usage within existing accounts. The question shifts from "How do we close this deal?" to "How do we increase flow?"
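A minimal sketch of that metering logic, with hypothetical account names and a hypothetical rate:

```python
# Sketch: a "flow machine" bills metered consumption; revenue tracks tokens,
# not head count. Names and the rate are hypothetical.

from dataclasses import dataclass

USD_PER_MILLION_TOKENS = 2.00  # assumed usage rate

@dataclass
class Account:
    name: str
    tokens_consumed: int  # tokens metered this billing period

def bill(account: Account) -> float:
    """Usage-based billing: the meter, not the seat count, sets revenue."""
    return account.tokens_consumed / 1e6 * USD_PER_MILLION_TOKENS

accounts = [
    Account("light-user", 1_000_000),    # one customer, $2 of revenue
    Account("heavy-user", 500_000_000),  # one customer, $1,000 of revenue
]
print(sum(bill(a) for a in accounts))  # 1002.0: revenue follows flow
```

The design point is simply that the billing function takes consumption, not seats, as its input.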
Rule 2: Efficiency is the new moat.
In the era of ownership, the most defensible competitive position is built on accumulation: accumulating data, accumulating customer relationships, and accumulating migration costs. The longer you stay, the harder it is to leave. Network effects reinforce ownership advantages. The rich get richer because they have more.
In the economics of usage rights, the most defensible competitive position is efficiency: the ability to produce more tokens per watt with lower latency and higher reliability. This is Nvidia's entire bet. The company that can produce the most intelligence with the fewest joules will be able to offer the lowest prices with the highest profit margins—or, depending on market segmentation, the highest prices with competitive profit margins.
Tokens per watt isn't merely an engineering metric that lives in a data-center operations spreadsheet. It's a business-model metric. It determines who can profitably serve the large, low-margin commodity-token market while simultaneously serving the small, high-margin frontier-inference market. It determines who survives and who gets squeezed out when token prices fall, as they inevitably will.
The moat is no longer what you've accumulated. The moat is the efficiency with which you convert energy into intelligence.
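To see why, consider the marginal, energy-only cost of a token. A rough sketch, with every input assumed for illustration:

```python
# Why tokens per watt is a business-model metric: it sets the marginal
# (energy-only) cost of production. All inputs are illustrative assumptions.

JOULES_PER_KWH = 3.6e6

def energy_cost_per_million_tokens(tokens_per_joule: float,
                                   usd_per_kwh: float) -> float:
    joules_per_token = 1.0 / tokens_per_joule
    kwh_per_token = joules_per_token / JOULES_PER_KWH
    return kwh_per_token * usd_per_kwh * 1e6

# Two hypothetical producers buying power at the same $0.08/kWh:
leader = energy_cost_per_million_tokens(1.0, 0.08)    # newer, efficient stack
laggard = energy_cost_per_million_tokens(0.25, 0.08)  # older stack
print(f"leader ${leader:.3f}/M tokens, laggard ${laggard:.3f}/M tokens")
# A 4x gap in tokens per watt is a 4x gap in marginal cost. When token
# prices fall, the laggard's margin vanishes long before the leader's.
```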
Rule 3: Scheduling capability replaces accumulation capability.
Perhaps this is the most profound change in the rules. In the economics of ownership, strategic advantage accumulates for those who can accumulate the most—the most data, the most talent, the most computing power, and the most customers. Accumulation is the game itself.
In the economics of usage rights, strategic advantage accrues to those who can allocate resources most effectively. The question isn't "How much do you have?" but "How intelligently can you deploy what you have?"
This applies at every level. At the infrastructure level: who can schedule heterogeneous compute across GPU generations, cooling systems, and network topologies to maximize tokens per watt? At the software level: who can schedule inference jobs to maximize throughput while minimizing latency? At the individual level: who can direct an AI agent with intent clear enough to extract maximum value from a token budget? A toy version of the infrastructure case is sketched below.
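This toy dispatcher is in the spirit of the scheduling layer, not modeled on any real system (Nvidia's Dynamo solves a far richer version of this problem); every name and number is hypothetical:

```python
# Toy dispatcher: route each inference job to the hardware pool with the
# best tokens per watt that still meets the job's latency bound.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Pool:
    name: str
    tokens_per_watt: float  # efficiency of this hardware generation
    latency_ms: float       # typical time-to-first-token
    free_slots: int

@dataclass
class Job:
    name: str
    max_latency_ms: float   # the job's service-level bound

def dispatch(job: Job, pools: list[Pool]) -> Optional[Pool]:
    """Greedy policy: of the pools that meet the latency bound and have
    capacity, pick the most energy-efficient one."""
    candidates = [p for p in pools
                  if p.latency_ms <= job.max_latency_ms and p.free_slots > 0]
    if not candidates:
        return None  # queue or reject: out of scope for this sketch
    best = max(candidates, key=lambda p: p.tokens_per_watt)
    best.free_slots -= 1
    return best

pools = [Pool("older-gen", 0.3, 40.0, 8), Pool("newer-gen", 1.2, 80.0, 2)]
print(dispatch(Job("overnight-report", 500.0), pools).name)  # newer-gen wins
print(dispatch(Job("interactive-chat", 50.0), pools).name)   # latency forces older-gen
```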
The word "scheduling" deserves emphasis, because it really means orchestration. A conductor does not own the music. They do not manufacture the instruments. What they do, irreplaceably, is translate the composer's intent into coherent sound. A conductor's value lies not in what they have, but in what they can make happen.
This is the new competitive landscape. It selects capabilities that are completely different from those of the old landscape.
IV. The Fundamental Shift in the Competition Axis
| The ownership game | The usage-rights game |
| --- | --- |
| Transfer assets; the sale ends the relationship | Meter flow; every token consumed is a transaction |
| Moat: accumulation (data, seats, switching costs) | Moat: efficiency (tokens per watt) |
| Advantage: how much you have | Advantage: how intelligently you schedule it |
| Key metrics: licenses, ARR, net retention | Key metrics: token consumption, tokens per watt, gross margin per token |
The left column describes the game most large tech companies have been playing for the past two decades. They're very good at it. They've built organizations, incentive structures, acquisition strategies, and engineering cultures optimized for it.
The right column describes games that few large tech companies have played. The required skills differ. The metrics differ. The winning organizational structures differ.
This is why the token economy is genuinely disruptive. Not because it makes existing products obsolete (though it will), but because it makes existing organizational capabilities obsolete. World-class companies discover that their accumulated advantages are subtly misaligned with the new rules, and that they are starting from the same line as everyone else.
This transition is not happening in ten years. It is happening now.
Part Three: Winners and Losers
V. Four Types of Winners
In any systemic change, the first question is: who finds that the new rules happen to be written in their favor?
Winner ①: Energy and cooling infrastructure
The token economy, at its physical foundation, is an energy economy. Tokens require electricity. More tokens require more electricity. Better tokens—lower latency, higher throughput—require not only more electricity, but also better electricity: more precise delivery, more efficient cooling, and more reliable allocation.
Companies like Vertiv, which provide thermal management and power systems for high-density data centers, are experiencing something that had no parallel in the software age: they are key inputs to the manufacture of intelligence. In ownership economics, cooling systems are cost centers. In the token economy, they are production infrastructure. That distinction matters for valuation.
As AI factories push rack densities toward 150 kilowatts, against the 10 to 15 kilowatts of traditional data centers, liquid cooling becomes non-negotiable. Not a luxury feature; an operational prerequisite. Vertiv's backlog of more than $15 billion isn't a sales achievement; it's a measure of how fast the physical infrastructure of the token economy needs to expand.
This is the structurally safest position in the entire AI value chain. Vertiv doesn't care which AI model wins. It doesn't care which cloud service provider dominates. What it cares about is AI factories being built and operating at ever-increasing density. This trend has at least a decade of runway ahead.
Winner ②: The Monopolist in Advanced Chip Manufacturing
If Tokens/Watt is the fundamental competitive metric of the token economy, then the entity that controls the physical upper limit of Tokens/Watt performance possesses extraordinary structural power.
This upper limit is determined by semiconductor physics—how many transistors can be crammed into a square millimeter of silicon, and how efficiently these transistors can switch. Today, this upper limit is controlled by TSMC, whose 2-nanometer process represents the current cutting edge allowed by physics and manufacturing precision.
TSMC's capacity at its most advanced nodes is, quite literally, the production capacity of the intelligence economy. It cannot be replicated quickly. The capital costs run to tens of billions of dollars. The process know-how takes decades to accumulate. Supplier relationships, equipment, cleanroom specifications: together they form a compound advantage no competitor can match at scale.
Jensen Huang's $1 trillion demand forecast for 2027 is, in essence, a question about TSMC's capacity constraints. The demand exists. The question is how fast the physical supply chain can expand to meet it. TSMC's position in this dynamic is not that of a supplier in the traditional sense, but of a natural monopolist in the most critical input to the world's fastest-growing economic activity.
Winner ③: The token-scheduling software layer
The layer sitting between the physical infrastructure and the actual work is the scheduling layer: software that determines how inference jobs are scheduled, how computing resources are allocated, and how latency and throughput are managed in real time.
Nvidia's Dynamo, an operating system designed specifically for AI factories, is its attempt at this layer. The logic is straightforward: if Nvidia controls not only the hardware but also the software that schedules it, it captures value at two levels at once. Hardware revenue comes from chips. Software revenue comes from the scheduling layer. And the combination compounds: better scheduling software makes Nvidia hardware perform better on the tokens-per-watt metric, which makes Nvidia hardware more attractive to buy.
This is the same vertical-integration logic Apple applies to personal computers and smartphones. Control the metal and the software stack. Let the gap between "our system" and "everyone else's system" widen with each generation of integration.
Companies capable of building effective scheduling layers—whether it's Nvidia's Dynamo, a specialized inference optimization firm, or a cloud service provider developing proprietary scheduling systems—will control the profit structure of the token economy in ways that pure hardware providers cannot. Scheduling is where intelligent production efficiency translates into business model advantages.
Winner ④: Sovereign AI infrastructure builders
There is a fourth type of winner who has not received the analytical attention they deserve: the builders of sovereign AI infrastructure.
Every country that concludes it cannot rely on foreign token production capabilities becomes a customer of the entire AI manufacturing stack: chips, cooling, networks, scheduling software, basic models—everything. This is not a consumer market. This is a government procurement market, with the budget size, political priorities, and timeline stability inherent in government procurement.
Demand is structural. It doesn't depend on quarterly results or consumer behavior. It depends on geopolitical decisions that, once made, tend to persist across political cycles.
In this dimension, the token economy is not just a commercial revolution; it is becoming a geopolitical revolution. Every government that wants to produce tokens domestically is a long-term client of companies capable of building and operating national-scale AI factories.
VI. Four Types of Losers
Naming the losers in a systemic change is uncomfortable, but the analysis is necessary. Discomfort is not a reason to avoid it.
Loser ①: Traditional SaaS pricing model
The per-user, per-month seat subscription (the same price no matter how much each user actually does) was elegant in the pre-AI era. Predictable. Easy to budget. It aligned vendor incentives with customer retention.
In the AI era, it carries an inherent paradox that grows sharper with every improvement in AI capability. The more powerful the agent, the more a single user can accomplish with fewer human hands. As AI absorbs more of the workflow, the link between "number of users" and "value extracted" breaks. A company that leans heavily on AI may extract five times the value from a software platform while needing only half the seats, because agents handle the other half of the work.
This is good for customers. For per-seat SaaS providers, it's a matter of survival. The value delivered has increased, but the pricing mechanism hasn't captured any of that increase.
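In rough numbers (all hypothetical, echoing the five-times-the-value, half-the-seats figures above):

```python
# The per-seat paradox in numbers. Hypothetical figures that echo the text:
# AI lets a customer extract ~5x the value with half the seats.

seat_price_per_year = 1_200           # assumed per-seat subscription, $/year
seats_before, seats_after = 100, 50   # agents absorb half the workflows
relative_value_before, relative_value_after = 1.0, 5.0

seat_revenue_before = seats_before * seat_price_per_year  # $120,000
seat_revenue_after = seats_after * seat_price_per_year    # $60,000

# Value delivered rose 5x; seat-priced revenue fell by half. A vendor
# metering token consumption would instead see revenue rise with usage.
print(seat_revenue_before, seat_revenue_after)            # 120000 60000
print(relative_value_after / relative_value_before)       # 5.0
```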
At GTC, Jensen Huang said, "Every SaaS company will become an Agent-as-a-Service company." This isn't a prediction; it's an observation about survival. Vendors that figure out how to price by token consumption, by outcome, by value delivered, rather than by seats occupied, will survive the transition. Those that keep defending seat pricing because their financial models depend on it will experience a slow, structural revenue leak that, from the inside, looks like a customer-success problem.
The transition window is not unlimited. Companies that have already switched to usage-based pricing have a compound advantage. Those still debating whether to make the change are consuming their transition window.
Loser ②: Cloud service providers with low token efficiency
Tokens per dollar is becoming the new benchmark for cloud AI services. Not just latency, not just raw throughput: the ratio. For every dollar spent on infrastructure, how much useful AI output do you get?
Cloud providers running older hardware generations, less optimized thermal infrastructure, or less sophisticated scheduling software will find themselves systematically underperforming on this metric. In a commodity market (and bulk token production is becoming one), systematic underperformance on the key metric is a pricing problem that compounds over time.
Mid-sized cloud providers that cannot justify the capital spend required to stay at the frontier of tokens-per-watt efficiency face a structural squeeze: their token production cost base sits above the leaders', forcing them either to compress margins or to lose customers to cheaper alternatives. Neither path looks good.
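A sketch of that squeeze, folding amortized capital cost in with energy to get a tokens-per-dollar figure; every input is an assumption:

```python
# Tokens per dollar, combining amortized capital cost with energy cost.
# Every input below is an assumption for illustration.

SECONDS_PER_YEAR = 365 * 24 * 3600
JOULES_PER_KWH = 3.6e6

def tokens_per_dollar(tokens_per_joule: float,
                      usd_per_kwh: float,
                      capex_usd_per_watt: float,
                      amortization_years: float = 5.0,
                      utilization: float = 0.6) -> float:
    active_seconds = amortization_years * SECONDS_PER_YEAR * utilization
    lifetime_tokens_per_watt = tokens_per_joule * active_seconds
    energy_usd_per_watt = active_seconds / JOULES_PER_KWH * usd_per_kwh
    return lifetime_tokens_per_watt / (capex_usd_per_watt + energy_usd_per_watt)

leader = tokens_per_dollar(1.0, 0.08, 30.0)    # newer stack, pricier capex
mid_tier = tokens_per_dollar(0.3, 0.08, 18.0)  # older stack, cheaper capex
print(f"leader {leader:,.0f} tokens/$, mid-tier {mid_tier:,.0f} tokens/$")
# Even with cheaper capital, the mid-tier provider produces roughly half
# the tokens per dollar: the structural squeeze described above.
```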
Loser ③: The hoarding knowledge worker
This one is harder to write because it describes a type of professional who is truly in trouble. But accuracy requires a clear statement of it.
In the era of ownership, knowledge work rewards accumulation. Accumulate professional knowledge. Build relationships. Accumulate institutional knowledge. Professionals who have been in the industry for twenty years—who understand the rules, key figures, historical context, and unwritten rules—have a structural advantage over any newcomer. The capital they accumulate may not be on the balance sheet, but it is real.
The token economy erodes this advantage in a specific way. Much of what constitutes a professional's accumulated capital (information gathering, document analysis, report synthesis, communication drafting) is now tokenizable. An agent with well-crafted prompts and proper database access can do those tasks at a speed no human can sustain, for a fraction of the cost.
This doesn't mean that accumulated expertise becomes worthless. It means that the types of expertise that survive in the token economy look different. Knowledge workers who can guide AI agents with high clarity of intent—those who can orchestrate token consumption towards valuable outcomes and evaluate AI output with genuine domain judgment—retain and potentially amplify their value. Knowledge workers whose primary value lies in information acquisition, data processing, or routine analysis face a genuine structural shift.
The important distinction is not "using AI versus not using AI." It is: **Are you consuming tokens, or are you scheduling them?**
Consumers are replaced. Schedulers become more valuable.
Loser ④: The mid-tier model
As established in Part One: it is not models in general that are dying; it is the middle tier.
Frontier models retain pricing power because they can do things that nothing else can reliably do: complex multi-step reasoning, long-term contextual coherence, and truly fuzzy judgments. Customers pay a premium because the cost of failure is too high, while frontier models fail less.
Open-source small models remain viable because their tokens-per-watt efficiency is extremely high. They offer local deployment, zero API costs, and very fast inference on narrow, well-defined tasks. Even with modest capability, the economics hold at scale.
The middle tier, capable enough to feel like a real product but lacking both the capability to support frontier use cases and the efficiency to support commodity deployment, is stuck. It cannot win on capability, and it cannot win on efficiency. It competes on inertia and existing relationships, and both are eroding.
Model capabilities have become an entry ticket, not a moat.
Admission tickets are not assets. You pay, you are let in, and nothing accumulates.
Part Four: Deep Restructuring
VII. Salary Revolution
Jensen Huang said something at GTC that received far less attention than his hardware announcements, but it may have more implications for how the economy will actually function five years from now.
He said that every engineer at Nvidia will eventually receive an annual Token budget—worth about half of their cash compensation—on top of their base salary, specifically for deploying AI Agents as productivity multipliers.
"I'll give them about half of their base salary as tokens," he said, "so their productivity can be amplified tenfold."
This is not a benefits announcement. It is a new theory of labor.
In an ownership economy, employers purchase workers' time. Wages are the price per hour, implicitly meaning that the employer controls what happens within those hours. Time is the unit of labor. Wages are the price of time.
In the token economy, the equation has changed. Workers still sell their time—their presence, judgment, and domain knowledge. But they now also receive a budget of intelligence-production capacity: a token quota representing the ability to run AI agents, generate analyses, draft outputs, and process information at a pace no human could sustain.
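Neither the keynote quote nor the article gives concrete figures, so here is a scale check with assumed numbers:

```python
# Scale check on a token budget worth half of cash compensation.
# Salary and token price are assumptions; the quote gives no figures.

salary_usd = 200_000                      # hypothetical engineer cash comp
token_budget_usd = salary_usd / 2         # "about half ... as tokens"
usd_per_million_tokens = 2.00             # assumed blended rate

tokens_per_year = token_budget_usd / usd_per_million_tokens * 1e6
tokens_per_workday = tokens_per_year / 250
print(f"{tokens_per_year:,.0f} tokens/year, {tokens_per_workday:,.0f}/workday")
# ~50 billion tokens a year, about 200 million per working day: enough to
# keep several agents running essentially around the clock.
```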
The new labor formula is roughly as follows:
Output = Clarity of Intent × Token Allocation × AI Efficiency
Note what this formula does. It makes an individual's value a function not just of their time, but of how effectively they can direct AI agents. The only variable in the formula that humans control, and the only one that isn't purely a function of infrastructure, is clarity of intent: knowing what you want to accomplish, specifying it precisely enough for an agent to execute, and evaluating the output against the true intent rather than the literal instruction.
This is the capability that gets repriced upward in the token economy. Not execution, not information acquisition, not routine analysis.
The capability to hold clear, valuable intent, and to translate that intent into effective agent scheduling.
The question every knowledge worker should be asking right now: which parts of my work could a token-consuming agent do as well or better? Whatever survives that audit is the professional asset worth developing. Whatever lands on the list is the risk exposure to manage.
VIII. Market Repricing: What the $1 Trillion Signal Really Means
At GTC, Jensen Huang raised Nvidia's demand forecast for the Blackwell and Vera Rubin cycles from $500 billion to $1 trillion through the end of 2027. The figure is large enough to trigger instinctively skeptical headlines. Let's take a closer look.
The figure is not primarily a forecast of Nvidia's revenue. It is a projection of the rate of investment in the physical infrastructure needed to serve demand for token production capacity. It says the world will spend at least $1 trillion over the next two years on the machines that manufacture intelligence.
For context: the global semiconductor industry is projected to generate roughly $890 billion in revenue in 2026. Jensen Huang is claiming that demand for AI computing infrastructure alone will exceed the current total output of the global chip industry. This is a structural assertion about economic priorities, not a boast about Nvidia's market share.
This figure sends a signal to investors not primarily about Nvidia, but about the entire token production value chain. A trillion-dollar infrastructure requires ongoing maintenance, energy, cooling, software, and operational expertise. The operating costs of that infrastructure over a decade will dwarf its construction costs. Companies positioned to meet those ongoing operational needs—Vertiv, network equipment companies, cooling system manufacturers, scheduling software providers—are compound beneficiaries of a one-time investment cycle that generates decades of operating costs.
This analysis suggests an investment framework with four layers:
Layer 1: Energy and physical infrastructure. Commodity-like stability, low volatility, compounding demand. An underestimated layer, yet structurally indispensable. Vertiv, Eaton, Schneider Electric. Token factories need electricity. Electricity needs infrastructure. The demand is structural, not cyclical.
Layer 2: Advanced chip manufacturing. High barriers to entry, a true monopoly in the relevant segments, direct exposure to the tokens-per-watt improvement cycle. TSMC, ASML. The physics of intelligence manufacturing runs through a very small number of companies.
Layer 3: Chip design and architecture. The highest direct exposure to the tokens-per-watt metric. Nvidia's competitive advantage here is real, but not permanent. AMD, Groq, and the custom silicon of hyperscale cloud providers represent real competition. This layer offers the highest reward potential and the most competitive risk.
Layer 4: Scheduling software. The highest risk, and the highest potential for asymmetric returns. Companies that solve the scheduling problem (intelligent dispatch, efficient inference routing, effective multi-agent coordination) may capture disproportionate value in ways the layers beneath them cannot. This layer is early, hard to predict, and likely to see significant consolidation. But once established, the winners here will hold the most durable moat in the AI economy.
The valuation framework for AI companies is changing accordingly. The key metrics are no longer the ones SaaS investors prize: ARR growth, net revenue retention, the Rule of 40. What will matter instead: token-consumption growth rate, tokens-per-watt efficiency relative to competitors, capital-expenditure efficiency, and gross margin per token produced.
A company growing token consumption 200% a year, with steadily improving tokens-per-watt efficiency and expanding gross margins, is a fundamentally different asset from a company growing ARR 40% while defending a seat-pricing model against AI replacement pressure. On a traditional dashboard the two can look similar. Their underlying trajectories point in opposite directions.
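A rough projection of those two trajectories, with every base, growth rate, and margin input invented for illustration:

```python
# The two trajectories above, projected forward three years.
# All growth rates, margins, and revenue bases are invented for illustration.

def project(revenue_musd: float, growth: float, margin: float,
            margin_drift: float, years: int = 3):
    """Compound revenue and drift margins; returns (revenue $M, margin) pairs."""
    path = []
    for _ in range(years):
        revenue_musd *= 1 + growth
        margin = min(max(margin + margin_drift, 0.0), 1.0)
        path.append((round(revenue_musd, 1), round(margin, 2)))
    return path

token_native = project(10.0, 2.00, 0.40, +0.05)   # 200%/yr, margins widening
seat_defender = project(50.0, 0.40, 0.70, -0.08)  # 40%/yr, margins leaking
print(token_native)   # [(30.0, 0.45), (90.0, 0.5), (270.0, 0.55)]
print(seat_defender)  # [(70.0, 0.62), (98.0, 0.54), (137.2, 0.46)]
```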
Part Five: Philosophical Rupture
IX. Intent: The Only Thing That Cannot Be Rented
Three times in economic history, the resource civilizations fight over—the resource whose control determines power, wealth, and strategic advantage—has shifted.
The Industrial Revolution made capital the key resource. Machines, factories, railways: whoever owned the means of production owned the economy. Capital could be accumulated, inherited, and deployed at scale. The great fortunes of the nineteenth century were fortunes of accumulated capital.
The internet age made time—specifically, human attention—the key resource. Whoever could capture and direct attention at scale built the platform businesses that dominated the early 21st century. Attention could be structured, monetized, and sold to advertisers. The great fortunes of the early digital age were fortunes of accumulated attention.
The token economy makes intent the key resource.
It's not capability. It's not data. It's not computing power—computing power is infrastructure, not a differentiator. It's intent: the clarity of what you want to accomplish, the precision with which you can specify it, and the wisdom to know what is worth wanting.
This is the core paradox of the usage-rights economy.
In the usage-rights economy, almost everything can be rented. Computing power can be rented by the token. Storage can be rented by the petabyte. Intelligence can be rented by the inference. Models can be rented by the API call. You can rent a frontier reasoning system, a code-generation agent, a research assistant, a document analyzer. With a monthly token budget, you can assemble capabilities that would have required a team of experts ten years ago.
Almost everything can be rented.
Almost everything—except intent.
Intent cannot be rented because it is not, at bottom, a capability. It is not something a model can produce or a formula can express. Intent is the prior condition that makes all capability meaningful. It is the direction before the movement, the question before the answer, the purpose before the tool.
An agent that consumes 10,000 tokens and produces meaningless output creates nothing of value, however efficiently it ran. An agent that consumes 100 tokens and produces output that perfectly serves a clearly understood purpose has done extraordinary work. The difference between the two scenarios is not model quality or infrastructure efficiency; it is the clarity and quality of the human intent that initiated the token consumption.
This is why Jensen Huang's formula, for all its precision, is incomplete.
Revenue = Tokens per Watt × Available Gigawatts
This formula describes the supply side of the intelligence economy with clarity. It says nothing about whether the intelligence being produced is worth producing.
The complete formula—the one that captures both sides of the ledger—is roughly:
Value = Clarity of Intent × Token Allocation × Available Computing Power
The first variable is the one no amount of infrastructure investment can increase. Nvidia can build better GPUs. TSMC can push to more advanced process nodes. The scheduling layer can grow more sophisticated. All of these improvements raise the efficiency with which intent is served. None of them supplies the intent.
But the intention itself must come from somewhere. From someone. From someone who truly understands what matters—what matters, why, and what constitutes success.
This, ultimately, is the thing worth developing. Not token-consumption skills, and not prompt engineering as a mechanical procedure. Something deeper: knowing what you want, clearly enough for an agent to execute it; and distinguishing output that truly serves the intent from output that merely appears to.
In a world where almost every ability can be rented, the rarest and most valuable thing is knowing why you're renting it.
It's not A; it's B.
Not having more computing power, but knowing what that computing power is for.
Not accumulating more tokens, but knowing the intent with which to schedule them.
This is the only true ownership in the usage-rights economy: intent, which is always your own.
X. The Hidden Thread
There is a ghost in this token economy machine.
Most people see Nvidia, TSMC, hyperscale cloud service providers, and AI factories. They see Jensen Huang's leather jacket and his $1 trillion prediction. They see the industrial logic of a new manufacturing economy.
What they miss is the protocol underneath.
Seventeen years ago, someone writing under the name Satoshi Nakamoto published a nine-page document describing a peer-to-peer electronic cash system. The core insight wasn't technical, though the technical execution was elegant. The core insight was philosophical: trust in a transaction doesn't require trusting any single party. It requires trusting a mathematical process that no single party controls.
Code is law. Agreements outlive their participants. Promises backed by mathematics are more enduring than those backed by human institutions—because institutions decay, rotate, are acquired, go bankrupt, and change their minds. Mathematics doesn't.
The author of that insight then did something unprecedented, and unrepeated since: he disappeared. He gave the protocol to the world anonymously, then stepped away. The technology survived because it was designed to need no one to maintain it. No CEO, no board, no PR team, no annual convention in a hockey arena. Just the protocol, running.
Token economics is not cryptocurrency. The tokens we're discussing are unrelated to blockchain. But there is a structural resonance worth mentioning.
What Jensen Huang is building—what the entire AI infrastructure layer is building—is a new protocol for the production of intelligence. It is owned by no single company. It runs on physical infrastructure spanning continents, maintained by competing entities, none of which controls the whole. Pricing is set by market mechanisms. Anyone who can pay the token price can access it.
Intelligence, like currency before it, is becoming a protocol layer rather than a proprietary asset. You access it, but you don't own it. It operates on infrastructure you don't control. Its production is managed by mathematical efficiency, not by the decisions of any single organization.
Mathematics does not request permission. A more efficient process will prevail regardless of who runs it. The protocol will continue to function even after the company that built it is gone.
There's a certain Satoshi Nakamoto-esque quality here. Not in the technology. In the principles. In the realization that the most enduring systems are those designed from the outset to maintain themselves without a central authority. Trust encoded in mathematics—encoded in the physics of Tokens/Watt efficiency, encoded in the transparency of open benchmarks, encoded in the verifiability of reasoning results—lasts longer than any brand, any CEO, any annual keynote address.
The token economy is industrial in nature. But beneath that industry lies a protocol.
Once a protocol is established, it belongs to no one.
This isn't a weakness. This is how it survives.
Conclusion: Back to the formula
He left the stage.
The formula remains on the screen.
Revenue = Tokens per Watt × Available Gigawatts
I stared at it and thought: This is an equation about production. Precise, powerful, physical. It tells you how AI factories work, where the competitive axis is, and where capital will flow in the next decade.
What it doesn't tell you is who starts the chain.
Before a token is produced, someone decides to produce it. Before an inference is run, someone decides the question is worth asking. Before an AI factory converts electricity into intelligence, a person with a purpose initiates the process.
The formula describes the transformation. It does not describe the initial conditions.
Thirty years ago, the question was: What do you own?
Fifteen years ago, the question was: What did you subscribe to?
Today, the question is: How many tokens can you allocate?
But the question beneath all of these questions—the one the formula doesn't ask, the one the infrastructure can't answer—is older and simpler:
What do you really want?
It's not what you can produce. It's not how efficiently you produce it. It's the intent that initiates the chain: what is worth spending tokens on?
Complete formula:
Value = Clarity of Intent × Token Allocation × Available Computing Power
Nvidia, TSMC, and Vertiv, along with every AI factory on every continent, can improve the last two variables. They are doing so at an extraordinary speed and on an extraordinary scale, and the result will reshape the physical infrastructure of civilization.
The first variable is yours.
The token economy gives everyone access to extraordinary capability. It gives no one clarity about what to use it for. It makes production cheap. It does not make wisdom cheap.
In a world where almost every capability can be rented by token, the rarest thing is knowing why you're renting it.
Jensen Huang's formula describes the world that is becoming.
The important formula is the one that describes what you become in it.
Tokens serve intent.
And the intention—always, still, irreducibly—is your own.
That's all.