This shift is unfolding in real time, but everyone's had their eyes on the wrong layer.
It's not the models; those commoditize fast.
It's the compute underneath, and whoever controls GPU access controls what gets built.
@AnthropicAI and @OpenAI understood this early and locked in their compute through Microsoft, Google, and Amazon before anything else.
But the compute layer is starting to open. When it does, everything built on top of it compounds.
> Models get cheaper but you still need somewhere to run them
> Agents chain tools only if compute scales underneath
> Software writes software, which multiplies GPU demand rather than shrinking it
Centralize the GPUs and you centralize who gets to accelerate.
A founder thinks of a product in the morning, an agent prototypes it by lunch, and users test it the same day.
Ten years compressing into months only works if the infra stays open.
Been building decentralized infra for exactly this reason.