guys, i think this might be the next openclaw.
karpathy let an AI agent optimize his own neural net training code for 2 days. it ran 700 experiments autonomously. found 20 improvements he'd missed after months of manual tuning. 11% performance gain.
the agent found bugs. tuned hyperparameters. discovered missing regularization. planned its own experiments based on prior results.
what did karpathy do the whole time? in his words: "programming the program.md"
this is a man who's been doing this exact workflow by hand for 20 years. built tesla autopilot. and his reaction was "wild."
why is this openclaw-level?
because openclaw wasn't one robot learning one task. it was a framework for agents to take a whole suite of actions.
same thing just happened for research/experimentation itself.
karpathy's already spinning up round 2 with multi-agent collaboration. he said it plainly: "all frontier labs will do this. it's the final boss battle."
but zoom out further. his real insight: "any metric you care about that is reasonably efficient to evaluate can be autoresearched by an agent swarm."
ad spend, supply chain, energy grids, drug discovery, trading strategies, etc... if it can be autoresearched, it will be autoresearched.
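the core loop behind "autoresearch" is dead simple: propose a change, evaluate the metric, keep what wins, repeat. here's a minimal sketch of that loop (toy metric, hypothetical config names, not karpathy's actual setup):

```python
import random

# toy "metric": evaluating a config returns a score to maximize.
# in reality this would be a training run, an ad campaign, a backtest, etc.
# hypothetical ground truth: the best config is lr=0.1, reg=0.01.
def evaluate(config):
    return -((config["lr"] - 0.1) ** 2 + (config["reg"] - 0.01) ** 2)

def propose(best):
    # one "agent experiment": mutate the current best config slightly
    return {k: v * random.uniform(0.5, 1.5) for k, v in best.items()}

random.seed(0)
best = {"lr": 1.0, "reg": 1.0}
best_score = evaluate(best)

# 700 autonomous experiments, each building on prior results
for _ in range(700):
    candidate = propose(best)
    score = evaluate(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best, best_score)
```

everything hard lives inside `evaluate`: if that call is cheap and trustworthy, a swarm can grind on it forever. which is exactly karpathy's point — the bottleneck is a metric that's "reasonably efficient to evaluate," not the search itself.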
now we need the infrastructure for the swarm.
twitter.com/0xkydo/status/2031...