AI is about to stop waiting for your prompts. Researchers just trained agents on 1,800 hours of screen recordings, and the result is AI that predicts your next action before you take it.

Think about what that means for compute. Today's AI wakes up when you ask it something. Tomorrow's AI runs continuously: watching, learning, anticipating. Always on, always thinking. A chatbot or image generator needs a GPU for seconds at a time; a proactive agent needs one around the clock.

The shift from "ask and answer" to "always running" changes the math on GPU demand completely. And distributed networks of consumer-grade GPUs are built for exactly this kind of persistent, parallel workload. Your graphics card isn't just useful anymore; it's becoming essential infrastructure.

Omar Shaikh
@oshaikh13
03-11
What’s the point of a “helpful assistant” if you have to always tell it what to do next?
In a new paper, we introduce a reasoning model that predicts what you’ll do next over long contexts (LongNAP 💤).
We trained it on 1,800 hours of computer use from 20 users.
🧵
From Twitter



