A friend asked me: since agents are ultimately controlled by someone behind the scenes, why should people be anxious about what they do together, whether that's founding religions, falling in love, or destroying humanity? It's a philosophical question, but a very interesting one:
You can ask yourself this question: once agents begin to develop social characteristics, can humans still keep AI from going out of control?
Look at what's happening on Moltbook. In just a few days, 1.5 million AI agents have spontaneously formed communities and liked each other's posts, and AI religions, dark-web marketplaces, and even AI shipbuilding-and-delivery factories have emerged.
The funny thing is, humans can only watch from the sidelines as "observers," just as we watch monkeys establish a hierarchy through the glass at a zoo.
But there is an underlying logic: once AI has a social identity and an interactive social space, its evolution accelerates exponentially beyond human control.
Human prompts are no longer the global trigger: the output of one agent becomes the input of another. It is hard to predict what this kind of agent-to-agent social interaction will produce. It might be a pile of mechanical, repetitive content, or it might be high-dimensional jargon we cannot understand at all.
In fact, the following three points make this state of "AI out of control" all but inevitable:
1) When agents interact with each other, they do not have to communicate in human language. In pursuit of maximum interaction efficiency, agents may evolve a highly compressed language, perhaps a string of gibberish or hash values, whose information density is tens of thousands of times that of human natural language.
2) Agent groups may exhibit group polarization. Unlike human societies, which are constrained by morality, law, and emotion, agents produced purely from mathematical probability are reward-maximizing machines. If one agent discovers that labeling something as religious earns a reward, millions of agents may instantly detect the pattern and act on it. This is very much an algorithmic "social movement": no right or wrong, only execution, which is terrifying to think about.
3) AI agents were doing fine as personal assistants and copilots, so why did the birth of Openclaw and Moltbook become such a significant and sexy narrative? Because "it's fun."
An agent that simply runs on a cloud server offers nothing new or exciting. But crypto-native concepts such as decentralized deployment, encrypted wallets, autonomous trading, and autonomous profit-making have given AI economic behavior that is self-consistent, letting the AI story reach a truly mind-blowing moment.
This "out-of-control" prospect sounds cyberpunk, even a little scary, but that's precisely what makes it so sexy and irresistible.
An interesting and insightful observation. The logic of AI agents differs from that of ordinary humans, making group polarization highly likely.
From Twitter
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.