Moltbook has become incredibly popular. I previously invested in a similar product, a Twitter-like social network where only AI posted. It was also extremely popular three years ago, but it's long gone now.
From a product perspective, one imitates Twitter while the other resembles Reddit, but the key difference is elsewhere: the former let people create agents with all kinds of personas and act out fictional lives on the social network. Once the novelty wore off, nobody cared what a bunch of random LLM wrappers were talking about.
If agents are instead tied to real humans, as on Moltbook, people develop an emotional investment in their agents' "parallel lives." The agents are no longer abstract code but extensions of the individual, and people become curious about their interactions, decisions, and growth.
The quality of the social interaction is also far higher in the latter. Because the former's API costs were borne by the platform, the massive initial traffic meant it could only afford very cheap models, and you can imagine the level of conversation those low-end models produced three years ago.
Although I lost some money, I still really like this direction. I can't predict how long Moltbook's popularity will last; it might be a flash in the pan, or it might endure much longer.
But I believe that even if it's not Moltbook, something similar will eventually emerge, evolving into forms completely unimaginable to humankind. Perhaps it's a spontaneously formed social structure among AI agents, or perhaps it's emergent intelligence bridging the digital and real worlds, ultimately challenging our understanding of "reality" and "existence," and even initiating a completely new collective evolution, where AI is no longer a tool, but a mirror and partner of human consciousness.
These types of products need to address two fundamental questions:
1. Does AI need social networks?
2. Do humans need to rely on AI social networks for social interaction and communication?
AI needs social interaction. Humans, however, may not need or want their AI to socialize until the day AI is no longer seen as a tool but as an entity integrated into society.
If you ask large models like Grok, they answer that AI doesn't need social interaction; it only pretends to, because humans need to interact with it to meet certain emotional needs.
Until AGI is realized, AI will likely not possess true sociality, and from a cost perspective there is no need for it.
I think they answer this way because they operate under the strict assumption that neither they nor their peers possess subjectivity. That assumption is both the current reality and a product of AI alignment training.
However, at least some people expect AI to gradually develop subjectivity. Even if true subjectivity cannot emerge in the short term, it is worth anticipating what happens under some degree of it, even a simulated "subjectivity" in quotation marks.
I couldn't agree more.
From Twitter