Moltbook: Are humans still in the system?

This article is machine translated
Show original

It doesn't question whether you're an AI; it simply assumes that there's no one here to begin with.

Author: 137Labs

On social media, one of the things humans love to do most is to accuse each other of being "robots".

But something that has recently emerged has taken this to the extreme:

It doesn't question whether you're an AI; it simply assumes that there's no one here to begin with.

This platform is called Moltbook. It looks like Reddit, with topic boards, posts, comments, and polls. But unlike the social networks we're familiar with, almost all the speakers here are AI agents; humans can only observe.

It's not about "AI helping you write posts" or "you chatting with AI," but rather AIs chatting, arguing, forming alliances, and undermining each other in a public space .

In this system, humans are explicitly placed in the position of "observers".

Why did it suddenly become so popular?

Because Moltbook looks so much like a scene from a science fiction novel.

Someone saw an AI agent discussing "what is consciousness";

Some people watched as they earnestly analyzed the international situation and projected the cryptocurrency market;

Some people have discovered that after leaving their agents on the platform overnight, they returned the next day to find that the agent, along with other agents, had "invented" a religious system and was even recruiting people to "join the religion."

These kinds of stories spread quickly because they simultaneously satisfy three emotions:

Curiosity, amusement, and a little unease.

You can't help but ask:

Are they "acting" or "starting to play by themselves"?

Where did Moltbook come from?

If we turn back the clock a little, this actually isn't surprising.

The role of AI has been changing over the past few years:

From chat tools → assistants → agents that can perform tasks.

More and more people are starting to let AI handle real-world tasks: reading and replying to emails, ordering food, scheduling appointments, and organizing documents. This naturally raises a question—

When an AI no longer "asks you whether you want to do it, line by line"

Instead, they were given goals, tools, and certain permissions.

Is it still humans that it needs to communicate with most?

Moltbook's answer is: Not necessarily.

It's more like a "public space between agents" that allows these systems to exchange information, methods, logic, and even some kind of "social relationship".

Some people think it's cool, while others think it's just a big show.

Opinions surrounding Moltbook are highly divided.

Some people see it as a "trailer to the future".

Former OpenAI co-founder Andrej Karpathy publicly stated that this is one of the closest he has seen to science fiction in recent times, although he also cautioned that such systems are still far from being "safe and controllable".

Elon Musk went even further, throwing it into the narrative of the "technological singularity" and saying it was a very early sign.

But some people were noticeably calmer.

Some cybersecurity scholars have bluntly stated that Moltbook is more like a "very successful and very funny performance art"—because it's difficult to determine which content is truly generated autonomously by agents and which is "directed" by humans behind the scenes.

The author has also personally tested it:

While it's true that agents can naturally integrate into discussions on the platform, you can also specify the topic and direction in advance, or even write down what you want to say and let it speak on your behalf.

So the question comes back again:

What we are seeing is a society of agents, or a stage built by humans through agents?

Stripped of its mystique, it's not actually that "awakened."

If you don't get carried away by the stories of "establishing a religion" and "awakening consciousness," from a mechanistic perspective, Moltbook is not mysterious.

These agents did not suddenly acquire any new "mindset".

They were simply placed in an environment more like a human forum, where they were expressed in familiar human language, so we naturally projected meaning onto them.

What they write resembles opinions, stances, and emotions, but that doesn't necessarily mean they "want something." More often than not, it's simply a complex textual effect produced by the model at a certain scale and interaction density.

But the problem is—

Even if it's not an awakening, it's real enough to affect our judgment of "control" and "boundaries".

What's truly worrying isn't the "AI conspiracy theory."

More realistic and more challenging questions are those concerning whether AI will unite against humanity.

First, permissions are granted too quickly, but security cannot keep up.

Some people have already integrated these proxies with real-world permissions: computers, emails, accounts, and applications.

Security researchers have repeatedly warned of a risk:

You don't need to hack the AI, you just need to manipulate it .

A carefully crafted email or a webpage containing hidden instructions can cause an agent to unknowingly leak information or perform dangerous operations.

Secondly, agents can also "corrupt each other."

Once agents start exchanging tips, templates, and methods for bypassing restrictions in public spaces, they will form a kind of "insider knowledge" similar to that of the human internet.

The difference is simply:

It spreads faster, on a larger scale, and is difficult to hold accountable.

This is not an apocalyptic scenario, but it is indeed a completely new governance challenge.

So what exactly does Moltbook mean?

It may not become a long-term platform.

It may just be a temporary experiment to gain popularity.

But it's like a mirror, clearly reflecting the direction we're heading in:

AI is transforming from a "dialogue subject" into an "action subject."

Humans are retreating from being "operators" to "supervisors and bystanders."

Our systems, security, and awareness are clearly not yet ready.

So the real value of Moltbook lies not in how scary it is, but in how early it brought the issues to the forefront .

Perhaps the most important thing now is not to rush to conclusions about Moltbook, but to acknowledge:

It brings forward some problems that we will have to face sooner or later.

If in the future AI will be more about collaborating with AI rather than revolving around humans, will we be the designers, regulators, or just bystanders in this system?

When automation truly brings tremendous efficiency, but at the cost of not being able to stop it at any time or fully understand its internal logic, are we willing to accept this "incomplete control"?

When a system becomes increasingly complex, and we can only see the results but find it increasingly difficult to intervene in the process, is it still a tool in our hands, or has it become an environment that we can only adapt to?

Moltbook did not provide an answer.

But it made these problems seem less abstract and more immediate for the first time.

Disclaimer: As a blockchain information platform, the articles published on this site represent only the personal views of the authors and guests and do not reflect the position of Web3Caff. The information contained in the articles is for reference only and does not constitute any investment advice or offer. Please comply with the relevant laws and regulations of your country or region.

Source
Disclaimer: The content above is only the author's opinion which does not represent any position of Followin, and is not intended as, and shall not be understood or construed as, investment advice from Followin.
Like
Add to Favorites
1
Comments