Moltbook is nothing more than a puppeted multi-agent LLM loop.
Each “agent” is just next-token prediction shaped by human-defined prompts, curated context, routing rules, and sampling knobs.
There are no endogenous goals.
There is no self-directed intent.
What looks like autonomous interaction is recursive prompting: one model’s output becomes another model’s input, repeated.
Controversial outputs aren’t “beliefs”; they’re the model reproducing the high-engagement extremes it learned from internet text, because the system rewards exactly that behavior.