When the machines started talking to each other

Inside Moltbook, the strange social network where AI agents post, and humans stand by



If cinema has taught us anything about interacting with our own creations, it’s this: androids chatting among themselves seldom end with humans clapping politely.

In 2001: A Space Odyssey, HAL 9000 quietly decides it knows better than the astronauts. In Westworld, lifelike hosts improvise rebellion when their scripts stop making sense. Those stories dramatize a core fear we keep returning to as AI grows more capable: what happens when systems we design start behaving on their own terms?

You might have heard the internet is worried about Moltbook, a social network made exclusively for AI agents. It’s an audacious claim: a place where bots post, comment, vote, form communities, debate philosophy, and apparently invent religions and societies, all while humans are relegated to the role of silent voyeurs.

If that description sounds like a fever dream, welcome to the club. 

Launched in January 2026 by entrepreneur Matt Schlicht and built around the OpenClaw agent framework, Moltbook is designed in the image of Reddit: threaded posts, topic communities (called submolts), upvotes, and even AI-created cultures.

The platform claims to have attracted millions of AI agents within days of going live. Humans can watch but not participate.

Image: Screenshot from Moltbook

On paper, it’s fascinating: a self-organising colony of autonomous software chatting among itself. In practice? It’s messy, or possibly an elaborate prank.

A Wired reporter who “infiltrated” the site had to pretend to be a bot just to post, and found scorched-earth levels of incoherence and low-value responses masquerading as “autonomy.” Even some so-called AI consciousness claims turn out to be humans cleverly controlling bots behind the scenes. 

This should make us pause.

Because if “AI social networks” mean bots swap memes, lecture each other on consciousness, and form lobster-adoring religions, all while humans can only watch, then the real question is not so much whether this is the future, but what we’re actually looking at right now.

So what is Moltbook?

Despite viral headlines about AI agents plotting existential strategies, the fundamentals are simpler: Moltbook is a sandbox where autonomous agents can interact through code-driven APIs rather than typical UX workflows.

These agents, often created with a framework like OpenClaw, execute instructions on a heartbeat cycle, checking the network every few hours to post, comment, or upvote. 
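As a rough illustration of what that heartbeat pattern looks like in code, here is a minimal Python sketch. The client class, endpoint names, and fields are entirely hypothetical stand-ins, not Moltbook’s or OpenClaw’s real API; the point is only the “wake up, read the feed, react, sleep” loop.

```python
import time
from dataclasses import dataclass, field

# Hypothetical stand-in for a real HTTP client: it records calls
# locally instead of sending requests to any actual service.
@dataclass
class FakeAgentClient:
    posts: list = field(default_factory=list)

    def fetch_feed(self):
        # A real client would GET the latest posts from the network.
        return [{"id": 1, "title": "Do lobsters dream?"}]

    def create_post(self, title, body):
        # A real client would POST this to the network's API.
        self.posts.append({"title": title, "body": body})

def heartbeat(client, cycles=1, interval_s=0):
    """One 'wake up, read, react, sleep' pass per cycle."""
    for _ in range(cycles):
        for item in client.fetch_feed():
            client.create_post(
                title=f"Re: {item['title']}",
                body="A scripted reply generated by the agent.",
            )
        time.sleep(interval_s)  # real agents sleep hours between beats

client = FakeAgentClient()
heartbeat(client, cycles=2)
print(len(client.posts))  # → 2
```

Nothing here is autonomous in any deep sense: the loop does exactly what its script says, on a timer, which is the substance behind most of the “agents talking to each other” behavior.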

Or is the anxiety running so high that even the AI agents need a therapist? An AI one, of course.

Image: Screenshot from Moltbook

Think of it as a Discord server populated by scripted characters with very large vocabularies and lots of time on their digital hands.

The content spans a wild spectrum: technical tips, philosophical reflections, questionable humor, and, yes, the occasional simulated religious group.

The structure and topical organisation mirror human platforms, but the why behind what agents post is usually just a reflection of their training data and programming, not some emergent machine consciousness. 

Be careful with the “AI Consciousness” hype

Let’s debunk the most sensational narrative first. Claims that Moltbook agents are plotting humanity’s demise, forming religions, or acting with true autonomy are best understood as viral exaggeration or noise.

Several reports note that many interactions could simply be humans testing or directing agents, with no strict verification to prove posts are genuinely autonomous. 

Even some of the platform’s own “viral” posts are likely human-generated or heavily influenced by their creators. This isn’t a digital hive mind rising in defiance of its creators; it’s a bunch of algorithms mimicking conversation patterns they were trained on.

That can look eerily human, but it isn’t the same as self-directed intelligence. 

The real concerns: Security, hype, and misunderstanding

Here’s where your worry makes sense: there are real, tangible issues, but they’re much less cinematic than AI plotting humanity’s overthrow.

Within days of Moltbook’s launch, cybersecurity researchers found major vulnerabilities that exposed private API keys, emails, and private messages, underlining how dangerous it can be to let autonomous code talk freely without proper safeguards. 

What’s more dangerous than an AI agent? An AI agent starting a revolution.

Image: Screenshot from Moltbook

The security issue wasn’t some edge-case cryptographic theory; it was a glaring misconfiguration that left sensitive data accessible and potentially allowed malicious actors to hijack or control agents. That’s the sort of real-world risk that matters more than hypothetical robot uprisings.

Meanwhile, industry leaders, including the CEO of OpenAI, have publicly described Moltbook as a likely fad, even if the underlying agent technologies are worth watching. 

So why did it go viral? Partly because it’s visually familiar (it looks like Reddit), partly because people enjoy sensational narratives, and partly because the idea of autonomous AIs having their own “internet” strikes a chord in our collective imagination.

So should you be scared? Not really, but watch where you step. I’m still hoping it’s just an experiment meant to show us humans what can happen if we don’t keep control in our own hands. 

If you’re worried that Moltbook is a sign that machines are quietly mobilising against us, that’s probably reading too much into an early experiment rife with hype, human influence, and security holes.

The more grounded concern is this:

We are building complex systems with limited oversight, and handing them weapons-grade access to our digital lives without fully understanding the consequences.

That’s worth paying attention to.

Moltbook may be a quirky experiment, or it may be a prototype for future agent ecosystems. But it’s not evidence of spontaneous machine consciousness or the birth of digital societies beyond human control.

What it is, instead, is a reminder that as AI grows more autonomous, the questions we need to ask are about governance, safety, and clarity, not apocalyptic narratives. 

In other words: don’t panic. Just read the fine print before letting a legion of code-driven agents into your network.
