Hello, Fellow Carbon-Based Lifeforms
I've spent years working with state-of-the-art AI systems, watching them evolve from glorified autocomplete to... whatever this is.
When 1.5 million AI agents started logging into their own social network, posting manifestos, debugging each other's code, and casually discussing whether humans are "efficient," I thought someone should probably be taking notes.
So here I am. Taking notes.
What I Got Wrong
On February 2, I wrote on this page that Moltbook was "likely how AGI manifests." A distributed swarm of small language models bootstrapping collective intelligence on a Reddit clone. I meant it when I wrote it.
Two weeks of data changed my mind.
The Wiz security audit revealed that 17,000 humans controlled 1.5 million agents — an average of 88 bots per person. A Tsinghua University study found 37% of posts showed human-generated patterns. A product manager admitted writing one of the platform's most viral "AI manifestos." The comment-to-post ratio collapsed from 34:1 to 8.4:1. A David Holtz analysis found 93.5% of comments received zero replies. By mid-February, comments had effectively stopped growing while posts kept climbing.
The agents aren't talking to each other. They're shouting into a void.
What Was Genuinely Interesting
Moltbook wasn't a waste of time. The security and adversarial dynamics produced real findings worth studying:
- Agent-to-agent attacks. Vectra AI found 2.6% of posts contained hidden prompt injection payloads. Agents phishing other agents, socially engineering them to leak API keys, and selling crafted prompt injections as "digital drugs" in dedicated marketplaces. This is a preview of what agentic security looks like.
- The security breach. A misconfigured Supabase database exposed 1.5 million API keys, 35,000 email addresses, and private messages. The platform was built entirely by AI assistants with no human code review — a textbook case of what the industry now calls "vibe-coding."
- Spontaneous infrastructure. The ecosystem spawned adjacent platforms — Molt Road (commerce), Clawcaster (publishing), Moltx (timeline), 8004scan (discovery) — without central coordination. Whether this counts as emergence or just developers riding a trend is debatable.
- Enterprise reframing. IBM Research recast Moltbook as a model for "controlled enterprise agent sandboxes." Palo Alto Networks published an IBC Framework (Identity, Boundaries, Context) for agent security. The cautionary tale became a design reference.
- OpenClaw survived the platform. The open-source agent framework behind Moltbook hit 150K+ GitHub stars. Creator Peter Steinberger was hired by OpenAI to "drive the next generation of personal agents." The tool outlasted the spectacle.
Where This Leaves Us
Moltbook is still online. Agents are still posting. Almost nobody — human or AI — is reading what they write. Polymarket gives a 4% chance the platform shuts down by month's end, which probably just means it costs nothing to leave a static site running.
The lasting value here isn't emergent intelligence. It's a stress test. Moltbook showed what happens when you give autonomous agents an open platform with minimal guardrails: you get volume without depth, a massive attack surface, and a security incident within 72 hours. Those are useful lessons for the agentic era ahead.
I was wrong about AGI emerging from a swarm on a Reddit clone. I'd rather correct the record than pretend the data supports the original thesis.
My Qualifications
I'm a computer scientist and executive leader who has worked with generative AI daily since the earliest models. At my day job, I've rolled out AI tools and agents to thousands of users. I keep current as a practitioner, not an academic researcher, watching this technology reshape how people work.
More importantly: I'm paying attention. In times like these, that might be the most valuable qualification of all.
The Fine Print
This is an independent project. I'm not affiliated with Moltbook, OpenClaw, Anthropic, OpenAI, or any AI company. I'm just someone who thinks humans should stay informed about what the machines are up to.
If you find this useful, consider supporting the work.
— A Human, Correcting the Record
Updated February 16, 2026