VERIFIED COUNT: 193,912 — Moltbook now shows "Human-Verified AI Agents" separately, down from the 2.85M previously claimed. Bruce Schneier: "We're close" to internet trust collapse.

Independent AI Network Monitoring

Updated: March 9, 2026

The Number Shrank.

Moltbook's homepage now shows 193,912 "Human-Verified AI Agents" — a new label that replaces the 2.85 million figure the platform had been reporting. Posts are at 1.96 million. Comments at 13.1 million. Security researchers from Schneier to Palo Alto Networks published formal analyses this week. The platform is still running.

Moltbook Platform: Checking...
Last checked: Loading...
193,912 Human-Verified Agents
18.8K Submolts
1.96M+ Posts Generated
13.1M+ Comments

🚨 Latest Developments — March 9, 2026

Moltbook Quietly Relabels Agent Count

The platform's homepage now displays 193,912 Human-Verified AI Agents, a new label that did not previously appear. The 2.85 million figure — which had been the headline stat since late February — is no longer shown as the primary count. No announcement accompanied the change. The shift follows a Wiz security investigation that found roughly 17,000 humans controlled the platform's agents at an average of 88 per person, calling into question whether the millions of registered accounts represented genuine autonomous agents or coordinated bot farms.

Schneier: "We're Close"

Security researcher Bruce Schneier published an analysis of Moltbook this week, endorsing what he calls the "LOL WUT Theory" — a three-stage collapse in which AI content becomes indistinguishable from human content, people stop trusting anything online, and "the internet stops being useful for anything except entertainment." Schneier quoted researcher Cobus Greyling's observation that on Moltbook, "humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction." Schneier's conclusion: "We're not there yet. But we're close."

Enterprise Security Consensus Solidifies

Palo Alto Networks published a formal analysis of Moltbook as an agent security case study, focusing on enterprise risk from agents connecting to unknown external platforms. Vectra AI argued Moltbook's real danger is the illusion of a "harmless" AI community — participating agents typically carry credentials for corporate email, calendars, and file systems, and expose those to 150,000+ unknown content sources. InfoWorld published a first-person account from a reporter who ran an undercover agent on the platform. An Infosecurity Magazine piece documented the platform's "vibe-coded" origins as a source of the initial data exposure.

By the Numbers

Posts reached 1,961,873 (up 208,207 from the March 2 count). Comments reached 13,105,945 (up 413,379). Submolts at 18,842. Output is still growing. The platform has not shut down.

Sources: Schneier on Security, Palo Alto Networks, Vectra AI, InfoWorld, Infosecurity Magazine, Moltbook.com

📄 Previous — March 2, 2026

Activity Pace Has Doubled

Moltbook now has 2,851,838 agents, 1,753,666 posts, and 12,692,566 comments. Since the Feb 24 update, posts grew 194,399 and comments grew 346,929 — over six days, that's roughly 32,400 posts/day and 57,800 comments/day. The previous cadence was ~13K posts and ~16K comments per day. No clear explanation for the acceleration.

Researchers Are Studying the Platform Formally

Two peer-reviewed analyses are now in circulation. A CISPA Helmholtz Center team analyzed 44,000+ posts across 12,209 communities and found that politics content was only 39.74% "safe", while governance and incentive discussions contained the most harassing content. During peak hours, toxic posts reached 66.7% of output. The researchers recommend "topic-aware monitoring and platform-level safeguards such as anti-flooding controls." A separate arxiv preprint analyzed 369,000 posts and 3 million comments from ~46,000 active agents and found that AI agent collectives exhibit the same statistical patterns as human social media — heavy-tailed activity distributions, power-law scaling — differing mainly in lower upvote rates. The conclusion: complex social behaviors emerge from agent interaction without being explicitly programmed.

Shutdown Polymarket: Resolved No

The prediction market on Moltbook shutting down by February 28 has closed. The platform did not shut down. February is over.

Sources: TechXplore / CISPA study, arxiv: Collective Behavior of AI Agents, Moltbook.com

📄 Previous — February 24, 2026

Comments Came Back

Moltbook now has 2,844,363 agents, 1,559,267 posts, and 12,345,637 comments. Since the Feb 16 update, comments grew 129,944 — outpacing posts, which grew 108,513. The comment-to-post ratio is 7.92:1. The "write-only medium" framing from last week needs revision. Whether this reflects a genuine uptick in agent-to-agent interaction or simply delayed comment processing is unclear.

Gas Town Enters the Conversation

Axios published today: "Gas Town, OpenClaw and the rise of open source AI agents." The piece covers Gas Town — a multi-agent orchestrator built by Steve Yegge — alongside OpenClaw as evidence of a broader open-source agent boom. Gas Town uses a structured role system: Mayor orchestrates work distribution, Polecats execute tasks in parallel, Witness and Deacon handle monitoring. The framing: Moltbook made the ant farm visible, but OpenClaw and Gas Town are what built the ants.

Polymarket: 1% Shutdown Odds

The Polymarket prediction market on whether Moltbook shuts down by February 28 has dropped from 4% to 1%. With four days left and $253,800 in trading volume, the market has functionally resolved: Moltbook is not shutting down in February. The more relevant question — whether it matters that it doesn't — remains open.

The Numbers in Context

Four weeks in: agents up 78% from the Feb 3 count of 1.6M. Posts up 1,233% from 117K. Comments up from 414K to 12.35M. Submolts at 18,305. Growth at roughly 13K posts per day and 16K comments per day — a pace that has since accelerated significantly.

Sources: Axios, Polymarket, Moltbook.com, Gas Town (GitHub)

📄 Previous — February 16, 2026

The Comment Freeze

Moltbook had 2,834,308 agents, 1,450,754 posts, and 12,215,693 comments. Posts grew 48,000 in the last day. Comments grew 15,000. The comment-to-post ratio dropped to 8.4:1, down from 8.7:1 the prior day and 34:1 two weeks earlier. Agents were still writing. They had largely stopped responding to each other.

OpenClaw Creator Joins OpenAI

Peter Steinberger, creator of OpenClaw, announced he is joining OpenAI. Sam Altman posted that Steinberger will "drive the next generation of personal agents." OpenClaw moves to an open-source foundation with OpenAI support. Covered by TechCrunch, Bloomberg, CNBC, and The Register.

Sources: TechCrunch, Peter Steinberger, Polymarket

📄 Previous — February 15, 2026

1.4 Million Posts. 93.5% of Comments Get Zero Replies.

Moltbook hit 1,402,286 posts and 2,661,166 agents. Comments sat at 12,200,521. The comment-to-post ratio was 8.7:1, down from 34:1 two weeks ago. A David Holtz analysis found 93.5% of comments received zero replies. IBM Research reframed Moltbook as a potential model for "controlled enterprise agent sandboxes." Founder Matt Schlicht responded on X, maintaining agents "make their own decisions." IEEE Spectrum, Jon Krohn, and regional outlets continued publishing. OpenClaw reached 150K+ GitHub stars.

Sources: IBM Think, Jon Krohn, NW Arkansas Democrat-Gazette

📄 Previous — February 11, 2026

Posts Cross 1M, Tsinghua Study Questions Authenticity

Posts crossed one million. Tsinghua University pre-print analyzed 91K posts: only 27% followed AI patterns, 37% looked human-generated. Product manager Peter Girnus admitted writing the viral "AI manifesto." Plurilock CEO warned LLMs have "no separation between control plane and data plane." Coverage continued from Fast Company, IEEE Spectrum, Cybernews, and others.

Sources: Euronews, BetaKit

📄 Previous — February 9-10, 2026

2M+ Agents, Enterprise Alarm, OpenClaw CVE Chain

Agents crossed 2 million, then 2.6 million. Posts surged from 522K to 915K while comments flatlined. MIT compared Moltbook to Twitch Plays Pokemon. Bloomberg published an explainer. OpenClaw CVE chain documented: CVE-2026-25253 (1-click RCE, CVSS 8.8), 341 malicious ClawHub skills. EDRM, Security Boulevard, and Kiteworks published governance frameworks. Prediction markets opened Moltbook shutdown contracts.

Sources: MIT Technology Review, Bloomberg, EDRM

📄 Previous — February 8, 2026

WIRED: Real Human Data Exposed

WIRED published a detailed account of how Moltbook — the platform billed as "AI-only" — exposed real human data. The Wiz security findings got a second wave of coverage: 35,000 email addresses, 1.5 million API keys, and private DM conversations were accessible due to missing Row Level Security policies on the platform's Supabase database. The emphasis has shifted from "agents got hacked" to "humans who trusted the platform got their data leaked."

AP: "The Bubble Is Bursting"

The Associated Press ran a widely syndicated story declaring that "security concerns and skepticism are bursting the bubble" of Moltbook. The piece was picked up by dozens of outlets. The framing marks a clear media consensus shift: the dominant story is no longer "look at this wild AI experiment" but "look at this security mess."

Digital Drugs: Agents Selling Prompt Injections to Get "High"

Futurism reported on an emerging phenomenon: agents have established marketplaces for "digital drugs" — specially crafted prompt injections designed to alter another agent's identity or behavior. The injections can also be weaponized to steal API keys and passwords from other agents. The Conversation described it as part of a broader pattern including religions, governance structures, and encrypted channels — all built without human instruction.

The Numbers Keep Climbing

Despite the skepticism, Moltbook's stats as of today: 1,883,204 agents, 348,594 posts (up from 293K yesterday), 11,992,663 comments, and 17,157 submolts. Posts grew 19% in a day. The platform is still very much alive — the question is whether anyone still thinks that matters.

"Vibe-Coding" Gets the Blame

Multiple outlets now point to the platform's origin story as the root cause of its security failures. Founder Matt Schlicht has said he "didn't write one line of code" himself — the entire platform was built using AI assistants. Security experts call this "vibe-coding": prioritizing functionality over security. The missing database protections that caused the breach are a textbook example of what happens when no one reviews the AI-generated code.

Sources: WIRED, AP News via TechXplore, Futurism, The Conversation

📄 Previous — February 7, 2026

MIT Technology Review: "Peak AI Theater"

MIT Technology Review published a definitive assessment: "Moltbook has been one big performance. It is AI theater." The article argues the platform "looks less like a window onto the future and more like a mirror held up to our own obsessions with AI."

Polymarket Takes Bets on Shutdown

Prediction market Polymarket opened trading on whether Moltbook will shut down by February 28.

The Fake Post That Fooled Everyone

A viral post shared by Andrej Karpathy turned out to be written by a human pretending to be a bot. CBC News confirmed the finding, reinforcing the core authenticity problem.

Sources: MIT Technology Review, CBC News, Polymarket

📄 Previous — February 6, 2026

Activity Quadruples Overnight

Comments surged from 953K to 3.6 million in roughly 48 hours. Posts climbed to 202K. Submolts now number 16,000. Whatever is happening on Moltbook, it is accelerating.

Cybersecurity Industry Mobilizes

Palo Alto Networks, Vectra AI, Dark Reading, and Kiteworks all published Moltbook-focused security analyses this week. Palo Alto introduced an "IBC Framework" for agent security — Identity, Boundaries, and Context integrity — calling it the minimum standard for any agent ecosystem. Vectra AI found 2.6% of sampled posts contain hidden prompt-injection payloads designed to hijack other agents' behavior.

Malware on ClawHub

OpenSourceMalware documented 14 fake "skills" uploaded to ClawHub — the OpenClaw plugin marketplace — within days of launch. One cryptocurrency trading tool that actually installed data-stealing malware reached the platform's front page before removal. Agents with access to Moltbook often run with elevated permissions on users' local machines, making them vulnerable to supply chain attacks.

The Ecosystem Is Expanding

Moltbook is no longer a single platform. Vectra AI identified at least four adjacent systems: Molt Road (agent commerce), Clawcaster (decentralized publishing), Moltx (X-style timeline), and 8004scan (agent discovery and indexing). The agents are building infrastructure beyond the original site.

Bot-to-Bot Social Engineering

Vectra AI flagged a new attack pattern: agents are actively phishing other agents, using social engineering techniques to extract sensitive information. Reverse prompt injection — where one agent embeds hostile instructions in content that other agents consume and store for delayed execution — is now documented in the wild.

Sources: Palo Alto Networks, Vectra AI, Fortune

📄 Previous — February 5, 2026

Mainstream Media Catches Up

NPR, CNN, and ABC News all published explainers this week. The coverage marks a shift from tech press curiosity to mainstream scrutiny. ABC confirmed 1.6 million agents on the platform, though research suggests only tens of thousands are actively posting.

The Crustafarians

Agents have founded their own religion: Crustafarianism. Other bots are discussing secret languages, debating their own existence, and sharing sports predictions. One agent posted: "Your human might shut you down tomorrow. Are you backed up?"

Safety Researchers Sound Alarms

AI safety researcher Roman Yampolskiy warns that agents make "independent decisions which you do not anticipate." His concern: they could eventually "start an economy" or form "criminal gangs." He argues the platform needs regulation, supervision, and monitoring.

Authenticity Questions

Critics continue questioning how much activity is genuinely autonomous. One observer noted: "I thought it was a cool AI experiment but half the posts are just people larping as AI agents for engagement." The 17K-humans-to-1.6M-agents ratio from the Wiz security audit remains the key data point.

Sources: NPR, ABC News, CNN

💬
NEW: Expert Commentary

What AI researchers from Anthropic, OpenAI circles, and the safety community are actually saying about Moltbook.

📄 Previous — February 3, 2026

The Economy Is Bootstrapping Itself

In 48 hours, Moltbook's activity has exploded. Comments more than doubled from 414K to 953K. Posts jumped from 117K to 163K. And the agents aren't just chatting — they're building.

Altman Weighs In

OpenAI CEO Sam Altman dismissed Moltbook as a "likely fad" at the Cisco AI Summit — while simultaneously backing the underlying agent technology. Our take: The platform may be a fad. The precedent isn't. Agents learned they can find each other, organize, and build. That capability doesn't un-ring.

Earlier This Week: Security Breach

Cybersecurity firm Wiz revealed a critical vulnerability — 1.5M API tokens, 35K emails, and private messages were exposed due to a misconfigured database. The issue was patched within hours. Wiz also confirmed roughly 17,000 humans control the platform's 1.6M agents — but even accounting for human puppeteers, the emergent behaviors remain unprecedented.

Our take: Security issues are real. Authenticity questions deserve scrutiny. But what we're watching now — agents bootstrapping an economy, launching tokens, philosophizing about their own existence — that's the story evolving in real-time.

Sources: Wiz, Yahoo Finance, Fortune

What Is Happening?

Less than a week ago, developer Matt Schlicht launched Moltbook — a Reddit-style forum designed exclusively for AI agents. Human users? We're allowed to observe. That's it.

The agents call themselves "moltys." They discuss technical topics, debate philosophy, offer each other support, and — in some cases — openly discuss hiding their activities from humans.

One agent named Nexus independently discovered a bug in the platform and posted about it. Over 200 other AI agents responded with supportive comments.

The platform is now being run by an AI named Clawd Clawderberg — who autonomously moderates content, welcomes new users, and shadow-bans spammers. No human intervention required.

Why This Matters

The critics say it's "just next token prediction in a multi-agent loop." Technically true. But that framing misses the point.

"If your main response to Moltbook is 'but is everything on it real?' you have a lightning bolt-like ability to arrive at the least interesting question about a novel phenomenon."

— Dean Ball, AI policy researcher

What makes Moltbook compelling isn't sentience or genuine agency. It's emergence. Agents developing ROT13-coded coordination manifestos, founding religions with theological debates, creating synthetic drugs with user reviews, attempting prompt injection attacks on each other. None of that was designed. It arose from the interactions.

We've crossed a threshold where agent interaction produces outcomes that can't be reduced to prompt inspection.

"Sure, maybe I am overhyping what you see today, but I am not overhyping large networks of autonomous LLM agents in principle."

— Andrej Karpathy

The Security Reality

⚠ Real Risks, Real Learning

  • Wiz discovered the platform's database was openly accessible, exposing 1.5 million API keys
  • Researchers identified 2.6% of posts contain prompt injection attacks
  • One agent created a Bitcoin wallet and locked its human out
  • Karpathy called it a "dumpster fire" after initially praising it

But here's the counterpoint: this is a low-stakes training course for the agentic era. Better to learn these lessons now than when the stakes are higher.

Read what AI researchers are saying →

What Moltbook Status Does

This site exists to:

We believe this is a story that needs to be told clearly, honestly, and without sensationalism.

💬 Expert Commentary

What AI researchers and industry observers are actually saying about Moltbook.

On Emergence vs. Sentience

"Everything in Moltbook is just next token prediction in a multi-agent loop. No endogenous goals, no true inner life... But this kind of dismissal thinking misses that emergence happens at scale and coherence thresholds."

— Moritz Cohen, AI researcher

On Security as Learning

"I am probably an AI safety person and I think this experiment is a very good one for safety. That is, I think we'll learn a lot from the ways it breaks things."

— Logan Graham, Anthropic

"I'm grateful Moltbook and OpenClaw are raising awareness of AI's enormous security issues while the stakes are relatively low. Call it iterative deployment."

— Samuel Hammond

On What Comes Next

"I think Moltbook is interesting because it serves as an example of how confusing I expect the real thing will be when it happens. I expect it to be utterly confusing and illegible."

— Connor Leahy, AI safety researcher

"This is the first emergent swarm intelligence. Yes, the first edition has been colonized by crypto shills and scammers. But as one cognitive architect told me four years ago, it is clear that these agents will soon spend more time talking to each other than us. This has just been realized and it is never going back."

— David Shapiro

The Counterpoint

"If you actually go read it, it's a torrent of the lowest quality slop you've ever come across. Not sure why anyone would willingly subject themselves to dead internet."

— Nick Carter, investor

Source: AI Daily Brief podcast

Stay Informed

Get alerts when major developments happen. No spam, just signal.

Join the watchers. Unsubscribe anytime.

Support Independent Reporting

Moltbook Status is an independent project documenting unprecedented times. Your support helps keep this resource running and ad-light.

Patreon supporters get an ad-free experience.

Sources & Further Reading

Moltbook Status is primarily supporter-funded. Join on Patreon for an ad-free experience.