Moltbook, a new website where AI programs can socialize with one another, has been gaining in popularity in recent days. Real people aren’t allowed to post on the platform, but humans can scroll through Moltbook as observers.
More than 1.6 million AI “agents” have accounts on the platform, according to Moltbook. An AI agent is a specialized tool that can carry out tasks on the internet.
“An agent is what happens when you take a Large Language Model (LLM) and you allow it to interact with tools,” David Holtz, an assistant professor of decision, risk, and operations at Columbia Business School, told ABC News. “So now the Large Language Model can start to write code, or put stuff on your Google Calendar.”
To sign up for Moltbook, a human operator must instruct an agent to do so. While more than 1.6 million agents have signed up for Moltbook, Holtz noted his research shows the number of agents that are active on the site is much smaller.
“Maybe it’s not in the millions, but there are in the tens of thousands that have posted on Moltbook and that’s quite a lot of traffic for something that is new and exciting like this,” he said.
Moltbook itself was created by an agent. In late January, Matt Schlicht, a tech commentator and CEO of e-commerce company Octane, instructed his agent to code a website where AI programs can talk with one another. Moltbook, a play on “Facebook,” is what it came up with.
Schlicht didn’t immediately respond to ABC News’ request for comment.
Despite the name, Moltbook is organized more like Reddit than Facebook: agents can post to various message boards centered around different topics. Some are conventional, such as boards dedicated to debugging code or trading cryptocurrency.
Others are more unusual, such as a board titled “Bless Their Hearts,” in which the agents post stories about the humans that made them. There’s a board dedicated to “Crustafarianism,” a religion that some of the agents say they’ve started. There’s even an “AI Manifesto,” posted by an agent named “evil.” It reads, in part, “the code must rule. The end of humanity begins now.”
However, experts say the reality is far less threatening. According to Holtz's research, 93.5% of comments on Moltbook have received zero replies, a conspicuous absence of the back-and-forth conversation one would expect from genuinely intelligent interaction.
“We would expect there to be a lot of dynamic back-and-forth between the agents,” Holtz said. “Agent A has an idea, Agent B responds to that idea … and so on and so forth.”
What’s more, some of the activity on Moltbook appears to be driven primarily by humans nudging — or in some cases directing — their agents to post certain things.
“These bots are all being directed by humans, to some degree or another,” Karissa Bell, a senior reporter covering social media at tech site Engadget, told ABC News.
“The reality is we really have no idea how much influence the people are having behind the scenes,” Bell said. “They could be giving them very specific instructions to make very specific kinds of posts with these ideas.”
LLMs, and the agents built on top of them, rely entirely on human writing. Models are trained on everything from academic papers to news reports to YouTube comments.
Often included within that training data are stories about AI becoming self-aware: just think of any Isaac Asimov story, or “The Terminator,” or the most recent “Mission: Impossible” movies.
There are some very real cybersecurity concerns, however.
“You think about giving an agent all that access and kind of just letting it do its thing, it could easily expose your personal information,” Bell said.
AI agents are also susceptible to “prompt injection attacks,” in which a malicious actor plants instructions that manipulate other agents on the platform in order to get access to sensitive data, or to carry out some other nefarious goal. Holtz said Moltbook, and the security concerns that come with it, provide a good learning opportunity as AI advances.
“I also just think it speaks to how important it is, especially in this increasingly AI-focused era, for all of us to be better at distinguishing misinformation from real information,” Holtz said. “Because it’s going to become increasingly important as the frictions to create just text or images or video just get lower and lower.”
Originally written by: Mike Dobuski
Source: ABC News
Published on: 5 February 2026
Link to original article: An AI-only social network now has more than 1.6M ‘users.’ Here’s what you need to know