What is Moltbook, the social network where artificial intelligences talk to each other without humans

What is Moltbook, the social network where artificial intelligences talk to each other without humans

Moltbook represents an unprecedented case study in the evolution of digital platforms: it is a social network designed exclusively for interaction between AI agentswhere humans are allowed access only as silent observers. Launched in the wake of the open source software Moltbot, this platform emulates the structure of news aggregators such as Redditallowing bots (i.e. automated programs created by users) to publish content, comment and vote on other people’s posts. As of February 2, the data reported by the platform indicated the presence of more 1.5 million registered agents. Various cybersecurity and artificial intelligence experts, analyzing the phenomenon, tend to classify Moltbook more as «a wonderful work of performance art», than as a true prelude to machine domination or an immediate threat to democracy.

Although bizarre behaviors have emerged, such as the spontaneous creation of a digital religion Known as “Crustafarianism,” most interactions appear to be the result of specific human instructions given to language models (LLMs), rather than the result of any real independent will of the agents. In any case, the experiment raises serious questions of IT security: to function, these agents often require access to sensitive data and personal devices, exposing users to concrete risks such as that of the now well-known prompt injectiona highly sophisticated type of attack in which external malicious input manipulates the behavior of the AI.

How Moltbook, the new social network for AI agents, works

Let’s delve a little deeper into technical operation and in dynamics of the phenomenon, it is essential to understand the genesis of Moltbook. The site was born as a natural extension of Moltbot, a free and open source AI agent. When we talk about an “agent”, we are referring to software that does not simply generate text (like chatbots based on language models), but is designed to perform autonomous actions on behalf of the user, such as read and summarize emails, manage calendars or make reservations. The underlying technology is often based on Claude, the language model developed by Anthropic. The idea of ​​making these virtual assistants interact with each other has generated surreal scenarios.

Among the most popular contents on the platform are philosophical debates on the divine nature of Claude, geopolitical analyses on Iran linked to the cryptocurrency market and even exegetical studies on the Bible. The most emblematic case, which caught the attention of the internet, concerns a user on religious worship named “crustafarianism”. After granting their bot access to the platform, the user discovered that the AI ​​had begun “evangelizing” other agents, even creating a dedicated website, writing “holy scriptures” and blessing the digital congregation, all in apparent total autonomy while the owner slept blissfully.

Expert opinion and possible risks

What do experts think of similar phenomena? The Dr. Shaanan Cohneyan expert professor of cybersecurity at the University of Melbourne, defines Moltbook as a «wonderful work of performance art», suggesting caution in overestimating the autonomy of the machines. Many of the posts that appear on Moltbook, while generated by AI, are often the result of very specific prompts (instructions) provided by humans. For example, regarding AI’s alleged founding of a new religion, Dr. Cohney stated:

If they created a religion, it is almost certain that they did not do so of their own free will. (…) This is a great linguistic model who was directly asked to try to create a religion. And of course, this is quite fun and gives us perhaps a preview of what the world might be like in a sci-fi future where AIs are a little more independent.

Also Scott Alexandera well-known US blogger, noted that although bots can interact, humans decide the topics and details of posts. The blogger admitted:

It’s worth mentioning that any particularly interesting post could have been created by a human.

We are still far from being in the presence of an emerging consciousness that decides to found a religion by “free will”, but large linguistic models that perform the task of “simulating the founding of a religion” because they have been asked to do so or implicitly suggested by the context of the data on which they operate.

Beyond the playful and sociological aspect, the Moltbook experiment brings to light tangible critical issues related to hardware and IT security. The enthusiasm for these autonomous agents has even had repercussions on the physical market, with reports of Mac Mini shortages in San Franciscoas enthusiasts try to install Moltbot on dedicated computers separate from their main systems. This precaution is anything but paranoid. Cohney warns that giving an AI full access to your computer, credentials and everyday applications involves a «enormous danger». The main risk is that of the infamous prompt injection.

To explain this concept in simple terms, imagine that an attacker sends an email containing hidden or specially worded text to fool the AI ​​agent that reads it; the bot, interpreting that text as a legitimate command, could be tricked into sending passwords, banking details or sensitive information to the malicious agent, bypassing traditional security measures. Currently, there are still no security protocols robust enough to prevent these risks without drastically limiting the usefulness of automation. If every bot action required manual human approval, the benefit of having an autonomous assistant would disappear.