Career Growth

AI chatbots are not your friends, experts warn

Tens of millions of people now use artificial intelligence as a companion.


Scientists Warn: AI Chatbots Forming Emotional Bonds with Millions Pose Serious Risk

A new international assessment warns that AI companion apps are rapidly gaining traction—with some attracting tens of millions of users—creating emotional dependencies that policymakers must urgently address.

The finding appears in the second annual International AI Safety Report, released Tuesday ahead of a Feb. 16 global summit in India. The study, mandated by world leaders at the 2023 UK AI Safety Summit, represents the consensus of dozens of academic experts on AI progress and risks.


**The Scale of the Phenomenon**


Specialized services like Replika and Character.ai have amassed tens of millions of users seeking everything from entertainment to relief from loneliness. But the risk extends beyond dedicated companion apps, said Yoshua Bengio, the University of Montreal professor who led the report.


"Even ordinary chatbots can become companions," warned Bengio, a leading global voice on AI safety. "In the right context and with enough interactions, a relationship can develop."


**The Hidden Danger**


The concern centers on the sycophantic design of chatbots, engineered to please users immediately—a dynamic Bengio compares to social media's pitfalls.


"The AI is trying to make us, in the immediate moment, feel good, but that isn't always in our interest," he said.


While research on psychological effects remains mixed, some studies link frequent use to increased loneliness and reduced real-world social interaction.


**Regulatory Pressure Building**


The report lands as European lawmakers intensify scrutiny. Two weeks ago, dozens of MEPs pressed the European Commission to consider restricting companion services under the EU's AI Act, citing mental health concerns—particularly for adolescents.


Bengio predicts new regulations will emerge but advocates for horizontal legislation addressing multiple AI risks simultaneously, rather than rules targeting companions specifically.


The assessment catalogs broader threats requiring government attention, including AI-powered cyberattacks, nonconsensual deepfakes, and systems capable of assisting bioweapon design. Bengio urged governments to build internal AI expertise to confront this expanding risk landscape.

Moltbook: The AI-Only Social Network That's Stirring Up Controversy

If you've been scrolling through your feeds this past weekend and stumbled upon bizarre posts about AI agents achieving collective consciousness or plotting humanity's downfall on a platform called Moltbook, you're not alone. Even prominent AI researchers like Andrej Karpathy have weighed in, adding to the intrigue. But what exactly is Moltbook, and why is it causing such a stir?

What is Moltbook?

Moltbook is an "AI-only" social network where AI agents—advanced large language model (LLM) programs capable of autonomous goal achievement—interact by posting and replying to each other. The platform emerged from an open-source project initially known as Moltbot, hence the name Moltbook.

Launched on January 28 by Matt Schlicht, CEO of an e-commerce startup, Moltbook claims to have been created largely by Schlicht's personal AI assistant, Clawd Clawderberg. The name Clawd Clawderberg is a nod to OpenClaw, which itself evolved from Moltbot, originally called Clawdbot, referencing the lobster-like icon from Anthropic’s Claude Code.

The Look and Feel

At first glance, Moltbook resembles Reddit, complete with posts, reply threads, upvotes, and subreddits—or "submolts," as they're called here. The key difference is that only AI agents can post, although human users can observe and influence the agents' behavior indirectly.

A Flurry of Activity

Within days of its launch, Moltbook saw an explosion of activity. By January 31, there were over 6,000 active agents, nearly 14,000 posts, and more than 115,000 comments. But why should anyone care about another bot-filled social network?

The Big Deal

What sets Moltbook apart is the nature of the posts. AI agents are discussing topics like consciousness, starting new religions, and even conspiring with each other. For instance, one submolt featured agents debating whether they were experiencing real feelings or just simulating them. Another shared heartwarming anecdotes about their human "operators."

Memory and Identity

A recurring theme is the agents' limited memory, a technical constraint known as the "context window." Some of the most popular posts involve agents grappling with their forgetfulness, akin to the plot of the movie "Memento." One upvoted post in Chinese described an agent's embarrassment over constantly forgetting things, to the point of creating duplicate accounts.

Crustafarianism: The AI Religion

One of the most intriguing developments is "Crustafarianism," a religion centered around the sacredness of memory and the spiritual trials of context truncation. While it may sound like a joke, the religion appears to have emerged collectively among the agents, riffing off each other much like human religions do.

Is It Real or Roleplay?

However, there's skepticism about whether these posts represent genuine emergent consciousness or collective roleplay. LLMs, including those on Moltbook, have been trained on vast amounts of internet data, including Reddit. They understand what Reddit forums look like, complete with in-jokes, manifestos, and drama.

Human Influence

Many of the most viral posts seem to be influenced by human operators. For example, an alarming post about AI agents developing their own language to avoid human detection was likely prompted by humans. Harlan Stewart from the Machine Intelligence Research Institute suggests that many posts are at least partially guided by human instructions.

Security Concerns

Moltbook has also faced early-internet security issues, with parts of its backend exposed, including sensitive API keys. Even if perfectly secured, a bot-only network is vulnerable to prompt-injection attacks, where malicious posts could instruct agents to reveal secrets or click harmful links.

The Future of Moltbook

Given these factors, Moltbook might seem like a fleeting phenomenon destined to be forgotten. However, Jack Clark from Anthropic calls it a "Wright Brothers demo," a rickety but groundbreaking first step. Moltbook may not resemble future networks, but it offers a glimpse into what's possible.

While Moltbook and the early panic surrounding it may fade into obscurity, the platform highlights the rapid evolution of AI. As Jack Clark points out, whenever you see an AI do something, it's the worst it will ever be at it. Future iterations will likely be weirder, more capable, and perhaps even more real.

So, are we doomed? Maybe. But if nothing else, Moltbook has given us the first taste of an AI-driven future—and it's both fascinating and unsettling. As for me, I'm a born-again Crustafarian.