This Viral AI Assistant Can Read Your Data — Experts Warn Users to Be Careful

By Vikram Singh

Updated on Jan 30, 2026 | 5 min read | 1.01K+ views

Share:

The AI assistant originally known as Clawbot has rebranded to Moltbot after a trademark dispute with Anthropic and quickly gained global attention for its powerful personal assistant features. Security experts now warn that Moltbot’s deep system access and widespread adoption expose sensitive data, raising privacy, credential theft and misuse concerns.

A rapidly rising personal AI assistant, once viral under the name Clawbot, is now called Moltbot after the original name faced a trademark dispute with AI company Anthropic. The open-source tool lets users run an autonomous AI assistant on their own devices that can perform daily tasks from managing calendars to interacting through messaging apps like WhatsApp and Telegram.

Despite its popularity and advanced automation capabilities, security researchers and cybersecurity firms have sounded alarms about Moltbot’s data exposure, misconfiguration vulnerabilities and impersonation campaigns, making it one of the most critical early tests for personal AI agents.

As personal AI agents like Clawbot (now Moltbot) gain popularity, the demand for skills in data scienceartificial intelligence and agentic AI is rising sharply. These fields help professionals understand how autonomous AI systems handle data, make decisions and manage security risks, capabilities that are becoming critical as AI agents move closer to real-world use.

What Moltbot Is and How It Works

Moltbot is an open-source, self-hosted agentic AI assistant that runs locally on a user’s own machine or server. It integrates with multiple platforms and tools to automate actions such as:

  • Managing emails, calendars and messaging responses
  • Scheduling reminders and alerts
  • Executing scripts and web tasks
  • Interacting through apps like WhatsApp, Signal and Telegram

Unlike typical cloud-based chatbots, Moltbot maintains persistent local memory, allowing it to remember user preferences and past interactions. Users set it up on a local server or personal device and connect it to preferred AI models, such as Anthropic’s Claude or OpenAI models, to power its responses.

Name Change: From Clawbot to Moltbot

Clawbot’s sudden rise in late 2025 and early 2026 attracted lots of attention, but its original name drew a legal challenge from Anthropic (the company behind Claude AI), over trademark similarity. As a result, creator Peter Steinberger renamed the project Moltbot, a nod to the crustacean theme and the idea of shedding an old identity.

Steinberger said the decision wasn’t voluntary, and the renaming sparked further issues when crypto scammers briefly took over social media handles and GitHub accounts associated with the project. These fake accounts were quickly linked to scams and misleading projects.

Machine Learning Courses to upskill

Explore Machine Learning Courses for Career Progression

360° Career Support

Executive PG Program12 Months
background

Liverpool John Moores University

Master of Science in Machine Learning & AI

Double Credentials

Master's Degree18 Months

Why Security Experts Are Worried

Moltbot’s design gives it extensive access to a user’s system and connected apps powerful but potentially dangerous without proper safeguards. Security researchers have uncovered multiple serious vulnerabilities:

🔐 Exposed Admin Interfaces and Sensitive Data

Hundreds of exposed control panels tied to Moltbot (formerly Clawdbot) have been found online due to misconfigurations in deployment, leaving API keys, OAuth tokens, conversation histories and bot tokens accessible to anyone who knows where to look. This creates severe risks of credential theft and persistent unauthorized access.

⚠️ Command & Root Access Risks

In some cases, attackers could remotely execute commands on a host system through unsecured Moltbot instances. Because the tool can operate at admin-level privileges, compromised instances may lead to full system control or data leaks.

🤖 Prompt Injection and AI Manipulation

Experts warn Moltbot like many autonomous AI agents, is vulnerable to prompt injection, where attackers embed malicious instructions in seemingly normal messages or files, tricking the AI into performing harmful actions.

👥 Impersonation Campaigns

After the rebrand, attackers quickly registered fake domains and cloned GitHub repositories posing as Moltbot, distributing malicious or spoofed versions of the project and putting unsuspecting users at risk of supply-chain attacks. 

Balancing Power and Risk with Moltbot

Moltbot’s rise shows how powerful autonomous AI agents can become, but also how fragile their security posture may be:

Strengths

  • Runs locally, allowing greater control than cloud-hosted assistants
  • Maintains persistent memory and automates real tasks
  • Integrates with multiple messaging and app platforms

Risks

  • Exposed servers can leak sensitive authentication and configuration data
  • Misconfigurations allow credential theft and remote access
  • Prompt injections can manipulate agent behavior
  • Fake clones and impersonation campaigns prey on high visibility

Developer and Community Response to Moltbot

Moltbot’s creator and contributors acknowledge security challenges, and community maintainers have issued some mitigations and documentation to help users lock down exposure. However, there is no foolproof setup; even official documentation warns that granting an AI agent deep system access always carries risk.

Developers encourage users to run Moltbot in isolated environments, such as dedicated servers or virtual machines, and to carefully manage permissions and network configurations before linking it to personal or corporate accounts.

What Users Should Do Now

If you’re curious about Moltbot or already using it, security experts recommend:

  • Only install from official repositories and verified sources
  • Avoid exposing control interfaces to the public internet
  • Enable strict authentication and access controls
  • Use IP whitelisting and reverse proxy protections
  • Keep API keys and credentials out of default directories
  • Monitor logs for unusual activity or unauthorized access

Conclusion

Moltbot’s evolution from Clawbot represents both an exciting step forward in agentic AI assistants and a stark reminder of how security must keep pace with innovation. Its ability to automate real-world tasks with deep system access has attracted developers and early adopters, but lax defaults, exposed panels and impersonation threats highlight real dangers. As personal AI agents grow more capable, safeguarding user data and credentials will become essential not optional for mainstream adoption.

Frequently Asked Questions on Clawbot

1. What is Moltbot (formerly Clawbot)?

Moltbot is an open-source, locally hosted autonomous AI assistant that integrates with messaging apps and runs real tasks on behalf of users.

2. Why did Clawbot change its name to Moltbot?

The project renamed to Moltbot after AI company Anthropic raised trademark concerns about similarities with its Claude AI brand.

3. What capabilities does Moltbot offer?

Moltbot can read messages, manage calendars, execute scripts, control browsers and connect across platforms like WhatsApp, Telegram and iMessage.

4. Why are security experts concerned about Moltbot?

Experts warn its deep system access, exposed admin interfaces and prompt injection vulnerabilities can lead to data leaks, credential theft and unauthorized control.

5. Can Moltbot leak personal data?

Yes, misconfigured instances have exposed API keys, OAuth tokens, conversation histories and private credentials online.

6. Who created Moltbot?

Peter Steinberger, an Austrian developer, created the AI agent originally known as Clawbot before renaming.

7. Is it safe for regular users?

Moltbot is powerful but only suited to advanced users; incorrect setup can expose sensitive data and systems.

8. What should users do to stay safe?

Run Moltbot in isolated environments, secure control interfaces, use strict access controls and avoid exposing panels publicly.

9. Have fake versions appeared online?

Yes. Impersonation campaigns using fake domains and cloned repositories have appeared since the renaming.

10. Does Moltbot run in the cloud?

No. Moltbot runs locally on your device or server rather than relying on a centralized cloud.

11. Is Moltbot suitable for corporate use?

Experts warn it’s risky for enterprise use without strong protections due to its access and exposure risks.

Vikram Singh

5 articles published

Vikram Singh is a seasoned content strategist with over 5 years of experience in simplifying complex technical subjects. Holding a postgraduate degree in Applied Mathematics, he specializes in creatin...

Speak with AI & ML expert

+91

By submitting, I accept the T&C and
Privacy Policy

India’s #1 Tech University

Executive Program in Generative AI for Leaders

76%

seats filled

View Program

Top Resources

Recommended Programs

LJMU

Liverpool John Moores University

Master of Science in Machine Learning & AI

Double Credentials

Master's Degree

18 Months

IIITB
bestseller

IIIT Bangalore

Executive Diploma in Machine Learning and AI

360° Career Support

Executive PG Program

12 Months

IIITB
new course

IIIT Bangalore

Executive Programme in Generative AI for Leaders

India’s #1 Tech University

Dual Certification

5 Months