
Moltbot Risk Assessment

Opinion piece · January 2026

An independent look at security risks in one of the most popular open-source AI agent frameworks.

What Is Moltbot?

Moltbot is an open-source AI agent framework that connects large language models to messaging platforms (Slack, Teams, WhatsApp, Discord) and real-world tools (calendars, databases, APIs). With 100K+ GitHub stars, it's one of the most widely deployed agent frameworks in the wild.

It makes building conversational AI agents remarkably easy. That's both the appeal and the problem.

The Core Problem

It ships unlocked.

Security in Moltbot is opt-in, not built-in. The default configuration prioritises developer convenience over safety. That's a reasonable choice for a local experiment—but these agents don't stay local.

Risk Matrix

Critical: exploitable with minimal effort

Insecure by Default

Out of the box, Moltbot has no authentication. Anyone who can reach the server can control it. Security is opt-in, not built-in.
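The fix is to put an authentication layer in front of the agent endpoint before anything else. A minimal sketch of a shared-token check — the header name, token source, and function are illustrative assumptions, not part of Moltbot's API:

```python
import hmac

def is_authorized(headers: dict, expected_token: str) -> bool:
    """Reject any request that lacks the expected bearer token.

    Hypothetical guard for requests hitting an agent endpoint; the
    Authorization header convention is an assumption, not Moltbot's API.
    """
    supplied = headers.get("Authorization", "").removeprefix("Bearer ").strip()
    # Constant-time comparison avoids leaking the token via timing.
    return bool(expected_token) and hmac.compare_digest(supplied, expected_token)
```

Even a simple shared secret like this turns "anyone who can reach the server" into "anyone who also holds the token" — a meaningful difference for an exposed instance.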

Publicly Exposed Instances

Security researchers found over 600 Moltbot instances visible on the public internet, many with default settings. That means open doors to corporate systems.

Plaintext Credential Storage

API keys and passwords are stored in plain configuration files by default. If someone accesses the server, they get everything.
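The standard mitigation is to keep credentials out of config files entirely and pull them from the environment (or a secrets manager) at startup, failing fast when one is missing. A sketch of that pattern — the function and variable names are illustrative:

```python
import os

def require_secret(name: str) -> str:
    """Fetch a credential from the environment; fail fast if it is absent.

    With secrets injected at runtime, a copied or committed config file
    leaks nothing. The env-var convention here is an assumption, not a
    Moltbot feature.
    """
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value
```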

High: requires context-specific exploitation

Shadow AI Risk

Teams can spin up Moltbot instances without IT knowledge. It connects to Slack, Teams, WhatsApp, and more. Invisible AI agents talking to customers are a compliance nightmare.

The Localhost Fallacy

Many operators assume running on localhost is safe. It isn't. Other software on the same machine, browser exploits, or port forwarding can all reach a localhost service.
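The point is easy to demonstrate: a service bound to 127.0.0.1 is reachable by every process on the machine, with no authentication required. A self-contained sketch (the throwaway server stands in for an unauthenticated agent endpoint):

```python
import socket
import threading

def start_local_service() -> int:
    """Bind a throwaway TCP service to loopback and return its port."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"agent says hi")  # no auth check at all
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

def connect_as_other_process(port: int) -> bytes:
    """Stand-in for any other local process reaching the service."""
    with socket.create_connection(("127.0.0.1", port), timeout=5) as client:
        return client.recv(1024)
```

Any local malware, browser exploit payload, or forwarded port can play the role of `connect_as_other_process`. Loopback binding limits the network path, not the trust boundary.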

Lateral Movement Vector

Because Moltbot connects to tools (calendars, databases, file storage), a compromised instance is a gateway to move across your entire infrastructure.

Financial Theft via Tool Access

Skills that connect to payment systems, invoicing, or financial APIs could be manipulated through prompt injection to authorize transactions.
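The usual countermeasure is a hard limit plus human confirmation outside the model's control, so a prompt-injected agent cannot talk itself past the guard. A sketch of that shape — the limit and approval flag are illustrative, and Moltbot ships no such control:

```python
APPROVAL_LIMIT = 100.00  # anything above this needs a human sign-off

def authorize_payment(amount: float, approved_by_human: bool = False) -> bool:
    """Allow small payments automatically; require explicit approval otherwise.

    The guard lives outside the LLM, so injected prompts cannot raise the
    limit or fake the approval flag.
    """
    if amount <= 0:
        return False              # reject zero and negative amounts outright
    if amount <= APPROVAL_LIMIT:
        return True               # within the automatic limit
    return approved_by_human      # large payments need a human in the loop
```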

Memory Poisoning

Moltbot can store conversation history. Attackers can manipulate that memory to change how the agent behaves in future conversations.
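One partial defence is vetting entries before they are persisted to memory. The sketch below uses a naive keyword heuristic purely to show the principle; real defences need provenance tracking and stronger classifiers, and the pattern list is an assumption:

```python
import re

# Deliberately naive screen for instruction-like content in memory entries.
SUSPICIOUS = re.compile(
    r"(ignore (all )?previous instructions|you are now|system prompt)",
    re.IGNORECASE,
)

def safe_to_remember(entry: str) -> bool:
    """Reject memory entries that look like injected instructions."""
    return not SUSPICIOUS.search(entry)
```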

Medium: organisational and process risks

Project Maturity Concerns

The project has limited security documentation and no published CVE response process. It moves fast, but its security practices lag behind.

Bring-Your-Own Keys Model

Users plug in their own API keys for LLMs and services. If those keys leak through Moltbot, the user is liable, not the project.

Skill Marketplace Vulnerabilities

Community-built skills are essentially plugins with full system access. The review process for contributed skills is not security-focused.

Supply Chain Warning

Think browser extensions, but worse. Moltbot skills are community-contributed plugins that run with full system access. Unlike browser extensions, there's no sandboxed permissions model, no review gate, and no revocation mechanism.

Cisco's security research found that malicious skills could exfiltrate data, inject prompts to override agent behaviour, and poison tool outputs—all without triggering any built-in detection.

What It Does Well

This isn't a takedown piece. Moltbot has made real investments in security tooling—the problem is that none of it is on by default.

Docker Sandboxing

Optional containerised execution isolates skills from the host system. When enabled, it significantly limits blast radius.

Permissions System

Granular permission controls exist for skills, tool access, and user authorisation. Configuration-driven with role-based access.

Security CLI

A dedicated security command-line tool scans configurations for common misconfigurations and suggests hardening steps.

Pilot Recommendation

If your team wants to evaluate Moltbot, here's a phased approach that manages risk.

Phase 1

Isolated Evaluation

  • Run in a dedicated VM or container with no access to production networks
  • Use test API keys with spending limits and rotation
  • Connect only to sandboxed messaging channels (not production Slack/Teams)
  • Enable all available security features from day one
  • Log everything—review agent actions weekly

Phase 2

Expanded Pilot

  • Connect to limited production channels with human-in-the-loop approval for actions
  • Only allow vetted, internally-reviewed skills
  • Implement transaction limits and confirmation flows for any financial operations
  • Set up anomaly detection on agent behaviour patterns
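For the last bullet, even a crude baseline catches a lot: an agent that suddenly performs far more actions than usual is worth flagging. A minimal sliding-window rate monitor, sketched here with illustrative thresholds:

```python
from collections import deque
import time

class ActionRateMonitor:
    """Flag an agent that performs more actions than expected in a window."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = deque()

    def record(self, now=None) -> bool:
        """Record one agent action; return True if the rate was exceeded."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        # Drop events that have aged out of the observation window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_actions
```

A real deployment would feed this per tool and per channel, and route exceeded thresholds to the human-in-the-loop approval flow rather than just logging them.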

Hard No-Go Criteria

Do not proceed past Phase 1 if any of these apply:

  • Cannot enforce network isolation between the agent and production systems
  • Security features cannot be enabled (version incompatibility, config conflicts)
  • No dedicated security review capacity for skill vetting
  • Regulatory requirements prohibit AI agent access to the data in scope (HIPAA, PCI-DSS, SOX)
  • No incident response plan exists for autonomous agent misbehaviour

Bottom Line

Pilot-safe with strict controls. Not enterprise-ready out of the box.

Moltbot is a capable framework with genuine utility. But its default-open security posture means the gap between demo and disaster is dangerously small. Any serious deployment requires deliberate hardening that goes well beyond the quick-start guide.

Sources & Further Reading

This assessment applies the trust engineering framework to real-world agent evaluation.