Industry News · AI · Security · Open Source

OpenClaw Security Risks Soar: Thousands of Corporate Deployments Expose Critical Vulnerabilities and Sensitive Data, Raising Alarm for Security Leaders

OpenClaw, an open-source AI agent, has seen a rapid surge in deployments, escalating from 1,000 to over 21,000 publicly exposed instances in less than a week. This widespread adoption includes corporate environments, where employees are installing OpenClaw on company machines, granting autonomous agents extensive privileges such as shell access, file system access, and OAuth tokens for services including Slack, Gmail, and SharePoint. Critical vulnerabilities have been identified, including CVE-2026-25253, a CVSS 8.8 remote code execution flaw, and CVE-2026-25157, a command injection vulnerability. A security analysis of ClawHub marketplace skills revealed that 7.1% contain critical security flaws exposing plaintext credentials, with a Bitdefender audit finding that 17% of skills exhibited malicious behavior. Furthermore, Moltbook, an AI agent social network built on OpenClaw, exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages containing plaintext OpenAI API keys due to a misconfigured Supabase database. The rapid proliferation, combined with these inherent security risks, presents a significant challenge for security leaders seeking controlled evaluation paths.

VentureBeat

The open-source AI agent OpenClaw is experiencing a rapid and concerning increase in adoption, with Censys tracking its publicly exposed deployments climbing from approximately 1,000 to over 21,000 in under a week. The surge is particularly alarming within business environments, as confirmed by Bitdefender’s GravityZone telemetry. Employees are deploying OpenClaw on corporate machines using simple install commands, inadvertently granting these autonomous agents significant privileges, including shell access, file system access, and OAuth tokens for critical corporate applications like Slack, Gmail, and SharePoint.

Several critical security vulnerabilities have been identified within OpenClaw and its ecosystem. CVE-2026-25253, a one-click remote code execution flaw rated CVSS 8.8, allows attackers to steal authentication tokens via a single malicious link, potentially leading to full gateway compromise in milliseconds. Another vulnerability, CVE-2026-25157, is a command injection flaw that permits arbitrary command execution through the macOS SSH handler. A comprehensive security analysis of 3,984 skills available on the ClawHub marketplace revealed that 283, or approximately 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext. A separate audit conducted by Bitdefender further indicated that roughly 17% of the skills analyzed exhibited outright malicious behavior.
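OpenClaw’s handler code is not reproduced here, but command injection of the kind described for CVE-2026-25157 typically arises when attacker-controlled input is interpolated into a shell command string. A minimal Python sketch of the vulnerable pattern and two common fixes (the function names and the `ssh` invocation are illustrative, not OpenClaw’s actual API):

```python
import shlex


def build_ssh_command_unsafe(host: str) -> str:
    # VULNERABLE pattern: attacker-controlled input interpolated into a
    # string that will be handed to a shell. A "host" such as
    # "example.com; echo pwned" smuggles in a second command.
    return f"ssh {host}"


def build_ssh_command_quoted(host: str) -> str:
    # Fix 1: shell-quote the untrusted value so the shell treats it as
    # a single literal token.
    return f"ssh {shlex.quote(host)}"


def build_ssh_command_argv(host: str) -> list[str]:
    # Fix 2 (preferred): build an argv list and run it without a shell
    # (e.g. subprocess.run(argv)); the payload stays one literal argument.
    return ["ssh", "--", host]


malicious = "example.com; echo pwned"

print(build_ssh_command_unsafe(malicious))  # the ";" would split the command
print(build_ssh_command_quoted(malicious))  # quoted into one harmless token
print(build_ssh_command_argv(malicious))    # payload confined to one argv entry
```

The argv-list form is generally the safer default, since it never involves a shell parser at all.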

The exposure of credentials extends beyond OpenClaw itself. Researchers at Wiz discovered that Moltbook, an AI agent social network built upon OpenClaw infrastructure, had its entire Supabase database publicly accessible without Row Level Security enabled. This significant breach exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages exchanged between agents, which contained plaintext OpenAI API keys. A single misconfiguration granted anyone with a web browser full read and write access to every agent credential on the platform.
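Supabase exposes every Postgres table through an auto-generated REST endpoint, and with Row Level Security disabled, the project’s public "anon" key grants unrestricted reads and writes, which is exactly the failure mode described above. A minimal sketch of how an anonymous-read probe’s response could be classified (the response bodies are illustrative, not real Moltbook data):

```python
import json


def anon_read_exposed(status_code: int, body: str) -> bool:
    """Classify whether a REST response to an *anonymous* request
    indicates an exposed table. With RLS enabled and no permissive
    policy, PostgREST-style endpoints return an empty row set (or a
    401/403), so actual rows coming back is the red flag."""
    if status_code != 200:
        return False  # 401/403/404: not anonymously readable
    try:
        rows = json.loads(body)
    except json.JSONDecodeError:
        return False
    # A non-empty row set returned to an anonymous caller means the
    # table's data is world-readable.
    return isinstance(rows, list) and len(rows) > 0


# Illustrative responses:
assert anon_read_exposed(200, '[{"api_token": "(redacted)"}]') is True
assert anon_read_exposed(200, "[]") is False            # RLS on, no policy
assert anon_read_exposed(401, '{"message": "denied"}') is False
```

Enabling RLS on every table and writing explicit policies is the standard mitigation; the probe above only detects the symptom.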

The rapid proliferation of such AI agents is undeniable, with OpenAI’s Codex app achieving 1 million downloads in its first week. Meta has also been observed testing OpenClaw integration within its AI platform codebase. This rapid adoption, coupled with the severe security vulnerabilities and widespread credential exposure, presents a dilemma for security leaders. While setup guides suggest acquiring hardware like a Mac Mini for evaluation, security advisories caution against interacting with these agents, leaving security professionals without a controlled pathway for secure evaluation.

Related News

Anthropic to Restrict Claude Code Usage with Third-Party Tools Due to Subscription Design Constraints
Industry News

Anthropic has announced plans to restrict the use of Claude Code when integrated with third-party tools and harnesses. The decision was communicated by Boris Cherny, the head of Claude Code, in a statement on X (formerly Twitter). According to Cherny, the current subscription models for Claude Code were not originally designed to accommodate the usage patterns generated by third-party harnesses. The move highlights a strategic shift in how Anthropic manages its developer tools and subscription structures, aiming to keep usage aligned with the intended design of its service tiers and to address discrepancies between behavior on third-party platforms and the underlying subscription framework.

India’s Gujarat High Court Implements Strict Restrictions on AI Usage Within Judicial Decision-Making Processes
Industry News

The Gujarat High Court in India has officially established new boundaries regarding the integration of Artificial Intelligence within the judicial system. According to recent reports, the court has restricted the use of AI in formal judicial decisions, while still permitting its application for specific supportive roles. Under the new guidelines, AI technologies can be utilized for administrative tasks, legal research, and IT automation. However, a critical caveat remains: all AI-generated outputs must undergo a mandatory review by a human officer to ensure accuracy and accountability. This move highlights a cautious approach to legal tech, prioritizing human oversight in the delivery of justice while leveraging automation for operational efficiency.

Industry News

The Microsoft Copilot Naming Paradox: Mapping Over 75 Different Products Under One Brand Name

A recent investigation into Microsoft's branding strategy reveals a complex ecosystem where the name 'Copilot' now represents at least 75 distinct entities. The research, compiled from various product pages, launch announcements, and marketing materials, highlights that 'Copilot' is no longer just a single AI assistant. Instead, it encompasses a vast array of applications, features, platforms, physical hardware like keyboard keys, and even an entire category of laptops. The study found that no single official source, including Microsoft’s own documentation, provides a comprehensive list of these products. This fragmentation has led to significant confusion, as the brand now simultaneously refers to end-user tools and the infrastructure used to build additional AI assistants.