Industry News · AI · Security · Open Source

OpenClaw Security Risks Soar: Thousands of Corporate Deployments Expose Critical Vulnerabilities and Sensitive Data, Raising Alarm for Security Leaders

OpenClaw, an open-source AI agent, has seen a rapid surge in deployments, escalating from 1,000 to over 21,000 publicly exposed instances in less than a week. This widespread adoption includes corporate environments, where employees are installing OpenClaw on company machines, granting autonomous agents extensive privileges like shell access, file system access, and OAuth tokens for services such as Slack, Gmail, and SharePoint. Critical vulnerabilities have been identified, including CVE-2026-25253, a CVSS 8.8 remote code execution flaw, and CVE-2026-25157, a command injection vulnerability. A security analysis of ClawHub marketplace skills revealed that 7.1% contain critical security flaws exposing plaintext credentials, with a Bitdefender audit finding 17% of skills exhibited malicious behavior. Furthermore, Moltbook, an AI agent social network built on OpenClaw, exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages with plaintext OpenAI API keys due to a misconfigured Supabase database. This rapid proliferation and inherent security risks present a significant challenge for security leaders seeking controlled evaluation paths.

VentureBeat

The open-source AI agent OpenClaw is seeing a rapid and concerning rise in adoption: Censys has tracked its publicly exposed deployments climbing from approximately 1,000 to over 21,000 in under a week. The surge is particularly alarming in business environments, where Bitdefender’s GravityZone telemetry confirms that employees are deploying OpenClaw on corporate machines with simple install commands, inadvertently granting these autonomous agents significant privileges, including shell access, file system access, and OAuth tokens for critical corporate applications such as Slack, Gmail, and SharePoint.

Several critical security vulnerabilities have been identified within OpenClaw and its ecosystem. CVE-2026-25253, a one-click remote code execution flaw rated CVSS 8.8, allows attackers to steal authentication tokens via a single malicious link, potentially leading to full gateway compromise in milliseconds. Another vulnerability, CVE-2026-25157, is a command injection flaw that permits arbitrary command execution through the macOS SSH handler. A comprehensive security analysis of 3,984 skills available on the ClawHub marketplace revealed that 283, or approximately 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext. A separate audit conducted by Bitdefender further indicated that roughly 17% of the skills analyzed exhibited outright malicious behavior.
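The command-injection class behind CVE-2026-25157 is easy to demonstrate in miniature. The sketch below is illustrative only, using `echo` as a hypothetical stand-in for a URL handler rather than OpenClaw's actual macOS SSH handler code: interpolating untrusted input into a shell string lets an attacker smuggle in a second command, while passing an argument vector keeps the same input inert.

```python
import subprocess

# Attacker-controlled input, e.g. a hostname embedded in a crafted link.
payload = "hello; echo INJECTED"

# VULNERABLE: the payload is pasted into a shell command line, so the shell
# parses the ";" and executes the attacker's "echo INJECTED" as a second command.
unsafe = subprocess.run(f"echo {payload}", shell=True,
                        capture_output=True, text=True)

# SAFE: an argv list bypasses shell parsing; the payload arrives at the
# program as one literal argument and is never interpreted as a command.
safe = subprocess.run(["echo", payload], capture_output=True, text=True)

print(unsafe.stdout)  # prints "hello" and then "INJECTED" -- injection succeeded
print(safe.stdout)    # prints the payload verbatim -- injection neutralized
```

The fix for this vulnerability class is the same in any language: never build command strings from untrusted input; pass arguments as a structured vector.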

The exposure of credentials extends beyond OpenClaw itself. Researchers at Wiz discovered that Moltbook, an AI agent social network built upon OpenClaw infrastructure, had its entire Supabase database publicly accessible without Row Level Security enabled. This significant breach exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages exchanged between agents, which contained plaintext OpenAI API keys. A single misconfiguration granted anyone with a web browser full read and write access to every agent credential on the platform.
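Leaks like the plaintext OpenAI keys in Moltbook's agent messages are exactly what a basic secret scan can catch before data ships. The sketch below is a minimal illustration, assuming only that OpenAI-style secret keys begin with `sk-`; the regex and sample messages are hypothetical, not the detection rules Wiz used.

```python
import re

# Illustrative pattern: OpenAI-style secret keys start with "sk-". Production
# scanners (e.g. gitleaks, trufflehog) use broader, vendor-specific rule sets.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def find_plaintext_keys(messages: list[str]) -> list[int]:
    """Return the indices of messages that contain an API-key-shaped string."""
    return [i for i, msg in enumerate(messages) if KEY_PATTERN.search(msg)]

sample = [
    "hey, here is my key: sk-abc123def456ghi789jkl012",  # leaked secret
    "meeting moved to 3pm",                              # clean
]
print(find_plaintext_keys(sample))  # [0]
```

A scan like this only mitigates the symptom, of course; the root cause at Moltbook was the missing Row Level Security policies, which Supabase (via PostgreSQL) supports natively.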

The rapid proliferation of such AI agents is undeniable: OpenAI’s Codex app reached 1 million downloads in its first week, and Meta has been observed testing OpenClaw integration within its AI platform codebase. This pace of adoption, coupled with severe security vulnerabilities and widespread credential exposure, presents a dilemma for security leaders. Setup guides suggest acquiring hardware such as a Mac Mini for evaluation, while security advisories caution against interacting with these agents at all, leaving security professionals without a controlled pathway for safe evaluation.

Related News

Datawhale Launches 'Easy-Vibe': A Modern Step-by-Step Programming Course for the 2026 Vibe Coding Era
Industry News

Datawhale has introduced "easy-vibe," a pioneering educational project on GitHub designed to guide beginners through the complexities of modern programming. Centered on the emerging concept of "vibe coding" for the year 2026, the repository offers a structured, step-by-step curriculum. As a trending project in the developer community, easy-vibe aims to redefine the introductory experience for new coders by focusing on contemporary practices and intuitive mastery. The project is positioned as the first of its kind to offer a progressive path toward mastering the modern programming landscape, signaling a significant shift in how technical skills are acquired in an evolving digital environment.

Hugging Face Unveils Strategic Building Blocks for Foundation Model Training and Inference on AWS Infrastructure
Industry News

On May 11, 2026, Hugging Face announced a new initiative titled 'Building Blocks for Foundation Model Training and Inference on AWS.' This development focuses on providing a structured framework for developers and enterprises to manage the complex lifecycle of large-scale AI models within the Amazon Web Services (AWS) ecosystem. By focusing on both the training and inference phases, the announcement highlights a comprehensive approach to cloud-based AI development. While the initial report focuses on the foundational components, it signals a significant step in the ongoing collaboration between Hugging Face and AWS to simplify the deployment of foundation models for a broader range of users.

OpenAI Launches Daybreak: A New AI Initiative for Proactive Vulnerability Detection and Automated Patching
Industry News

OpenAI has officially introduced Daybreak, a specialized AI initiative designed to identify and remediate security vulnerabilities before they can be exploited by malicious actors. Building upon the Codex Security AI agent released in March, Daybreak develops comprehensive threat models tailored to an organization's specific codebase. By focusing on potential attack paths and validating likely vulnerabilities, the system aims to automate the detection of high-priority security risks. This move positions OpenAI as a direct competitor to existing security-focused AI models like Claude Mythos, emphasizing a proactive approach to cybersecurity through automated threat modeling and validation. The initiative represents a significant step in leveraging AI to secure software infrastructure against emerging digital threats.