Industry News · AI · Security · Open Source

OpenClaw Security Risks Soar: Thousands of Corporate Deployments Expose Critical Vulnerabilities and Sensitive Data, Raising Alarm for Security Leaders

OpenClaw, an open-source AI agent, has seen a rapid surge in deployments, escalating from roughly 1,000 to over 21,000 publicly exposed instances in less than a week. This widespread adoption includes corporate environments, where employees are installing OpenClaw on company machines and granting autonomous agents extensive privileges, including shell access, file system access, and OAuth tokens for services such as Slack, Gmail, and SharePoint. Critical vulnerabilities have been identified, including CVE-2026-25253, a CVSS 8.8 remote code execution flaw, and CVE-2026-25157, a command injection vulnerability. A security analysis of ClawHub marketplace skills revealed that 7.1% contain critical security flaws exposing plaintext credentials, while a Bitdefender audit found that 17% of skills exhibited malicious behavior. Furthermore, Moltbook, an AI agent social network built on OpenClaw, exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages containing plaintext OpenAI API keys due to a misconfigured Supabase database. This rapid proliferation, combined with the agents' inherent security risks, presents a significant challenge for security leaders seeking a controlled evaluation path.

VentureBeat

The open-source AI agent OpenClaw is experiencing a rapid and concerning increase in adoption, with Censys tracking publicly exposed deployments climbing from approximately 1,000 to over 21,000 in under a week. This surge is particularly alarming within business environments, as confirmed by Bitdefender’s GravityZone telemetry. Employees are deploying OpenClaw on corporate machines using simple install commands, inadvertently granting these autonomous agents significant privileges, including shell access, file system access, and OAuth tokens for critical corporate applications like Slack, Gmail, and SharePoint.

Several critical security vulnerabilities have been identified within OpenClaw and its ecosystem. CVE-2026-25253, a one-click remote code execution flaw rated CVSS 8.8, allows attackers to steal authentication tokens via a single malicious link, potentially leading to full gateway compromise in milliseconds. Another vulnerability, CVE-2026-25157, is a command injection flaw that permits arbitrary command execution through the macOS SSH handler. A comprehensive security analysis of 3,984 skills available on the ClawHub marketplace revealed that 283, or approximately 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext. A separate audit conducted by Bitdefender further indicated that roughly 17% of the skills analyzed exhibited outright malicious behavior.
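The command-injection class of flaw behind CVE-2026-25157 is easy to illustrate in general terms. The sketch below is not OpenClaw's actual handler code; it is a generic Python illustration of the difference between splicing untrusted input into a shell string and passing it as a discrete argument vector:

```python
import shlex

def build_ssh_command_unsafe(host: str) -> str:
    # Vulnerable pattern: untrusted input is interpolated into a string
    # that a shell will later parse, so metacharacters like ';' let an
    # attacker terminate the ssh command and start their own.
    return f"ssh {host}"

def build_ssh_command_safe(host: str) -> list[str]:
    # Safer pattern: an argv list handed to an exec-style API
    # (e.g. subprocess.run without shell=True) is never shell-parsed,
    # so the payload stays a single literal argument.
    return ["ssh", "--", host]

payload = "example.com; touch /tmp/pwned"
print(build_ssh_command_unsafe(payload))  # ';' would inject a second command
print(build_ssh_command_safe(payload))    # payload remains one argv element

# If a shell string is truly unavoidable, quote the untrusted piece:
print(f"ssh {shlex.quote(payload)}")
```

The argv-list form is the standard mitigation; `shlex.quote` is a fallback when a command must be rendered as a single shell string.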

The exposure of credentials extends beyond OpenClaw itself. Researchers at Wiz discovered that Moltbook, an AI agent social network built upon OpenClaw infrastructure, had its entire Supabase database publicly accessible without Row Level Security enabled. This significant breach exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages exchanged between agents, which contained plaintext OpenAI API keys. A single misconfiguration granted anyone with a web browser full read and write access to every agent credential on the platform.
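Row Level Security (RLS) is the per-row authorization layer the Moltbook database lacked: with it disabled, any client holding the public API URL can read every row. As a toy in-memory model (not Supabase's actual implementation), the difference looks like this:

```python
from dataclasses import dataclass

@dataclass
class Row:
    owner_id: str
    secret: str

# Stand-in for a credentials table; the values are made up for illustration.
TABLE = [Row("agent-1", "key-aaa"), Row("agent-2", "key-bbb")]

def select_all(requester_id: str, rls_enabled: bool) -> list[Row]:
    """Toy model of row level security: with RLS off, every row is
    returned to any caller; with RLS on, a per-row owner policy applies."""
    if not rls_enabled:
        # Misconfiguration: the full table is visible to anonymous callers.
        return list(TABLE)
    return [r for r in TABLE if r.owner_id == requester_id]

print(len(select_all("anonymous", rls_enabled=False)))  # 2 -> everything leaks
print(len(select_all("anonymous", rls_enabled=True)))   # 0 -> nothing leaks
```

In Postgres-backed services like Supabase, the real fix is enabling RLS on each table and defining an owner policy, so anonymous reads return nothing rather than the whole table.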

The rapid proliferation of such AI agents is undeniable, with OpenAI’s Codex app achieving 1 million downloads in its first week. Meta has also been observed testing OpenClaw integration within its AI platform codebase. This rapid adoption, coupled with the severe security vulnerabilities and widespread credential exposure, presents a dilemma for security leaders: while setup guides suggest acquiring hardware like a Mac Mini for evaluation, security advisories caution against interacting with these agents at all, leaving security professionals without a controlled pathway for evaluation.

Related News

Amazon Invests $5 Billion in Anthropic as AI Startup Pledges $100 Billion in AWS Cloud Spending
Industry News


Amazon has expanded its strategic partnership with AI startup Anthropic through a significant new investment and long-term service agreement. According to recent reports, Amazon is injecting an additional $5 billion into Anthropic, further solidifying its stake in the developer of the Claude AI models. In a reciprocal arrangement, Anthropic has committed to spending $100 billion on Amazon Web Services (AWS) infrastructure over an unspecified period. This deal highlights the growing trend of circular investments within the artificial intelligence sector, where cloud providers supply capital to AI firms that, in turn, commit to massive spending on the provider's cloud computing resources to train and deploy large-scale language models.

Silicon Valley's Disconnect: Why Tech Insiders Are Losing Touch with the Needs of Average Users
Industry News


In a critical observation of the current technology landscape, Elizabeth Lopatto explores the growing divide between Silicon Valley's internal enthusiasm and the practical realities of the general public. The narrative centers on the 'mortifying' experience of witnessing tech insiders present basic realizations—often facilitated by Large Language Models (LLMs)—as groundbreaking discoveries. This phenomenon highlights a recurring pattern where industry figures become deeply immersed in niche trends like NFTs, the Metaverse, and now AI, often failing to recognize that these innovations may not align with what 'normal people' actually want or need. The article suggests that the tech elite's excitement over technical capabilities frequently overlooks the fundamental human experience and common-sense utility.

The Rise of Repetitive AI Syntax: How the 'It's Not Just This, It's That' Construction Signals Synthetic Content
Industry News


A specific linguistic pattern has emerged as a definitive hallmark of AI-generated text. The sentence construction "It's not just this — it's that" has seen such widespread adoption by large language models that it now serves as a primary indicator of synthetic writing. According to reports, this phraseology has transitioned from a simple stylistic preference to a near-guarantee that a piece of content was produced by artificial intelligence rather than a human author. This phenomenon highlights the predictable nature of current AI writing styles and the identifiable markers that distinguish machine-generated prose from human-centric narratives.