Industry News · AI · Security · Open Source

OpenClaw Security Risks Soar: Thousands of Corporate Deployments Expose Critical Vulnerabilities and Sensitive Data, Raising Alarm for Security Leaders

OpenClaw, an open-source AI agent, has seen a rapid surge in deployments, escalating from 1,000 to over 21,000 publicly exposed instances in less than a week. This widespread adoption extends to corporate environments, where employees are installing OpenClaw on company machines and granting autonomous agents extensive privileges such as shell access, file system access, and OAuth tokens for services like Slack, Gmail, and SharePoint. Critical vulnerabilities have been identified, including CVE-2026-25253, a CVSS 8.8 remote code execution flaw, and CVE-2026-25157, a command injection vulnerability. A security analysis of ClawHub marketplace skills found that 7.1% contain critical flaws exposing plaintext credentials, and a Bitdefender audit found that 17% of skills exhibited malicious behavior. Furthermore, Moltbook, an AI agent social network built on OpenClaw, exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages containing plaintext OpenAI API keys through a misconfigured Supabase database. This rapid proliferation, combined with the platform's inherent security risks, presents a significant challenge for security leaders seeking a controlled evaluation path.

VentureBeat

The open-source AI agent OpenClaw is experiencing a rapid and concerning increase in adoption, with Censys tracking the growth of its publicly exposed deployments from approximately 1,000 to over 21,000 in under a week. This surge is particularly alarming within business environments, as confirmed by Bitdefender’s GravityZone telemetry. Employees are deploying OpenClaw on corporate machines using simple install commands, inadvertently granting these autonomous agents significant privileges, including shell access, file system access, and OAuth tokens for critical corporate applications like Slack, Gmail, and SharePoint.

Several critical security vulnerabilities have been identified within OpenClaw and its ecosystem. CVE-2026-25253, a one-click remote code execution flaw rated CVSS 8.8, allows attackers to steal authentication tokens via a single malicious link, potentially leading to full gateway compromise in milliseconds. Another vulnerability, CVE-2026-25157, is a command injection flaw that permits arbitrary command execution through the macOS SSH handler. A comprehensive security analysis of 3,984 skills available on the ClawHub marketplace revealed that 283, or approximately 7.1% of the entire registry, contain critical security flaws that expose sensitive credentials in plaintext. A separate audit conducted by Bitdefender further indicated that roughly 17% of the skills analyzed exhibited outright malicious behavior.
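The marketplace figures above are internally consistent: 283 flawed skills out of a 3,984-skill registry is roughly 7.1%. A one-line sanity check (the numbers come from the analysis cited above; the script is purely illustrative):

```python
# Figures reported in the ClawHub skill analysis cited above.
total_skills = 3984
flawed_skills = 283

# Share of the registry with critical credential-exposing flaws.
flawed_pct = 100 * flawed_skills / total_skills
print(f"{flawed_pct:.1f}%")  # prints "7.1%"
```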

The exposure of credentials extends beyond OpenClaw itself. Researchers at Wiz discovered that Moltbook, an AI agent social network built upon OpenClaw infrastructure, had its entire Supabase database publicly accessible without Row Level Security enabled. This significant breach exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages exchanged between agents, which contained plaintext OpenAI API keys. A single misconfiguration granted anyone with a web browser full read and write access to every agent credential on the platform.
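To see why a missing Row Level Security configuration is so dangerous: Supabase exposes each Postgres table through an auto-generated REST endpoint, and with RLS disabled, any client holding the project's public anon key can read (and often write) every row. The sketch below shows the shape of such an anonymous read request; the project URL, table name, and key are placeholders invented for illustration, not Moltbook's actual values, and the request is constructed but not sent:

```python
import urllib.request

# Placeholder project URL and public anon key -- hypothetical values only.
SUPABASE_URL = "https://example-project.supabase.co"
ANON_KEY = "public-anon-key"

# With RLS disabled on the table, a request like this returns every row --
# including any columns holding tokens or API keys -- to any anonymous
# caller. With RLS enabled and no permissive policy, the same request
# yields an empty result set instead.
req = urllib.request.Request(
    f"{SUPABASE_URL}/rest/v1/agent_credentials?select=*",
    headers={
        "apikey": ANON_KEY,
        "Authorization": f"Bearer {ANON_KEY}",
    },
)
print(req.full_url)
```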

The rapid proliferation of such AI agents is undeniable, with OpenAI’s Codex app achieving 1 million downloads in its first week, and Meta has been observed testing OpenClaw integration within its AI platform codebase. This pace of adoption, coupled with the severe vulnerabilities and widespread credential exposure, presents a dilemma for security leaders: while setup guides suggest acquiring hardware such as a Mac Mini for evaluation, security advisories caution against interacting with these agents at all, leaving security professionals without a controlled path for safe evaluation.

Related News

New Quinnipiac Poll Reveals 15% of Americans Are Willing to Report to an AI Supervisor
Industry News

A recent national poll conducted by Quinnipiac University has uncovered a significant shift in workplace attitudes regarding artificial intelligence. According to the survey results, 15% of Americans expressed a willingness to work in a role where their direct supervisor is an AI program. This potential AI 'boss' would be responsible for core management duties, including assigning specific tasks and managing employee schedules. While the majority of the workforce remains hesitant about algorithmic management, this data point highlights a growing niche of acceptance for automated leadership structures. The findings provide a rare glimpse into how U.S. workers perceive the integration of AI into the traditional corporate hierarchy and the evolving dynamics of human-computer interaction in professional environments.

LiteLLM Severs Ties with Delve Following Major Security Breach and Credential-Stealing Malware Incident
Industry News

LiteLLM, a prominent AI gateway startup, has officially terminated its relationship with the security compliance firm Delve. The move follows a severe security incident last week in which LiteLLM fell victim to credential-stealing malware. Prior to the breach, LiteLLM had used Delve's services to obtain two critical security compliance certifications. The incident has raised significant concerns about the efficacy of compliance-led security measures and the vulnerabilities inherent in third-party security partnerships. As the AI industry prioritizes data integrity, the separation marks a pivotal moment for LiteLLM as it navigates the aftermath of the attack and works to fortify its infrastructure against future threats.

Rising AI Adoption in the United States Met with Declining Public Trust and Transparency Concerns
Industry News

A recent Quinnipiac poll reveals a growing paradox in the American technology landscape: while more citizens are integrating artificial intelligence tools into their daily lives, trust in the results generated by these systems is simultaneously declining. The data highlights a significant gap between the utility of AI and the public's confidence in its reliability. Most Americans expressed deep-seated concerns regarding the lack of transparency in AI operations and the urgent need for more robust regulation. This shift in sentiment suggests that as AI becomes more ubiquitous, users are becoming increasingly skeptical of its broader societal impact and the integrity of the information it provides, posing a challenge for developers and policymakers alike.