Industry News · Cybersecurity · AI · Data Security

Attackers Exploit CX Platform AI Blind Spots to Compromise 700+ Organizations, Bypassing Approved SOC Defenses

A critical security blind spot in Customer Experience (CX) platforms, often overlooked by Security Operations Centers (SOCs), has allowed attackers to compromise more than 700 organizations. Attackers poison the data fed into CX platform AI engines, which then trigger automated workflows connected to sensitive systems such as payroll, CRM, and payment systems. The August 2025 Salesloft/Drift breach exemplified this: by stealing OAuth tokens, attackers accessed Salesforce environments across numerous organizations, including Cloudflare and Palo Alto Networks, then scanned the stolen data for AWS keys and plaintext passwords, all without deploying malware. Security leaders often miscategorize these platforms, failing to recognize their deep integration with critical business systems. The gap is exacerbated by the fact that while 98% of organizations have DLP programs, only 6% dedicate resources to them, and 81% of intrusions now rely on legitimate access rather than malware. Cloud intrusions surged 136% in the first half of 2025, underscoring the urgent need to address input integrity once AI is integrated into workflows.

VentureBeat

Customer Experience (CX) platforms, which process billions of unstructured interactions annually through survey forms, review sites, social feeds, and call center transcripts, feed these vast datasets into AI engines. Those AI engines in turn trigger automated workflows that interact with critical business systems such as payroll, CRM, and payment systems. A significant security blind spot has emerged: the tools Security Operations Center (SOC) leaders already run do not inspect the data ingested by these CX platform AI engines. Attackers have identified and exploited this gap by "poisoning" the data, effectively making the AI perform malicious actions on their behalf.

The Salesloft/Drift breach in August 2025 serves as a clear illustration of this attack vector. During the incident, attackers compromised Salesloft’s GitHub environment and stole Drift chatbot OAuth tokens. This unauthorized access allowed them to infiltrate Salesforce environments across more than 700 organizations, including prominent names like Cloudflare, Palo Alto Networks, and Zscaler. The attackers then scanned the stolen data for sensitive credentials such as AWS keys, Snowflake tokens, and plaintext passwords. Notably, no malware was deployed in the attack, indicating a reliance on exploiting legitimate access and system functionality.

This security gap is more pervasive than many security leaders currently acknowledge. According to Proofpoint’s 2025 Voice of the CISO report, which surveyed 1,600 CISOs across 16 countries, 98% of organizations have a data loss prevention (DLP) program in place, yet only 6% allocate dedicated resources to it. Furthermore, CrowdStrike’s 2025 Threat Hunting Report highlights that 81% of interactive intrusions now leverage legitimate access credentials rather than deploying malware. The same report noted a significant surge in cloud intrusions, which increased by 136% in the first half of 2025.

Assaf Keren, Chief Security Officer at Qualtrics and former CISO at PayPal, emphasized the severity of this miscategorization in an interview with VentureBeat. He stated, “Most security teams still classify experience management platforms as ‘survey tools,’ which sit in the same risk tier as a project management app.” Keren stressed that this is a “massive miscategorization” because these platforms are now deeply integrated with HRIS, CRM, and compensation engines. Qualtrics alone processes 3.5 billion interactions annually, a figure that has doubled since 2023. As AI becomes more deeply integrated into workflows, organizations can no longer afford to overlook input integrity.
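To make the input-integrity idea concrete, the sketch below screens inbound CX text (a survey response, chat message, or transcript snippet) before it reaches an AI-driven workflow. It is a minimal illustration only: the regexes and phrase list are assumptions for demonstration, not the detection logic of any platform named in this article, and a real deployment would rely on a maintained secret-scanning tool and tuned injection heuristics.

```python
import re

# Illustrative patterns only -- not an exhaustive or production ruleset.
SECRET_PATTERNS = {
    # AWS access key IDs commonly begin with "AKIA" followed by 16 chars.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

# Crude prompt-injection tells; real heuristics would be far broader.
INJECTION_PHRASES = [
    "ignore previous instructions",
    "disregard all prior",
]


def screen_interaction(text: str) -> list[str]:
    """Return findings for one interaction; an empty list means it passed."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(f"secret:{name}")
    lowered = text.lower()
    for phrase in INJECTION_PHRASES:
        if phrase in lowered:
            findings.append("possible-injection")
            break  # one injection flag per interaction is enough
    return findings
```

Flagged interactions would be quarantined for human review rather than handed to the AI engine, putting an inspection point exactly where, per the article, most SOC tooling currently has none.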

Related News

Industry News

Anthropic to Restrict Claude Code Usage with Third-Party Tools Due to Subscription Design Constraints

Anthropic has announced plans to restrict the use of Claude Code when integrated with third-party tools and harnesses. The decision was communicated by Boris Cherny, the head of Claude Code, via a statement on X (formerly Twitter). According to Cherny, the current subscription models for Claude Code were not originally designed to accommodate the specific usage patterns generated by external third-party harnesses. This move highlights a strategic shift in how Anthropic manages its developer tools and subscription structures, ensuring that usage remains aligned with the intended design of their service tiers. The restriction aims to address discrepancies between user behavior on third-party platforms and the underlying subscription framework provided by Anthropic.

Industry News

India’s Gujarat High Court Implements Strict Restrictions on AI Usage Within Judicial Decision-Making Processes

The Gujarat High Court in India has officially established new boundaries regarding the integration of Artificial Intelligence within the judicial system. According to recent reports, the court has restricted the use of AI in formal judicial decisions, while still permitting its application for specific supportive roles. Under the new guidelines, AI technologies can be utilized for administrative tasks, legal research, and IT automation. However, a critical caveat remains: all AI-generated outputs must undergo a mandatory review by a human officer to ensure accuracy and accountability. This move highlights a cautious approach to legal tech, prioritizing human oversight in the delivery of justice while leveraging automation for operational efficiency.

Industry News

The Microsoft Copilot Naming Paradox: Mapping Over 75 Different Products Under One Brand Name

A recent investigation into Microsoft's branding strategy reveals a complex ecosystem where the name 'Copilot' now represents at least 75 distinct entities. The research, compiled from various product pages, launch announcements, and marketing materials, highlights that 'Copilot' is no longer just a single AI assistant. Instead, it encompasses a vast array of applications, features, platforms, physical hardware like keyboard keys, and even an entire category of laptops. The study found that no single official source, including Microsoft’s own documentation, provides a comprehensive list of these products. This fragmentation has led to significant confusion, as the brand now simultaneously refers to end-user tools and the infrastructure used to build additional AI assistants.