Industry News · Cybersecurity · AI · Data Security

Attackers Exploit CX Platform AI Blind Spots to Compromise 700+ Organizations, Bypassing Approved SOC Defenses

A critical security vulnerability in Customer Experience (CX) platforms, often overlooked by Security Operations Centers (SOCs), has allowed attackers to compromise over 700 organizations. Attackers are poisoning the data fed into CX platform AI engines, which then trigger automated workflows connected to sensitive systems such as payroll, CRM, and payments. The Salesloft/Drift breach in August 2025 exemplified this: attackers stole OAuth tokens, accessed Salesforce environments across numerous organizations, including Cloudflare and Palo Alto Networks, and scanned the stolen data for AWS keys and plaintext passwords, all without deploying malware. Security leaders often miscategorize these platforms, failing to recognize their deep integration with critical business systems. The gap is exacerbated by the fact that while 98% of organizations have DLP programs, only 6% dedicate resources to them, and 81% of intrusions now rely on legitimate access rather than malware. Cloud intrusions surged 136% in the first half of 2025, underscoring the urgent need to address input integrity once AI is integrated into workflows.

VentureBeat

Customer Experience (CX) platforms, which process billions of unstructured interactions annually through survey forms, review sites, social feeds, and call center transcripts, feed these vast datasets into AI engines. The AI engines in turn trigger automated workflows that interact with critical business systems such as payroll, CRM, and payment systems. A significant security blind spot has emerged: the tools Security Operations Center (SOC) leaders already run do not inspect the data these CX platform AI engines ingest. Attackers have identified and exploited this vulnerability by 'poisoning' the data, effectively making the AI perform malicious actions on their behalf.
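The architecture described above, where untrusted CX input flows into an AI engine that can trigger downstream workflows, suggests an obvious mitigation: vet the input and allow-list the actions the AI may dispatch. The sketch below is a minimal, illustrative guardrail; the function names, injection markers, and action list are assumptions for demonstration, not any vendor's actual API.

```python
import re

# Illustrative markers of instruction-like content smuggled into CX data
# (survey answers, chat transcripts, reviews). Real detection would use
# far richer rules and model-based classifiers; these are assumptions.
INJECTION_MARKERS = [
    re.compile(r"(?i)ignore (all )?previous instructions"),
    re.compile(r"(?i)\b(transfer|wire|refund)\b.*\$\d"),
]

# Hypothetical allow-list: low-risk actions only; nothing touching
# payroll, compensation, or payment systems.
ALLOWED_ACTIONS = {"tag_feedback", "open_ticket"}

def vet_input(text: str) -> bool:
    """Return False if untrusted text contains injection-style markers."""
    return not any(pattern.search(text) for pattern in INJECTION_MARKERS)

def dispatch(action: str, payload: str) -> str:
    """Gate AI-proposed actions: allow-list first, input integrity second."""
    if action not in ALLOWED_ACTIONS:
        return f"blocked: {action} not allow-listed"
    if not vet_input(payload):
        return "blocked: payload failed input-integrity check"
    return f"dispatched: {action}"
```

The key design point mirrors the article's argument: the AI engine's output is treated as untrusted precisely because its input is attacker-controllable, so high-impact actions are excluded structurally rather than detected after the fact.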

The Salesloft/Drift breach in August 2025 is a clear illustration of this attack vector. Attackers compromised Salesloft’s GitHub environment and stole Drift chatbot OAuth tokens, which let them infiltrate Salesforce environments across more than 700 organizations, including prominent names like Cloudflare, Palo Alto Networks, and Zscaler. The attackers then scanned the stolen data for sensitive credentials such as AWS keys, Snowflake tokens, and plaintext passwords. Notably, no malware was deployed at any stage; the attack relied entirely on legitimate access and existing system functionality.
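The post-breach credential sweep described here can be illustrated with a simple pattern scan. The AWS access key ID format (`AKIA` followed by 16 uppercase alphanumerics) is publicly documented; the plaintext-password pattern and function name below are illustrative assumptions, and production secret scanners use far larger rule sets plus entropy analysis.

```python
import re

# Two example patterns of the kind attackers reportedly swept stolen
# Salesforce data for. Illustrative only; real tooling covers many more
# credential formats (Snowflake tokens, session cookies, etc.).
CREDENTIAL_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "plaintext_password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_for_credentials(text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_string) pairs found in text."""
    hits = []
    for name, pattern in CREDENTIAL_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits

sample = "export AWS_KEY=AKIAIOSFODNN7EXAMPLE\npassword: hunter2"
print(scan_for_credentials(sample))
# → [('aws_access_key_id', 'AKIAIOSFODNN7EXAMPLE'),
#    ('plaintext_password', 'password: hunter2')]
```

Defenders can run the same kind of scan proactively over data stores exposed through CX integrations, which is exactly why leaving plaintext credentials in CRM records is the vulnerability the breach exploited.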

This security gap is more pervasive than many security leaders acknowledge. According to Proofpoint’s 2025 Voice of the CISO report, which surveyed 1,600 CISOs across 16 countries, 98% of organizations have a data loss prevention (DLP) program in place, yet only 6% allocate dedicated resources to it. CrowdStrike’s 2025 Threat Hunting Report adds that 81% of interactive intrusions now leverage legitimate access credentials rather than deploying malware, and that cloud intrusions surged 136% in the first half of 2025.

Assaf Keren, Chief Security Officer at Qualtrics and former CISO at PayPal, emphasized the severity of this miscategorization in an interview with VentureBeat. He stated, “Most security teams still classify experience management platforms as ‘survey tools,’ which sit in the same risk tier as a project management app.” Keren called this a “massive miscategorization” because these platforms are now deeply integrated with HRIS, CRM, and compensation engines. Qualtrics alone processes 3.5 billion interactions annually, a figure that has doubled since 2023. As AI is woven into these workflows, organizations can no longer afford to skip input-integrity checks.

Related News

Granola Privacy Alert: AI Notes Viewable via Link and Used for Training by Default
Industry News


Users of the AI-powered note-taking application Granola are being advised to review their privacy settings following revelations about data accessibility and usage. Although the company markets its service as 'private by default,' the platform currently allows anyone with a specific link to view notes. Furthermore, Granola uses notes for internal AI training unless users manually opt out. For an app positioned as an AI notepad for professionals, these default configurations have raised concerns about the actual level of privacy afforded to its user base. This report explores the discrepancy between the marketing claims and the functional reality of Granola's data handling policies, as reported by The Verge.

OpenAI Expands Media Footprint with Acquisition of Technology Talk Show TBPN
Industry News


OpenAI has officially acquired the technology talk show TBPN, marking a strategic move into the media and content space. While the acquisition has been confirmed, OpenAI has not disclosed the financial terms of the deal. Furthermore, the future of TBPN’s existing distribution channels remains uncertain, as the company has not yet clarified whether the show will continue its current presence on major platforms including YouTube, X (formerly Twitter), and various podcast networks. This acquisition highlights OpenAI's growing interest in controlling tech-centric narratives and engaging directly with audiences through established media properties, though specific integration plans and the long-term status of the show's accessibility are currently unavailable.

Open Models Reach Parity with Closed Frontier Models in Core AI Agent Tasks and Efficiency
Industry News


A recent evaluation by LangChain reveals that open models, specifically GLM-5 and MiniMax M2.7, have crossed a significant performance threshold. These models now match the capabilities of closed frontier models in critical agent-related functions, including file operations, tool utilization, and instruction following. Beyond performance parity, these open-source alternatives offer substantial advantages in cost-effectiveness and reduced latency. This shift marks a turning point for developers and enterprises looking to deploy sophisticated AI agents without the high overhead typically associated with proprietary closed-source systems. The findings suggest that the gap between open and closed models is closing rapidly in the domain of functional AI tasks.