Industry News · AI · Intellectual Property · Geopolitics

Anthropic Accuses DeepSeek, Moonshot AI, and MiniMax of Industrial-Scale Claude Model Theft Using 24,000 Fake Accounts

Anthropic has publicly accused three prominent Chinese AI laboratories—DeepSeek, Moonshot AI, and MiniMax—of orchestrating large-scale campaigns to extract capabilities from its Claude models. The San Francisco-based AI company alleges that these labs collectively generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, violating Anthropic's terms of service and regional access restrictions. Anthropic describes these campaigns as the most concrete public evidence of foreign competitors systematically using 'distillation' to bypass years of research and significant investment. The company warned that these campaigns are increasing in intensity and sophistication, requiring urgent, coordinated action from industry, policymakers, and the global AI community. This disclosure escalates tensions between American and Chinese AI developers and is linked to the ongoing debate in Washington regarding export controls on advanced AI chips, a policy Anthropic has actively supported.

VentureBeat

Anthropic dropped a bombshell on the artificial intelligence industry Monday, publicly accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of orchestrating coordinated, industrial-scale campaigns to siphon capabilities from its Claude models using tens of thousands of fraudulent accounts. The San Francisco-based company said the three labs collectively generated more than 16 million exchanges with Claude through approximately 24,000 fake accounts, all in violation of Anthropic's terms of service and regional access restrictions. The campaigns, Anthropic said, are the most concrete and detailed public evidence to date of a practice that has haunted Silicon Valley for months: foreign competitors systematically using a technique called distillation to leapfrog years of research and billions of dollars in investment.

"These campaigns are growing in intensity and sophistication," Anthropic wrote in a technical blog post published Monday. "The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community." The disclosure marks a dramatic escalation in the simmering tensions between American and Chinese AI developers — and it arrives at a moment when Washington is actively debating whether to tighten or loosen export controls on the advanced chips that power AI training. Anthropic, led by CEO Dario Amodei, has been among the most vocal advocates for restricting chip sales to China, and the company explicitly connected Monday's revelations to that policy fight. To understand what Anthropic alleges, it helps to understand what distillation actually is — and how it evolved from an academic curiosity into the most contentious issue in the global AI race. At its core, distillation is a process of training one model to reproduce the outputs of another, more capable model, transferring the teacher's capabilities to the student without repeating the original training effort.
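The classic form of the technique can be sketched in a few lines: a "student" model is trained to match a "teacher" model's full output distribution, typically via a temperature-scaled KL-divergence loss. The snippet below is a minimal, illustrative NumPy sketch of that loss — all names and numbers are hypothetical examples, not details from Anthropic's report.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher temperature flattens the distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened output distributions.

    This is the core of classic logit distillation: the student is
    penalized for diverging from the teacher's entire probability
    distribution, not just its single top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())

# Illustrative logits for two inputs over a 3-way output:
# the student roughly, but imperfectly, tracks the teacher.
teacher = np.array([[4.0, 1.0, 0.5], [0.2, 3.5, 1.0]])
student = np.array([[3.0, 1.5, 0.5], [0.5, 3.0, 1.2]])
loss = distillation_loss(teacher, student)  # positive; 0 only if they match
```

In the API-based extraction Anthropic alleges, a competitor does not see Claude's internal logits at all — it trains on the generated text itself — but the objective is analogous: minimize the gap between the student's behavior and the teacher's.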

Related News

Anthropic to Restrict Claude Code Usage with Third-Party Tools Due to Subscription Design Constraints
Industry News

Anthropic has announced plans to restrict the use of Claude Code when integrated with third-party tools and harnesses. The decision was communicated by Boris Cherny, the head of Claude Code, via a statement on X (formerly Twitter). According to Cherny, the current subscription models for Claude Code were not originally designed to accommodate the specific usage patterns generated by external third-party harnesses. This move highlights a strategic shift in how Anthropic manages its developer tools and subscription structures, ensuring that usage remains aligned with the intended design of their service tiers. The restriction aims to address discrepancies between user behavior on third-party platforms and the underlying subscription framework provided by Anthropic.

India’s Gujarat High Court Implements Strict Restrictions on AI Usage Within Judicial Decision-Making Processes
Industry News

The Gujarat High Court in India has officially established new boundaries regarding the integration of Artificial Intelligence within the judicial system. According to recent reports, the court has restricted the use of AI in formal judicial decisions, while still permitting its application for specific supportive roles. Under the new guidelines, AI technologies can be utilized for administrative tasks, legal research, and IT automation. However, a critical caveat remains: all AI-generated outputs must undergo a mandatory review by a human officer to ensure accuracy and accountability. This move highlights a cautious approach to legal tech, prioritizing human oversight in the delivery of justice while leveraging automation for operational efficiency.

The Microsoft Copilot Naming Paradox: Mapping Over 75 Different Products Under One Brand Name
Industry News

A recent investigation into Microsoft's branding strategy reveals a complex ecosystem where the name 'Copilot' now represents at least 75 distinct entities. The research, compiled from various product pages, launch announcements, and marketing materials, highlights that 'Copilot' is no longer just a single AI assistant. Instead, it encompasses a vast array of applications, features, platforms, physical hardware like keyboard keys, and even an entire category of laptops. The study found that no single official source, including Microsoft’s own documentation, provides a comprehensive list of these products. This fragmentation has led to significant confusion, as the brand now simultaneously refers to end-user tools and the infrastructure used to build additional AI assistants.