Industry News · AI · Software Development · Partnership

Anthropic Launches AI-Powered Code Review for Claude Code Amidst Pentagon Blacklist Lawsuit and Microsoft Partnership

Anthropic has introduced Code Review, a multi-agent AI system for Claude Code designed to identify bugs in pull requests that human reviewers often miss. This feature, currently in research preview for Team and Enterprise customers, was released on a significant day for the company. Simultaneously, Anthropic filed lawsuits against the Trump administration regarding a Pentagon blacklisting and announced a new partnership with Microsoft to embed Claude into its Microsoft 365 Copilot platform. This confluence of a major product launch, a federal legal challenge, and a significant distribution deal highlights the complex environment Anthropic is navigating, as it aims to grow its developer tools business, address a government national security designation, and expand its commercial reach.

VentureBeat

Anthropic on Monday released Code Review, a multi-agent code review system built into Claude Code that dispatches teams of AI agents to scrutinize every pull request for bugs that human reviewers routinely miss. The feature, now available in research preview for Team and Enterprise customers, arrives on what may be the most consequential day in the company's history: Anthropic simultaneously filed lawsuits against the Trump administration over a Pentagon blacklisting, while Microsoft announced a new partnership embedding Claude into its Microsoft 365 Copilot platform.

The convergence of a major product launch, a federal legal battle, and a landmark distribution deal with the world's largest software company captures the extraordinary tension defining Anthropic's current moment. The San Francisco-based AI lab is simultaneously trying to grow a developer tools business approaching $2.5 billion in annualized revenue, defend itself against an unprecedented government designation as a national security threat, and expand its commercial footprint through the very cloud platforms now navigating the fallout.

Code Review is Anthropic's most aggressive bet yet that engineering organizations will pay significantly more ($15 to $25 per review) for AI-assisted code quality assurance that prioritizes thoroughness over speed. It also signals a broader strategic pivot: the company isn't just building models; it's building opinionated developer workflows around them.

How a team of AI agents reviews your pull requests

Code Review works differently from the lightweight code review tools most developers are accustomed to. When a developer opens a pull request, the system dispatches multiple AI agents that operate in parallel. These agents independently search for bugs, then cross-verify each other's findings to filter out false positives, and finally rank the remaining issues by severity. The output appears as a single overview comment on the PR along with detailed findings.
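The parallel-dispatch, cross-verify, and rank pipeline described above can be sketched in a few lines of Python. This is purely an illustrative model of the workflow, not Anthropic's implementation: the agent functions, the finding tuples, and the quorum-based false-positive filter are all assumptions made for the sketch.

```python
from concurrent.futures import ThreadPoolExecutor
from collections import Counter

# Hypothetical finding format: (file, line, description, severity),
# with severity running from 1 (low) to 3 (high). Each "agent" below is a
# stand-in that returns a set of candidate findings for the same diff.
def agent_a(pr_diff):
    return {("app.py", 42, "possible None dereference", 3),
            ("app.py", 57, "unused variable", 1)}

def agent_b(pr_diff):
    return {("app.py", 42, "possible None dereference", 3),
            ("util.py", 9, "off-by-one in loop bound", 2)}

def agent_c(pr_diff):
    return {("app.py", 42, "possible None dereference", 3),
            ("util.py", 9, "off-by-one in loop bound", 2)}

def review_pull_request(pr_diff, agents, quorum=2):
    # 1. Dispatch all agents in parallel over the same pull-request diff.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(pr_diff), agents))
    # 2. Cross-verify: keep only findings reported by at least `quorum`
    #    agents, dropping likely false positives seen by a single agent.
    counts = Counter(f for findings in results for f in findings)
    verified = [f for f, n in counts.items() if n >= quorum]
    # 3. Rank the surviving issues by severity, highest first, for the
    #    single overview comment posted on the PR.
    return sorted(verified, key=lambda f: f[3], reverse=True)

findings = review_pull_request("diff --git a/app.py ...",
                               [agent_a, agent_b, agent_c])
for file, line, desc, sev in findings:
    print(f"[sev {sev}] {file}:{line}: {desc}")
```

In this toy run, the "unused variable" finding is flagged by only one agent and is filtered out, while the two corroborated findings survive and are reported highest-severity first, mirroring the thoroughness-over-speed trade-off the article describes.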

Related News

Industry News

Anthropic to Restrict Claude Code Usage with Third-Party Tools Due to Subscription Design Constraints

Anthropic has announced plans to restrict the use of Claude Code when integrated with third-party tools and harnesses. The decision was communicated by Boris Cherny, the head of Claude Code, via a statement on X (formerly Twitter). According to Cherny, the current subscription models for Claude Code were not originally designed to accommodate the specific usage patterns generated by external third-party harnesses. This move highlights a strategic shift in how Anthropic manages its developer tools and subscription structures, ensuring that usage remains aligned with the intended design of their service tiers. The restriction aims to address discrepancies between user behavior on third-party platforms and the underlying subscription framework provided by Anthropic.

Industry News

India’s Gujarat High Court Implements Strict Restrictions on AI Usage Within Judicial Decision-Making Processes

The Gujarat High Court in India has officially established new boundaries regarding the integration of Artificial Intelligence within the judicial system. According to recent reports, the court has restricted the use of AI in formal judicial decisions, while still permitting its application for specific supportive roles. Under the new guidelines, AI technologies can be utilized for administrative tasks, legal research, and IT automation. However, a critical caveat remains: all AI-generated outputs must undergo a mandatory review by a human officer to ensure accuracy and accountability. This move highlights a cautious approach to legal tech, prioritizing human oversight in the delivery of justice while leveraging automation for operational efficiency.

Industry News

The Microsoft Copilot Naming Paradox: Mapping Over 75 Different Products Under One Brand Name

A recent investigation into Microsoft's branding strategy reveals a complex ecosystem where the name 'Copilot' now represents at least 75 distinct entities. The research, compiled from various product pages, launch announcements, and marketing materials, highlights that 'Copilot' is no longer just a single AI assistant. Instead, it encompasses a vast array of applications, features, platforms, physical hardware like keyboard keys, and even an entire category of laptops. The study found that no single official source, including Microsoft’s own documentation, provides a comprehensive list of these products. This fragmentation has led to significant confusion, as the brand now simultaneously refers to end-user tools and the infrastructure used to build additional AI assistants.