Anthropic Launches AI-Powered Code Review for Claude Code Amidst Pentagon Blacklist Lawsuit and Microsoft Partnership
Anthropic on Monday released Code Review, a multi-agent code review system built into Claude Code that dispatches teams of AI agents to scrutinize every pull request for bugs that human reviewers routinely miss. The feature, now available in research preview for Team and Enterprise customers, arrives on what may be the most consequential day in the company's history: Anthropic simultaneously filed lawsuits against the Trump administration over a Pentagon blacklisting, while Microsoft announced a new partnership embedding Claude into its Microsoft 365 Copilot platform.

The convergence of a major product launch, a federal legal battle, and a landmark distribution deal with the world's largest software company captures the extraordinary tension defining Anthropic's current moment. The San Francisco-based AI lab is simultaneously trying to grow a developer tools business approaching $2.5 billion in annualized revenue, defend itself against an unprecedented government designation as a national security threat, and expand its commercial footprint through the very cloud platforms now navigating the fallout.

Code Review is Anthropic's most aggressive bet yet that engineering organizations will pay significantly more — $15 to $25 per review — for AI-assisted code quality assurance that prioritizes thoroughness over speed. It also signals a broader strategic pivot: the company isn't just building models; it's building opinionated developer workflows around them.

How a team of AI agents reviews your pull requests

Code Review works differently from the lightweight code review tools most developers are accustomed to. When a developer opens a pull request, the system dispatches multiple AI agents that operate in parallel. These agents independently search for bugs, cross-verify each other's findings to filter out false positives, and rank the remaining issues by severity. The output appears as a single overview comment on the PR along with detailed findings.
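The fan-out, cross-verify, and rank pipeline described above can be sketched in a few lines of Python. This is a hypothetical illustration of the general pattern, not Anthropic's implementation: the `Finding` type, the agent callables, and the agreement threshold are all assumptions, and each agent is a stub where a real system would call a model.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass(frozen=True)
class Finding:
    """One issue a reviewer agent reports (hypothetical schema)."""
    file: str
    line: int
    severity: int  # 1 = low, 3 = high
    summary: str


def review_pull_request(diff, agents, min_agreement=2):
    """Dispatch agents in parallel, cross-verify their findings, rank by severity.

    `agents` is a list of callables that take the diff text and return
    a list of Finding objects; in a real system each would be an LLM call.
    """
    # 1. Fan out: every agent searches the diff independently, in parallel.
    with ThreadPoolExecutor(max_workers=max(1, len(agents))) as pool:
        per_agent = list(pool.map(lambda agent: agent(diff), agents))

    # 2. Cross-verify: keep only findings that at least `min_agreement`
    #    agents reported independently, filtering out likely false positives.
    counts = Counter(f for findings in per_agent for f in findings)
    verified = [f for f, n in counts.items() if n >= min_agreement]

    # 3. Rank the surviving issues by severity, highest first.
    ranked = sorted(verified, key=lambda f: f.severity, reverse=True)

    # 4. Produce a single overview comment plus the detailed findings.
    overview = f"Review found {len(ranked)} verified issue(s) from {len(agents)} agents."
    return overview, ranked
```

A quick usage sketch: with three stub agents where two independently flag the same bug, that finding survives cross-verification while any single-agent report is dropped.

```python
bug = Finding("app.py", 42, 3, "possible None dereference")
nit = Finding("app.py", 7, 1, "unused import")

agents = [
    lambda diff: [bug, nit],  # agent A
    lambda diff: [bug],       # agent B
    lambda diff: [nit],       # agent C
]

overview, ranked = review_pull_request("<diff text>", agents, min_agreement=2)
# Both findings were reported by two agents, so both survive;
# the severity-3 bug is ranked above the severity-1 nit.
```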