Anthropic Addresses Claude Code Quality Degradation Reports and Implements Fixes for Sonnet and Opus Models
Industry News · Anthropic · Claude · AI Engineering


Anthropic has released a postmortem addressing recent user reports of degraded Claude performance in specific tools, including Claude Code, the Claude Agent SDK, and Claude Cowork. The investigation identified three distinct technical issues occurring between March and April 2026: an intentional but poorly received reduction in reasoning effort to manage latency, a session-clearing bug that caused repetitive behavior and memory loss, and a system prompt change aimed at reducing verbosity that inadvertently harmed coding quality. While the API remained unaffected, these issues impacted Sonnet 4.6, Opus 4.6, and Opus 4.7. Anthropic reverted the problematic changes and fixed the bugs as of April 20 (v2.1.116), emphasizing its commitment to prioritizing model intelligence over speed.

Source: Hacker News

Key Takeaways

  • Three Distinct Issues Identified: The perceived degradation was caused by a change in reasoning effort, a session-clearing bug, and a system prompt instruction to reduce verbosity.
  • Specific Tools Affected: Issues were limited to Claude Code, the Claude Agent SDK, and Claude Cowork; the core API and inference layer were not impacted.
  • Models Impacted: The performance dips affected Sonnet 4.6, Opus 4.6, and Opus 4.7 across different timeframes.
  • Full Resolution: All identified issues were resolved as of April 20 with the release of version 2.1.116.

In-Depth Analysis

Reasoning Effort and Latency Trade-offs

On March 4, Anthropic changed the default reasoning effort from "high" to "medium" to address UI latency issues in which the interface appeared frozen. While this was intended to improve the user experience by reducing wait times, it resulted in a noticeable drop in intelligence for Sonnet 4.6 and Opus 4.6. Following user feedback indicating a preference for higher intelligence over speed, Anthropic reverted this change on April 7. The company acknowledged that prioritizing lower latency at the expense of reasoning quality was the "wrong tradeoff."
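One practical lesson for teams building on such tools is to pin the reasoning effort explicitly rather than inherit a client default that can change between releases. The sketch below is purely illustrative: the effort levels and token budgets are hypothetical placeholders, not Anthropic's actual values or configuration API.

```python
# Hypothetical sketch: mapping reasoning-effort levels to thinking-token
# budgets so an application pins its own effort instead of relying on a
# tool default. All names and numbers here are illustrative assumptions.

EFFORT_BUDGETS = {
    "low": 1_024,     # fastest responses, least deliberation
    "medium": 4_096,  # the level Claude Code briefly defaulted to
    "high": 16_384,   # the level users preferred and Anthropic restored
}

def thinking_config(effort: str = "high") -> dict:
    """Build an explicit thinking-budget config for a given effort level."""
    if effort not in EFFORT_BUDGETS:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {"type": "enabled", "budget_tokens": EFFORT_BUDGETS[effort]}
```

Pinning the value in application code means a change to the upstream default (like the March 4 switch) cannot silently alter behavior.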

Technical Bugs and Prompting Side Effects

Two additional technical factors contributed to the degradation. On March 26, a feature designed to clear old thinking from idle sessions to improve resumption speed introduced a bug: the system cleared thinking on every turn, making the models appear forgetful and repetitive. Furthermore, an April 16 update to the system prompt intended to reduce verbosity negatively impacted coding quality when combined with other prompt adjustments. This latter issue affected the latest models, including Opus 4.7. Both the bug and the prompt changes were corrected and reverted by April 20.
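The shape of the session-clearing bug can be illustrated with a minimal sketch: the intended behavior gates the cleanup on an idle-time check, while the buggy variant effectively ignores the gate and fires on every turn. This is an assumed reconstruction for illustration only, not Anthropic's actual code; the threshold and function names are invented.

```python
# Illustrative sketch of the idle-session cleanup bug (hypothetical code).
IDLE_THRESHOLD_S = 30 * 60  # assumed: sessions idle > 30 min are "stale"

def should_clear_thinking(last_activity_s: float, now_s: float) -> bool:
    """Intended behavior: drop stored thinking only after a long idle gap."""
    return (now_s - last_activity_s) > IDLE_THRESHOLD_S

def should_clear_thinking_buggy(last_activity_s: float, now_s: float) -> bool:
    """Buggy variant: the idle check is effectively bypassed, so thinking
    is cleared on every turn and the model appears forgetful."""
    return True
```

Because the cleanup only ran in specific tools, the API was untouched while Claude Code sessions lost their working context turn after turn.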

Investigation Challenges and Aggregate Effects

Anthropic noted that because these three changes occurred on different schedules and affected different segments of traffic, the resulting feedback appeared as broad and inconsistent degradation. The investigation began in early March but was complicated by the difficulty of distinguishing these specific technical failures from the normal variation in user feedback. The company has reaffirmed that they never intentionally degrade models and are implementing changes to prevent similar regressions in the future.

Industry Impact

This incident highlights the delicate balance AI providers must maintain between model "intelligence" (reasoning effort) and operational performance (latency). For the AI industry, it serves as a case study in how minor optimizations—such as reducing verbosity or clearing session cache—can have significant, unintended consequences on the quality of complex tasks like coding. Anthropic's transparent postmortem underscores the importance of user feedback loops in identifying non-obvious regressions that automated testing might miss, particularly when those regressions are tied to UI-specific implementations rather than the underlying API.

Frequently Asked Questions

Question: Was the Claude API affected by these quality issues?

No. Anthropic confirmed that the API and inference layer remained unaffected throughout this period; the issues were isolated to Claude Code, the Claude Agent SDK, and Claude Cowork.

Question: Which Claude models were impacted by the performance degradation?

The issues affected Sonnet 4.6, Opus 4.6, and Opus 4.7, depending on the specific technical change and the timeframe.

Question: How has Anthropic resolved these issues?

As of April 20 (v2.1.116), Anthropic has reverted the reasoning effort to "high," fixed the session-clearing bug, and removed the system prompt instructions that were harming coding quality.

Related News

50 Rising AI Startups in Asia: Identifying the Next Generation of Industry Leaders
Industry News


Tech in Asia has released a curated list of 50 rising AI startups across the Asian continent, highlighting companies that are positioned to become the next major players in the global artificial intelligence landscape. The report identifies these specific entities as having the potential to achieve significant scale and influence, marking them as the 'next big thing' in the industry. This selection underscores the rapid growth and increasing importance of the Asian AI ecosystem as it produces a new wave of innovative companies ready to disrupt the market.

Intercom Rebrands Corporate Entity to Fin: A Strategic Pivot Toward AI Customer Agents
Industry News


Intercom has officially announced a major corporate rebranding, changing its company name to Fin. While the well-known customer service software platform will retain the Intercom name, supported by the recent launch of Intercom 2, the parent company will now align its identity with its flagship customer agent platform, Fin. This move marks the culmination of a multi-year transition involving shifts in culture, pricing, and product strategy. The company's CEO emphasizes that the change is necessary to move beyond past successes and embrace the future of the service agent category. All 1,400 employees are now officially part of Fin, signaling a total commitment to the company's AI-driven technological direction.

Industry News

Claude Design Users Warn of Project Data Loss and Credit Expiration Following Subscription Cancellation

A recent report on Hacker News has raised significant concerns regarding data retention and credit management within Anthropic's Claude ecosystem. A user, identified as 'pycassa,' shared a cautionary experience detailing the immediate loss of access to Claude Design projects after unsubscribing from a five-month Claude Code Max subscription. The report further highlights issues with promotional credits—granted due to previous service instabilities—which reportedly vanished upon plan termination and remained inaccessible even after the user resubscribed. This incident has sparked a broader discussion within the developer community about the 'fast and loose' nature of bleeding-edge AI tools and the inherent risks of complex billing systems that may prioritize growth-oriented contracts over robust user-centric implementation and data persistence.