Claude users are furious. Power users on X, GitHub and Reddit say Anthropic’s flagship AI assistant has quietly lost its edge since February, with developers calling the model “nerfed” and shorter, sloppier in its coding work. The backlash arrives just as Anthropic pushes a stronger frontier system called Mythos, raising a bigger question about who still gets top-tier AI.
At a Glance:
- AMD’s AI director logged 6,852 Claude Code sessions showing a 67% reasoning drop.
- Anthropic’s annualized revenue jumped from $9 billion to $30 billion in months.
- Claude Opus 4.7 launched April 16, with Mythos Preview limited to 40 organizations.
- Enterprise customers now pay per token after Anthropic scrapped flat-rate seats.
Power Users Sound the Alarm Over Claude’s Decline
The complaints have piled up fast. Since Claude Opus 4.6 launched in February 2026, developers have reported shorter responses, weaker instruction-following and reduced reasoning depth, especially on complex coding tasks during peak hours. Much of the anger centers on a single word: nerfed.
Users suspect Anthropic scaled the model back to save money or free up GPUs for its new Mythos system. The perception has already pushed some paying customers to rivals like OpenAI Codex, even when Anthropic insists nothing fundamental has changed.

An AMD Engineer Turns Frustration Into Hard Data
The loudest critic is not a hobbyist. One of the most detailed public complaints is a GitHub issue filed on April 2, 2026, by Stella Laurenzo, whose LinkedIn profile identifies her as a Senior Director in AMD's AI group.
Laurenzo did not just vent. She brought receipts, with detailed telemetry from 6,852 sessions, 234,760 tool calls, and 17,871 thinking blocks across a stable internal engineering workload from January through March 2026.
The numbers were brutal. Her analysis of those sessions found reasoning depth dropped roughly 67% by late February. The model's ratio of file reads to edits fell from 6.6 to 2.0, suggesting it attempted fixes on code it had barely reviewed.
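Metrics like the ones in Laurenzo's report reduce to simple ratios over session telemetry. The sketch below shows one way such numbers could be derived; the log schema and field names (`tool_calls`, `thinking_blocks`, and so on) are illustrative assumptions, not Anthropic's actual telemetry format.

```python
# Hypothetical sketch: derive a read-before-edit ratio and mean thinking
# depth from Claude Code session logs. Schema is assumed, not official.

def session_metrics(sessions):
    """Aggregate file-read/edit ratio and average thinking-block size."""
    reads = edits = thinking_tokens = blocks = 0
    for s in sessions:
        for call in s["tool_calls"]:
            if call["name"] == "read_file":
                reads += 1
            elif call["name"] == "edit_file":
                edits += 1
        for block in s["thinking_blocks"]:
            thinking_tokens += block["tokens"]
            blocks += 1
    return {
        "read_edit_ratio": reads / edits if edits else float("inf"),
        "mean_thinking_tokens": thinking_tokens / blocks if blocks else 0.0,
    }

# Toy before/after data mirroring the reported 6.6 -> 2.0 drop:
jan = [{"tool_calls": [{"name": "read_file"}] * 33 + [{"name": "edit_file"}] * 5,
        "thinking_blocks": [{"tokens": 900}] * 3}]
mar = [{"tool_calls": [{"name": "read_file"}] * 10 + [{"name": "edit_file"}] * 5,
        "thinking_blocks": [{"tokens": 300}] * 3}]

print(session_metrics(jan)["read_edit_ratio"])  # 6.6
print(session_metrics(mar)["read_edit_ratio"])  # 2.0
```

The point of a metric like this is that it is model-agnostic: it captures how much context the assistant gathered before touching code, independent of any benchmark score.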
“Claude cannot be trusted to perform complex engineering tasks,” Laurenzo wrote, noting that her team reached that conclusion by referring to months of logs from the “very consistent, high complexity work environment” in which they use Claude Code. “Every senior engineer on my team has reported similar experiences/anecdotes,” Laurenzo added. Her post, detailed by The Register, racked up more than 2,000 reactions within days.
Key Takeaway: The debate is no longer anecdotal. A senior engineer at a major chipmaker has quantified Claude’s slide, and Anthropic is now answering to logs, not vibes.
Anthropic Says It Tuned Defaults, Not the Model
Anthropic’s leadership pushed back hard. Boris Cherny said the “redact-thinking-2026-02-12” header cited in the complaint is a UI-only change that hides thinking from the interface and reduces latency, but “does not impact thinking itself,” “thinking budgets,” or how extended reasoning works under the hood. He also pointed to two product changes that likely affected what users were seeing: Opus 4.6’s move to adaptive thinking by default on Feb. 9, and a March 3 shift to medium effort (effort level 85) as the default for Opus 4.6, which he said Anthropic viewed as the best balance of intelligence, latency and cost for most users.
The company denies any hidden throttling. Anthropic’s Shihipar also directly rejected the broader demand-management accusation, saying in an April 11 reply on X that Anthropic does not “degrade” its models to better serve demand.
Even Claude itself weighed in. Analyst Patrick Moorhead asked the chatbot to diagnose the controversy, and the answer was surprisingly candid.
“Anthropic made real configuration changes that objectively reduced default thinking depth across all surfaces including claude.ai, but the most extreme ‘secret nerfing’ narrative overstates what happened,” Claude said as part of its lengthy response.
Independent trackers back part of Anthropic’s defense. Data from Margin Lab suggests that Claude Opus 4.6 has at least maintained its score on the SWE-Bench-Pro test, with assessments conducted since February showing some variation but no substantive change.
Still, analysts say the tension is real. “This is primarily a capacity and cost issue. Complex engineering tasks require significantly more compute, including intermediate reasoning steps. As usage increases, the system cannot sustain this level of compute for every request,” said Chandrika Dutt, research director at Avasant. “As a result, the system limits how long a task runs or how much reasoning depth is applied and how many such tasks can run simultaneously,” Dutt added.
https://x.com/bcherny/status/2029970236460691885
Mythos and the Compute Crunch Behind the Storm
The backlash hit just as Anthropic unveiled Mythos Preview, a model so powerful the company refused to release it widely. Anthropic is rolling out a preview of its new Mythos model only to a handpicked group of tech and cybersecurity companies over concerns about its ability to find and exploit security flaws, the company said Tuesday.
Anthropic is pouring money into defensive partners. According to Anthropic’s Project Glasswing announcement, the firm is committing up to $100 million in usage credits for Mythos Preview across these efforts, as well as $4 million in direct donations to open-source security organizations.
Compute scarcity is reshaping every decision Anthropic makes. Blackwell GPU rental prices climbed 48 percent in two months. CoreWeave raised prices more than 20 percent late last year and now forces smaller customers into three-year contracts.
| Model | Availability | Focus |
|---|---|---|
| Claude Opus 4.6 | Generally available | Default flagship, under fire |
| Claude Opus 4.7 | Launched April 16 | Coding, agents, safer than Mythos |
| Mythos Preview | 40 partners only | Cybersecurity, too risky for release |
A Widening Divide Between AI Haves and Have-Nots
Anthropic just reset the rules for its biggest customers. The new enterprise structure detailed by Implicator confirms the shift to token-based billing.
Anthropic has restructured its enterprise plan to bill Claude, Claude Code and Cowork usage separately from seat fees, moving its largest business customers to per-token pricing at standard API rates, according to the company’s updated enterprise help documentation. Organizations on older seat-based plans with fixed usage allowances must migrate by their next contract renewal or lose the grandfathered terms. The Information first reported the shift, describing it as driven by a deepening compute crunch that will significantly raise bills for heavy business users.
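The mechanics of the new structure are simple to model: a base seat fee plus metered usage. The sketch below is illustrative only; the rates and seat fee are placeholder assumptions, not Anthropic's published pricing.

```python
# Illustrative per-token billing model: seat fees plus metered usage.
# All rates below are assumed placeholders, not actual API pricing.

INPUT_RATE = 15.00 / 1_000_000   # $ per input token (assumed)
OUTPUT_RATE = 75.00 / 1_000_000  # $ per output token (assumed)

def monthly_bill(seats, seat_fee, input_tokens, output_tokens):
    """Base seat fees plus usage billed at standard API rates."""
    usage = input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE
    return seats * seat_fee + usage

# A hypothetical 50-seat team pushing 2B input / 400M output tokens a month:
print(round(monthly_bill(50, 40.0, 2_000_000_000, 400_000_000), 2))  # 62000.0
```

Under a flat-rate seat plan, that same usage would have cost the seat fees alone, which is why heavy users expect their bills to jump under the migration.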
The stakes for Anthropic are staggering. Annualized run rate was $9 billion at the end of 2025; today it is $30 billion. That is not a typo: run rate more than tripled in roughly four months. And the company’s message to its biggest customers is: budget more for compute.
What users now get depends heavily on what they pay:
- Retool founder David Hsu told the Wall Street Journal he preferred Anthropic’s Opus 4.6 model but that the service kept going down, citing 98.95 percent API uptime over the 90 days ending April 8 against a 99.99 percent cloud standard, a gap that translates to roughly 100 hours of downtime a year versus about 50 minutes.
- Claude Opus 4.7, announced April 16, is pitched as better at software engineering but deliberately weaker on cyber tasks than Mythos.
- Security researchers praise Mythos’s power yet warn defenders cannot keep up. An AI tool like Mythos that can find thousands of cybersecurity vulnerabilities a minute is really an “incredibly expensive alarm,” said Tal Kollender, a former hacker and founder of cybersecurity platform Remedio. Finding risk faster than you can fix it, she said, does not make companies more secure.
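The uptime gap in the first bullet is easy to verify: an availability percentage converts directly to expected downtime per year. A minimal calculation:

```python
# Convert an uptime percentage into expected annual downtime.

HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(uptime_pct):
    """Expected hours of downtime per year at a given uptime percentage."""
    return (1 - uptime_pct / 100) * HOURS_PER_YEAR

print(round(annual_downtime_hours(98.95), 1))       # 92.0 hours a year
print(round(annual_downtime_hours(99.99) * 60, 1))  # 52.6 minutes a year
```

The exact figures come out near 92 hours and 53 minutes; the article's "roughly 100 hours versus 50 minutes" is the same two-orders-of-magnitude gap, rounded for readability.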
Frequently Asked Questions
Did Anthropic actually make Claude worse?
Anthropic says it changed default settings, not the model itself. It shifted Claude Opus 4.6 to adaptive thinking on February 9 and set medium effort as the new default on March 3.
What is Mythos and why was its release limited?
Mythos is Anthropic’s new frontier model, released only to 40 partner organizations because it can find and exploit software vulnerabilities too easily for a public launch.
How did AMD’s Stella Laurenzo prove Claude regressed?
She analyzed 6,852 Claude Code sessions, 17,871 thinking blocks and 234,760 tool calls from January through March, finding reasoning depth dropped roughly 67%.
How does Anthropic’s new enterprise pricing work?
Enterprise customers now pay per token at standard API rates on top of a base seat fee, replacing fixed flat-rate subscriptions that many teams relied on.
Is Claude Opus 4.7 a fix for the complaints?
Anthropic says Opus 4.7 outperforms 4.6 on coding, instruction-following and agentic tasks, but it is not as capable on cybersecurity work as the restricted Mythos Preview.
The fight over Claude’s “intelligence” is really a fight over access. With reasoning depth reportedly down 67% on the default tier and top-end power now gated behind Mythos and per-token bills, the AI gap between well-funded firms and everyday developers keeps widening. Whether Opus 4.7 restores trust or just resets the debate, users are watching every changelog. Share your experience in the comments.



