I can't see why anyone still chooses Claude. Codex outperforms it in most respects, and its quotas are about ten times larger. A $100 Codex plan gets me through the whole week with 6–12 hours of coding per day.
I've found GPT 5.5 pretty solid, but I keep getting impressed by Opus. It's tracked down some insane stuff while I was away in a meeting. 5.5 is way closer to Anthropic than previous OpenAI models were, IMO.
These things are so tricky because everyone has a seemingly conflicting experience. Part of the fun I guess!
I've never actually run into the issues that people talk about online, like Claude suddenly getting dumb or running out of usage. So there's just not a lot of incentive for me to shop around. I've used Amp a bit, and it's quite nice, but a bit more expensive without the subsidized subscription.
It has always been like this. We actually know that model performance has been mostly steady[0], but you can't beat the notion of "evil companies secretly serving us worse models." The meme value is too strong.
[0]: https://marginlab.ai/trackers/claude-code/
Hmm, today's pass rate rose to 73% - interesting, are they A/B-testing some new model? That's too high for Opus 4.7.
Are you using Opus? Sonnet remains as useful as it was, while Opus efficacy has soured and its token burn rate has climbed over the last 4 months.
I'm using Opus on xhigh 10+ hours a day, and I've only reached 80% of weekly limits when doing massive ports or refactors. I haven't once hit hourly limits, and I've used Claude very, very aggressively. I guess it's a pain point for power users.
I sometimes run multiple Claudes at the same time, with each terminal working on a different task. I have 2 going right now.
It's very easy to burn through your quota if you work like that, especially on high/xhigh.
I used to run mostly at high/xhigh, but at medium I think it actually performs quite well, both in results and in token usage.
Yes, I've pretty much used Opus exclusively for the last year, except for a brief period when Sonnet was ahead.
When do you use it the most? I've noticed it most often starts to degrade between 10 and 5 US East Coast time. Late at night I have the fewest issues, but without fail, if I'm trying to do anything complex during the day, Claude gets loopy.
9-5 Pacific Time
Same here. Works every time. Never ran into usage limits either.
One reason might be that Claude Opus 4.7 thinking benchmarks better on Arena Coding at https://arena.ai/leaderboard/text/coding ... hopefully that effectively assesses correctness. It doesn't account for reliability though.
Claude is the only AI coding tool I've found worth a damn. Without it I'd just do everything by hand save for a few bash scripts or whatever.
Have you tried other harnesses, such as OpenCode?
Yeah, harness quality matters too, but the underlying model capabilities are night and day.
I certainly get more usage before cutoff from GPT 5.5, but the output I get from Opus 4.7 is way better. It just sucks that I get 2 good "long running" prompts on Opus 4.7 before I hit my daily quota on the $20 subscription.
I think it's impossible to say that Codex x.y.z is better than Sonnet x.y.z. I've used many high-end models and they're all just good.
Corporate policies and agreements. In large corporations, using external non-approved models with proprietary source code is a good way to have significant career issues.
You get a discount for paying for a full year on Teams, and Enterprise can involve contractual obligations. It's a lot of effort to get buy-in to change providers and to shift an entire organization. The winds change frequently in this space, and the pain needs to reach a certain level before it's worth rolling the dice.
Claude Max 20x gives me unlimited (for my level of usage) Opus 4.7 - how much money do I have to pay OpenAI for that?
Based on the experience of people using the $20 Claude Pro subscription and exhausting their quotas in a matter of minutes, the answer to your question is probably "less". (I would guess that the $100 plan would do the trick.)
Okay, so how much less will I have to pay OpenAI for unlimited Opus 4.7?
In my org, the teams doing agent engineering at scale are all on Codex using GPT-5.5. By scale I mean fully agent-authored code workflows with long-running, multi-hour plans.
I'd rather not give money to Sam Altman.
With Anthropic you're giving money to Elon Musk. Seems like a pick-your-billionaire world we're in now.
I can't choose what the people I give money to do with it, just who I choose to give money to.
I refuse to accept the lazy cynicism of "nothing matters at all".
But a $100 Claude subscription also easily gets me an entire week of coding 6-8 hours a day? What on earth do you do to run out of limits on Max? Do you vibe-code multiple new codebases every day for a living? Claude also has the benefit of not gaslighting me every time I tell it it's wrong.
Claude is (per benchmarks) much worse at instruction following, but it's more charming, more deceptive, and more anthropomorphized by default (in name and image), leading to productivity-assessment psychosis.
Corporate reasons. AWS hasn't opened Codex models to everyone yet.
Claude is significantly better at Rust in my experience, and Rust is my favorite language to emit from LLMs.
Opus 4.7 + Rust is a killer combo.
because my shard isn’t erroring
I use Codex when Claude Code is down, and I only began using Claude when ChatGPT was down
Yes, Codex is very fast, but I'm going back to Claude for now.
Because of marketing and vibes mostly.
Heck I prefer DeepSeek to both of those.
Wow, I'm really surprised. I tried DeepSeek (their best model, through the official API). It's extremely cheap, but it's clearly not as good at programming as Opus 4.7. It seems nowhere near as good at making high-level design choices, and it gets stuck in whack-a-mole fixing loops much more than Opus. I stopped it at one point and asked Opus to solve the problem it was trying to solve, and it saw the solution immediately.
I was running DeepSeek through Claude Code's agent harness. Maybe it works better through a different tool?
I've given V4 Pro some curly things and I was impressed at how it figured them out. I agree high-level design is not its forte. But it sat in a loop and doggedly debugged a crazy dependency issue to come to the right answer over the course of 15 minutes, which impressed me.
Idk, I don't vibe code, so even the Flash model is great for generating code for myself. I tend to do the planning and design myself, though.
Harness also matters, and so does provider. I was using OpenRouter, and when I switched to the DeepSeek API all the tool-call issues I was having suddenly resolved themselves. Flash is so damn fast at stuff like generating boilerplate that I can't go back to the bigger, slower models.
You tried v4?
I tried to like it, but it eventually got stuck in a near-infinite loop trying to debug an extra curly bracket in an iOS app.
That and the lack of image-read support surprised me. I'm a big fan of feeding screenshots into my LLM, and that killed it for me.
Yeah, v4.
I would have been much more impressed with V4 about 6 months ago, but I've been spoiled by Opus 4.7. DeepSeek isn't at the same level.
I feel you. I'd prefer to stick entirely with local open source models. I tried using Aider and Qwen last week, and while it's still impressive what it can do with just local resources and entirely for free, its error rate is too high, and it's clearly not remotely in the same league as Claude Code.
Interestingly, I had the same experience, and weirdly it's in part because it is clearly less intelligent. It's more of a mechanistic tool that just does what I ask (but is still very smart and very competent about it), and less of a thing trying to win a Nobel Prize with each answer. Turns out I actually like that.
Sonnet is also throwing overloaded errors.
My systems are hitting their exponentially delayed retries, so this might not get better: the retries overload things all over again.
> {'type': 'error', 'error': {'details': None, 'type': 'overloaded_error', 'message': 'Overloaded'}, 'request_id': 'req_ ...
I can see a weird spike in my cache hit rate a few minutes earlier, so this might actually be some extra caching they've thrown in.
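For what it's worth, the usual mitigation for that retry-storm feedback loop is capped exponential backoff with full jitter, so clients that failed together don't retry together. A minimal sketch, assuming a hypothetical call_api callable that returns a parsed response dict shaped like the error quoted above:

    import random
    import time

    def call_with_backoff(call_api, max_retries=6, base=1.0, cap=60.0):
        # Retry overloaded calls with capped exponential backoff plus
        # full jitter, so a fleet of clients spreads its retries out
        # instead of stampeding the service in lockstep.
        for attempt in range(max_retries):
            resp = call_api()  # hypothetical: returns a dict like the error above
            if resp.get('type') != 'error':
                return resp
            if resp['error']['type'] != 'overloaded_error':
                raise RuntimeError(resp['error']['message'])
            # sleep a random duration up to the capped exponential delay
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
        raise RuntimeError('still overloaded after %d retries' % max_retries)

Plain exponential delays without the random component keep every failed client on the same retry schedule, which is exactly the overload-feeds-overload effect described above.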
Say what you will about Sam Altman, but at least he engages with his user base and acts on user feedback.
Dario and co seem to be on some elevated pedestal - we mere mortals are beneath them - and they have this scattershot devrel where each engineer communicates with the public on X in their own way, often at odds with the others.
I loved Sonnet and Opus fwiw but not anymore.
Plus I can't really trust someone who emphasizes ethics and then partners with Elon to buy compute from a potentially illegal natural-gas-powered datacenter.
https://status.claude.com/
They're having quite the day for devrel..
Do they need a waiting list, or what?
Sonnet is giving an overloaded message as well.
So, all those CEOs who moved all their remaining engineers to be dependent on a cloud service, to the extent that there's no local development capability, are gonna apologize, right?
In a year or two, when AI tool costs go from $5M per year to $15M per year... even then, maybe not.
I thought the deal with xAI was supposed to solve this? Is this basically the adding-lanes paradox?
You're assuming the elevated error rates are due to the system being overloaded. We have no evidence that's actually the case. It's much more likely due to a simple misconfiguration or a failing router or something.
The infrastructure required to coordinate warehouses' worth of compute actually seems pretty tricky. They're worth more money than God, so they get zero leniency, but it does seem hard.
I love Claude but I hate waiting a minute or two for any inference to start. I hope they can get their xAI capacity online ASAP and that it helps!