Comments (200)
- antirez Very good move. In my experience, for systems programming at least, GPT 5.4 xhigh is vastly superior to Claude Opus 4.6 max effort. I ran many brutal tests, including reconstructing for QEMU the SCSI controller (no longer accessible) of an SVSY UNIX of the early 90s used in a 386. Side by side, always re-mirroring the source trees each time one made a breakthrough in the implementation. Well, GPT 5.4 single-handedly did it all, while Opus kept taking wrong paths. The same for my Redis bug tracking and development. But $200 is too much for many people (right now, at least: the reality is that if frontier LLMs are not democratized, we will end up paying the equivalent of a house rent to a few providers). Also, while GPT 5.4 is much stronger, it is slower and less sharp when the task is simple, so many people went for Claude (also because of better marketing and ethical concerns, even if my POV differs there: both companies sell LLM models with similar capabilities and similar internal IP protection and so forth, so in practical terms they look very similar to me). This will surely change things, and I bet many people will end up with a Claude 5x account plus a Codex 5x account.
- mrdependable It's interesting seeing all the ChatGPT users in this thread, knowing what we know about OpenAI. Either they don't care about what OpenAI does, don't know their reputation, or feel like their use is too insignificant to matter.
- patates 5.4, in my own testing, was almost always ahead of Opus 4.6 for reviews and planning. I'm on the Plus plan on OpenAI, so I couldn't test it as deeply. Could anyone with more experience of both chime in? Pros/cons compared to Opus? I'm invested in the Claude ecosystem, but the recent drops in quality and session limits have me on the edge.
- 2001zhaozhao The title is misleading. The only thing they seem to have done was add a $100 plan identical to Claude's, which gives 5x the usage of ChatGPT Plus. There is still a $200 plan that gives 20x usage.
- satvikpendem The era of subsidization is over, it seems. For my money, on the code side at least, GitHub Copilot in VSCode is still the most cost-effective option: 10 bucks for 300 requests gets me all I need, especially when I use OpenAI models, which are counted as 1x vs Opus at 3x. I've stopped using all other tools like Claude Code etc.
- pseudosavant That has me quite tempted. In general, I stay under the Plus limits, but I do watch my consumption. I could use `/fast` mode all of the time, with extra high reasoning, and use gpt-5.4-pro for especially complex tasks. It wasn't worth 10x the price to me before, but 5x is approachable.
- sourcecodeplz I like that they kept limited access to Codex even on the free tier. LE (later edit): someone said this is how the tiers are now counted: "Essentially, if old Plus is 1x, then the new limits are: Plus 0.3x, Pro $100 1.5x, Pro $200 6x (unchanged)."
- xur17 Any idea what "5x or 20x more usage" means?
- laacz They are actively exploiting Anthropic's compute shortages. On our team we're pushing for more or less vanilla setups and portability, since the best harness today might not be the best one in 6 months.
- daft_pink It says 5x or 20x more usage, so does that mean they have copied Claude and offer 5x for $100 and 20x for $200?
- gmig This is an additional offering to the existing plans. 5x = $100, 20x = $200.
- rossant How much was it before?
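If the multiples quoted in this thread hold (an assumption, not official pricing), the cost per unit of usage is easy to work out; the $20 Plus baseline and tier labels below are also assumptions from the discussion, not confirmed figures:

```python
# Rough cost-per-usage comparison, assuming the multiples quoted in the
# thread: Plus = 1x at $20, new Pro tier = 5x at $100, top Pro = 20x at $200.
tiers = {
    "Plus": (20, 1),
    "Pro $100": (100, 5),
    "Pro $200": (200, 20),
}

for name, (price, multiple) in tiers.items():
    # Dollars paid per 1x of Plus-equivalent usage.
    print(f"{name}: ${price / multiple:.2f} per 1x of usage")
```

Under these assumed numbers, the $100 tier is the same per-unit price as Plus, while only the $200 tier is cheaper per unit of usage.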
- disiplus It looks like it's called prolite. https://snipboard.io/jmGKfM.jpg
- bottlepalm Are you allowed to run your own autonomous agents with it outside of Codex, like OpenClaw and others?
- yoniknak I’ve used both a fair amount, and for actual coding work I still prefer Codex over Opus.
- koolba Does this give you something different than the $20/mo plan when using Codex?
- I_am_tiberius For me it's not the price. It's the fact that they obviously read my prompts and may even use a derived version of my data for training. Since it's by now very clear that SAMA lies most of the time, there's just no way I can trust this company in any way.
- MallocVoidstar https://x.com/OpenAI/status/2042296046009626989 > Our existing $200 Pro tier still remains our highest usage option.
- christkv I wish these plans had a burst mode where I could set a default plan size and a max plan size, scale up automatically for a month if needed, and automatically drop back to my default plan at the next billing cycle.
- d3rockk Now they just need every iPhone owner on the planet to subscribe, and this AI bubble will officially be unpoppable!
- jedisct1 Awesome news. And that includes usage of the API with any agent without risking being banned. OpenAI is also very supportive of open source software. I've been using GPT-5.4 with Swival (https://swival.dev) for a while, alongside local models, and it's absolutely fantastic.
- bossyTeacher It really feels like LLMs will mostly become tools for tech workers rather than the kind of civilization-level transformation sama has been peddling. Every single comment here seems to confirm the above.
- righthand This is like the 2010s hosting price wars.
- varispeed What is the difference between Pro and normal mode, apart from the fact that Pro takes ages to finish? I don't see much difference in output quality.
- azuanrb [dead]
- flextheruler Tell me you're losing market share to competitors without telling me you're losing market share to competitors
- hackable_sand Can you guys remind me again why you're doing this?
- Archerlm Just a rumor, but I heard Altman was adding a timer which required the R&D dept. to triple
- selectively Price drops are nice. Unfortunately, the quality differential versus the competitor is night and day. And everyone serious uses API rate billing anyway.