Opus 4.6 went live at 6:40 PM on Wednesday. GPT-5.3-Codex followed twenty minutes later. The timing was obviously deliberate on OpenAI's part, and it turned the evening into a kind of split-screen experiment. Two flagship coding models, released simultaneously, aimed at roughly the same audience. The reactions since then have been revealing — not for what they say about the models, but for how cleanly developer opinion has fractured along workflow lines.
The Opus 4.6 launch drew immediate praise for agent teams and the million-token context window. Developers on Hacker News reported loading entire codebases into a single session and running multi-agent reviews that finished in ninety seconds instead of thirty minutes. Rakuten claimed Opus 4.6 autonomously closed thirteen issues in a single day. But within hours, a Reddit thread titled "Opus 4.6 lobotomized" gathered 167 upvotes from users complaining that writing quality had cratered. The emerging theory: reinforcement learning tuned for reasoning came at the expense of prose. The early consensus is blunt: upgrade for code, keep 4.5 around for anything involving actual sentences.
GPT-5.3-Codex landed with a different problem entirely. The model itself impressed people: 25% faster inference, stable eight-hour autonomous runs, strong Terminal-Bench numbers. Matt Shumer called it a "phase change" and meant it. But nobody was talking about that. Sam Altman had spent the previous morning publishing a 400-word essay calling Anthropic's Super Bowl ads "dishonest" and referencing Orwell's 1984. The top reply, with 3,500 likes: "It's a funny ad. You should have just rolled with it." Instead of debating the model's benchmark gains, the entire discourse was about whether Sam Altman can take a joke.
The practical picture that's forming is more interesting than the drama. Simon Willison struck the most measured note, observing that both models are "really good, but so were their predecessors." He couldn't find tasks the old models failed at that the new ones ace. That feels honest. The improvements are real but incremental. The self-development claims around Codex are provocative; the actual day-to-day experience is a faster, slightly more capable version of what we already had.
FactSet stock dropped 9.1% on the day. Moody's fell 3.3%. The market apparently decided these models are coming for financial analysts before software engineers. I'm not sure the market is wrong.
Dan Shipper's summary captures where most working developers seem to have landed: "50/50 — vibe code with Opus and serious engineering with Codex." Two models, twenty minutes apart, already sorting themselves into different drawers.
Sources:
- AI War: 20 Minutes After Opus 4.6 - UCStrategies
- Opus 4.6 Coding-Writing Tradeoff - WinBuzzer
- Sam Altman and the Super Bowl Ads - TechCrunch
- Simon Willison on Both Models - Simon Willison
- Financial Stocks Plunge - Blockonomi