OpenAI waited exactly one week.
On April 7, Anthropic restricted Claude Mythos to a coalition of launch partners plus more than 40 additional organisations and called the programme Project Glasswing. The message was clear enough: this model is too dangerous to sell, so we'll give it to the people who build the things it can break. One week later, OpenAI unveiled GPT-5.4-Cyber.
The timing was not subtle. OpenAI's blog post notes that its Trusted Access for Cyber programme launched months before Glasswing. Nobody at OpenAI mentions Mythos by name. But the framing is unmistakable: a cybersecurity-tuned variant of GPT-5.4, optimised for vulnerability research, binary reverse engineering, and defensive patching, rolled out to "thousands of individuals and organisations" through an expanded TAC programme with Know-Your-Customer identity verification.
The pitch mirrors Glasswing almost exactly. Put the sharpest model in the hands of defenders. Lock out attackers with verification gates. Talk about democratising security while restricting who gets to use the dangerous parts.
Bruce Schneier was unimpressed. He called Glasswing "very much a PR play" and said the security firm AISLE had replicated Mythos's findings using older, cheaper, publicly available models. Tom's Hardware pointed out that Anthropic's "thousands of zero-days" claim extrapolates from 198 manually reviewed reports, and the actual testing surfaced 10 severe vulnerabilities across 7,000 software stacks. On Mashable, Tal Kollender, CEO of cybersecurity firm Remedio, called it "brilliant corporate theater."
That phrase sticks. Corporate theater implies the performance matters more than the outcome. Both labs are now racing to position themselves as the responsible steward of offensive-grade capabilities. Anthropic restricts access to a coalition. OpenAI expands access to thousands but gates it behind KYC. The difference is philosophical (Anthropic trusts institutions; OpenAI trusts verified individuals), but the marketing structure is identical.
What neither company has answered convincingly is why a specialised cyber model is necessary when their general-purpose flagships already find vulnerabilities. Anthropic's own framing of Mythos as a general-purpose model that happens to be devastating at exploit discovery undercuts the idea that you need a dedicated product. If the capabilities emerge naturally from scale, gating access to one model while selling the base model commercially is a distinction without much security benefit.
The real signal might be financial. Codex Security, OpenAI's existing application security agent, has already contributed to over 3,000 fixed vulnerabilities. GPT-5.4-Cyber sits as the premium tier above it. Glasswing comes with $100 million in usage credits, which amounts to $100 million in locked-in API consumption across Anthropic, AWS, Google, and Microsoft. These are not just defensive programmes. They are enterprise sales channels dressed as public goods.
None of this means the capabilities are fake. Both models genuinely find bugs. The question is whether the theatrical framing (the coalitions, the gating, the carefully timed competitive releases) does anything a well-funded bug bounty programme wouldn't already do. Schneier's bet is that it doesn't. The labs are betting that it sounds like it does.
Sources:
- OpenAI expands Trusted Access for Cyber program — CyberScoop
- OpenAI Launches GPT-5.4-Cyber — The Hacker News
- On Anthropic's Mythos Preview and Project Glasswing — Schneier on Security
- Claude Mythos isn't a sentient super-hacker — Tom's Hardware
- Is Claude Mythos a PR stunt? — Mashable