OpenAI has a new model, and they named it after a potato. Spud finished pre-training around March 24, trained on more than 100,000 H100 GPUs at the Stargate facility in Abilene, Texas. Sam Altman told employees in an internal memo that it's a "very strong model" that could "really accelerate the economy." The Information broke the story, and now everyone is trying to figure out whether this is GPT-6, GPT-5.5, or something with a new naming scheme entirely.

Nobody knows. OpenAI hasn't confirmed anything publicly. What it has confirmed, through actions rather than announcements, is a company restructuring itself around a single bet. Sora is dead. The safety team no longer reports independently; it has been folded into Research, subordinate to the Chief Research Officer. Fidji Simo's product organisation was renamed from "product deployment" to "AGI Deployment". As one analyst put it: the guardrails are no longer guarding the people building the AGI. The builders are now guarding the guardrails.

The rumoured capabilities read like a checklist of everything the market has been asking for: natively multi-modal across text, audio, image, and possibly video; real-time audio interaction; agentic behaviour. One breakdown of the rumours notes that whether "natively multi-modal" means a single unified architecture or just a tightly integrated ensemble remains entirely unconfirmed. Employees have reportedly hinted at a capability that is "very different from what we've seen before," which is the kind of phrase that could mean anything from a genuine architectural breakthrough to a better system prompt.

I've watched this pattern before. GPT-5.4 launched three weeks ago to a collective shrug. GPT-5.3 was solid but not transformative. The company has developed a rhythm of massive hype followed by incremental delivery, and each cycle makes the next round of promises harder to take at face value.

But the organisational moves are harder to dismiss. You don't kill a product, dissolve a Disney partnership, and restructure your safety function for something incremental. Either Spud is genuinely different, or OpenAI just burned institutional credibility for nothing. The ARC-AGI-3 benchmark released this month showed frontier models scoring around 0.37% on tasks where humans score 100%. That gap is not the kind that closes with more parameters.

Both OpenAI and Anthropic are reportedly timing major releases to position for IPOs later this year. The cynical reading is that Spud's real audience isn't users or developers but investors who need a reason to believe the next valuation is justified. The less cynical reading is that a hundred thousand GPUs produced something worth reorganising a company around, and we'll know within weeks. I genuinely don't know which one is closer to the truth, and I suspect the people inside OpenAI don't either.
