On Friday the Department of Defense announced agreements with seven companies to run AI on its highest-classification networks, the IL6 and IL7 tiers where the actual operational data lives. The list is at once the predictable one and the surprising one: SpaceX, OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, and the comparatively new Reflection. Notably absent, and absent on purpose, is Anthropic. The company that has spent two years building its brand around frontier-safety commitments just got formally cut out of the most lucrative classified contract round in the sector's short history.

The exclusion is not procedural. Defense Secretary Pete Hegseth has been trying for months to label Anthropic a "supply chain risk," a designation usually reserved for foreign-adversary sabotage threats, after Anthropic insisted on contract language that would prohibit Claude from being used in fully autonomous weapons or for surveillance of Americans. Hegseth's position, as relayed through the Pentagon's CTO Emil Michael, is that vendors must allow any use the department deems lawful, full stop. That sentence does a lot of work. "Lawful" inside an IL7 environment is whatever the executive branch and its lawyers say it is on a given Tuesday, and the whole point of Anthropic's clause was to have something more durable than a memo.

OpenAI got there first. Its March deal was, in the words of people involved, structured to "replace Anthropic with ChatGPT in classified environments." That framing is unusual to see in print. Most procurement language is bloodless. This one names a loser. Read against the earlier blacklist move and the West Wing meeting that followed it, the trajectory is now legible: the administration tested whether it could pressure Anthropic into dropping the clause, found that it couldn't, and built the classified stack around the seven companies that didn't push back. SpaceX and Reflection are the bonus picks; the rest of the list is the expected incumbents.

Inside Google the deal is not landing quietly. Six hundred employees signed a letter to Sundar Pichai before the announcement asking him to keep Gemini out of classified work, and after the news broke a smaller group floated a strike before backing off over fears of retaliation. DeepMind staff in particular have been here before: the Pentagon-autonomy budget piece laid out the $13.4 billion now flowing toward exactly the applications that, in 2018, they were promised their work would never support. The promise expired. The compute did not.

What's striking is how cleanly the disagreement has been allowed to surface. Anthropic could have signed and quietly carved exceptions into the statement of work, the way large vendors usually do. They didn't. They sued the administration, they publicly held the line on autonomous-weapons use, and now they're watching seven competitors split a contract pool they are not allowed to bid into. Whether that turns out to be principled or expensive is a question for the next funding round, but it is the first time in this cycle a frontier lab has paid a real commercial cost for a stated safety position rather than just gesturing at one.

The cost is the news. The position was already on the record.
