Bloomberg reported on Sunday that OpenAI, Anthropic, and Google have started sharing threat intelligence through the Frontier Model Forum, the nonprofit the three companies co-founded with Microsoft in 2023. The arrangement works like a cybersecurity ISAC (Information Sharing and Analysis Center): when one company detects a suspicious query pattern, it flags the signature for the others.
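
To make the mechanics concrete, here is a minimal sketch of what a shared indicator could look like, assuming the labs exchange small structured records in the style of existing threat-intel formats like STIX. The Forum has not published any format; every field name below is hypothetical.

```python
# Hypothetical sketch only: the Frontier Model Forum has not published a
# signature format. This imagines the ISAC-style record one lab might flag
# for the others; all field names are invented for illustration.
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class ThreatSignature:
    source_lab: str       # which lab observed the pattern
    indicator_type: str   # e.g. "query_pattern" or "proxy_network"
    pattern: str          # digest or regex describing the traffic
    confidence: float     # reporter's confidence, 0.0 to 1.0
    first_seen: float = field(default_factory=time.time)

# One lab detects a templated prompt family and shares its digest.
sig = ThreatSignature(
    source_lab="lab-a",
    indicator_type="query_pattern",
    pattern="sha256:d41d8c...",  # placeholder digest, not a real value
    confidence=0.85,
)
print(json.dumps(asdict(sig), indent=2))  # the payload a peer would ingest
```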

The target is adversarial distillation. Chinese labs — DeepSeek, Moonshot AI, and MiniMax — have been systematically querying Claude, ChatGPT, and Gemini through fake accounts to generate training data for cheaper models. Anthropic's February disclosure put numbers to it: roughly 24,000 fraudulent accounts generating over 16 million exchanges with Claude alone. MiniMax accounted for 13 million of those. The operations used what Anthropic called "hydra cluster" architectures — sprawling proxy networks managing thousands of accounts simultaneously, mixing distillation traffic with innocuous requests to avoid detection.
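
The traffic-mixing detail matters: it defeats per-account rate limits, which suggests the useful signals live in correlations across accounts rather than in any single account's behavior. As a rough illustration only (Anthropic has not published its detection logic; thresholds and fields here are invented), a heuristic in that spirit might look like this:

```python
# Illustrative heuristic, not Anthropic's actual detector. The idea: a
# distillation ring hides behind many accounts, but those accounts share
# proxies and reuse the same prompt templates, so cross-account correlation
# exposes what per-account volume checks miss.
from collections import defaultdict

def flag_suspect_proxies(requests, min_accounts=50, min_overlap=0.7):
    """requests: iterable of (account_id, proxy_ip, prompt_template_hash)."""
    accounts_by_proxy = defaultdict(set)
    templates_by_account = defaultdict(set)
    for account, proxy, template in requests:
        accounts_by_proxy[proxy].add(account)
        templates_by_account[account].add(template)

    flagged = set()
    for proxy, accounts in accounts_by_proxy.items():
        if len(accounts) < min_accounts:
            continue  # a proxy fronting a handful of accounts is unremarkable
        # Jaccard overlap of prompt templates across the accounts behind this
        # proxy: scripted distillation reuses templates heavily; organic users
        # behind a shared NAT mostly do not.
        sets = [templates_by_account[a] for a in accounts]
        shared = set.intersection(*sets)
        union = set.union(*sets)
        if union and len(shared) / len(union) >= min_overlap:
            flagged.add(proxy)
    return flagged
```

Mixing in innocuous requests lowers any one account's suspicious volume, but it does not decorrelate the templates the operation's own scripts reuse, which is the property a check like this leans on.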

The Decoder has a good free summary of the Bloomberg story; per Bloomberg's reporting, US authorities estimate the practice costs American AI labs billions of dollars annually.

What's interesting isn't the distillation itself. That problem has been visible since DeepSeek R1 shook the market in January 2025. What's interesting is the vehicle. The Frontier Model Forum was chartered to study catastrophic risks: CBRN (chemical, biological, radiological, and nuclear) threats, advanced cyberattacks, the kind of existential scenarios that get discussed at Senate hearings. Its stated mission mentions nothing about distillation, model copying, or commercial intelligence. The pivot from "prevent bioweapon synthesis" to "detect bulk API scraping" is a significant scope expansion, and nobody seems to have remarked on it.

The legal footing under all of this is surprisingly weak. Fenwick & West's analysis found that copyright offers little protection, because AI outputs generally lack the human authorship that protection requires. The Computer Fraud and Abuse Act has had a gap since Van Buren v. United States (2021): if you have authorized API access, misusing the data violates terms of service but possibly not federal law. Trespass to chattels requires proving the queries actually degraded the system. Patents may be the strongest tool, but nobody has tested distillation-specific claims in court.

Policy hawks are pushing harder. Joe Khawam at the Law Reform Institute proposed a three-phase escalation: Entity List designation for the three Chinese labs, an executive order under IEEPA (the International Emergency Economic Powers Act) creating sanctions authority over AI capability theft, and ultimately full blocking sanctions via the Specially Designated Nationals (SDN) list. CSIS testimony from May 2025 went further, suggesting offensive countermeasures including data poisoning.

The irony sits right on the surface. These are companies that built their models by ingesting the open web, books, articles, code repositories, and forum posts, all without explicit permission from creators. The legal and ethical arguments they used to justify that training are structurally similar to the ones Chinese labs could deploy to justify distillation. Monash University's analysis compared distillation to reverse engineering under Sega v. Accolade: studying a system's outputs to learn its methods is not, historically, the same as copying the system.

None of this means the alliance won't work. Sharing detection signatures is a practical step. DeepSeek has already pivoted to domestic silicon, which suggests the API route was always supplemental. But the Forum's quiet transformation from safety research body to competitive defense mechanism deserves more scrutiny than it's getting. When three companies that control most of the world's frontier AI capability coordinate to restrict access, the word for that depends entirely on where you're standing.

Sources: