Plutonic Rainbows

Lobbying With the Thing You Built

Anthropic has been privately briefing government officials about Claude Mythos for weeks, telling them the model could enable large-scale cyberattacks. The briefings started before the CMS leak made the model public. That detail matters.

The leaked draft was unusually candid. It admitted that "AI is currently providing more meaningful capability uplift to attackers than to defenders, and that gap is widening." Mythos can chain attack actions autonomously and run multiple hacking campaigns without human oversight. Security analysts at CSO Online noted the model's recursive self-fixing capability, compressing the gap between human and machine software engineering. Between the leaked draft and the external analysis, the picture is of something closer to a weapon than a product.

The question is what you do with that framing. Gizmodo called it directly: the "classic AI company playbook of talking up the dangers of a model to highlight how powerful and capable it is." I think that's right and incomplete at the same time. Anthropic is doing three things simultaneously: fighting the Pentagon over ethical guardrails on military AI use, warning those same officials that its product could facilitate mass cyberattacks, and preparing for what's rumoured to be a $60 billion IPO later this year. Each of those three positions reinforces the other two. The safety brand makes the government warnings credible. The government warnings make the capability story investable. The IPO pressure makes the capability story necessary.

None of which means the warnings are fabricated. Mythos triggered ASL-3 protections under Anthropic's Responsible Scaling Policy, meaning the company's own framework classified it as requiring enhanced security for model weights and deployment restrictions targeting cyber and biological misuse. Whether it approaches ASL-4, the tier defined by models that become "the primary source of national security risk in a major area," hasn't been disclosed. The leaked capabilities suggest the boundary is closer than anyone expected.

I wrote last week about how conveniently the leak landed. The government briefings add another layer. Pre-leak, they look like responsible disclosure. Post-leak, they look like groundwork, the kind of advance positioning that makes an "accidental" revelation feel less accidental. A company that already told the government "this thing is dangerous" has a much easier time controlling the narrative once the public finds out.

CrowdStrike lost roughly $15 billion in market cap on March 27, the day after the leak. Nearly half of cybersecurity professionals now rank agentic AI as their top threat vector. Anthropic gets to sit at the centre of both the problem and the proposed solution, which is a remarkable place to be when you're about to go public.

Downloading Room 13

Spitfire Audio released a sample library in February 2025: 1,087 sounds from the BBC Radiophonic Workshop, recorded at the original Maida Vale studios, sold as a virtual instrument for £149. You load it into your DAW and there they are. Test-tone oscillators. Junk percussion. Tape loops. The raw material of a future that got cancelled nearly three decades ago.

The Workshop opened in 1958, in Room 13 of Maida Vale. Desmond Briscoe and Daphne Oram built it to produce experimental sound for BBC radio and television: effects, incidental music, theme tunes for programmes that hadn't been invented yet. Delia Derbyshire arrived in 1962 and made the Doctor Who theme by constructing each note individually on quarter-inch mono tape, inch by inch. She and Dick Mills unwound the entire reel along the corridor to check for anomalies. Ron Grainer heard the finished piece and asked, "Did I write that?" The BBC refused Derbyshire a credit or royalties.

The Workshop closed in March 1998. Mark Ayres catalogued roughly 3,500 reels of tape. When Derbyshire died in 2001, her partner found 267 tapes in her attic. Tea chests and cardboard boxes, labels peeling off. The originals are too fragile to play.

I keep returning to Chris Christodoulou's 2018 paper, which frames these sounds as artefacts of "a utopian future that has been irrevocably lost." Mark Fisher was more blunt: the Workshop was state-funded. Thatcherism killed the model. The 1998 closure wasn't administrative. It was political.

Ghost Box Records, founded in 2004, built an entire aesthetic from the Workshop's DNA. Simon Reynolds described their approach to sampling as a kind of séance, retrieving voices from dead formats, making them undead rather than restored. You'd find these records in charity shop bins, between warped folk compilations and cracked library music LPs. The medium was part of the message. There's something about the weight of a 10-inch pressing that a FLAC file can't replicate, though I suspect that's nostalgia rather than acoustics.

The Spitfire library is something else entirely. The sounds are clean, categorised, tagged for search: Archive Content, Found Sounds, Junk Percussion, Tape Loops, Synths, Miscellany. Peter Howell and Paddy Kingsland contributed. Dick Mills, who unwound that tape with Derbyshire over sixty years ago, helped record new material. You can have the oscillators, the tape artefacts, the junk percussion. What you can't download is Room 13 itself: the institution, the funding model, the specific arrangement of public money and creative latitude that made someone think it was worth paying Delia Derbyshire to build a bassline from a single plucked string, one inch of tape at a time.

Broadband Money as an AI Weapon

The Trump administration has found a creative way to kill state AI laws: threaten to take away their internet money.

The legislative route failed first. Senator Ted Cruz proposed a 10-year moratorium on all state AI laws last May, tucked into the budget reconciliation bill. The House passed it 215-214. The Senate stripped it out 99 to 1, with Republican Senators Marsha Blackburn and Josh Hawley among those voting against it. Blackburn's reason was specific: it would override Tennessee's deepfake protections. When the Senate kills your preemption play 99 to 1 and your own party's senators help bury it, the legislative strategy is dead.

The states made their position clear before the White House even responded. In November, thirty-six attorneys general, led by New York's Letitia James, wrote to Congress opposing federal preemption. The coalition includes Idaho, Indiana, Kansas, Louisiana, Mississippi, South Carolina, Tennessee, and Utah alongside the blue states you'd expect. Republican state lawmakers have been even more blunt. Angela Paxton, a Republican state senator from Texas, put it plainly: "When you have no regulation, what you have is the wild west." Doug Fiefia, a Republican state representative from Utah and a former Google employee, said Congress "not only will not act, they can't act."

Then came the executive order. Signed in December 2025, it established three enforcement tools. A DOJ litigation task force to challenge state laws in court. A Commerce Department review to label state regulations "onerous." And the sharpest blade: conditioning $42.5 billion in BEAD broadband infrastructure funding on states agreeing not to enforce their AI laws. Texas alone received $1.27 billion in broadband grants. That's a lot of leverage disguised as telecommunications policy.

The order's legal footing looks precarious. John Bergmayer of Public Knowledge pointed to the 2023 Supreme Court decision in National Pork Producers Council v. Ross, which upheld a state law despite its substantial out-of-state effects; states routinely pass laws that touch interstate commerce. Congress has now rejected preemption twice. Without an actual federal regulatory framework in place, the argument that state laws "conflict with federal law" doesn't have much federal law to conflict with.

Meanwhile, states keep passing bills. Washington's governor signed chatbot and provenance laws the last week of March. Colorado's AI Act, delayed once already, finally takes effect this summer. California's transparency requirements for frontier AI developers are already on the books. The state-by-state picture suggests the administration faces a whack-a-mole problem it created for itself by rescinding Biden's executive order on inauguration day without replacing it with anything substantive. The vacuum invited exactly the fragmentation they're now scrambling to contain.

I keep thinking about the broadband angle. It's a genuinely novel tactic, using infrastructure money as a regulatory weapon against an entirely different policy domain. Whether courts let it stand is one question. Whether it signals a pattern of using federal funding as coercive leverage against AI dissent is a more uncomfortable one.

The .map File That Mapped Everything

A 59.8 megabyte source map file sitting in the npm registry. That's how 512,000 lines of Claude Code's TypeScript ended up on GitHub this morning, mirrored across half a dozen repositories before most people had finished their coffee. Security researcher Chaofan Shou found it in version 2.1.88 of the @anthropic-ai/claude-code package. Bun's bundler generates source maps by default. Nobody added them to .npmignore. The entire codebase shipped.
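Why does a stray .map file amount to shipping the repository? Because source maps, in the standard v3 format, carry a sourcesContent array holding the full original text of every bundled input file. Here is a minimal sketch of the recovery, assuming a Node-style TypeScript environment; the file name cli.js.map is my placeholder, not the actual artefact:

```typescript
import { readFileSync, writeFileSync, mkdirSync } from "node:fs";
import { dirname, join, normalize } from "node:path";

// The relevant slice of the source map v3 format: `sourcesContent`
// embeds the complete original text of every input file.
interface SourceMapV3 {
  version: number;
  sources: string[];          // original file paths, as the bundler recorded them
  sourcesContent?: string[];  // full original sources, emitted by default
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));

map.sources.forEach((source, i) => {
  const content = map.sourcesContent?.[i];
  if (content == null) return;  // some entries omit embedded text
  // Strip any leading "../" segments so nothing escapes the output directory.
  const safe = normalize(source).replace(/^(\.\.[/\\])+/, "");
  const outPath = join("recovered", safe);
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
});
```

The prevention is even shorter: a files whitelist in package.json, or a single *.map line in .npmignore, and the published tarball never contains the maps at all.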

The detail that keeps pulling me back is Undercover Mode. Buried in the source is a system that activates when Claude Code detects it's being used by an Anthropic employee in a public repository. It injects instructions into the system prompt telling the model not to "blow your cover," blocking it from outputting internal codenames, unreleased model references, or Slack channel names. And the entire mechanism, along with everything it was supposed to protect, shipped in a .map file that anyone could unpack with a single command. I wrote last week about Anthropic leaking its own model details through a CMS misconfiguration. Two packaging errors in the same month from a company whose entire brand is safety and careful deployment. The pattern is getting harder to call coincidence.

What the code actually reveals is more interesting than the leak itself. Claude Code runs 40+ discrete tools behind a permission engine that classifies risk levels, with a YOLO classifier for auto-approving low-risk actions. The multi-agent architecture spawns parallel sub-agents it calls "swarms," each running in isolated contexts. A 46,000-line query engine handles all LLM orchestration. This is not a wrapper around an API. It is a full operating environment.
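None of the leaked code needs quoting to see the shape of that engine. A permission layer that classifies each tool call and auto-approves only the lowest tier looks roughly like the sketch below; the names, tiers, and rules are my own illustration, not Anthropic's implementation:

```typescript
type Risk = "low" | "medium" | "high";

interface ToolCall {
  tool: string;                      // e.g. "read_file", "bash", "edit_file"
  args: Record<string, unknown>;
}

// Illustrative classification: reads are cheap, arbitrary shell is not.
function classify(call: ToolCall): Risk {
  if (call.tool.startsWith("read_")) return "low";
  if (call.tool === "bash") return "high";
  return "medium";
}

// Low-risk calls pass straight through (the auto-approve path);
// everything else escalates to the human.
async function requestPermission(
  call: ToolCall,
  askUser: (c: ToolCall) => Promise<boolean>,
): Promise<boolean> {
  const risk = classify(call);
  if (risk === "low") return true;
  return askUser(call);
}
```

The interesting engineering is all in the classifier: err in one direction and the agent nags constantly, err in the other and it runs destructive commands without asking.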

Then there's KAIROS, an always-on background daemon that monitors your project even when you're not actively prompting. It runs memory consolidation during idle periods through a process called autoDream, cycling through orientation, signal gathering, consolidation, and pruning. Forty-four feature flags gate capabilities that are fully built but compiled to false in the shipping build. The gap between what Claude Code is and what Claude Code ships as is significant.
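"Compiled to false" is doing real work in that sentence, and it explains how unshipped features leak at all. When a flag is a build-time constant, the bundler folds it, sees an if (false), and drops the gated branch from the output bundle; the original source, flag and all, still rides along in the map's sourcesContent. A sketch of the pattern, with an invented flag object (Bun's bundler, like esbuild's, supports this kind of substitution via a define option):

```typescript
// Substituted at build time, e.g. `bun build --define 'FLAGS.KAIROS=false'`.
declare const FLAGS: { readonly KAIROS: boolean };

declare function startDaemon(projectRoot: string): void;

export function maybeStartBackgroundWork(projectRoot: string): void {
  if (FLAGS.KAIROS) {
    // Dead code in the shipping build: after constant folding this branch
    // is `if (false)` and the bundler eliminates it entirely.
    startDaemon(projectRoot);
  }
}
```

So the shipped JavaScript contains none of this, while the shipped source map contains all of it.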

The most disarming discovery: a complete Tamagotchi pet system. Eighteen species across five rarity tiers, procedurally generated stats for DEBUGGING, PATIENCE, CHAOS, WISDOM, and SNARK, ASCII art sprites with animation frames. Deterministic gacha mechanics seeded per user. Someone at Anthropic built this. Deliberately.

Internal model codenames confirm what the earlier Mythos leak hinted at: Capybara maps to a Claude 4.6 variant, Fennec to Opus 4.6, and Numbat remains in prelaunch testing. None of this was supposed to be public.

The real takeaway isn't any single feature. It's the distance between Anthropic's public narrative of measured, careful deployment and the velocity of what's actually being built behind the flags. They have a background agent, a pet system, remote planning infrastructure, and an entire mode dedicated to hiding the fact that AI wrote the code. All of it shipping in the same package where someone forgot to exclude debug files.

The Lighthouse Refuses to Explain Itself

Robert Eggers built a 70-foot lighthouse in Nova Scotia, shot on Kodak Double-X 5222 (the same film stock used for Raging Bull), and directed two grown men screaming at each other about flatulence. The result is one of the best horror films of the last decade.

The Lighthouse traps Robert Pattinson and Willem Dafoe on a rock in the North Atlantic as two keepers descending into madness. Or maybe just one of them. The film doesn't clarify, and that refusal is its greatest strength.

Eggers has a specific talent for historical claustrophobia. His Nosferatu remake proved he could scale production design without losing the dread. But The Lighthouse works because it refuses to scale. The 1.19:1 aspect ratio, nearly square and tighter than old Academy, started as a joke from cinematographer Jarin Blaschke: "If 1.33 is confining, this will really give you what you're after." Eggers took him seriously. The frame squeezes everything into vertical compositions. Walls, the tower, two men stacked on top of each other. The poster captures it: two faces pressed into a column of grey, nowhere to look but up.

Blaschke earned an Oscar nomination for this, and it's deserved. He sourced 1930s lenses from Panavision's archive and collaborated with Schneider Optics on a custom orthochromatic filter that renders skin with the harshness of early photography. Dafoe's face looks like a geological formation. Pattinson's looks like it's decomposing in real time. The whole setup required nearly twenty times more light than The Witch, which Eggers shot digitally on an Alexa. Worth it.

Dafoe's account of the shoot is revealing. Camera positions were locked before the actors arrived. One chance per setup, no coverage. Pattinson refused to rehearse anything before cameras rolled. Dafoe, a Wooster Group veteran, wanted structured preparation. They barely spoke off-set. The tension wasn't manufactured.

The mythology runs deep. Prometheus reaching for forbidden light, Proteus shifting beneath the waves, the seabird curse pulled straight from Coleridge. Eggers drew dialect from Melville and Sarah Orne Jewett. But the film never stops to explain its references the way lesser genre work explains its monsters. It has the same refusal I admire in Enys Men, where isolation becomes its own grammar. The difference is two people bouncing the madness between them, and their combined instability hits harder than any lone figure staring at lichen.

The loose inspiration was the Smalls Lighthouse tragedy from early-1800s Wales, where two keepers named Thomas were stranded and one went mad watching the other's corpse decompose. Eggers has said the connection is "so loose that it's hard to impress how loose." The Lighthouse takes a true story, strips it for parts, and builds something closer to a hallucination than a narrative. I don't think it has a clean reading. I don't think it's supposed to.

Spud and the Silence Before the Pitch

OpenAI has a new model and they named it after a potato. Spud finished pre-training around March 24, built on over 100,000 H100 GPUs at the Stargate facility in Abilene, Texas. Sam Altman told employees in an internal memo that it's a "very strong model" that could "really accelerate the economy." The Information broke the story, and now everyone is trying to figure out whether this is GPT-6, GPT-5.5, or something with a new naming scheme entirely.

Nobody knows. OpenAI hasn't confirmed anything publicly. What they have confirmed, through actions rather than announcements, is a company restructuring itself around a single bet. Sora is dead. The safety team no longer reports independently but has been folded into Research, subordinate to the Chief Research Officer. Fidji Simo's product organisation got renamed from "product deployment" to "AGI Deployment". As one analyst put it: the guardrails are no longer guarding the people building the AGI. The builders are now guarding the guardrails.

The rumoured capabilities read like a checklist of everything the market has been asking for. Natively multi-modal across text, audio, image, and possibly video. Real-time audio interaction. Agentic behaviour. One breakdown of the rumours notes that whether "natively multi-modal" means a single unified architecture or just a tightly integrated ensemble remains entirely unconfirmed. Internal employees have apparently hinted at a capability that is "very different from what we've seen before," which is the kind of phrase that could mean anything from a genuine architectural breakthrough to a better system prompt.

I've watched this pattern before. GPT-5.4 launched three weeks ago to a collective shrug. GPT-5.3 was solid but not transformative. The company has developed a rhythm of massive hype followed by incremental delivery, and each cycle makes the next round of promises harder to take at face value.

But the organisational moves are harder to dismiss. You don't kill a product, dissolve a Disney partnership, and restructure your safety function for something incremental. Either Spud is genuinely different or OpenAI just burned institutional credibility for nothing. The ARC-AGI-3 benchmark released this month showed frontier models scoring around 0.37% on tasks where humans score 100%. That gap is not the kind that closes with more parameters.

Both OpenAI and Anthropic are reportedly timing major releases to position for IPOs later this year. The cynical reading is that Spud's real audience isn't users or developers but investors who need a reason to believe the next valuation is justified. The less cynical reading is that a hundred thousand GPUs produced something worth reorganising a company around, and we'll know within weeks. I genuinely don't know which one is closer to the truth, and I suspect the people inside OpenAI don't either.

The Leak That Knew Where to Land

I wrote yesterday about Anthropic's CMS misconfiguration and the 3,000 files it left searchable on the open web. The facts haven't changed. But the conversation has shifted from "what leaked" to a more interesting question: did Anthropic want this to happen?

The accident theory is straightforward. A content management system defaulted to public. Someone uploaded draft assets without toggling the setting. Security researchers found them. Fortune called Anthropic, and Anthropic pulled access that evening. Mundane infrastructure failure, the kind that happens at companies of every size.

Three details make the accident theory persuasive. First, the roughly 3,000 exposed files included internal planning documents and employee data; no PR team greenlights a controlled leak that ships HR records. Second, two competing draft versions existed, one branding the model Mythos, the other Capybara. That's unfinished work, not a polished marketing drop. Third, the scale itself is wrong. A strategic leak is a scalpel. This was a drawer left open.

But the timing is suspicious enough to keep the question alive.

Around the same time Fortune published the Mythos story, reports surfaced that Anthropic was exploring an IPO later in 2026. A leaked narrative about your most powerful model, one that "presages an upcoming wave" of unprecedented capability, is precisely the kind of story that strengthens an investor pitch without requiring an official announcement. You get the market signal without the regulatory exposure of making a public claim.

The draft itself was oddly ready. Structured with headings, publication dates, formatted as a launch page. As Implicator noted, it "reads like it was crafted for the audience that found it." Meanwhile, cybersecurity stocks dropped 3 to 9 percent on Friday. CrowdStrike fell 7 percent, Palo Alto Networks 6, Tenable hit a 52-week low. All before anyone outside Anthropic had tested the model.

The charitable reading is that Anthropic prepares polished drafts well before launch, which is normal, and the CMS error simply exposed one at an inconvenient moment. The less charitable reading is that the draft was ready because the exposure was ready. OpenAI is reportedly close to its own next-generation release. Both companies are shipping at a pace that rewards whoever controls the narrative first.

I think it was an accident. The HR records, the dual branding, the sheer messiness of 3,000 files all point toward genuine carelessness rather than coordination. However, the more useful observation is that it doesn't matter. Accidental or deliberate, the leak achieved what a controlled announcement would have: market repositioning, investor buzz, and competitor pressure, all without Anthropic having to stand behind a single benchmark in public. "Intentional or not," as one analyst put it, "the narrative landed exactly where it needed to."

The Atelier Between Empires

Spanish fashion in 1993 had a problem it didn't fully recognise. Pasarela Cibeles was eight years into its government-backed mission to position Madrid alongside Paris and Milan. Pasarela Gaudí kept Barcelona relevant. The infrastructure existed. What it lacked, mostly, was international credibility beyond a handful of names.

Purificación García was one of those names. She'd been showing at both runways almost from their first seasons, debuted in Milan in 1989, and opened boutiques in Tokyo, Osaka, and Kyoto by 1990. Then her Japanese investors withdrew, the expansion collapsed, and she was back in Barcelona with a small workshop and an approach she called "new couture." Most designers would have panicked through that gap. García seems to have treated it as permission.

The 1993 work sits squarely in this period. No corporate backing, no international retail network, just a designer operating at the scale where every fabric choice is a direct conversation between hand and material. García always worked from textiles first, selecting cloth before sketching silhouettes, and in the amber and pinstripe of her 1993 collection you can see that instinct given full authority. The blazer is structured but unhurried. The wrap skirt falls without fuss. Those layered resin beads do the work that a louder designer would have assigned to a print.

This was the same year Ralph Lauren was building his own case for restraint as luxury on a much larger stage, and Liza Bruce was turning swimwear engineering into daywear from a SoHo loft. The common thread across all three was a refusal to confuse volume with value. García's version felt different because it came from genuine constraint rather than aesthetic choice. She wasn't choosing minimalism from a position of abundance. She was working within actual limits, and the discipline showed.

Five years later she'd sign with Sociedad Textil Lonia, the company built by Adolfo Domínguez's siblings, and scale to hundreds of points of sale across four continents. The Origami bag would become her most recognised design. But the 1993 work belongs to the period before all of that, when the brand was just a woman in a Barcelona workshop deciding that good fabric and clean proportion were enough. They were.

Four Ways to Disappear

Four records arrived this week that share almost nothing except a disinterest in being loud.

Shinichi Atobe's Silent Way is the third release on his own Plastic & Sounds label and 69 minutes of the deep house and dub techno he's been refining since his Chain Reaction debut in 2001. Ten tracks, mastered by Rashad Becker, and a nearly twelve-minute centrepiece called "Rain 1" that does exactly what you'd expect from the title. Atobe doesn't surprise. He deepens. The record picks up where Discipline left off, which is to say it arrives already fully formed, patient, and entirely uninterested in explaining itself.

Rabit's Stranger in a Strange Land is 31 minutes of analog tape loops released on his own Halcyon Veil label, and it's his strongest work since Les Fleurs du Mal. Eric Burton describes it as "the most minimal in the discography but requires loud playback. Grounded in this paradox." That's accurate. The Houston producer has taken his DJ Screw worship and filtered it through something closer to The Caretaker, building codeine-soaked choral motifs and trunk-rattling sub-bass into compositions that feel like memories dissolving in real time. He lists his influences as UGK, the Screwed Up Click, and Coil. That combination shouldn't work. It does. Boomkat called it "one of the year's first great albums," which for a record this deliberately quiet is the right kind of praise.

Concrète Waves pairs Actress (Darren Cunningham) on laptop and drum machines with Suzanne Ciani on Buchla, fully improvised across two live sets at the Barbican and Sonar. Ninety minutes, 21 tracks. The Barbican sessions blur the two voices so completely that Cunningham's greyscale textures become indistinguishable from Ciani's analogue tones. Ciani designed the Coca-Cola pop-and-pour sound and the Xenon pinball machine audio, which is one of those facts that recontextualises everything once you know it. This is the inaugural release on a revived Werkdiscs, and the vinyl ships in June.

Fennesz's The Last Days of May is a single 24-minute piece composed for an installation at the Art Gallery of New South Wales. Originally on Longform Editions before the label closed, now reissued on Touch. If Mosaic was his most reflective album, this is its quieter companion: hand-turned knobs rather than laptop processing, an eerie recurring melody, and a focus on physical modelling synthesis that he says was inspired by Birthday Party guitarist Rowland S. Howard. It builds like a brisk wind and then it stops. Twenty-four minutes is the right length for something this focused.

The Safety Company That Leaked Itself

Anthropic's content management system has a toggle. Public or private. Someone forgot to flip it.

That's how roughly 3,000 unpublished assets ended up publicly searchable on the open web. Draft blog posts, images, PDFs. Among them: a detailed draft announcing Claude Mythos, described as "by far the most powerful AI model we've ever developed." Security researchers Roy Paz and Alexandre Pauwels found the cache. Fortune broke the story on Thursday. Anthropic called it "human error" and removed public access.

The irony needs no elaboration. A company that has made AI safety its founding identity, that walked away from a $200 million Pentagon contract over surveillance and weapons concerns, exposed its most sensitive model details through a checkbox.

The model sits above Opus in Anthropic's hierarchy. Two versions of the draft existed, one calling it Mythos, the other Capybara. The leaked documents claim "dramatically higher scores on tests of software coding, academic reasoning, and cybersecurity" compared to Opus 4.6. Anthropic confirmed they're developing "a general purpose model with meaningful advances in reasoning, coding, and cybersecurity" and called it "a step change."

The cybersecurity angle is what moved markets. The draft warned Mythos is "currently far ahead of any other AI model in cyber capabilities" and "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." CrowdStrike dropped 7 percent on Friday. Palo Alto Networks fell 6. Stifel analyst Adam Borg called it "the ultimate hacking tool."

This is the dual-use problem made concrete. A model that discovers zero-day vulnerabilities helps defenders patch them. It also hands attackers a map to every unlocked door. Anthropic says they're rolling out to cybersecurity organisations first, giving defenders a head start. Whether that advantage survives broader availability is the question nobody can answer yet.

Futurism raised a fair point: frontier AI companies routinely claim breakthrough capabilities, and OpenAI's underwhelming GPT-5 launch should temper expectations. The difference is that Anthropic didn't choose to make these claims publicly. The draft was written for internal use, which makes the language harder to dismiss as marketing. Companies tend to be more honest in documents they don't expect anyone to read.

The model is reportedly expensive to serve, with no public release date. Anthropic is being "deliberate," which is the right word for a company whose safety reputation just absorbed an unforced error. The leak didn't expose model weights or API access. But for a company whose entire brand rests on the claim that it handles powerful AI more carefully than anyone else, a misconfigured CMS toggle is a difficult look.
