Plutonic Rainbows

The Anniversary Collection Nobody Rushed

Valentino presented his Fall/Winter 1991-1992 haute couture collection in Paris in July of that year, and the timing was not incidental. Weeks earlier, the house had celebrated its thirtieth anniversary with a three-day gala in Rome: a garden lunch at Valentino's villa on the Appian Way, an exhibition called "Thirty Years of Magic" at the Capitoline Museum, and a formal ball at the Villa Medici where Elizabeth Taylor, in a crystal-embroidered Valentino gown, told the New York Times the show was "so beautiful it makes you want to cry." The couture collection that followed carried the weight of all that ceremony without buckling under it.

The silhouettes were architectural. Silk gazar shaped into sculptural forms, hand-applied embroidery, capes that framed the body rather than clinging to it. The belted grey dress with exaggerated cape sleeves in this photograph is representative of the collection's restraint: the colour palette muted, the construction precise, the drama coming entirely from proportion. Necklines revealed the collarbone. Fabrics held their shape without assistance. Everything was built rather than styled.

The runway was stacked with the names that defined the era. Christy Turlington, Linda Evangelista, Karen Mulder, Naomi Campbell, Claudia Schiffer. Valentino had been showing couture in Paris since 1975, one of the first Italian designers accepted onto the French calendar, and by 1991 he occupied a position that required neither explanation nor defence. The fashion press rated him alongside Yves Saint Laurent and Karl Lagerfeld. His clientele included Princess Diana and Jackie Kennedy Onassis.

What makes the collection interesting in retrospect is what it refused to do. Martin Margiela and the Antwerp Six were already rewriting the vocabulary of fashion. Deconstruction was gathering force. Valentino's response was to build another couture collection with the same discipline he had applied for three decades: scalloped trims, circular ruffles, Valentino Red anchoring even the most restrained compositions. He did not chase reaction. He did not attempt irony. The garments existed as arguments for continuity in a year when continuity felt increasingly unfashionable.

Thirty years of the same conviction, presented in a city that was not his own, to an audience that kept returning. The supermodels who walked his runway that season would scatter across a dozen other shows within days, but for that afternoon in July, the proposition was singular: refinement does not expire.

Built, Not Borrowed

Microsoft shipped three AI models on Thursday. Not OpenAI's models repackaged with Azure branding. Its own.

MAI-Transcribe-1 handles speech-to-text across 25 languages with a 3.8% word error rate on the FLEURS benchmark, lower than Whisper across all 25 languages, lower than Gemini Flash on most of them. MAI-Voice-1 generates a minute of speech in under a second from a ten-second voice sample. MAI-Image-2 landed third on the Arena.ai leaderboard for image generation on arrival. All three are available now through Microsoft Foundry, the rebranded Azure AI platform.
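
A quick aside on the metric, since the headline number only means something if you know what it counts. Word error rate is word-level edit distance divided by the length of the reference transcript. A minimal sketch of the standard definition, not the FLEURS harness itself:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed here as word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the cat sat on the mat", "the cat sat on a mat"))  # ≈ 0.167
```

A 3.8% WER means roughly one word in twenty-six comes out wrong, which across 25 languages is a strong result.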

The teams that built them were small. Mustafa Suleyman said the transcription model was the work of ten people. The image team, roughly the same size. His MAI Superintelligence group didn't exist until November 2025, which means Microsoft went from forming the unit to shipping production models in about five months.

That timeline only makes sense in context. Until October 2025, Microsoft was contractually unable to build its own frontier models because the OpenAI partnership agreement explicitly carved out AGI and superintelligence research as OpenAI's domain. The September renegotiation changed the terms. Five weeks later, Suleyman had a team. Five months after that, three models.

None of them are large language models. Transcription, voice synthesis, image generation. These are adjacent territories, the kind of work that doesn't directly threaten GPT or o-series. A diplomatic first move. Suleyman said the goal is state-of-the-art performance across text, image, and audio by 2027, which means the LLM is coming. He just isn't leading with it.

The pricing tells its own story. MAI-Transcribe-1 costs $0.36 per hour with roughly half the GPU overhead of competitors. When you're spending hundreds of billions on AI infrastructure, undercutting on price isn't generosity. It's leverage. Microsoft can afford to run these models at margins that would bleed a startup dry, and the integration points are already live: Copilot, Bing, PowerPoint.

The OpenAI relationship, officially, remains strong. A February joint statement said as much. Azure stays the exclusive cloud provider for OpenAI's APIs through 2032. But OpenAI signed deals with AWS, and Microsoft just shipped models that beat Whisper on every benchmark they tested. The word "partnership" is doing increasingly heavy lifting.

What's interesting isn't the models themselves. Speech transcription and image generation aren't unsolved problems. What's interesting is the speed, the signal, and the silence from Redmond about what comes next. Suleyman's team has twelve months before his own deadline. The LLM-shaped gap in the lineup won't stay empty.

The Diamond W in the Lobby Tiles

It took ninety-nine years to build and forty-two days to obliterate. The closing waves ran from 27 December 2008 to 6 January 2009, and by the end all 807 Woolworths stores across Britain were gone. Staff learned of the collapse not from the company but from BBC television. The chain died ten months before its centenary.

Every British town has the scar. Not always visible from the pavement. Sometimes a Poundland now, sometimes a B&M, sometimes just boards and a letting agent's number fading in the window. But the building remembers. Historic England documented eight architectural features that identify a former Woolworths even decades after the signage came down: bronze-framed shopfronts with curved glass corners, hammered cathedral glass on the first floor, Art Deco faience cladding with chevron patterns. And in the lobby, if you look down, the Diamond W in the floor tiles.

One hundred and forty-seven of the 807 sites became Poundlands. Nearly a fifth of the estate, absorbed into a chain that offers a diminished echo of what Woolworths provided. The floor plan is often unchanged. The Art Deco shell remains. What disappeared is harder to name. Something about the range, the ambition, the seven million weekly shoppers who treated it as public infrastructure rather than retail.

The pick 'n' mix counter is the thing everyone remembers. Not because the sweets were exceptional but because the act was. You stood at a shared counter and chose for yourself from an abundance that belonged to no one in particular. No algorithm. No delivery window. A physical, tactile, democratic transaction with sugar. Everyone over thirty-five can place themselves at one. No child born after 2008 has any material referent for it.

Andy Latham, a former manager, tried to resurrect it. His chain Alworths opened in eighteen former locations on 5 November 2009, timed deliberately to the centenary of the original Liverpool store. Pick 'n' mix, music, games. It collapsed after eighteen months. You cannot will back into existence the thing the market killed. The attempt is haunted by the original, a copy whose failure only confirms the irreversibility of loss.

Seventy-four of the 807 units sit fully vacant. Many were occupied at some point before falling empty again, a second death quieter than the first. Forty-eight have left retail entirely: housing, leisure, pubs. The building stops pretending to be a shop. That might be the honest answer, the only one that does not involve wearing the dead thing's clothes.

The cancelled futures that haunt British public space are not always grand. Sometimes they are a laminate counter at child height, a pressed steel ceiling hidden behind suspended tiles, a Diamond W that nobody sweeps but nobody removes. The building carries what the high street has forgotten how to say.

The Leak Anthropic Couldn't DMCA Away

A 59.8 megabyte source map file. That is what separated Anthropic's most sophisticated product from the public domain. The @anthropic-ai/claude-code npm package shipped with a .map file that pointed to a zip archive sitting on Anthropic's own Cloudflare R2 storage bucket. Anyone could download it. Inside: approximately 1,900 TypeScript files, 512,000 lines of unobfuscated code, and the complete architectural blueprint of the agentic harness that makes Claude Code work.

Security researcher Chaofan Shou found it on March 31. By the time Anthropic responded, the source had been forked 41,500 times on GitHub.

The root cause was not exotic. Bun, the JavaScript runtime Claude Code uses, generates source maps by default. Somebody needed to add *.map to .npmignore, or to ship an explicit files allowlist in package.json that excluded them. Nobody did. Gabriel Anhaia, a software engineer who analysed the leak, put it plainly: "A single misconfigured .npmignore or files field in package.json can expose everything." Anthropic engineer Boris Cherny later acknowledged "a manual deploy step that should have been better automated." The identical vector had leaked source code thirteen months earlier, in February 2025. The fix was never properly automated.
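
For anyone who publishes to npm, the fix really is one line. A minimal sketch of the exclusion, not Anthropic's actual configuration:

```
# .npmignore: keep generated source maps out of the published tarball
*.map
```

Running npm pack --dry-run before publishing prints the exact file list that will ship, which is the cheap automated check that would have caught this vector both times.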

This was Anthropic's second public exposure in five days. I wrote last week about the CMS misconfiguration that left 3,000 unpublished files searchable, including draft blog posts revealing the internal codename Mythos for an unreleased model family above Opus. That leak was embarrassing. This one was structural.

The distinction matters. A CMS toggle is a configuration error. Shipping your entire source tree to npm is a pipeline failure, one that had already happened before and was supposedly addressed. The question of whether the Mythos leak was accidental is interesting in its own right, but nobody is suggesting Anthropic wanted 512,000 lines of TypeScript indexed on every package manager mirror on Earth.

What the code revealed is more interesting than how it escaped.

The leak exposed Claude Code's full tool system, fewer than twenty tools enabled by default and sixty-plus in total, including file editing, bash execution, and web search. It revealed a three-tier memory architecture designed around context window conservation: an index layer always loaded into the conversation, topic files pulled on demand, and transcripts searchable via grep but never loaded directly. The system treats memory as hints rather than truth, which is a surprisingly honest design philosophy for a product that markets itself on reliability.
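
The tier structure is easy to sketch. What follows is my own reconstruction of the described pattern in Python, with invented names and file layout, not Anthropic's code:

```python
from pathlib import Path

class TieredMemory:
    """Illustrative three-tier memory: an always-loaded index, topic files
    fetched on demand, transcripts searched but never loaded wholesale."""

    def __init__(self, root: Path):
        self.root = root
        # Tier 1: a small index injected into every conversation.
        self.index = (root / "index.md").read_text()

    def topic(self, name: str) -> str:
        # Tier 2: a topic file pulled into context only when requested.
        return (self.root / "topics" / f"{name}.md").read_text()

    def search_transcripts(self, needle: str, limit: int = 5) -> list[str]:
        # Tier 3: grep-style search returning matching lines with provenance,
        # never whole transcripts. Hints rather than truth.
        hits = []
        for path in sorted((self.root / "transcripts").glob("*.txt")):
            for line in path.read_text().splitlines():
                if needle.lower() in line.lower():
                    hits.append(f"{path.name}: {line.strip()}")
                    if len(hits) >= limit:
                        return hits
        return hits
```

The conservation argument falls out of tier three: the model sees a handful of matching lines instead of megabytes of history, and the context window stays free for the actual task.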

More revealing was KAIROS, an unreleased autonomous daemon mode that runs continuously via a heartbeat prompt asking "anything worth doing right now?" It integrates with GitHub webhooks, operates on five-minute cron cycles, and includes a /dream command for background memory consolidation. Forty-four hidden feature flags gate unreleased capabilities including voice commands, browser control via Playwright, and multi-agent orchestration. The source comments reference internal model codenames: Capybara for v8 with a one-million-token context window, Numbat and Fennec for upcoming releases, and Tengu, which appears in connection with something called "undercover mode."
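
The daemon pattern itself is mundane, which is part of what makes it striking. Here is a hedged sketch of a five-minute heartbeat loop; the agent interface is invented, and only the prompt phrasing comes from the leak reporting:

```python
import time

HEARTBEAT_PROMPT = "anything worth doing right now?"

def heartbeat_loop(agent, interval_seconds: int = 300):
    """Illustrative autonomous loop: poll the agent on a fixed cadence,
    act only when it proposes concrete work."""
    while True:
        proposal = agent.ask(HEARTBEAT_PROMPT)  # hypothetical agent API
        if proposal is not None:
            agent.execute(proposal)             # hypothetical, e.g. a webhook-driven task
        time.sleep(interval_seconds)            # five-minute cron-style cycle
```

Everything interesting lives in what agent.execute is allowed to do, which is exactly why an always-on mode ships behind a feature flag.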

Undercover mode deserves its own paragraph. It is enabled by default for Anthropic employees working in public repositories. The system suppresses internal codenames, unreleased version numbers, references to "Claude Code," and Co-Authored-By attribution lines. The leaked configuration exposed 22 private Anthropic repository names. The opacity is not inherently sinister; companies routinely scrub internal references from public commits. But for a lab that has built its brand on transparency and careful stewardship, the discovery of a system specifically designed to hide AI involvement in public code contributions is not a great look.

The codebase also contained anti-distillation defences: decoy tool definitions injected into system prompts to poison any training data scraped from Claude Code sessions, plus cryptographically signed server-side summaries that prevent access to full reasoning chains. A 9,707-line bash security system uses tree-sitter WASM AST parsing with 22 unique validators. And buried in the source comments, a documented parser differential vulnerability where carriage return characters could bypass command validation, because shell-quote and bash disagree on what constitutes whitespace.
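
Parser differentials deserve a concrete toy, because the class of bug is easy to state and easy to miss. The sketch below is my own illustration, not the actual shell-quote/bash mechanics: a validator and an executor that disagree about what separates commands, so the string the validator approves is not the command list that runs.

```python
ALLOWED = {"echo", "ls"}

def validate(cmd: str) -> bool:
    # Validator assumes '\n' and ';' are the only command separators.
    for part in cmd.replace(";", "\n").split("\n"):
        if part.strip() and part.split()[0] not in ALLOWED:
            return False
    return True

def executor_commands(cmd: str) -> list[str]:
    # Executor also treats '\r' as a line terminator.
    for sep in ("\r\n", "\r", ";"):
        cmd = cmd.replace(sep, "\n")
    return [c.strip() for c in cmd.split("\n") if c.strip()]

payload = "echo hi\rrm -rf /tmp/scratch"
print(validate(payload))           # True: the validator sees one echo command
print(executor_commands(payload))  # ['echo hi', 'rm -rf /tmp/scratch']
```

Two parsers, one string, two different programs. Any validation layer that does not tokenise exactly the way the downstream shell does is an attack surface.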

An internal BigQuery comment, timestamped March 10, noted that 1,279 sessions had experienced fifty or more consecutive compaction failures, wasting approximately 250,000 API calls daily before a cap of three retries was applied. That is the kind of detail that transforms a leak from an IP issue into a product credibility question.

One function in the codebase spans 3,100 lines and has a cyclomatic complexity of 486. The Hacker News thread, which accumulated 2,074 points and over a thousand comments, featured a lively debate about whether traditional code quality standards apply to AI-generated software. Some argued that velocity matters more than structure when models write the code. Others pointed out that humans still have to maintain it. I find myself in the second camp, but the argument is genuinely unsettled.

The community response was immediate and aggressive. The primary mirror repository hit 32,600 stars before Anthropic's legal team intervened. A developer using the handle @realsigridjin released Claw Code, a ground-up Python port built using OpenAI's Codex to sidestep copyright claims. It reached 75,000 stars and remains online. SafeRL-Lab published nano-claude-code, a minimal 900-line reimplementation supporting Claude, GPT, Gemini, DeepSeek, and local models. Multiple analysis repositories appeared, mapping the architecture in detail. The genie is not going back in the bottle.

Between 00:21 and 03:29 UTC on March 31, attackers published typosquatted npm packages targeting users attempting to compile the leaked code, bundling a remote access trojan. The supply chain attack was discovered quickly, but it illustrates a second-order risk that Anthropic's official statement did not address. "No sensitive customer data or credentials were involved" is technically accurate and completely beside the point when your leaked code is being weaponised as a lure within hours.

The DMCA response made things worse. Anthropic filed takedown notices that accidentally removed approximately 8,100 GitHub repositories, including legitimate forks of Anthropic's own public Claude Code repository that contained none of the leaked source. Boris Cherny acknowledged: "This was not intentional, we've been working with GitHub to fix it." Anthropic retracted notices for all but one repository and 96 forks containing the actual leaked material. The formal DMCA filing is publicly visible on GitHub's transparency repository. Nuking eight thousand innocent repos to protect code that was already mirrored across dozens of platforms is not a strategy. It is damage compounding.

The broader pattern is what concerns me. Anthropic has positioned itself as the careful lab, the one that thinks about safety before shipping, the one that walks away from defence contracts over ethical concerns. Two major leaks in five days, one of them a repeat of a known vector from thirteen months earlier, followed by a DMCA overreach that punished thousands of uninvolved developers. The engineering quality of the leaked codebase was broadly praised (the memory architecture is clever, the anti-distillation measures are sophisticated), but operational security is not about how good your code is. It is about whether your release pipeline remembers to exclude the source map.

Security researcher Roy Paz, writing for LayerX, noted that the exposure reveals "nonpublic details about how the systems work, such as internal APIs and processes," potentially informing attempts to circumvent existing safeguards. The compaction system's inability to distinguish user instructions from injected file content was specifically flagged as an attack surface. The bash parser differential is a concrete, exploitable vulnerability.

Competitors now have a detailed map of Anthropic's product direction. The feature flags, the model codenames, the KAIROS architecture, the anti-distillation approach. This is the kind of intelligence that normally costs months of reverse engineering or a well-placed hire. Anthropic handed it out for free, twice in one week, because somebody forgot a line in a config file.

I keep thinking about the Cursor situation from the week before, where a model identifier leaked through an API endpoint and revealed that Composer 2 was running on Moonshot AI's open-source Kimi K2.5. The AI developer tools space has a transparency problem that runs deeper than any single incident. Companies build proprietary products on foundations they do not fully disclose, then act surprised when the seams show. The difference with Anthropic is that the seams showed everything.

Gemma 4 and the Apache Pivot

Google released Gemma 4 today. Four model sizes, multimodal across the board, and a license change that matters more than any benchmark number on the page.

The headline spec is a family of open-weight models built from the same research as Gemini 3. The family comprises a 31B dense model, a 26B mixture-of-experts variant that activates only 4 billion parameters at inference time, and two edge-optimised models (E4B and E2B) small enough to run on a Raspberry Pi 5. The context windows stretch to 256K tokens on the larger models and 128K on the smaller ones. All four handle images and text natively. The edge models add audio input. The larger two process video.

None of that is the story.

The story is Apache 2.0.

Gemma 3 shipped under a custom license, Google's own "Gemma Terms of Use," which imposed restrictions that made legal teams nervous and hobbyists indifferent. It was open in the way that a restaurant with a dress code is open. You could walk in, but the terms reminded you this was someone else's house. Gemma 4 drops all of that. Apache 2.0 means no usage caps, no commercial restrictions, no "contact us if you exceed 700 million monthly active users" clause like Meta's Llama license carries. Fork it, ship it, sell it, modify it without asking. The freedom is unconditional.

This is Google choosing to compete on capability rather than control. And the capability argument is strong. The 31B dense model ranks third on the Arena AI text leaderboard. The 26B MoE variant, running on just 4 billion active parameters, sits sixth, outperforming models with twenty times its effective compute budget. Google's own framing is "intelligence per parameter," and the numbers back it up. A model that small matching frontier-class open weights running at 100B+ parameters is not incremental progress.

The architecture has some genuinely interesting choices. Alternating attention layers split work between local sliding-window attention (512 or 1024 tokens depending on model size) and global full-context layers. Each attention type gets its own RoPE configuration: standard for local, proportional for global. A feature called Per-Layer Embeddings feeds a secondary signal into every decoder layer, combining token identity with contextual information, which seems to be how they squeeze so much quality out of fewer parameters. The shared KV cache reuses key-value tensors from earlier layers in later ones, cutting memory without obvious quality loss. It is a dense collection of efficiency tricks that compound.
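
To make the alternating pattern concrete, here is a small numpy sketch of the two mask shapes involved. The layer schedule and window size are illustrative placeholders, not Gemma's actual configuration:

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    # Global layer: each token attends to every earlier token.
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Local layer: each token attends only to the previous `window` tokens.
    mask = causal_mask(seq_len)
    for i in range(seq_len):
        mask[i, : max(0, i - window + 1)] = False
    return mask

def layer_mask(layer_idx: int, seq_len: int, window: int = 512) -> np.ndarray:
    # Illustrative schedule: every sixth layer gets full global attention.
    if layer_idx % 6 == 5:
        return causal_mask(seq_len)
    return sliding_window_mask(seq_len, window)
```

The memory win comes through the KV cache: a local layer only has to retain the last window of keys and values, so most of the network's depth stops scaling with context length, and only the occasional global layer pays the full 256K bill.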

The on-device numbers are where this gets practical. On a Raspberry Pi 5, the E2B model hits 133 tokens per second on prefill and 7.6 tokens per second on decode, using less than 1.5GB of memory with 2-bit quantization. Four thousand input tokens across two distinct skills process in under three seconds on mobile GPU. These are not synthetic benchmarks designed to flatter a press release. Raspberry Pi inference is the kind of thing people will actually try within hours of a release, and if those numbers hold, this becomes the default local model for a lot of embedded and mobile work.

I keep circling back to the agentic framing. Google is not positioning Gemma 4 as a chatbot engine. The marketing language says "purpose-built for advanced reasoning and agentic workflows," and the tooling reflects it: constrained decoding for structured outputs, multimodal function calling, GUI element detection, object detection and pointing. These are the primitives you need for an AI agent that can look at a screen, understand what it sees, decide what to do, and call the right function. The fact that it works offline, on a phone, without phoning home to a cloud endpoint, makes the agentic pitch credible in a way that server-dependent agents never quite were.

The ecosystem support at launch is unusually comprehensive. Day-one availability across Hugging Face Transformers, llama.cpp, MLX for Apple Silicon, Ollama, mistral.rs, ONNX, and browser-based inference through WebGPU via transformers.js. Google clearly pre-coordinated with the major frameworks. When I wrote about model discovery and pricing a couple of weeks ago, the friction was still in finding and deploying the right model. Gemma 4 arrives already integrated into every tool people actually use.
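
If the day-one integrations hold, trying it should be a few lines. A hedged sketch using Hugging Face Transformers; the model id below is a guess for illustration, so substitute whatever the actual Gemma 4 repositories are named:

```python
from transformers import pipeline

# "google/gemma-4-31b-it" is a hypothetical id; check the real
# collection on Hugging Face before running this.
generator = pipeline("text-generation", model="google/gemma-4-31b-it")

out = generator(
    "Explain per-layer embeddings in two sentences.",
    max_new_tokens=128,
)
print(out[0]["generated_text"])
```

The same weights should load through llama.cpp, Ollama, or MLX with their usual commands, which is the point of pre-coordinating the launch.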

What Google is doing here has a clear strategic logic. The Gemini 3.1 Pro updates showed them closing the gap with Claude and GPT on their proprietary side. Now the open side gets a model built from the same research foundations, under the most permissive license in the major open-weight landscape. Meta's Llama has its commercial threshold. Mistral has been ambiguous about which models are truly open. Google just removed every legal obstacle at once.

The 140+ language support is quietly significant. Most open models optimise for English with a handful of other languages bolted on. Google's multilingual training infrastructure, built for Search over two decades, gives Gemma 4 a natural advantage here. For developers building products outside the English-speaking world, this might be the deciding factor regardless of benchmark position.

I'm less certain about the video capabilities in the larger models. Processing video natively is useful, but the context window arithmetic gets expensive fast. A few minutes of video at reasonable frame rates will consume a large fraction of that 256K window, leaving limited room for reasoning about what was seen. The image and audio capabilities feel more immediately practical, especially on the edge models where audio input enables real-time speech understanding directly on device.

The competitive pressure this creates is substantial. Llama 4 from Meta is the obvious comparison, and Meta's response will need to address both the licensing gap and the efficiency gap. A 4B active parameter model matching 100B+ models on quality is the kind of result that forces everyone else to rethink their architecture, not just their marketing. Qwen, Phi, and the rest of the open-weight field now have a new bar to clear, set by a company with functionally unlimited compute and training data.

Whether Gemma 4 becomes the default open model depends on what happens in the next few weeks as developers actually stress-test these claims. Arena scores and launch-day benchmarks are one thing. Sustained performance across real workloads, fine-tuning stability, and the texture of outputs on tasks that benchmarks do not measure will determine if this is the model people reach for by default or just another strong option in an increasingly crowded field.

The Apache 2.0 move, though, is irreversible. Google cannot walk that back without destroying trust. And for every developer who avoided Gemma 3 because of licensing uncertainty, the door is now wide open.

The World Before the Index

Most of what humanity has written, recorded, and published does not exist on the internet. Not even close. Large language models, search engines, recommendation algorithms: they all treat the web as though it were a reasonable proxy for human knowledge. It is not. It is a shallow, recent, and spectacularly incomplete sample.

Google has scanned tens of millions of books, but most sit behind copyright walls, neither fully searchable nor publicly readable. The rest exist on shelves, in basements, in charity shops where nobody is looking. The vast majority of the world's cultural heritage has never been digitized in any form. Not suppressed, not restricted. Just absent.

The pre-internet age was not merely analogue. It was geographically bounded. John Holbo, writing on Crooked Timber, described it as a kind of epistemic accident: you knew what the six people around you knew, what your local library stocked, what your local record shop carried. A left-handed guitarist might never discover that left-handed guitars existed. That accidental ignorance, that texture of ordinary life, was never documented in a form that any crawler could find. It was the water, not the fish.

The physical record is vanishing too. When the Chicago Sun-Times consolidated its suburban papers, photographs from the Aurora Beacon-News and Elgin Courier-News were thrown in the bin. The Louisville Courier Journal's archive of roughly ten million photographs nearly followed before the University of Louisville negotiated a last-minute donation. These aren't edge cases. They are the norm for local journalism across America and, by extension, for any community record that depended on newsprint.

Meanwhile, born-digital content fares no better. Pew Research found in 2024 that a quarter of all web pages that existed between 2013 and 2023 have already disappeared. MySpace's 2019 migration destroyed millions of songs, videos, and photographs in what the Long Now Foundation described as irreversible data loss. Andy Warhol's digital artwork from the 1980s sat stranded on obsolete Commodore hardware for decades.

The gap is self-reinforcing. If knowledge isn't online, AI can't learn it. If AI can't surface it, fewer people encounter it. If fewer people encounter it, there's less incentive to digitize it. The loop tightens and the memory without metadata that defined most of human experience drifts further from retrieval.

I think about this when people describe AI as a knowledge tool. It is a tool for a particular kind of knowledge, overwhelmingly English-language, overwhelmingly post-1990s, overwhelmingly sourced from the kind of person who publishes on the internet. Everything else, the vast majority of what humans have thought and made and recorded, sits in formats that no model will ever ingest. Not because the technology couldn't handle it, but because nobody is going to scan it.

The Thinker and the Talker

Alibaba released Qwen3.5-Omni on Monday and the most interesting thing about it is not what the model can do. It is what Alibaba chose to keep.

The Qwen family has been downloaded over 700 million times on Hugging Face, with more than 100,000 derivative models. That makes Alibaba the most-downloaded open-weight AI provider on the platform, and it was deliberate: a land grab disguised as generosity. Now, with Qwen3.5-Omni, the generosity has limits.

The model splits into two components the team calls the Thinker and the Talker. The Thinker handles reasoning across text, images, audio, and video. The Talker converts that reasoning into streaming speech, frame by frame, through a lightweight convolutional renderer called Code2Wav. The separation is not just clean design. It means external systems (safety filters, retrieval pipelines, function calls) can intervene between cognition and output. Enterprise deployment teams will notice.
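
The seam is worth sketching, because it is the whole enterprise pitch in miniature. A minimal sketch of the interposition pattern, with invented interfaces standing in for the real Thinker, Talker, and whatever you put between them:

```python
from typing import Callable, Iterator

def run_pipeline(
    thinker: Callable[[str], str],
    talker: Callable[[str], Iterator[bytes]],
    interceptors: list[Callable[[str], str]],
    user_input: str,
) -> Iterator[bytes]:
    """Reason first, let external systems rewrite or veto the text,
    then render speech: the seam the Thinker/Talker split exposes."""
    text = thinker(user_input)        # cognition across modalities
    for intercept in interceptors:    # safety filters, retrieval, function calls
        text = intercept(text)
    yield from talker(text)           # streaming audio, frame by frame

# Usage with a trivial redaction filter:
redact = lambda t: t.replace("internal-codename", "[redacted]")
# audio_frames = run_pipeline(my_thinker, my_talker, [redact], "What's new?")
```

A monolithic speech model gives you no such hook; by the time you could inspect anything, it is already audio.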

The numbers are aggressive. A 256,000-token context window that can absorb ten hours of continuous audio or four million frames of 720p video. Speech recognition in 113 languages. Voice cloning via the API. An emergent capability the team calls audio-visual vibe coding: the model writes functional code by watching screen recordings with spoken instructions, without having been trained on that task. That last detail sounds like marketing until you remember that emergent capabilities in large models have a track record of being real and unsettling in equal measure.

On benchmarks, it outperforms Gemini 3.1 Pro on music understanding (72.4 to 59.6) and edges it on audio comprehension. Voice stability scores undercut ElevenLabs by an order of magnitude. These are not incremental wins.

But only the Light variant ships as open weights. Plus and Flash, the versions you would actually deploy, are API-only through Alibaba's DashScope. No technical paper has been published. No weights to inspect. The 700 million download count was built on open licensing, and the moment the Qwen team produced something genuinely frontier in multimodal, they pulled it behind a paywall.

This is not hypocrisy. It is strategy. Open-weight text models seed the ecosystem, create dependency, train a generation of developers on your API surface. Then, when voice and video become the competitive edge, you charge for access. Alibaba built the largest open-source AI distribution network in history specifically so they could close it at the right moment.

The Thinker reasons for free. The Talker costs money. That might be the most honest thing about the whole architecture.

Thirty-Three Million for a Suggestion Box

Pankaj Gupta built a product that let 1.3 million people vote on which AI model gave the best answer. Jeff Dean invested. Biz Stone invested. The CEO of Perplexity invested. a16z crypto's Chris Dixon led a $33 million seed round. On Tuesday, Gupta announced Yupp.ai is winding down, less than ten months after launch. Platform access ends April 15.

The stated reason is the one every failed startup reaches for: product-market fit. "The AI model capability landscape has changed dramatically in the last year alone," Gupta wrote. Which is a polite way of saying the product was a leaderboard for a race where the runners kept swapping positions between refreshes.

Yupp's premise made a kind of sense when it launched in June 2025. Back then, picking between Claude and GPT and Gemini and whatever Mistral was calling itself that week felt consequential. You'd paste a prompt into three chat windows, squint at the results, and develop superstitions about which one "got you." Yupp crowdsourced that process across 800 models. Millions of preference signals a month, all feeding into a ranking system that was supposed to help ordinary people navigate the model landscape.

The problem is that ordinary people stopped caring. Not because the models got worse, but because they got interchangeably good enough. When the gap between first place and eighth place on a benchmark is statistical noise, a consumer taste-test platform becomes a thermometer for a room that's already at temperature.

There's a crueller reading. AI labs figured out that crowdsourced preferences from casual users are a blunt instrument. The shift toward agentic workflows meant models needed to impress other models, not people scrolling on their phones. For the kind of reinforcement learning that matters now, labs hire domain experts and run evaluations against PhD-level feedback. The crowd was never going to be precise enough.

Forty-five angel investors. DeepMind's chief scientist. A $33 million cheque from one of the most connected funds in Silicon Valley. And the thing it bought was ten months of server time and a blog post titled "winddown." The economics of wrapping someone else's API haven't changed since Anthropic started enforcing its terms of service. If anything, the lesson has sharpened. The thinner your layer, the faster the substrate makes you irrelevant.

Some of Yupp's employees are reportedly joining a "well-known" AI company. Which sounds like a soft landing until you consider that it's the same trajectory the product followed: absorbed back into the infrastructure it was built to evaluate.

The Skating Rink That Soundtracked Tomorrow

Room 13, BBC Maida Vale Studios. Before it held oscillators and tape machines, the building was a roller skating palace. Opened in 1909 on Delaware Road, converted by 1934, given to a handful of BBC engineers in 1958 with two thousand pounds and whatever surplus military electronics they could find at Portobello Market.

Delia Derbyshire joined the Workshop in 1962 with a mathematics and music degree from Cambridge and a rejection letter from Decca Records, who did not employ women in their studios. In eleven years she created sound for roughly 200 programmes. The Doctor Who theme remains the most famous: Ron Grainer handed her a single sheet of A4 manuscript paper with annotations like "wind bubble" and "cloud," and she realised it from tape-spliced fragments of a plucked string, white noise, and test-tone oscillators meant for calibrating equipment. When Grainer heard it he asked, "Did I really write this?" She said, "Most of it." The BBC would not credit her for another fifty years.

None of this is news. The Workshop's history has been thoroughly documented. What interests me is what those sounds have become now that the context they were made for no longer exists.

The Radiophonic Workshop did not just make television themes. It soundtracked a specific institutional vision of Britain: Open University lectures, schools broadcasts, public information films. The BBC under its post-war mandate believed that educating the nation was a public good, and these electronic textures were the sonic furniture of that belief. Mark Fisher identified this precisely. Hauntological music, he wrote, constitutes "an oneiric conflation of weird fiction, the music of the BBC Radiophonic Workshop, and the lost public spaces of the so-called postwar consensus." That consensus ended in 1979.

The Workshop itself held on until 1998, killed by John Birt's internal market policies. Elizabeth Parker, the last remaining composer, switched off the lights. The archive was nearly discarded.

When Derbyshire died in 2001, 267 reel-to-reel tapes were found in her attic. They sat there like letters from someone who had stopped writing decades earlier. She left the BBC in 1973 and abandoned music entirely by 1975.

Julian House of Ghost Box Records described the Workshop's older material as "the reverb of a reverb of a reverb." That phrase captures how these sounds circulate now. They are not nostalgic. Nostalgia implies you want to go back. This is different. The sounds point forward, toward a public future that was defunded and dismantled, and the fact that they still sound futuristic is the cruel part. They describe a destination cancelled while the signal was still transmitting.

Simon Reynolds called the tension in Ghost Box's work a pull between "heathen heritage" and "modernizing socialism." The Workshop operated at the intersection of state-funded infrastructure and radical experimentation, and both feel equally impossible now.

I keep returning to those 267 tapes in the attic. An entire career's parallel output, boxed and unlabelled, surviving because nobody thought to throw them away.

The Night Four Women Became One Sentence

Fiera Milano, March 1991. An exhibition hall on the city's outskirts, a fifteen-metre marble runway, and a U-shaped seating plan that separated press from celebrities from international buyers. Gianni Versace had staged shows before, obviously. But nothing like what happened at the end of this one.

The collection itself was pure Versace at full volume. Boxy cropped jackets over Lycra catsuits printed with baroque scrollwork. Studded leather cut alongside pleated skirts. Thigh-high boots that had no business being paired with silk but somehow were. The colour ran from black through to saturated reds, greens, oranges, and yellows, all of it rendered in that specific register Versace owned: sexy, loud, and entirely uninterested in apology.

Then the finale. George Michael's Freedom! '90 hit the speakers and out came Linda Evangelista, Cindy Crawford, Naomi Campbell, and Christy Turlington. Not walking individually. Not one after another. Arm in arm, four across, lip-syncing the lyrics, laughing, mugging for the front row. They wore dresses in red, yellow, and black. George Michael watched from his seat.

The four supermodels at the Versace AW91 finale

The previous October, David Fincher had released the music video for the same song, starring all four (plus Tatjana Patitz). No George Michael in frame, just supermodels lip-syncing in a stripped-down loft while a jukebox exploded. The video made them icons outside fashion. The Versace finale made that iconography physical, live, happening in a room full of people who understood they were watching something that couldn't be repeated.

The backstory matters. Liz Tilberis, then editor of British Vogue, had told Versace to stop splitting the top models across different slots. Book them together. Let their combined weight collapse the room. He listened. And the result was not just a fashion show but a proof of concept: the runway could function as spectacle, as cultural event, as something people who had never touched a copy of Vogue would eventually see and remember.

Before this night, runway shows were trade events. After it, they were content. Every designer who stages a celebrity-packed front row, every brand that livestreams its collection, every fashion week headline that leads with a name rather than a garment owes a debt to what happened at Fiera Milano. Versace understood something his contemporaries didn't, or wouldn't admit: the models were the collection. The clothes were spectacular. But four women walking in sync to a pop song, grinning like they owned the building (they did), turned a presentation into a cultural marker that outlived the season, the decade, and eventually the designer himself.

Cindy Crawford later said it felt like all the stars had aligned. She wasn't wrong. But stars don't align by accident. Someone has to set the stage.
