
Plutonic Rainbows

Cold Fusion in Dorset

In 1981, the guitarist from the biggest band in the world drove to a small studio in Parkstone, Dorset, to make a record his label actively did not want him to make. Andy Summers had known Robert Fripp since their teens in Bournemouth. They'd kept in touch through the decades, jamming occasionally, circling something neither had quite articulated. With The Police between Ghost in the Machine and Synchronicity, and Fripp reconvening King Crimson for the Discipline trilogy, a window opened. Summers booked a week at Arny's Shack, a studio run by an engineer named Tony Arnold who smoked a pipe while he recorded, with further sessions at Island Studios in London. Fripp joined for the second week. They made it up as they went.

The result, I Advance Masked, came out in October 1982 on A&M Records, the same label that had tried to kill it. "The label didn't want me to do it," Summers told Louder Sound in 2025, "but didn't want to piss me off." The album reached number 52 on the Billboard 200, where it spent eleven weeks. For a wholly instrumental record built on guitar synthesisers and tape loops, that was, in Summers' words, "the ultimate FU to the record company."

What makes the album strange, and what keeps it interesting forty-three years later, is how little it resembles either musician's day job. This isn't The Police plus King Crimson. The Neuguitars Substack called it "a cold fusion of two very distinct and mutually opposed sounds", which gets at the chemistry without overselling it. Fripp laid down polyrhythmic lines in odd metres, using his Frippertronics tape loop system to build layered, self-decaying textures. Summers described Fripp's parts as "the bones of a piece," onto which he'd graft harmony chords, guitar synth washes, and the occasional bluesy solo that wandered in from some other record entirely.

The Frippertronics technique deserves a moment here, because it's central to why these records sound the way they do. Two Revox reel-to-reel decks, spaced apart. Tape travels from the supply reel of the first to the take-up reel of the second. Sound records on one, plays back on the other, feeds back to the first. Delays of three to six seconds, decaying gradually, building loops in real time. Terry Riley pioneered the method in 1963. Pauline Oliveros expanded it. Fripp encountered it through Brian Eno during the No Pussyfooting sessions in 1972 and made it his own. On I Advance Masked, you hear it most clearly on "Under Bridges of Silence" and "In the Cloud Forest," tracks where the technique creates something closer to weather than music, atmospheric systems that shift and resettle.
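
For readers who would rather hear the mechanism than imagine it, the loop is easy to approximate digitally: a long delay line whose output is attenuated and fed back into its input. A minimal sketch in plain Python and NumPy, with the delay and decay values chosen from the range Fripp reportedly used rather than the settings on the actual Revox pair:

```python
import numpy as np

def frippertronics(dry, sample_rate=44100, delay_seconds=4.0, decay=0.7, passes=8):
    """Approximate a two-deck tape loop: whatever is 'on the tape' reappears
    delay_seconds later, attenuated by decay, and is summed back in, so old
    phrases blur under new ones without ever quite disappearing."""
    d = int(delay_seconds * sample_rate)
    out = np.zeros(len(dry) + d * (passes + 1))
    out[:len(dry)] += dry
    for start in range(0, d * passes, d):
        # copy the current loop contents forward one delay length, quieter
        out[start + d:start + 2 * d] += out[start:start + d] * decay
    return out

# A short test tone 'played into' the loop; the echoes stack and slowly fade.
tone = 0.3 * np.sin(2 * np.pi * 220 * np.arange(0, 1.0, 1 / 44100))
layered = frippertronics(tone)
```

Run a short phrase through it and you get exactly what the records document: the phrase, then its ghosts, each fainter than the last.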

I keep returning to this: Frippertronics is a palimpsest machine. Every new phrase writes over the last, but the last never fully disappears. It degrades, blurs, becomes a ghost of itself while the next layer takes its place. The loops don't erase; they haunt. If you wanted to design a technology purpose-built for hauntological sound, for music that carries the residue of its own past within it, you'd struggle to improve on two tape decks and a length of quarter-inch tape.

Nobody has written the hauntological reading of these records, which surprises me. The raw material is sitting right there. The guitar synth timbres on I Advance Masked instantly date the album to 1982, the same way that a BBC Radiophonic Workshop piece dates itself to its decade through the technology available. But the compositional thinking, the textural ambition, points somewhere else, somewhere that hadn't arrived yet. The albums occupy a temporal crack: too experimental for Police fans who wanted "Every Breath You Take," too pop-adjacent for the avant-garde who dismissed anything on a major label. They fell between audiences, between eras, between the identities of the men who made them. That kind of commercial orphan status is exactly where hauntological objects tend to reside, in the margins where culture forgets to look.

The criticism of I Advance Masked is real and worth acknowledging. The Moving the River review is blunt: "under-produced, tentative and unfinished-sounding." The drum programming is limp. The bass playing is, charitably, amateurish. These were two guitarists playing everything themselves, and it showed. The shorter ambient pieces lack coherence, drifting without arriving. But I think the roughness is part of what makes the album age well. Polished records from 1982 sound like 1982. Rough ones sound like drafts from a future that didn't quite materialise, which is more or less the definition of hauntology.

Bewitched, released in 1984, is a different animal. Summers had a clearer sense of how to work with what he called Fripp's "idiosyncratic genius," and the album brought in outside help: Sara Lee on bass, a veteran of both Gang of Four and Fripp's own League of Gentlemen, plus real drums and actual song structures. The result is more conventional and, track for track, more consistent. "Parade" opens with New Wave percussion and a synth-guitar melody that evokes Bowie's Low. "What Kind of Man Reads Playboy?" layers wah-funk, harmonic textures, bebop, and blues into something Moving the River called "a perfect distillation of the state of the electric guitar in the mid-'80s." Side one of Bewitched is genuinely excellent.

Side two is not. Multiple reviewers note the drop-off: short, poorly recorded tracks that sound like outtakes rather than finished pieces. Fripp himself acknowledged the shift in balance: "The album is a lot more Andrew than it is me." He'd assumed a deliberately recessive role, providing textural framework rather than competing for the spotlight, and some critics found this admirable but disappointing. The locked-room intimacy of the debut, two guitarists and their machines, had been traded for something more produced but less distinctive.

What sits between these albums now, in 2025, is a literal ghost. During preparation for a Complete Recordings 1981-1984 box set on DGM/Panegyric, Summers found four tape reels in a Los Angeles vault. Thirteen tracks. Enough for a full album, titled Mother Hold the Candle Steady and newly mixed by David Singleton. "I was sort of shocked that we had never used them," Summers said. The tapes had been gathering dust for forty years.

A lost album, discovered by accident, assembled decades after the fact from material that was never intended to be heard. If the original two records were ghosts of a future that didn't arrive, Mother Hold the Candle Steady is something stranger: a past that didn't happen, recovered and presented as though it always existed. The box set also includes "Can We Record Tony?," an audio documentary assembled from Fripp's archival cassettes of their earliest improvisations, sessions so preliminary they barely qualify as recordings. These are signals from before the signal, pre-echoes.

Summers and Fripp lost touch entirely after Bewitched. "Our lives just shot off in different directions," Fripp said. There is something fitting about that, two musicians who made spectral, time-displaced music together, then vanished from each other's lives completely, leaving behind a body of work that sounds increasingly out of its own time. The albums aren't nostalgic. They aren't period pieces. They exist in a space that Summers, in a 2025 Guitar Player interview, described with more accuracy than he probably intended: "It was a time when you could still pull off new stuff that people really hadn't heard yet." That sentence carries a quiet grief for the moment it describes. A time when new stuff was still possible. A future that was still open.

The Neuguitars writer admitted to entering what they called "a hauntological, nostalgic, middle-aged phase" while listening to the reissues, and I think that's honest in a way that most music criticism isn't. These records don't just sound like the early 1980s. They sound like what the early 1980s thought the future would sound like, played on instruments that now feel as analogue and irretrievable as a reel of quarter-inch tape feeding through two Revox decks in a shack in Dorset.


Nine Weeks at 30 Avenue Montaigne

Bernard Arnault had owned LVMH for four months when he fired Marc Bohan. Bohan found out by reading the newspaper. After twenty-nine years steering Dior's couture output, he was replaced by a forty-four-year-old Italian architect who had never worked for a French house.

Gianfranco Ferré graduated from the Politecnico di Milano in 1969 with an architecture degree he never intended to use in the traditional sense. He spent three years in India, came back to Milan, started making jewellery, then dresses, then entire collections. By the late eighties Women's Wear Daily was calling him "the Frank Lloyd Wright of Italian fashion." Arnault noticed.

A grey glen-check suit from the debut collection — structured shoulders, oversized bow, closed umbrella. Cecil Beaton's Ascot scene from My Fair Lady, rebuilt in three dimensions.

The appointment provoked exactly the reaction Arnault probably wanted. Pierre Bergé, chairman of Yves Saint Laurent, told the press he didn't think "opening the doors to a foreigner — and an Italian — is respecting the spirit of creativity in France." French couture was a national institution, and Arnault had handed the keys to someone from the wrong side of the Alps.

Ferré had nine weeks to answer. Ninety-one looks, all built around a theme he called Ascot-Cecil Beaton. The reference was specific: that black-and-white Royal Ascot sequence in My Fair Lady where Beaton dressed every extra in grey, ivory, and black. Ferré translated it into austere masculine fabrics — tweed, barathea, Prince of Wales check — cut against billowing white silk blouses and organza bows that defied the tailoring beneath them. The Arbitre suit, houndstooth wool with balloon sleeves and a silk organza bow that looked structurally impossible, became the collection's emblem.

He called his method "architecture in fabric." Clothing built from the inside out, where the internal construction shaped the body before a single visible seam appeared. That same year, fashion was tilting hard toward maximalism. Ferré went the other direction. Discipline first, flourish second.

Le Figaro called it "the resurrection of the great Dior." The 27th Dé d'Or jury voted 13-8 in his favour over Paco Rabanne. A Golden Thimble on the first attempt, for a collection assembled in nine weeks, by a man the French press had spent the summer resenting.

He stayed seven years. Designed fifteen haute couture collections. Created the bag Princess Diana carried so often it was eventually renamed after her. Then Arnault replaced him with John Galliano, on Anna Wintour's recommendation, and Ferré went back to Milan and kept making white shirts until he died in 2007.


Seven Songs and a Hi-Fi Company

There's a turntable company in Glasgow called Linn Products. They make some of the most expensive record players in the world. In 1982 they started a record label, mostly as a way to demonstrate what their hardware could do. Their first signing was a local band called The Blue Nile. That band's second album, released seven years later, turned out to be one of the most meticulously crafted records of the 1980s — an album so sonically pristine that audiophile reviewers still use it as a diagnostic tool for testing speaker systems.

Hats came out on 9 October 1989. Seven tracks, thirty-eight minutes. Paul Buchanan on vocals and guitar, Robert Bell on bass and synths, Paul Joseph Moore on keyboards. Calum Malcolm, whom some fans consider an honorary fourth member, engineering at Castlesound Studios in East Lothian. The recording took five years. Most of those years produced nothing.

The gap between the debut, A Walk Across the Rooftops, and Hats wasn't just slow, it was paralysing. Virgin Records, who licensed the band's releases from Linn, actually initiated legal proceedings demanding new material. Buchanan later described the pressure as the worst possible circumstances for making anything. They scrapped roughly an album's worth of recordings. The band was eventually forced out of Castlesound to make room for another session.

The breakthrough came when they stopped trying. Back in Glasgow, Buchanan's writer's block lifted. Bell and Moore started laying ideas down on a portastudio at home. When they returned to the studio in 1988, they knew exactly what they wanted. Buchanan has claimed that half of Hats was recorded in about a week.

I don't know what to do with that information, honestly. Three years of nothing, a legal threat from the label, then a week.

The seven songs on Hats all seem to take place after dark. Six of them reference a time of day, and it's always late. "Over the Hillside" opens with the sun going down. "The Downtown Lights" is exactly what the title promises, an urban nocturne built from synth pads and longing. "Let's Go Out Tonight" is as direct as Buchanan ever gets, which is still not very direct. "From a Late Night Train" closes the album with a view through a window at something you can't quite reach.

Glasgow is everywhere in this record. Not in any flag-waving sense, but in the way Buchanan treats the city as emotional architecture. "Whatever happiness or sadness you're feeling," he once said, "you project it on to the streets and buildings that are around you." The album turns rain-wet streets and orange sodium lights into something close to sacred.

TNT-Audio, an audiophile review site, noted that Buchanan's voice should appear "between the loudspeakers, in good evidence and very, very natural." The minimal processing on his vocals makes them a direct test of playback quality. The electronic instruments are described as "smooth as silk and warm as velvet." This is pop music built with the tolerances of a precision instrument, which makes sense when your label's day job is manufacturing turntables.

Hats came out the same year as the Stone Roses' debut, Doolittle by Pixies, Disintegration by The Cure, and 3 Feet High and Rising by De La Soul. It peaked at number 12 on the UK Albums Chart, higher than the Stone Roses managed initially, then quietly receded. Melody Maker ranked it eighth that year. NME put it at eighteen. Q gave it five stars out of five. Rolling Stone gave it three, the only major outlier.

The reputation has done nothing but grow. Uncut gave it 10 out of 10 on reappraisal. Mojo, five stars. Pitchfork, 8.8. Matty Healy of The 1975 called it his favourite album of the 1980s and cited it as an influence. Annie Lennox and Rod Stewart both covered "The Downtown Lights." In 2024, Taylor Swift name-checked the song in "Guilty as Sin?" from The Tortured Poets Department, a reference traced back through Healy, whom she'd briefly been dating. Buchanan's response, when asked, was that he was "touched."

There's a Buchanan quote I keep coming back to: "You never leave anything thinking it's completely done, you just stop." That's a strange thing for a perfectionist to say. But it might be the most honest description of how mastering works, the idea that finished is a decision, not a state. The original 1989 pressing was apparently so good that Dohmann Audio, a turntable manufacturer, says it "have not required any upgrades as it was minted perfectly first time." They stopped at exactly the right moment.

The Blue Nile's entire catalogue is four albums across twenty-two years. Buchanan once said their goal was to "stay out of the way of the music, to let people react to it in their own way." Most bands would consider that commercial suicide. Given how Hats sounds at two in the morning with the lights off, I'd say they knew exactly what they were doing.


Built, Not Borrowed

Microsoft shipped three AI models on Thursday. Not OpenAI's models repackaged with Azure branding. Its own.

MAI-Transcribe-1 handles speech-to-text across 25 languages with a 3.8% word error rate on the FLEURS benchmark, lower than Whisper across all 25 languages, lower than Gemini Flash on most of them. MAI-Voice-1 generates a minute of speech in under a second from a ten-second voice sample. MAI-Image-2 landed third on the Arena.ai leaderboard for image generation on arrival. All three are available now through Microsoft Foundry, the rebranded Azure AI platform.
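
Word error rate, for anyone who hasn't had to compute one, is word-level edit distance divided by the length of the reference transcript, so 3.8% works out to roughly one error every 26 words. A minimal sketch of the standard calculation, not Microsoft's evaluation harness:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("the quick brown fox", "the quick brown box"))  # 0.25
```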

The teams that built them were small. Mustafa Suleyman said the transcription model was the work of ten people. The image team, roughly the same size. His MAI Superintelligence group didn't exist until November 2025, which means Microsoft went from forming the unit to shipping production models in about six months.

That timeline only makes sense in context. Until October 2025, Microsoft was contractually unable to build its own frontier models because the OpenAI partnership agreement explicitly carved out AGI and superintelligence research as OpenAI's domain. The September renegotiation changed the terms. Five weeks later, Suleyman had a team. Five months after that, three models.

None of them are large language models. Transcription, voice synthesis, image generation. These are adjacent territories, the kind of work that doesn't directly threaten GPT or o-series. A diplomatic first move. Suleyman said the goal is state-of-the-art performance across text, image, and audio by 2027, which means the LLM is coming. He just isn't leading with it.

The pricing tells its own story. MAI-Transcribe-1 costs $0.36 per hour with roughly half the GPU overhead of competitors. When you're spending hundreds of billions on AI infrastructure, undercutting on price isn't generosity. It's leverage. Microsoft can afford to run these models at margins that would bleed a startup dry, and the integration points are already live: Copilot, Bing, PowerPoint.

The OpenAI relationship, officially, remains strong. A February joint statement said as much. Azure stays the exclusive cloud provider for OpenAI's APIs through 2032. But OpenAI signed deals with AWS, and Microsoft just shipped models that beat Whisper on every benchmark they tested. The word "partnership" is doing increasingly heavy lifting.

What's interesting isn't the models themselves. Speech transcription and image generation aren't unsolved problems. What's interesting is the speed, the signal, and the silence from Redmond about what comes next. Suleyman's team has twelve months before his own deadline. The LLM-shaped gap in the lineup won't stay empty.


The Anniversary Collection Nobody Rushed

Valentino presented his Fall/Winter 1991-1992 haute couture collection in Paris in July of that year, and the timing was not incidental. Weeks earlier, the house had celebrated its thirtieth anniversary with a three-day gala in Rome: a garden lunch at Valentino's villa on the Appian Way, an exhibition called "Thirty Years of Magic" at the Capitoline Museum, and a formal ball at the Villa Medici where Elizabeth Taylor, in a crystal-embroidered Valentino gown, told the New York Times the show was "so beautiful it makes you want to cry." The couture collection that followed carried the weight of all that ceremony without buckling under it.

The silhouettes were architectural. Silk gazar shaped into sculptural forms, hand-applied embroidery, capes that framed the body rather than clinging to it. The belted grey dress with exaggerated cape sleeves in this photograph is representative of the collection's restraint: the colour palette muted, the construction precise, the drama coming entirely from proportion. Necklines revealed the collarbone. Fabrics held their shape without assistance. Everything was built rather than styled.

The runway was stacked with the names that defined the era. Christy Turlington, Linda Evangelista, Karen Mulder, Naomi Campbell, Claudia Schiffer. Valentino had been showing couture in Paris since 1975, one of the first Italian designers accepted onto the French calendar, and by 1991 he occupied a position that required neither explanation nor defence. The fashion press rated him alongside Yves Saint Laurent and Karl Lagerfeld. His clientele included Princess Diana and Jackie Kennedy Onassis.

What makes the collection interesting in retrospect is what it refused to do. Martin Margiela and the Antwerp Six were already rewriting the vocabulary of fashion. Deconstruction was gathering force. Valentino's response was to build another couture collection with the same discipline he had applied for three decades: scalloped trims, circular ruffles, Valentino Red anchoring even the most restrained compositions. He did not chase reaction. He did not attempt irony. The garments existed as arguments for continuity in a year when continuity felt increasingly unfashionable.

Thirty years of the same conviction, presented in a city that was not his own, to an audience that kept returning. The supermodels who walked his runway that season would scatter across a dozen other shows within days, but for that afternoon in July, the proposition was singular: refinement does not expire.


The Leak Anthropic Couldn't DMCA Away

A 59.8 megabyte source map file. That is all that separated Anthropic's most sophisticated product from the open internet. The @anthropic-ai/claude-code npm package shipped with a .map file that pointed to a zip archive sitting on Anthropic's own Cloudflare R2 storage bucket. Anyone could download it. Inside: approximately 1,900 TypeScript files, 512,000 lines of unobfuscated code, and the complete architectural blueprint of the agentic harness that makes Claude Code work.

Security researcher Chaofan Shou found it on March 31. By the time Anthropic responded, the source had been forked 41,500 times on GitHub.

The root cause was not exotic. Bun, the JavaScript runtime Claude Code uses, generates source maps by default. Somebody needed to exclude *.map files, either via .npmignore or by using the files allowlist in package.json, so the maps never shipped. Nobody did. Gabriel Anhaia, a software engineer who analysed the leak, put it plainly: "A single misconfigured .npmignore or files field in package.json can expose everything." Anthropic engineer Boris Cherny later acknowledged "a manual deploy step that should have been better automated." The identical vector had leaked source code thirteen months earlier, in February 2025. The fix was never properly automated.
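
The "should have been better automated" fix is about one screen of code. Here is a hedged sketch of the kind of pre-publish guard that would have caught it, checking the packed tarball for source maps before anything ships; the package name in the comment is from the post, everything else is my assumption about what such a check might look like, not Anthropic's pipeline:

```python
import pathlib
import subprocess
import sys
import tarfile
import tempfile

def assert_no_source_maps(package_dir: str) -> None:
    """Build the npm tarball with `npm pack` and fail the release if any
    .map file would ship; intended as a pre-publish CI step."""
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.run(["npm", "pack", "--pack-destination", tmp],
                       cwd=package_dir, check=True, capture_output=True)
        tarball = next(pathlib.Path(tmp).glob("*.tgz"))
        with tarfile.open(tarball) as tar:
            maps = [name for name in tar.getnames() if name.endswith(".map")]
    if maps:
        sys.exit(f"refusing to publish: source maps found in tarball: {maps}")

# e.g. assert_no_source_maps("packages/claude-code")  # hypothetical path
```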

This was Anthropic's second public exposure in five days. I wrote last week about the CMS misconfiguration that left 3,000 unpublished files searchable, including draft blog posts revealing the internal codename Mythos for an unreleased model family above Opus. That leak was embarrassing. This one was structural.

The distinction matters. A CMS toggle is a configuration error. Shipping your entire source tree to npm is a pipeline failure, one that had already happened before and was supposedly addressed. The question of whether the Mythos leak was accidental is interesting in its own right, but nobody is suggesting Anthropic wanted 512,000 lines of TypeScript indexed on every package manager mirror on Earth.

What the code revealed is more interesting than how it escaped.

The leak exposed Claude Code's full tool system, fewer than twenty default tools and upwards of sixty in total, including file editing, bash execution, and web search. It revealed a three-tier memory architecture designed around context window conservation: an index layer always loaded into the conversation, topic files pulled on demand, and transcripts searchable via grep but never loaded directly. The system treats memory as hints rather than truth, which is a surprisingly honest design philosophy for a product that markets itself on reliability.
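
Described in code, the three tiers are just three different costs of recall. A toy illustration of the shape of it; the names and on-disk layout are my own, inferred from the public analyses of the leak, not Anthropic's implementation:

```python
import pathlib
import subprocess

MEMORY_DIR = pathlib.Path("~/.claude/memory").expanduser()  # hypothetical layout

def build_context(user_query: str) -> list[str]:
    context = []
    # Tier 1: the index is small and always present in the prompt.
    context.append((MEMORY_DIR / "index.md").read_text())
    # Tier 2: topic files are pulled in only when the query mentions them.
    for topic in MEMORY_DIR.glob("topics/*.md"):
        if topic.stem in user_query.lower():
            context.append(topic.read_text())
    # Tier 3: transcripts are never loaded wholesale; they are grepped, and
    # only the matching lines come back, offered as hints rather than truth.
    hits = subprocess.run(
        ["grep", "-ri", "--include=*.txt", user_query,
         str(MEMORY_DIR / "transcripts")],
        capture_output=True, text=True)
    if hits.stdout:
        context.append("Possibly relevant (unverified):\n" + hits.stdout[:2000])
    return context
```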

More revealing was KAIROS, an unreleased autonomous daemon mode that runs continuously via a heartbeat prompt asking "anything worth doing right now?" It integrates with GitHub webhooks, operates on five-minute cron cycles, and includes a /dream command for background memory consolidation. Forty-four hidden feature flags gate unreleased capabilities including voice commands, browser control via Playwright, and multi-agent orchestration. The source comments reference internal model codenames: Capybara for v8 with a one-million-token context window, Numbat and Fennec for upcoming releases, and Tengu, which appears in connection with something called "undercover mode."

Undercover mode deserves its own paragraph. It is enabled by default for Anthropic employees working in public repositories. The system suppresses internal codenames, unreleased version numbers, references to "Claude Code," and Co-Authored-By attribution lines. The leaked configuration exposed 22 private Anthropic repository names. The opacity is not inherently sinister, companies routinely scrub internal references from public commits, but for a lab that has built its brand on transparency and careful stewardship, the discovery of a system specifically designed to hide AI involvement in public code contributions is not a great look.

The codebase also contained anti-distillation defences: decoy tool definitions injected into system prompts to poison any training data scraped from Claude Code sessions, plus cryptographically signed server-side summaries that prevent access to full reasoning chains. A 9,707-line bash security system uses tree-sitter WASM AST parsing with 22 unique validators. And buried in the source comments, a documented parser differential vulnerability where carriage return characters could bypass command validation, because shell-quote and bash disagree on what constitutes whitespace.

An internal BigQuery comment, timestamped March 10, noted that 1,279 sessions had experienced fifty or more consecutive compaction failures, wasting approximately 250,000 API calls daily before a cap of three retries was applied. That is the kind of detail that transforms a leak from an IP issue into a product credibility question.

One function in the codebase spans 3,100 lines with a cyclomatic complexity of 486. The Hacker News thread, which accumulated 2,074 points and over a thousand comments, featured a lively debate about whether traditional code quality standards apply to AI-generated software. Some argued that velocity matters more than structure when models write the code. Others pointed out that humans still have to maintain it. I find myself in the second camp, but the argument is genuinely unsettled.
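
For anyone who has never had to care about that number: cyclomatic complexity is roughly the count of independent paths through a function, one per decision point plus one. A rough counter for Python functions, purely to illustrate the metric; the leaked code is TypeScript, so this is not a tool for the actual file:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 plus one per decision point."""
    branch_nodes = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                    ast.And, ast.Or, ast.IfExp, ast.comprehension)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes) for node in ast.walk(tree))

sample = """
def classify(x):
    if x > 0 and x % 2 == 0:
        return "positive even"
    elif x > 0:
        return "positive odd"
    return "non-positive"
"""
print(cyclomatic_complexity(sample))  # 4: two ifs, one `and`, plus the base path
```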

The community response was immediate and aggressive. The primary mirror repository hit 32,600 stars before Anthropic's legal team intervened. A developer using the handle @realsigridjin released Claw Code, a ground-up Python port built using OpenAI's Codex to sidestep copyright claims. It reached 75,000 stars and remains online. SafeRL-Lab published nano-claude-code, a minimal 900-line reimplementation supporting Claude, GPT, Gemini, DeepSeek, and local models. Multiple analysis repositories appeared, mapping the architecture in detail. The genie is not going back in the bottle.

Between 00:21 and 03:29 UTC on March 31, attackers published typosquatted npm packages targeting users attempting to compile the leaked code, bundling a remote access trojan. The supply chain attack was discovered quickly, but it illustrates a second-order risk that Anthropic's official statement did not address. "No sensitive customer data or credentials were involved" is technically accurate and completely beside the point when your leaked code is being weaponised as a lure within hours.

The DMCA response made things worse. Anthropic filed takedown notices that accidentally removed approximately 8,100 GitHub repositories, including legitimate forks of Anthropic's own public Claude Code repository that contained none of the leaked source. Boris Cherny acknowledged: "This was not intentional, we've been working with GitHub to fix it." Anthropic retracted notices for all but one repository and 96 forks containing the actual leaked material. The formal DMCA filing is publicly visible on GitHub's transparency repository. Nuking eight thousand innocent repos to protect code that was already mirrored across dozens of platforms is not a strategy. It is damage compounding.

The broader pattern is what concerns me. Anthropic has positioned itself as the careful lab, the one that thinks about safety before shipping, the one that walks away from defence contracts over ethical concerns. Two major leaks in five days, one of them a repeat of a known vector from thirteen months earlier, followed by a DMCA overreach that punished thousands of uninvolved developers. The engineering quality of the leaked codebase was broadly praised, the memory architecture is clever, the anti-distillation measures are sophisticated, but operational security is not about how good your code is. It is about whether your release pipeline remembers to exclude the source map.

Security researcher Roy Paz, writing for LayerX, noted that the exposure reveals "nonpublic details about how the systems work, such as internal APIs and processes," potentially informing attempts to circumvent existing safeguards. The compaction system's inability to distinguish user instructions from injected file content was specifically flagged as an attack surface. The bash parser differential is a concrete, exploitable vulnerability.

Competitors now have a detailed map of Anthropic's product direction. The feature flags, the model codenames, the KAIROS architecture, the anti-distillation approach. This is the kind of intelligence that normally costs months of reverse engineering or a well-placed hire. Anthropic handed it out for free, twice in one week, because somebody forgot a line in a config file.

I keep thinking about the Cursor situation from the week before, where a model identifier leaked through an API endpoint and revealed that Composer 2 was running on Moonshot AI's open-source Kimi K2.5. The AI developer tools space has a transparency problem that runs deeper than any single incident. Companies build proprietary products on foundations they do not fully disclose, then act surprised when the seams show. The difference with Anthropic is that the seams showed everything.


The Diamond W in the Lobby Tiles

It took ninety-nine years to build and forty-two days to obliterate. The chain entered administration in late November 2008; between 27 December 2008 and 6 January 2009, its 807 stores closed across Britain in waves. Staff learned of the collapse not from the company but from BBC television. The chain died ten months before its centenary.

Every British town has the scar. Not always visible from the pavement. Sometimes a Poundland now, sometimes a B&M, sometimes just boards and a letting agent's number fading in the window. But the building remembers. Historic England documented eight architectural features that identify a former Woolworths even decades after the signage came down: bronze-framed shopfronts with curved glass corners, hammered cathedral glass on the first floor, Art Deco faience cladding with chevron patterns. And in the lobby, if you look down, the Diamond W in the floor tiles.

One hundred and forty-seven of the 807 sites became Poundlands. Nearly a fifth of the estate, absorbed into a chain that offers a diminished echo of what Woolworths provided. The floor plan is often unchanged. The Art Deco shell remains. What disappeared is harder to name. Something about the range, the ambition, the seven million weekly shoppers who treated it as public infrastructure rather than retail.

The pick 'n' mix counter is the thing everyone remembers. Not because the sweets were exceptional but because the act was. You stood at a shared counter and chose for yourself from an abundance that belonged to no one in particular. No algorithm. No delivery window. A physical, tactile, democratic transaction with sugar. Everyone over thirty-five can place themselves at one. No child born after 2008 has any material referent for it.

Andy Latham, a former manager, tried to resurrect it. His chain Alworths opened its first store on 5 November 2009, timed deliberately to the centenary of the original Liverpool store, and spread into eighteen former locations. Pick 'n' mix, music, games. It collapsed after eighteen months. You cannot will back into existence the thing the market killed. The attempt is haunted by the original, a copy whose failure only confirms the irreversibility of loss.

Seventy-four of the 807 units sit fully vacant. Many were occupied at some point before falling empty again, a second death quieter than the first. Forty-eight have left retail entirely: housing, leisure, pubs. The building stops pretending to be a shop. That might be the honest answer, the only one that does not involve wearing the dead thing's clothes.

The cancelled futures that haunt British public space are not always grand. Sometimes they are a laminate counter at child height, a pressed steel ceiling hidden behind suspended tiles, a Diamond W that nobody sweeps but nobody removes. The building carries what the high street has forgotten how to say.


Gemma 4 and the Apache Pivot

Google released Gemma 4 today. Four model sizes, multimodal across the board, and a license change that matters more than any benchmark number on the page.

The headline spec is a family of open-weight models built from the same research as Gemini 3. There is a 31B dense model, a 26B mixture-of-experts variant that activates only 4 billion parameters at inference time, and two edge-optimised models (E4B and E2B) small enough to run on a Raspberry Pi 5. The context windows stretch to 256K tokens on the larger models and 128K on the smaller ones. All four handle images and text natively. The edge models add audio input. The larger two process video.
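
The mixture-of-experts arithmetic is the part worth pausing on: all 26B parameters exist on disk, but a router sends each token to only a few experts, so the matrices actually multiplied per token sum to roughly 4B. A toy top-k router in NumPy; the shapes and k are invented for illustration, since Google hasn't published the exact routing configuration in anything I've read:

```python
import numpy as np

def moe_layer(x, experts, router_w, k=2):
    """x: (d,) token activation; experts: list of (d, d) weight matrices;
    only the top-k experts by router score actually run for this token."""
    scores = router_w @ x                      # one score per expert
    top = np.argsort(scores)[-k:]              # indices of the k chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the chosen experts
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

d, n_experts = 64, 8
rng = np.random.default_rng(0)
experts = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d)) / np.sqrt(d)
y = moe_layer(rng.standard_normal(d), experts, router_w)
# Eight experts exist, two ran: the "active parameter" count is a quarter of the total.
```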

None of that is the story.

The story is Apache 2.0.

Gemma 3 shipped under a custom license, Google's own "Gemma Terms of Use," which imposed restrictions that made legal teams nervous and hobbyists indifferent. It was open in the way that a restaurant with a dress code is open. You could walk in, but the terms reminded you this was someone else's house. Gemma 4 drops all of that. Apache 2.0 means no usage caps, no commercial restrictions, no "contact us if you exceed 700 million monthly active users" clause like Meta's Llama license carries. Fork it, ship it, sell it, modify it without asking. The freedom is unconditional.

This is Google choosing to compete on capability rather than control. And the capability argument is strong. The 31B dense model ranks third on the Arena AI text leaderboard. The 26B MoE variant, running on just 4 billion active parameters, sits sixth, outperforming models with twenty times its effective compute budget. Google's own framing is "intelligence per parameter," and the numbers back it up. A model that small matching frontier-class open weights running at 100B+ parameters is not incremental progress.

The architecture has some genuinely interesting choices. Alternating attention layers split work between local sliding-window attention (512 or 1024 tokens depending on model size) and global full-context layers. Each attention type gets its own RoPE configuration: standard for local, proportional for global. A feature called Per-Layer Embeddings feeds a secondary signal into every decoder layer, combining token identity with contextual information, which seems to be how they squeeze so much quality out of fewer parameters. The shared KV cache reuses key-value tensors from earlier layers in later ones, cutting memory without obvious quality loss. It is a dense collection of efficiency tricks that compound.
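
The local/global alternation is easier to see as masks than as prose. A small sketch of the two attention patterns; the 512/1024 windows come from the release notes, everything else is illustrative:

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Global layer: every token may attend to every earlier token."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Local layer: token i attends only to the previous `window` tokens,
    so the KV cache for these layers stays bounded regardless of context."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Alternating the two means only a fraction of layers ever pay
# the full 256K-token attention cost.
local, global_ = sliding_window_mask(8, window=4), causal_mask(8)
```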

The on-device numbers are where this gets practical. On a Raspberry Pi 5, the E2B model hits 133 tokens per second on prefill and 7.6 tokens per second on decode, using less than 1.5GB of memory with 2-bit quantization. Four thousand input tokens across two distinct skills process in under three seconds on mobile GPU. These are not synthetic benchmarks designed to flatter a press release. Raspberry Pi inference is the kind of thing people will actually try within hours of a release, and if those numbers hold, this becomes the default local model for a lot of embedded and mobile work.

I keep circling back to the agentic framing. Google is not positioning Gemma 4 as a chatbot engine. The marketing language says "purpose-built for advanced reasoning and agentic workflows," and the tooling reflects it: constrained decoding for structured outputs, multimodal function calling, GUI element detection, object detection and pointing. These are the primitives you need for an AI agent that can look at a screen, understand what it sees, decide what to do, and call the right function. The fact that it works offline, on a phone, without phoning home to a cloud endpoint, makes the agentic pitch credible in a way that server-dependent agents never quite were.

The ecosystem support at launch is unusually comprehensive. Day-one availability across Hugging Face Transformers, llama.cpp, MLX for Apple Silicon, Ollama, mistral.rs, ONNX, and browser-based inference through WebGPU via transformers.js. Google clearly pre-coordinated with the major frameworks. When I wrote about model discovery and pricing a couple of weeks ago, the friction was still in finding and deploying the right model. Gemma 4 arrives already integrated into every tool people actually use.
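
For what it's worth, the day-one Transformers support means the first test most people will run looks something like this; the checkpoint name is my guess at the naming convention, not a confirmed model ID, so check the actual Hugging Face listing:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-4-31b-it"  # hypothetical ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarise the licensing difference between Gemma 3 and Gemma 4."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```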

What Google is doing here has a clear strategic logic. The Gemini 3.1 Pro updates showed them closing the gap with Claude and GPT on their proprietary side. Now the open side gets a model built from the same research foundations, under the most permissive license in the major open-weight landscape. Meta's Llama has its commercial threshold. Mistral has been ambiguous about which models are truly open. Google just removed every legal obstacle at once.

The 140+ language support is quietly significant. Most open models optimise for English with a handful of other languages bolted on. Google's multilingual training infrastructure, built for Search over two decades, gives Gemma 4 a natural advantage here. For developers building products outside the English-speaking world, this might be the deciding factor regardless of benchmark position.

I'm less certain about the video capabilities in the larger models. Processing video natively is useful, but the context window arithmetic gets expensive fast. A few minutes of video at reasonable frame rates will consume a large fraction of that 256K window, leaving limited room for reasoning about what was seen. The image and audio capabilities feel more immediately practical, especially on the edge models where audio input enables real-time speech understanding directly on device.

The competitive pressure this creates is substantial. Llama 4 from Meta is the obvious comparison, and Meta's response will need to address both the licensing gap and the efficiency gap. A 4B active parameter model matching 100B+ models on quality is the kind of result that forces everyone else to rethink their architecture, not just their marketing. Qwen, Phi, and the rest of the open-weight field now have a new bar to clear, set by a company with functionally unlimited compute and training data.

Whether Gemma 4 becomes the default open model depends on what happens in the next few weeks as developers actually stress-test these claims. Arena scores and launch-day benchmarks are one thing. Sustained performance across real workloads, fine-tuning stability, and the texture of outputs on tasks that benchmarks do not measure will determine if this is the model people reach for by default or just another strong option in an increasingly crowded field.

The Apache 2.0 move, though, is irreversible. Google cannot walk that back without destroying trust. And for every developer who avoided Gemma 3 because of licensing uncertainty, the door is now wide open.


The Night Four Women Became One Sentence

Fiera Milano, March 1991. An exhibition hall on the city's outskirts, a fifteen-metre marble runway, and a U-shaped seating plan that separated press from celebrities from international buyers. Gianni Versace had staged shows before, obviously. But nothing like what happened at the end of this one.

The collection itself was pure Versace at full volume. Boxy cropped jackets over Lycra catsuits printed with baroque scrollwork. Studded leather cut alongside pleated skirts. Thigh-high boots that had no business being paired with silk but somehow were. The colour ran from black through to saturated reds, greens, oranges, and yellows, all of it rendered in that specific register Versace owned: sexy, loud, and entirely uninterested in apology.

Then the finale. George Michael's Freedom! '90 hit the speakers and out came Linda Evangelista, Cindy Crawford, Naomi Campbell, and Christy Turlington. Not walking individually. Not one after another. Arm in arm, four across, lip-syncing the lyrics, laughing, mugging for the front row. They wore dresses in red, yellow, and black. George Michael watched from his seat.

The four supermodels at the Versace AW91 finale

The previous October, David Fincher had released the music video for the same song, starring all four (plus Tatjana Patitz). No George Michael in frame, just supermodels lip-syncing in a stripped-down loft while a jukebox exploded. The video made them icons outside fashion. The Versace finale made that iconography physical, live, happening in a room full of people who understood they were watching something that couldn't be repeated.

The backstory matters. Liz Tilberis, then editor of British Vogue, had told Versace to stop splitting the top models across different slots. Book them together. Let their combined weight collapse the room. He listened. And the result was not just a fashion show but a proof of concept: the runway could function as spectacle, as cultural event, as something people who had never touched a copy of Vogue would eventually see and remember.

Before this night, runway shows were trade events. After it, they were content. Every designer who stages a celebrity-packed front row, every brand that livestreams its collection, every fashion week headline that leads with a name rather than a garment owes a debt to what happened at Fiera Milano. Versace understood something his contemporaries didn't, or wouldn't admit: the models were the collection. The clothes were spectacular. But four women walking in sync to a pop song, grinning like they owned the building (they did), turned a presentation into a cultural marker that outlived the season, the decade, and eventually the designer himself.

Cindy Crawford later said it felt like all the stars had aligned. She wasn't wrong. But stars don't align by accident. Someone has to set the stage.


Thirty-Three Million for a Suggestion Box

Pankaj Gupta built a product that let 1.3 million people vote on which AI model gave the best answer. Jeff Dean invested. Biz Stone invested. The CEO of Perplexity invested. a16z crypto's Chris Dixon led a $33 million seed round. On Tuesday, Gupta announced Yupp.ai is winding down, less than ten months after launch. Platform access ends April 15.

The stated reason is the one every failed startup reaches for: product-market fit. "The AI model capability landscape has changed dramatically in the last year alone," Gupta wrote. Which is a polite way of saying the product was a leaderboard for a race where the runners kept swapping positions between refreshes.

Yupp's premise made a kind of sense when it launched in June 2025. Back then, picking between Claude and GPT and Gemini and whatever Mistral was calling itself that week felt consequential. You'd paste a prompt into three chat windows, squint at the results, and develop superstitions about which one "got you." Yupp crowdsourced that process across 800 models. Millions of preference signals a month, all feeding into a ranking system that was supposed to help ordinary people navigate the model landscape.
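
Arena-style leaderboards typically turn those pairwise votes into a ranking with an Elo or Bradley-Terry update. I don't know what Yupp ran internally, so read this as the generic mechanism rather than their code:

```python
def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """One crowdsourced vote: the winner takes points from the loser,
    scaled by how surprising the result was."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_won else 0.0
    delta = k * (score_a - expected_a)
    return rating_a + delta, rating_b - delta

ratings = {"model-a": 1500.0, "model-b": 1500.0}
# A single "model-a gave the better answer" vote barely moves either number,
# which is roughly the story of the last year of leaderboards.
ratings["model-a"], ratings["model-b"] = elo_update(
    ratings["model-a"], ratings["model-b"], a_won=True)
```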

The problem is that ordinary people stopped caring. Not because the models got worse, but because they got interchangeably good enough. When the gap between first place and eighth place on a benchmark is statistical noise, a consumer taste-test platform becomes a thermometer for a room that's already at temperature.

There's a crueller reading. AI labs figured out that crowdsourced preferences from casual users are a blunt instrument. The shift toward agentic workflows meant models needed to impress other models, not people scrolling on their phones. For the kind of reinforcement learning that matters now, labs hire domain experts and run evaluations against PhD-level feedback. The crowd was never going to be precise enough.

Forty-five angel investors. DeepMind's chief scientist. A $33 million cheque from one of the most connected funds in Silicon Valley. And the thing it bought was ten months of server time and a blog post titled "winddown." The economics of wrapping someone else's API haven't changed since Anthropic started enforcing its terms of service. If anything, the lesson has sharpened. The thinner your layer, the faster the substrate makes you irrelevant.

Some of Yupp's employees are reportedly joining a "well-known" AI company. Which sounds like a soft landing until you consider that it's the same trajectory the product followed: absorbed back into the infrastructure it was built to evaluate.
