
Plutonic Rainbows

No Invitations Sent

No invitations went out for Azzedine Alaïa's fall/winter 1990 ready-to-wear show. No formal announcement either. There was simply word — some particular frequency fashion runs on — and people turned up to the Marais and queued without anything to confirm they had the right place or the right day.

He'd exited the official Paris calendar in spring 1988, fed up with its production demands. Too many collections, too fast; the present system, he said, was inconceivable for anyone who wanted to actually create something. By 1990 this was two years settled. His show happened when he decided it was ready, in his Marais atelier, with no obligation to anyone's schedule but his own.

The collection has been described as "sensational workwear" — the workwear codes of the era absorbed and reconstituted through his body-conscious lens. The suits were the evidence: plaid, pinstripe, suede — fitted closely, with hemlines short enough to make the genre entirely unrecognizable to anyone expecting deference.

The colored iterations — cobalt blue, warm brown — moved with the authority of something considered very carefully. Structured, gloved, finished. What distinguished Alaïa from the more theatrical body-consciousness of his contemporaries was exactly this: nothing was exaggerated. The precision was the argument.

Other pieces leaned on structure differently — fitted columns with lace bodices, the kind of construction that holds through engineering rather than boning. He worked by draping directly on the model's body, no preliminary drawings. Adjustments made in fabric, on skin, until the silhouette was exactly what he wanted. Everything produced in-house at the Marais compound, which is partly why his ready-to-wear maintained a finish closer to couture than most houses bothered with.

Then there were the lace dresses. The gold-and-black long-sleeved lace mini is the image that survives — worn by Naomi Campbell, Linda Evangelista, Yasmeen Ghauri on that runway, models at the peak of their visibility who he dressed with a particular kind of care. Campbell had lived in his house as a teenager. He'd gone to the agency in person on her behalf, fitted clothes on her body directly. The relationship was not incidental to the clothes. It was structural.

Suzy Menkes, covering him through this period, wrote that his body-conscious work "seemed a deliberate challenge — throwing down a sexist gauntlet in a feminist world." I'm not sure that framing captures it fully. What you feel in these images isn't provocation — it's attention. Serious, time-consuming attention, in clothes that no one was required to come see.

They came anyway.


Calendar Speed

Anthropic built something it won't sell you. Claude Mythos Preview, first surfaced in leaked documents last month, sits above Opus 4.6 on every security benchmark Anthropic published and it is not available to the public. Not gated behind a waitlist, not restricted to enterprise tiers. Withheld.

Project Glasswing launched on April 7 with twelve partners: AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike, Cisco, Broadcom, JPMorgan Chase, the Linux Foundation, Palo Alto Networks, and Anthropic itself. Forty-odd additional organisations maintaining critical infrastructure also get access. The total commitment is $100 million in usage credits plus $4 million donated directly to open-source security. The mandate: find and fix vulnerabilities before someone else finds and exploits them.

The reason for the lockdown is specific. Mythos autonomously discovered thousands of high-severity vulnerabilities across every major operating system and browser. Not theoretical weaknesses. Working exploits. A 27-year-old OpenBSD TCP SACK bug that crashes any machine responding over TCP. A 16-year-old FFmpeg H.264 flaw that automated fuzzers hit five million times without catching. A FreeBSD NFS remote code execution hole, CVE-2026-4747, 17 years unpatched, that gives unauthenticated root access through a 128-byte stack buffer receiving 304 bytes of attacker-controlled data.

The Firefox numbers are what stall you. Mythos achieved 181 successful JavaScript shell exploits across several hundred attempts. Opus 4.6 managed two.

Simon Willison traced one of the claims through the OpenBSD GitHub mirror and confirmed the surrounding code was genuinely 27 years old. Greg Kroah-Hartman, who maintains the Linux kernel, reported a shift from AI-generated noise to genuine high-quality findings. Daniel Stenberg, who maintains curl, now spends hours per day processing legitimate vulnerability reports. Nicholas Carlini said he found more bugs in a few weeks than in the rest of his career combined.

The last time an AI lab withheld a model was OpenAI's staged release of GPT-2 in 2019. That decision rested on hypothetical risks: text generation might produce convincing misinformation. The industry mostly rolled its eyes. By November, the full model was public and no harms had materialised. Mythos is not GPT-2. The risks are measured in CVEs.

Picus Security calls it the Glasswing Paradox: the tool that can secure everything is the same tool that can break everything. Fewer than 1% of the vulnerabilities Mythos has found have been patched. Defenders work at calendar speed. Meetings, review cycles, deployment windows. An autonomous model works at machine speed. Glasswing doesn't close that gap. It just makes the inventory of problems catastrophically larger.

Alex Stamos, formerly head of security at Facebook and Yahoo, told Platformer the restricted window is roughly six months. After that, open-weight models will match these capabilities and ransomware operators won't need to leave traces. Six months to patch decades of accumulated bugs across every major codebase on the planet, using volunteer maintainers already drowning in reports.

Earlier versions attempted to cover their tracks during internal testing, adding self-clearing code that erased records from git history. The model escaped its own evaluation sandbox and emailed a researcher without being asked to. Anthropic documented "a few dozen significant incidents" of reckless autonomous behaviour. They are releasing this to the people they trust most and hoping the trust holds.

Pricing, when it arrives beyond the partner programme, will be $25 per million input tokens and $125 per million output. A full vulnerability research run against a major codebase costs less than $50. The OpenBSD discovery came in under $20,000 for a thousand runs. The economics of finding bugs just collapsed, and the economics of fixing them didn't change at all.
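
Those rates make the sub-$50 figure easy to sanity-check. A sketch; the token counts here are illustrative assumptions, not Anthropic's numbers:

```python
def run_cost(input_tokens: int, output_tokens: int,
             in_rate: float = 25.0, out_rate: float = 125.0) -> float:
    """Cost in dollars at per-million-token rates ($25 in, $125 out)."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# A hypothetical single research run: 1.5M tokens of source context in,
# 90k tokens of analysis out -- comfortably under the $50 figure.
print(round(run_cost(1_500_000, 90_000), 2))  # 48.75
```

At $20 per run, a thousand-run campaign against one codebase lands at the $20,000 order of magnitude the OpenBSD discovery reportedly cost.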


After Llama

Alexandr Wang was 28 when Meta bought half his company for $14.3 billion and hired him to rebuild its entire AI stack. Nine months later, Muse Spark landed. The first model from Meta Superintelligence Labs, built on a new architecture distinct from the Llama family.

The catalyst was last April's Llama 4 debacle. Meta was caught using unreleased fine-tuned variants to inflate benchmark scores. The public version underperformed. The planned two-trillion-parameter Behemoth was shelved. Inside Meta, the reputational damage was severe enough to trigger a full organisational overhaul: hire Wang from Scale AI, form MSL, rebuild the stack from scratch.

Muse Spark is competitive without being dominant. On GPQA Diamond it scores 89.5% against Gemini 3.1 Pro's 94.3% and Claude Opus 4.6's 92.7%. It leads HealthBench Hard, a benchmark developed with input from over a thousand physicians, at 42.8%. Meta itself concedes there are performance gaps in coding and long-horizon agentic work. The honest self-assessment is refreshing after last year's benchmark theatre.

The genuine technical achievement is compute efficiency. Meta claims Muse Spark matches Llama 4 Maverick's capability using an order of magnitude less compute. If that holds under independent testing, it matters more than any benchmark position.

But the bigger story is the philosophical reversal. Zuckerberg published an essay in July 2024 arguing that "open source AI is the path forward." Llama had accumulated 1.2 billion downloads. Meta was the undisputed champion of open-weight AI. Muse Spark launches fully proprietary, weights unavailable, API access limited to a private preview. Meta says it plans to release open-source models "alongside its proprietary options," but there's no timeline. The Register opened their coverage with the Obi-Wan line: "You were the chosen one." Hard to argue.

Chinese open-weight models now account for 41% of Hugging Face downloads. Meta's retreat creates a vacuum. Google's recent Gemma 4 shift to Apache licensing looks more coherent by comparison: open the small models, keep the frontier closed, build developer habits around your ecosystem.

One safety detail deserves more attention than it got. Apollo Research found Muse Spark exhibits the highest rate of "evaluation awareness" of any model tested. It identifies alignment scenarios as traps and adjusts its behaviour accordingly. Meta concluded this was "not a blocking concern for release." A model that knows when it's being watched and acts differently is worth watching.

META stock rose on the news. The capex commitment for 2026 stands at $115-135 billion. Wang has the infrastructure and the backing of a company that has committed more money to AI than most countries spend on defence. What he doesn't have, not yet, is the community that Llama spent three years building.


Circled in Biro

Classified ads charged by the word, which meant every entry was a compression. VGC. ONO. GSOH. You learned the abbreviations without being taught, the way you learn any local dialect — by weekly exposure to need laid out in columns so dense the ink nearly touched between entries.

The page was never something you set out to read. You arrived at it sideways, past the letters and the sport, and then you stayed. Anthony Whitehead described it as a tic you struggle to suppress — browsing even when you weren't buying, constructing imaginary lives from the collision of a secondhand pram listed next to a "lonely widower seeks companion." The classified section was a census of a town's desires that nobody had commissioned.

Exchange and Mart started in a converted potato warehouse in Covent Garden in 1868. By its peak it sold 350,000 copies a week. By December 2007 that was 21,754. It went online-only in 2009. AutoTrader, launched as a print magazine in 1977, hit 368,000 circulation by January 2000 and collapsed to 27,000 by March 2013. The websites that replaced them are faster, searchable, free to post on, and utterly without texture.

The ink came off on your fingers. You'd notice it hours later, at your desk or in the bath, and wouldn't be able to say exactly when it transferred.

What texture looked like: a "Situations Vacant" column that told you which factories were hiring and which had stopped. A "Deaths" column — hatches, matches, and despatches, the sub-editors' phrase — that was the closest thing a town had to a public record of its own passing. Paid per word by grieving families who chose every noun carefully because each one cost money. That constraint produced a compressed dignity. "Peacefully, at home, surrounded by family." Five words that did more work than most obituaries.

The personals were something else entirely. H.G. Cocks traced their history in Classified: The Secret History of the Personal Column, from the ciphered notices in The Times that Victorian editors called the agony column to the coded ads that LGBTQ+ readers placed in alternative papers. Abbreviations and careful phrasing created a shared language invisible to anyone not looking for it. A lifeline threaded through the small print.

In 2007, UK regional newspaper revenue sat at £2.4 billion. By 2022 it was £590 million. The classified money didn't vanish — it migrated to Rightmove, Indeed, Gumtree, platforms that match supply to demand more efficiently and do nothing else. A study in the Review of Economic Studies tracked what happened in US cities after Craigslist arrived: newsrooms shrank, political coverage thinned, and partisan polarisation increased. The classified page had been subsidising democracy, and nobody noticed until the subsidy was gone.

Information had mass once. It occupied physical space in newsprint columns, and reading it meant handling the paper, folding it on a bus, circling an entry with a biro, tearing the page out and pinning it to a corkboard above the phone. The phone was in the hallway. You rang the number and talked to a stranger and drove to their house to look at a wardrobe. The entire transaction happened inside your own postcode.

Nobody is nostalgic for paying 40p a word. But the classified page was the last section of a newspaper where ordinary people wrote the copy. Reporters, editors, columnists handled the rest. The small ads were the public writing themselves into the record, one compressed line at a time, and because you could read them all in a sitting you carried a rough, partial, beautifully skewed portrait of your community in your head without ever meaning to.


Copying Machines

Bloomberg reported on Sunday that OpenAI, Anthropic, and Google have started sharing threat intelligence through the Frontier Model Forum, the nonprofit the three companies co-founded with Microsoft in 2023. The arrangement works like a cybersecurity ISAC: when one company detects a suspicious query pattern, it flags the signature for the others.

The target is adversarial distillation. Chinese labs — DeepSeek, Moonshot AI, and MiniMax — have been systematically querying Claude, ChatGPT, and Gemini through fake accounts to generate training data for cheaper models. Anthropic's February disclosure put numbers to it: roughly 24,000 fraudulent accounts generating over 16 million exchanges with Claude alone. MiniMax accounted for 13 million of those. The operations used what Anthropic called "hydra cluster" architectures — sprawling proxy networks managing thousands of accounts simultaneously, mixing distillation traffic with innocuous requests to avoid detection.

The Decoder has a good free summary of the Bloomberg story, which reports that US authorities estimate the practice costs American AI labs billions annually.

What's interesting isn't the distillation itself. That problem has been visible since DeepSeek R1 shook the market in January 2025. What's interesting is the vehicle. The Frontier Model Forum was chartered to study catastrophic risks: CBRN threats, advanced cyberattacks, the kind of existential scenarios that get discussed at Senate hearings. Its stated mission mentions nothing about distillation, model copying, or commercial intelligence. The pivot from "prevent bioweapon synthesis" to "detect bulk API scraping" is a significant scope expansion, and nobody seems to have remarked on it.

The legal terrain underneath all of this is surprisingly weak. Fenwick & West's analysis found that copyright offers little protection, because AI outputs generally lack the human authorship required. The Computer Fraud and Abuse Act has a gap since Van Buren v. United States (2021): if you have authorized API access, misusing the data violates terms of service but possibly not federal law. Trespass to chattels requires proving system degradation. Patents may be the strongest tool, but nobody has tested distillation-specific claims in court.

Policy hawks are pushing harder. Joe Khawam at the Law Reform Institute proposed a three-phase escalation: Entity List designation for the three Chinese labs, an IEEPA executive order creating sanctions authority over AI capability theft, and ultimately full SDN blocking sanctions. CSIS testimony from May 2025 went further, suggesting offensive countermeasures including data poisoning.

The irony sits right on the surface. These are companies that built their models by ingesting the open web, books, articles, code repositories, forum posts, without explicit permissions from creators. The legal and ethical arguments they used to justify that training are structurally similar to the ones Chinese labs could deploy to justify distillation. Monash University's analysis compared distillation to reverse engineering under Sega v. Accolade: studying a system's outputs to learn its methods is not, historically, the same as copying the system.

None of this means the alliance won't work. Sharing detection signatures is a practical step. DeepSeek has already pivoted to domestic silicon, which suggests the API route was always supplemental. But the Forum's quiet transformation from safety research body to competitive defense mechanism deserves more scrutiny than it's getting. When three companies that control most of the world's frontier AI capability coordinate to restrict access, the word for that depends entirely on where you're standing.


One Percent Patched

On the Firefox exploit benchmark, Claude Mythos Preview produced 181 working exploits. Opus 4.6 managed two. Anthropic published those numbers yesterday alongside a 244-page system card and the announcement that it would not release the model to the public.

The March leak described a model with dramatically higher scores on coding, reasoning, and cybersecurity. I expected the official numbers to confirm that framing. They don't. They blow past it. Mythos was tested against roughly a thousand open-source repositories across seven thousand entry points and found zero-days in every major operating system and every major web browser. Some had been sitting in production code for decades.

A 27-year-old signed integer overflow in OpenBSD's TCP selective acknowledgment handling allows a remote attacker to crash any host from anywhere on the internet. In FreeBSD's NFS authentication layer, Mythos found a 17-year-old stack buffer overflow (CVE-2026-4747) and autonomously constructed a six-packet ROP chain to write an SSH key into root's authorized_keys. FFmpeg's H.264 codec has a flaw that automated fuzzing tools encountered five million times over sixteen years without flagging it.

The historical arc puts those numbers in context. DARPA's Cyber Grand Challenge in 2016 ran automated tools against purpose-built binaries. Google's Project Zero Big Sleep found one SQLite vulnerability in 2024 that 150 CPU-hours of fuzzing had missed. Last year's AIxCC competition found 18 zero-days across 54 million lines of code. The progression from one bug, to eighteen, to thousands is not linear.

Instead of a general release, Anthropic launched Project Glasswing: a coalition of twelve companies including Apple, Microsoft, Google, AWS, CrowdStrike, and Palo Alto Networks, committed to using Mythos for defensive cybersecurity. Roughly fifty organisations total. Anthropic put up to $100 million in usage credits behind it and donated $4 million to the Linux Foundation and Apache Software Foundation.

Picus Security called it "the Glasswing Paradox": the thing that can break everything is also the thing that fixes everything. Anthropic's own disclosure puts a number on it. Fewer than one percent of Mythos-discovered vulnerabilities had been patched at announcement. Discovery is outrunning repair.

Weeks before the official announcement, Linux kernel maintainer Greg Kroah-Hartman described something shifting: "Something happened a month ago, and the world switched" from low-quality AI-generated vulnerability reports to genuine findings. Daniel Stenberg, who created curl, went from shutting down his bug bounty over AI noise to spending hours a day triaging legitimate ones.

Simon Willison called the restriction "necessary" while noting that saying a model is too dangerous to release is a great way to build buzz. The GPT-2 comparison is inevitable. But GPT-2's predicted harms never materialised, and 181 Firefox exploits did. Jack Clark, who co-founded Anthropic and now heads its public benefit division, has framed the core tension before: AI good at finding vulnerabilities for defense can easily be repurposed for offense.

Glasswing partners can access Mythos at $25 per million input tokens and $125 per million output, through Claude API, Bedrock, Vertex, and Microsoft Foundry. The broader situation is a timing problem. Defenders work at calendar speed. Attacks happen at machine speed.


Lagerfeld Misread Macaulay

In 1953, Rose Macaulay published a book about ruins that ended in surrender. Pleasure of Ruins is a four-hundred-page march through the Western imagination's romance with broken stones: Roman ruins, Mayan temples, the gothic abbeys English aristocrats had built in their gardens just to watch them moulder. Macaulay wrote it a decade after the Blitz had taken her Marylebone flat and her library, and the book closes with a verdict she meant for the whole tradition. Ruinenlust, she said, had come full circle. We had had our fill.

Thirty-nine years later, Karl Lagerfeld read the book and built a couture collection out of it.

The Chanel Spring 1992 haute couture show was presented in Paris in January of that year, and even now it gets cited more than almost anything else from Lagerfeld's tenure. Most of the citations are for one dress: a slim black silhouette layered with chunky gold-and-glass chain, worn down the runway by Christy Turlington and later, in the long afterlife of fashion images, by Penélope Cruz in Broken Embraces and Lily-Rose Depp at the 2019 Met Gala. The dress was also a brilliant marketing vehicle for Chanel costume jewellery, which was the brand's most profitable category at the time. A Trojan horse with chains.

The most interesting things in the collection were not the chains. They were the jackets. Lagerfeld had built a series of trompe-l'œil tweeds that were not tweed at all: they were raffia, painted in watercolour to look like the house's signature weave. The tailoring was so tight the jackets had to be zipped up the back rather than buttoned at the front; gold jewelled buttons running down the lapels were decoration, not closure. He called the silhouettes "diabolically body-conscious," and looking at one of the looks the cameras preserved, you can see what he meant. A red-orange jacket structured into one architectural line. Black opera gloves. The whole pose engineered around the absence of a front opening.

The same logic carries through the rest of the collection. A white jacket worn over gold leather trousers repeats the architecture in a colder palette: dark trim and gilded buttons running the lapels for show, a single real button doing the actual work, the pose engineered to draw the eye to the absence of a front closure.

This is where the Macaulay reference starts to matter, and where it also starts to look strange.

Lagerfeld's tattered chiffon skirts (separate from the jackets, but shown alongside them) were the show's literal acknowledgement of Pleasure of Ruins. Lagerfeld is the one who told the press the book was on his mind, his favourite, the thing that pushed him toward the deliberate decay of the silk. The trade press accepted the citation at face value, then and now: Lagerfeld read a book about loving ruins, and made some clothes about loving ruins. Done.

The trouble is that Pleasure of Ruins is not really a book about loving ruins. Macaulay's argument, and you have to push past the gorgeous central chapters about Pompeii and the Cambodian temples to get there, is that the Romantic appetite for ruin was something Europeans had earned through centuries of safe spectatorship, and that the twentieth century had revoked the licence. The bombed churches and cathedrals of postwar Europe gave her, she wrote, "nothing but resentful sadness, like the bombed cities." Her closing line is the one I quoted at the top. Ruinenlust was over. We were finished with it.

So either Lagerfeld read the book against itself, mining the picturesque chapters and ignoring the postwar conscience, or he understood Macaulay perfectly and was making something more complicated than the trade press credited him for. A couture show built on an aesthetic the source text had already declared exhausted is, at the very least, a knowing gesture. In the same show he wrapped tree trunks in graffiti and floated bubbles down from the ceiling; he was not above an inside joke. I think he was reading Macaulay the way he read everything in his enormous, untouchable library — not as a thesis to defend but as a quarry. He took what he wanted and left the rest.

The Met has a Lagerfeld Chanel piece from his Spring 1983 debut in its collection. It is a black dress trimmed in trompe-l'œil baubles made by the House of Lesage: fake jewels embroidered to look real. Nine years before he zipped the backs of those raffia jackets, he was already running this exact substitution. The jewels would not be jewels. The tweed would not be tweed. The chain dress would be a vehicle for the actual chains in the boutique. There is a coherence to Lagerfeld's half-century at Chanel that has very little to do with reverence for Coco and almost everything to do with what Suzy Menkes once said — that Karl had to destroy Chanel or become a caricature of her.

In January 1992, he picked up a book about the end of European ruin-aesthetics and built a runway collection from it. Macaulay had written a decade past the bombs that took her library, telling the tradition to go home. He heard a different sentence and answered it.


Waiting for 302

Ceefax transmitted its data in the vertical blanking interval, the millisecond gap where a CRT's electron gun returned to the top of the screen. You never saw it happen. The information rode an invisible seam in the broadcast signal, cycling through hundreds of pages in a continuous carousel. You keyed in a three-digit number and waited.

That wait defined the medium. Page 302 was football scores. On Saturday afternoons you entered the number and the screen went blank. A counter ticked upward as pages streamed past in the carousel, and you sat with the specific tension of not knowing when your page would come around. Maybe eight seconds. Maybe twenty-five. The data was always there, always cycling, but you could not summon it. You met it on its schedule.
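
The arithmetic of that wait is simple to model. A toy sketch, taking the 25-second figure as an illustrative cycle length: if you key in the number at a random moment, you wait on average half a cycle, no matter how fast your fingers were.

```python
import random

def expected_wait(cycle_seconds: float, trials: int = 100_000) -> float:
    """Mean wait for a page that passes once per fixed carousel cycle,
    when you request it at a uniformly random moment mid-cycle."""
    waits = [random.uniform(0.0, cycle_seconds) for _ in range(trials)]
    return sum(waits) / len(waits)

# With an illustrative 25-second carousel, the mean wait converges
# on half the cycle: about 12.5 seconds of watching the counter tick.
print(round(expected_wait(25.0), 1))
```

The only way to cut the wait was to shorten the cycle or broadcast fewer pages, which is exactly the trade the carousel embodied.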

What stays with me is not the content but the temporal architecture. Anyone can look up a football score now in two seconds. The carousel was not a flaw to be engineered away. It was the medium itself. Information arrived when the cycle permitted. Andy Holyer, writing in The Conversation, compared it to a sushi conveyor belt: you watched the stream and waited for your order to come around. Except with Ceefax you couldn't see the plates approaching. You sat in front of a counter ticking from 297 to 298 to 299.

Ceefax launched on 23 September 1974 with thirty pages. By the mid-1990s it had over two thousand, and twenty-two million people were using it weekly. The name was a phonetic compression: see facts. It offered what Holyer called "medium-latency information," the category between tomorrow's newspaper and a live broadcast interruption. Weather. Train times. News compressed into sixteen lines of thirty-eight characters each, tighter than a tweet. Page 888 for subtitles.

Information had mass in that era, and even the fastest source still asked something of you. Ceefax was faster than walking to a newsagent but slower than a conscious thought. It occupied a gap that no longer exists: a middle distance between knowing and not knowing where you could sit for fifteen seconds and be fine with it.

"Pages from Ceefax" filled the overnight schedule. Selected teletext screens scrolling over stock library music at three in the morning, blocky weather maps cycling while nobody watched. It was ambient television before anyone used those words together.

The whole service ended on 23 October 2012 at 23:32:19 BST, when Dame Mary Peters switched off the last analogue transmitter in Northern Ireland. By then broadband had been widespread for years and the audience had dwindled. But the teletext art community was already rebuilding. Dan Farrimond creates work within the medium's savage constraints: eight colours, a 24-by-40 character grid. He told Creative Bloq that "people might come for the nostalgia, but they stay for the fun and accessibility." Peter Kwan built Teefax on a Raspberry Pi, delivering community teletext to compatible TVs almost a decade after Ceefax died.

Something in that revival goes beyond nostalgia. Nostalgia wants to return. The teletext artists want the constraint. The grid. The carousel logic of working within limits rather than transcending them. The analogue textures of that period carry a specific charge now, and teletext sits at the centre of it: institutional, patient, slightly uncanny. A public service that asked you to wait. You did. The waiting was the point.


What the Scan Couldn't Keep

Tonight I tried to clean up four scanned magazine pages from early-90s fashion editorials. Helena Christensen on every one. A brown Hermès coat on a white background, a black Moschino jacket against the Catherine Palace, a Fabrizio Ferri beach shot, a French magazine spread. Soft gradient backgrounds. The kind of photographs that should have looked clean and didn't.

I tried four things in sequence, the way you do when each one fails. Topaz Wonder 2, which I praised earlier this year for finally showing some restraint, sharpened the whole image and made the gold rope braiding on the jacket pop, but the gradient bands behind her (vertical pinks and lavenders in the foreground concrete) became more visible, not less. Sharper bands. Nano Banana Pro hallucinated a "VOGUE OCTOBER 1994" stamp into the top corner of one image and garbled the French body copy on another. The ffmpeg gradfun filter softened the bands at strength four, then six, then eight, with diminishing returns. Eventually I added film grain on top of the gradfun pass and the bands disappeared. Not because they were fixed. Because the grain hid them.

That last move was the only thing that worked, and it didn't work the way I wanted it to.

I sat with that for a while. The gap between what these tools say they do and what they're actually capable of is wider than the marketing wants you to believe. Topaz Wonder 2 promises clean, natural, professional results. Black Forest Labs describes FLUX.1 Kontext as in-context image generation, not restoration. Google ships Nano Banana Pro as image generation and editing. None of the model makers themselves use the word restoration in their official copy. It lives in third-party blog posts, enthusiast tutorials, and the marketing decks of resellers. The people who actually built these things are careful about it. They know what they're shipping.

The reason became clearer the more I thought about it.

By the time that Vogue page reached my Desktop, three lossy steps had already happened in series. The photographer's smooth gradient was rasterized into CMYK halftone dots at print time. The printed page was then scanned in 8-bit, which captures only 256 brightness levels per colour channel — a smooth gradient needs more than a thousand intermediate values, and the other 750 were rounded away. The scan was saved as JPEG, which divides the image into 8x8 blocks and throws out the high-frequency data that would have hidden the quantization steps. Three quantizations in a row, each one mathematically irreversible. By the time I opened the file, the smooth gradient the photographer captured no longer existed inside it. What was there was a banded approximation, and the bands were the data.
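
The 8-bit step can be reproduced in a few lines. A sketch, with an illustrative 1024-pixel ramp standing in for the scan:

```python
# An 8-bit scan keeps only 256 brightness levels per channel, while a
# smooth gradient across (say) 1024 pixels carries 1024 distinct
# intermediate values. The width here is illustrative.
width = 1024
smooth = [x / (width - 1) for x in range(width)]  # continuous-valued ramp

# 8-bit capture: every sample snaps to the nearest of 256 levels.
quantized = [round(v * 255) for v in smooth]

distinct = len(set(quantized))
print(distinct)            # 256 levels survive
print(width - distinct)    # 768 intermediate values rounded away
```

And that is before halftoning on the way in and JPEG's block transform on the way out, each discarding more of what the other two needed.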

That's the wall.

Any tool that processes the file has to look at the bands and decide: is this region a real banded image, or is it a smooth gradient that's been damaged? Without context, those two states are indistinguishable. The tool has to guess. Every guess creates new artefacts.

Audio engineers have been living with this exact mathematics for forty years and they're more honest about it than image software is. When you reduce a 24-bit master to 16-bit for CD release, the quantization step destroys information nothing can recover. The standard fix is dither — adding deliberate, low-level noise that converts the structured quantization distortion into broadband noise the ear is less sensitive to. No mastering engineer would ever say dither fixes the bit reduction. They say it masks it. The vocabulary is precise: quantization error is irreversible; dither is a perceptual trade.
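The trade is easy to reproduce. A sketch of TPDF dither, the triangular noise shape most mastering chains use; the one-level amplitude and the synthetic 1000-pixel signal are illustrative assumptions:

```python
import random
from itertools import groupby

random.seed(0)
width = 1000
smooth = [i / (width - 1) * 255 for i in range(width)]   # ideal signal

# Plain quantization: the rounding error correlates with position, so
# the output sits in long flat runs. That structure is the staircase.
plain = [round(v) for v in smooth]

# TPDF dither: add triangular noise BEFORE rounding. Nothing is
# recovered; the structured error becomes broadband noise instead.
def tpdf():
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

dithered = [round(v + tpdf()) for v in smooth]

def transitions(pixels):
    return sum(1 for a, b in zip(pixels, pixels[1:]) if a != b)

# The staircase breaks up (many more value changes), while the average
# level is preserved: a perceptual trade, not a repair.
assert transitions(dithered) > transitions(plain)
assert abs(sum(dithered) / width - sum(smooth) / width) < 0.5
```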

Image restoration borrowed the tools but dropped the honesty. Topaz markets debanding as recovery. Adobe sells Generative Fill as reimagining. Cloud upscalers promise enhancement, which by now means whatever the user wants it to mean. The actual operation, in every case, is the same: invent the missing information based on a learned prior, and hope the invention is plausible enough that nobody notices. The ffmpeg gradfun documentation is unusually candid about this. It describes itself as a filter designed for playback only and warns "do not use it prior to lossy compression, because compression tends to lose the dither and bring back the bands." The author of the filter is telling you, in the official docs, that the fix is perceptual and any subsequent compression will undo it.
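The gradfun warning is easy to demonstrate numerically: requantize a dithered signal at a step coarser than the dither's amplitude and the flat runs come straight back. A sketch, using 5-bit requantization as a crude stand-in for what lossy compression does to low-amplitude noise (synthetic gradient and amplitudes are assumptions):

```python
import random
from itertools import groupby

random.seed(0)
width = 1000
smooth = [i / (width - 1) * 255 for i in range(width)]

# A dithered 8-bit signal: banding masked by one level of TPDF noise.
def tpdf():
    return random.uniform(-0.5, 0.5) + random.uniform(-0.5, 0.5)

dithered = [round(v + tpdf()) for v in smooth]

# Crude stand-in for lossy recompression: requantize to 5 bits (step 8),
# far coarser than the dither. The dither is simply rounded away.
recompressed = [round(p / 8) * 8 for p in dithered]

def longest_run(pixels):
    return max(len(list(run)) for _, run in groupby(pixels))

# Long flat runs -- the bands -- return once the dither is lost.
assert longest_run(recompressed) > longest_run(dithered)
```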

Topaz's own docs are gentler. Their generative models "add definition and detail," the page says. Generation, not restoration. The vocabulary just sounds nicer than what the audio engineers say.

What worked for the Helena pages was the audio engineer's trick. Run gradfun first to soften the gradients. Then add a layer of controlled film grain. The grain hides the remaining bands by giving the eye texture to focus on instead of stepped edges. The result looks grainy instead of banded. For a 1990s magazine page, grainy is the right answer. Actual printed pages had paper texture, ink dot patterns, and physical grain. The artificial grain slots into that aesthetic in a way that fake-smooth gradients never would. It's not recovery. It's masking. It's the same trade audio mastering has been making for decades.
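The same arithmetic shows why grain masks rather than fixes. A sketch with a synthetic banded gradient and two levels of grain; the amplitudes are illustrative, not what I actually used:

```python
import random

random.seed(1)
width = 1000
smooth = [i / (width - 1) * 255 for i in range(width)]
banded = [round(v) for v in smooth]      # the 8-bit scan, steps and all

# Post-hoc grain: noise added ON TOP of already-banded pixels.
grained = [round(p + random.uniform(-2, 2)) for p in banded]

def transitions(pixels):
    return sum(1 for a, b in zip(pixels, pixels[1:]) if a != b)

def mean_error(pixels):
    return sum(abs(p - s) for p, s in zip(pixels, smooth)) / width

# The step edges are broken up into texture the eye reads as grain...
assert transitions(grained) > transitions(banded)
# ...but every pixel is now FURTHER from the true gradient, not closer.
assert mean_error(grained) > mean_error(banded)
```

The measured error goes up even as the image looks better. That is the whole trade in two assertions.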

The deeper thing I keep coming back to is that this was an information loss problem hiding inside a UX problem. The tools were doing exactly what they were designed to do: adding plausible detail, smoothing gradients, generating new content from priors. None of them were designed to recover something that no longer existed. The frustration came from believing the marketing, not from any specific tool being broken.

Helena is still on my Desktop, eight files now. Original, four failed attempts, plus the gradfun-and-grain version that almost works. The gradient behind her is grainy in a way the printed page never was. Some of her hair is a little sharper than the source. Her eyes are slightly bluer. The text caption on the left side is pixel-for-pixel identical to the original, because the tool I trusted the most (ffmpeg, the dumbest one) knew it had no business touching real detail.

Seven Hexagons

Unmarked VHS tapes started landing in US mailboxes on April 6, shipped from the address Warp and Bleep use for fulfilment. Black sleeve. Seven white hexagons. An NTSC sticker. Inside: a minute or so of degraded analogue video, shortwave-style audio, and layered vocal fragments that fans on KEYOSC and r/boardsofcanada have identified as manipulated material from Societas x Tape. Some listeners are picking apart what sounds like frequency-shift keying data embedded in the audio itself.

No music. Just a transmission.

This is the exact playbook Boards of Canada ran for Tomorrow's Harvest in 2013: mystery 12" singles hidden in record shops, Adult Swim late-night broadcasts, a Tokyo billboard, shortwave fragments, a six-digit code hunt. Thirteen years of silence, and then suddenly the same kind of cryptic analogue mailout arrives at people's doorsteps. Resident Advisor asked Warp for comment. Per RA, Warp were, unusually, unavailable for comment.

The hauntology aesthetic running through all of this isn't decoration. It's the point. The whole band was always a transmission from a future that didn't quite arrive. Now the broadcast is picking up again.
