Plutonic Rainbows

What the Scan Couldn't Keep

Tonight I tried to clean up four scanned magazine pages from early-90s fashion editorials. Helena Christensen on every one. A brown Hermès coat on a white background, a black Moschino jacket against the Catherine Palace, a Fabrizio Ferri beach shot, a French magazine spread. Soft gradient backgrounds. The kind of photographs that should have looked clean and didn't.

I tried four things in sequence, the way you do when each one fails. Topaz Wonder 2, which I praised earlier this year for finally showing some restraint, sharpened the whole image and made the gold rope braiding on the jacket pop, but the gradient bands behind her (vertical pinks and lavenders in the foreground concrete) became more visible, not less. Sharper bands. Nano Banana Pro hallucinated a "VOGUE OCTOBER 1994" stamp into the top corner of one image and garbled the French body copy on another. The ffmpeg gradfun filter softened the bands at strength four, then six, then eight, with diminishing returns. Eventually I added film grain on top of the gradfun pass and the bands disappeared. Not because they were fixed. Because the grain hid them.

That last move was the only thing that worked, and it didn't work the way I wanted it to.

I sat with that for a while. The gap between what these tools say they do and what they're actually capable of is wider than the marketing wants you to believe. Topaz Wonder 2 promises clean, natural, professional results. Black Forest Labs describes FLUX.1 Kontext as in-context image generation, not restoration. Google ships Nano Banana Pro as image generation and editing. None of the model makers themselves use the word restoration in their official copy. It lives in third-party blog posts, enthusiast tutorials, and the marketing decks of resellers. The people who actually built these things are careful about it. They know what they're shipping.

The reason became clearer the more I thought about it.

By the time that Vogue page reached my Desktop, three lossy steps had already happened in series. The photographer's smooth gradient was rasterized into CMYK halftone dots at print time. The printed page was then scanned in 8-bit, which captures only 256 brightness levels per colour channel — a smooth gradient needs more than a thousand intermediate values, and the other 750 were rounded away. The scan was saved as JPEG, which divides the image into 8x8 blocks and throws out the high-frequency data that would have hidden the quantization steps. Three quantizations in a row, each one mathematically irreversible. By the time I opened the file, the smooth gradient the photographer captured no longer existed inside it. What was there was a banded approximation, and the bands were the data.
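For a sense of scale, here is a minimal numpy sketch of the middle step on its own: a synthetic smooth gradient pushed through 8-bit quantization. The width and brightness range are made up for illustration; the point is how few distinct codes survive the rounding.

```python
import numpy as np

# A synthetic smooth gradient: 2000 pixels, brightness easing from 30% to
# 45% of full scale in floating point (effectively continuous).
width = 2000
gradient = np.linspace(0.30, 0.45, width)

# The scanner's step: 8-bit quantization, 256 levels per channel.
eight_bit = np.round(gradient * 255).astype(np.uint8)

levels = np.unique(eight_bit)
print(f"distinct 8-bit codes: {len(levels)}")                 # ~40
print(f"average band width:   {width / len(levels):.0f} px")  # ~50 px strips

# Rounding is many-to-one. Nothing in the output records which of the
# ~50 distinct float values inside each band a pixel used to hold,
# which is what makes the step irreversible.
```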

That's the wall.

Any tool that processes the file has to look at the bands and decide: is this region a real banded image, or is it a smooth gradient that's been damaged? Without context, those two states are indistinguishable. The tool has to guess. Every guess creates new artefacts.

Audio engineers have been living with this exact mathematics for forty years and they're more honest about it than image software is. When you reduce a 24-bit master to 16-bit for CD release, the quantization step destroys information nothing can recover. The standard fix is dither — adding deliberate, low-level noise that converts the structured quantization distortion into broadband noise the ear is less sensitive to. No mastering engineer would ever say dither fixes the bit reduction. They say it masks it. The vocabulary is precise: quantization error is irreversible; dither is a perceptual trade.
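The trade is easy to see in a few lines of numpy. This is a toy sketch, not a mastering chain: a deliberately quiet test tone quantized to 16-bit with and without TPDF dither, comparing how much of the error lands on the third harmonic versus being spread into broadband noise.

```python
import numpy as np

rng = np.random.default_rng(0)

sr = 48_000
t = np.arange(sr) / sr
step = 1 / 32768                                       # one 16-bit quantization step
signal = 1.5 * step * np.sin(2 * np.pi * 1000 * t)     # a very quiet 1 kHz tone

def quantize(x, dither=False):
    if dither:
        # TPDF dither: two uniform noises summed, one quantization step
        # peak-to-peak each, added before rounding.
        x = x + (rng.uniform(-0.5, 0.5, x.size) +
                 rng.uniform(-0.5, 0.5, x.size)) * step
    return np.round(x / step) * step

def harmonic_db(y, k):
    # magnitude of the k-th harmonic of 1 kHz relative to the fundamental
    spec = np.abs(np.fft.rfft(y * np.hanning(y.size)))
    return 20 * np.log10(spec[1000 * k] / spec[1000])

for name, y in (("truncated", quantize(signal)),
                ("dithered ", quantize(signal, dither=True))):
    print(f"{name}: 3rd harmonic at {harmonic_db(y, 3):+.1f} dB re fundamental")
```

Without dither the error rides the signal as harmonic distortion; with dither the same amount of error is still there, just pushed into hiss the ear tolerates better.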

Image restoration borrowed the tools but dropped the honesty. Topaz markets debanding as recovery. Adobe sells Generative Fill as reimagining. Cloud upscalers promise enhancement, which by now means whatever the user wants it to mean. The actual operation, in every case, is the same: invent the missing information based on a learned prior, and hope the invention is plausible enough that nobody notices. The ffmpeg gradfun documentation is unusually candid about this. It describes itself as a filter designed for playback only and warns "do not use it prior to lossy compression, because compression tends to lose the dither and bring back the bands." The author of the filter is telling you, in the official docs, that the fix is perceptual and any subsequent compression will undo it.

Topaz's own docs are gentler. Their generative models "add definition and detail," the page says. Generation, not restoration. The vocabulary just sounds nicer than what the audio engineers say.

What worked for the Helena pages was the audio engineer's trick. Run gradfun first to soften the gradients. Then add a layer of controlled film grain. The grain hides the remaining bands by giving the eye texture to focus on instead of stepped edges. The result looks grainy instead of banded. For a 1990s magazine page, grainy is the right answer. Actual printed pages had paper texture, ink dot patterns, and physical grain. The artificial grain slots into that aesthetic in a way that fake-smooth gradients never would. It's not recovery. It's masking. It's the same trade audio mastering has been making for decades.
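A sketch of that two-pass chain, driving ffmpeg from Python. The filenames are hypothetical and the strengths are illustrative stand-ins for the values I cycled through; gradfun and noise are the stock ffmpeg filters, and writing a lossless PNG keeps a later lossy save from stripping the dither and bringing the bands back.

```python
import subprocess
from pathlib import Path

SRC = Path("helena_scan.png")       # hypothetical input filename
DST = Path("helena_debanded.png")   # hypothetical output filename

# gradfun: strength roughly 0.51-64 (default 1.2), radius 8-32 (default 16).
# noise: alls= sets grain strength, allf=u makes it uniform rather than Gaussian.
# The values below are illustrative; tune both by eye per scan.
filters = ",".join([
    "gradfun=strength=8:radius=16",   # pass 1: dither the stepped gradient
    "noise=alls=10:allf=u",           # pass 2: grain to mask what remains
])

# Keep the output lossless so the dither survives, which is exactly
# what the gradfun documentation warns about.
subprocess.run(
    ["ffmpeg", "-y", "-i", str(SRC), "-vf", filters, str(DST)],
    check=True,
)
```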

The deeper thing I keep coming back to is that this was an information loss problem hiding inside a UX problem. The tools were doing exactly what they were designed to do: adding plausible detail, smoothing gradients, generating new content from priors. None of them were designed to recover something that no longer existed. The frustration came from believing the marketing, not from any specific tool being broken.

Helena is still on my Desktop, eight files now. Original, four failed attempts, plus the gradfun-and-grain version that almost works. The gradient behind her is grainy in a way the printed page never was. Some of her hair is a little sharper than the source. Her eyes are slightly bluer. The text caption on the left side is pixel-for-pixel identical to the original, because the tool I trusted the most (ffmpeg, the dumbest one) knew it had no business touching real detail.


Seven Hexagons

Unmarked VHS tapes started landing in US mailboxes on April 6, shipped from the address Warp and Bleep use for fulfilment. Black sleeve. Seven white hexagons. An NTSC sticker. Inside: a minute or so of degraded analogue video, shortwave-style audio, and layered vocal fragments that fans on KEYOSC and r/boardsofcanada have identified as manipulated material from Societas x Tape. Some listeners are picking apart what sounds like frequency-shift keying data embedded in the audio itself.

No music. Just a transmission.

This is the exact playbook Boards of Canada ran for Tomorrow's Harvest in 2013: mystery 12" singles hidden in record shops, Adult Swim late-night broadcasts, a Tokyo billboard, shortwave fragments, a six-digit code hunt. Thirteen years of silence, and then suddenly the same kind of cryptic analogue mailout arrives at people's doorsteps. Resident Advisor asked Warp for comment. Per RA, Warp were, unusually, unavailable for comment.

The hauntology aesthetic running through all of this isn't decoration. It's the point. The whole band was always a transmission from a future that didn't quite arrive. Now the broadcast is picking up again.


Waiting for 302

Ceefax transmitted its data in the vertical blanking interval, the millisecond gap where a CRT's electron gun returned to the top of the screen. You never saw it happen. The information rode an invisible seam in the broadcast signal, cycling through hundreds of pages in a continuous carousel. You keyed in a three-digit number and waited.

That wait defined the medium. Page 302 was football scores. On Saturday afternoons you entered the number and the screen went blank. A counter ticked upward as pages streamed past in the carousel, and you sat with the specific tension of not knowing when your page would come around. Maybe eight seconds. Maybe twenty-five. The data was always there, always cycling, but you could not summon it. You met it on its schedule.

What stays with me is not the content but the temporal architecture. Anyone can look up a football score now in two seconds. The carousel was not a flaw to be engineered away. It was the medium itself. Information arrived when the cycle permitted. Andy Holyer, writing in The Conversation, compared it to a sushi conveyor belt: you watched the stream and waited for your order to come around. Except with Ceefax you couldn't see the plates approaching. You sat in front of a counter ticking from 297 to 298 to 299.

Ceefax launched on 23 September 1974 with thirty pages. By the mid-1990s it had over two thousand, and twenty-two million people were using it weekly. The name was a phonetic compression: see facts. It offered what Holyer called "medium-latency information," the category between tomorrow's newspaper and a live broadcast interruption. Weather. Train times. News compressed into sixteen lines of thirty-eight characters each, a whole story in a few hundred characters. Page 888 for subtitles.

Information had mass in that era, and even the fastest source still asked something of you. Ceefax was faster than walking to a newsagent but slower than a conscious thought. It occupied a gap that no longer exists: a middle distance between knowing and not knowing where you could sit for fifteen seconds and be fine with it.

"Pages from Ceefax" filled the overnight schedule. Selected teletext screens scrolling over stock library music at three in the morning, blocky weather maps cycling while nobody watched. It was ambient television before anyone used those words together.

The whole service ended on 23 October 2012 at 23:32:19 BST, when Dame Mary Peters switched off the last analogue transmitter in Northern Ireland. By then broadband had been widespread for years and the audience had dwindled. But the teletext art community was already rebuilding. Dan Farrimond creates work within the medium's savage constraints: eight colours, a 24-by-40 character grid. He told Creative Bloq that "people might come for the nostalgia, but they stay for the fun and accessibility." Peter Kwan built Teefax on a Raspberry Pi, delivering community teletext to compatible TVs almost a decade after Ceefax died.

Something in that revival goes beyond nostalgia. Nostalgia wants to return. The teletext artists want the constraint. The grid. The carousel logic of working within limits rather than transcending them. The analogue textures of that period carry a specific charge now, and teletext sits at the centre of it: institutional, patient, slightly uncanny. A public service that asked you to wait. You did. The waiting was the point.


A Memory from the Past Keeps Pulling Me Back

There's a specific kind of missing that doesn't behave like other missings. Most loss negotiates with you. You can argue with it, substitute around it, find a version of the thing or a version of yourself that makes do. Temporal loss doesn't negotiate. It sits there, complete, refusing to be anything except what it was.

The strange part is that the pull has almost nothing to do with the past itself. The past isn't pulling you backward because you want to live it again. You already did, and by the time you want it back you already know how it ends. What pulls is the difference between where you are standing now and where some other version of you once stood. You occupied a moment without knowing you were occupying it. By the time you notice, the door has closed and the key went with whoever was carrying it, which was not you, because that person is not here anymore.

Memory doesn't help, because memory doesn't preserve. It curates. You don't remember the morning you're reaching for. You remember a version of that morning, smoothed, with the low-grade dread and ordinary exhaustion of being a person in the middle of their own life quietly edited out. What survives isn't the morning. It's an emotional after-image of the morning, which is a different object. So you're missing something that wasn't fully assembled while it was happening. The original isn't in a room you can return to. The original isn't anywhere. There is only the shape memory gave it once it was safe to give it a shape.

Mark Fisher circled around this for years in Ghosts of My Life. The grief in hauntological thinking isn't only grief for the lost thing. It's grief for a way of being whose enabling conditions have evaporated. You can't return to the place because the conditions that made you possible in that place are not there to meet you. Even if the place is still physically standing, you are arriving at it as somebody else, and the version of you who could have met it as it was doesn't exist anywhere now, not even as a possibility. It's the same frame that makes certain music unlistenable in a particular way. The feeling that what you're hearing is signalling back from a future that didn't arrive.

That is the shape of it. Not just "the past is gone". Everyone knows that. The self that existed inside the past is also gone, and that's the version of you doing most of the work when the pull gets bad. You're grieving yourself. Specifically, you're grieving a briefly-existing person who was made possible by conditions that no longer exist, and whose absence is more total than almost any other absence you encounter.

Most things that go away leave the imagination something to do. A friendship fractures and somebody maybe repairs it. A place changes and you visit and find some remnant. A body breaks, and medicine and time and acceptance take over. Having work for the imagination is what makes most grief survivable, because the work is what spaces out the loss and gives it a corridor to move through.

Time doesn't give the imagination any work. The autumn you miss can't be repaired. It can't be recovered. It isn't a thing that was stolen or hidden from you. It's a thing that simply stopped being, in a way that leaves no mechanism by which it could start being again. The closest you can get is a rhyme. A similar quality of light. A similar smell when the air changes. A similar quiet at the same hour. And the rhyme is worse than nothing, because the rhyme reminds you that rhyming is the most you're ever going to get.

This is why nostalgia sometimes feels less like sadness and more like a floor that wasn't there when you put your weight on it. Sadness has a shape, a direction, an object. This doesn't. What you're feeling isn't the loss of a particular thing. It's the shape of absolute irreversibility pressing against what you were thinking about. For a second you understand what the word actually means. Then you look away, because you have to.

The other thing is that the memory keeps getting heavier. The more stories you've told yourself about a moment in the years since it happened, the more you've used it to explain other things about yourself, the more weight the moment ends up carrying. Eventually the moment isn't carrying its own weight anymore. It's carrying the weight of everything you've made it mean. When you reach for it you aren't reaching for a moment. You're reaching for a cumulative thing. An invented weight, almost, though it never feels invented from the inside.

I think about this more often than is strictly useful. That tends to be how these things keep you.

Defenders First

Anthropic just handed Claude Mythos to eleven launch partners. Not a public preview. Not a research release. A controlled handoff, named Project Glasswing, with AWS, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks on the inside, plus around forty other organisations getting access behind them.

Twelve days ago, a draft of the Mythos announcement leaked through a CMS toggle. That document called Mythos "currently far ahead of any other AI model in cyber capabilities" and warned it "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders." CrowdStrike fell 7 percent on the news. Palo Alto Networks fell 6. Stifel analyst Adam Borg called it "the ultimate hacking tool."

Both of those companies are now Glasswing partners.

That isn't subtle. Anthropic spent twelve days watching their own model get described in the financial press as a vulnerability factory, and their answer is to put it directly in the hands of the firms whose stock prices moved.

The benchmarks earn the framing. On CyberGym, a vulnerability reproduction test, Mythos scored 83.1 percent against Opus 4.6's 66.6 percent. That's a sixteen-point jump on a benchmark where prior frontier models had been clustered tightly. More telling is the Firefox 147 JavaScript engine work. Anthropic's own writeup notes that Opus 4.6 turned its findings into working JavaScript shell exploits "only two times out of several hundred attempts." Mythos developed working exploits 181 times in the same setup, and achieved register control on 29 more. That isn't an incremental improvement. It's a different kind of capability.

OSS-Fuzz tells the same story from another angle. Across roughly seven thousand entry points, Sonnet 4.6 and Opus 4.6 each reached tier 1 between 150 and 175 times and hit tier 2 about 100 times, but each landed only a single tier 3 crash. Mythos hit 595 crashes at tiers 1 and 2 and achieved full control flow hijack on ten separate, fully patched targets. Some of the vulnerabilities it found in major operating systems had survived decades of human review.

So Anthropic has a model that reliably finds and exploits the kind of bugs that ship in every browser and kernel. They're committing $100 million in usage credits to the Glasswing partners, plus $4 million in direct donations to open-source security organisations. And they aren't releasing it publicly.

Whether the head start works is the real question.

Defenders patching with Mythos help everyone, because patches ship to all users. Attackers exploiting with Mythos help only themselves, until the patches catch up. The asymmetry favours the defenders if they move fast and if Mythos stays inside Glasswing. Both of those conditions are doing a lot of work.

The first one I believe in. CrowdStrike and Palo Alto Networks aren't slow. Cisco has incident response teams that move on weekends. JPMorganChase has the budget to throw a model at every internal codebase they own. If Mythos can find decades-old browser bugs in testing, it can find decades-old bugs in proprietary banking infrastructure too, and the patches will quietly ship inside the partner organisations long before anything equivalent becomes public.

The second condition is harder. Anthropic's last two weeks haven't been a triumph of operational security. The same company that shipped 512,000 lines of unobfuscated TypeScript through a missing .npmignore is now the gatekeeper for the most cyber-capable model anyone has talked about publicly. Forty-plus additional organisations are getting access behind the named eleven. That's forty-plus opportunities for a misconfigured CMS toggle, a forgotten npm publish step, or a researcher leaving a laptop in a hotel.

The dual-use problem isn't solved by picking the right first eleven companies. It's delayed. And the delay is the entire strategy. Give defenders enough lead time, the thinking goes, and the security baseline rises before the attackers catch up. It's a reasonable bet. It's also a bet that has to keep being placed, because every Glasswing-style program eventually expires when the model becomes public.

One detail I can't stop thinking about. The system card notes that Mythos found vulnerabilities in cryptographic libraries. Cryptographic library bugs are the worst kind. They break silently, they affect everything downstream, and they often sit undiscovered for years because reviewing crypto code requires specific expertise that almost nobody has. If Mythos is finding these autonomously and the patches flow through Glasswing partners first, the Linux kernel maintainers and the Mozilla security team are about to have a very busy month.

The lab that tried to walk away from defence work over surveillance concerns just picked up a different kind of weapon and handed it to the people who run incident response for half the Fortune 500. The framing is defensive. The capability isn't. Whether those two things stay aligned depends on what happens between now and the public release date that Anthropic hasn't announced yet.


Shallow End, Deep Time

Concordia Leisure Centre in Cramlington opened in 1977 with a steel spaceframe roof and barrel-vaulted glazing over a tropical pool lined with live palm trees. Within four months, half the town's population had enrolled as members. The Twentieth Century Society later described buildings like it as "some of the most architecturally innovative structures of the late twentieth century." Most of them are car parks now.

Over a thousand publicly accessible pools have closed in England since 2010. The most deprived areas lost 169; the wealthiest lost 49. A further fifteen hundred are over forty years old and approaching end of life, which is the kind of phrase councils use when they mean the money isn't there and nobody is going to find it.

What stays with me is not the loss itself but its texture. Municipal pools had a sensory architecture that nothing else replicated. Chlorine — involuntary, industrial, immediate — is one of the strongest institutional smell triggers that exists. The echo of voices against wet tile. Light through wired glass. The specific cold of changing cubicles with their wooden benches and broken locks.

These spaces already felt like a memory while you were still in them. Something about institutional tile, fluorescent lighting, and the acoustic distortion of water created a temporal slippage: you were simultaneously eight years old and however old you actually were, and neither version felt entirely real. A kind of sensory haunting that didn't require the building to be demolished first.

What happened in those spaces had a name: naked democracy. Stripped of consumer identity, they forced genuine equality. You took off your clothes. You took off your watch. You entered a space where status had no purchase and time moved differently. The C20 Society described them as "an intensely evocative part of our shared social heritage," which understates it. They were among the last truly communal, non-transactional public spaces we had left.

Leeds International Pool, brutalist, designed by a man later convicted of fraud, opened in 1967. Two hundred and twenty thousand visitors in its first six months, nearly half the population of Leeds. Closed 2007. Demolished 2009. Surface car park for a decade. Coventry's Sports Centre, nicknamed "The Elephant" for its zoomorphic silhouette, shut in 2020. Sunderland's Crowtree had an 800-ton space-frame roof that rivalled a jumbo jet hangar. Gone 2013.

The Derelict London catalogue records Peckham Rye Lido, closed 1987, its pool buried under earth. Only the fountain remains visible. Somewhere underneath, tile and concrete still hold the shape of water that hasn't been there for forty years.


Robot Tax, Self-Assessed

OpenAI released a thirteen-page document on April 6 called "Industrial Policy for the Intelligence Age." Twenty specific proposals. Robot taxes. A public wealth fund modelled on Alaska's Permanent Fund. Government-backed pilots of a thirty-two-hour workweek at full pay. Automatic benefit triggers when AI displacement hits preset thresholds. Portable healthcare and retirement that follow workers between jobs instead of binding them to one employer.

The framing is New Deal. The language is Progressive Era. Altman told Axios that large tax system changes are "in the Overton window, but near the edges." The document reads like it was written by people who believe superintelligence is imminent and want to be remembered as the ones who tried to warn everyone.

The problem is who wrote it. OpenAI is the largest developer of the technology it warns about, a newly for-profit company preparing an IPO north of $800 billion, and a political actor whose Leading the Future PAC has lobbied against AI safety legislation in practice. Nathan Calvin at Encode AI documented opposition to New York's RAISE Act and alleged intimidation during California's SB 53 debate. The company proposing auditing regimes for frontier models is the same company fighting the audits.

Anton Leicht at the Carnegie Endowment called it "comms work to provide cover for regulatory nihilism." Lucia Velasco at the Inter-American Development Bank noted that OpenAI is "one of the least neutral parties in this ongoing discussion." Soribel Feliz, a former Senate AI policy advisor, said the ideas are not new. They have been the framework for every governance conversation since ChatGPT launched in 2022.

Then there is the timing. The document dropped on the same day The New Yorker published an investigation into Altman's leadership based on over a hundred interviews and internal documents, alleging systematic deprioritisation of safety commitments. A coincidence of scheduling, presumably.

The economics face pressure from a different direction. A Brookings paper by Anton Korinek and Lee Lockwood argues that taxing AI infrastructure is like taxing steel during the industrial revolution. Consumption-based approaches (digital services taxes, token taxes on AI output) would generate revenue without discouraging exactly the investment OpenAI asks the government to fast-track in the same document.

Some of the proposals are genuinely worth studying. A public wealth fund has precedent. Portable benefits address a real structural weakness. Automatic safety net triggers are smart mechanism design. But policy authored by the entity most incentivised to shape its own oversight is lobbying dressed in academic prose. White-collar payrolls have contracted for twenty-nine consecutive months. The entry-level pipeline keeps hollowing out. The people losing those jobs did not publish a thirteen-page blueprint. They just lost the job.


Information Had Mass

On 30 April 1993, two CERN administrators signed a document releasing the World Wide Web into the public domain. Almost nobody noticed. The web was a tool used by physicists, and the document sat in an archive for years before anyone thought to frame it as a hinge point. That same year, a team at the University of Illinois released NCSA Mosaic, the first browser with inline images, and the National Science Foundation would later call it the start of "an internet revolution." But in 1993, the revolution was invisible. Everything else was still physical.

Information had mass that year. It arrived through letterboxes, sat on shelves, accumulated in filing cabinets. If you wanted to know something, the wanting itself took effort: a bus ride to a library, a phone call, a trawl through back issues of a magazine you might not find. Two people in the same city could hold completely different understandings of the same subject simply because of what they had happened to access. There was no equalising flood. Knowledge was distributed by geography, by class, by the accident of which shelves you stood in front of. I've written about the world before the index before, about what it meant when finding things required physical movement rather than keystrokes. The version of that world that existed in 1993 was the last one.

This gave expertise a texture it no longer carries. Knowing things, really knowing them, having absorbed a subject slowly over years, constituted genuine social capital. The autodidact who'd spent a decade reading around a topic occupied a position that doesn't exist in quite the same way anymore. A 2021 study in PNAS found that people who use Google cannot reliably distinguish between what they know and what the internet knows. Before search engines, that confusion was structurally impossible. You knew exactly where your knowledge ended because the boundary had physical dimensions: the books you owned, the libraries you could reach, the people you could ask.

Equally significant was the experience of not knowing and being comfortable with it. A film would come up in conversation. Nobody could remember who directed it. That question would just sit there, unresolved, sometimes for days, until someone found a reference book or it surfaced from memory on its own. The gaps were inhabited rather than instantly filled. Conversation moved differently when facts had latency. Memory was exercised differently. This sounds trivial. It isn't. The texture of thought changes when every question can be answered in four seconds.

There was a specific pleasure in the library, too, where you went looking for one thing and came back with something else entirely, ambushed by a spine on a shelf. That mode of discovery, fundamentally inefficient and genuinely irreplaceable, was already beginning its decline.

Nothing in 1993 assumed it would be remembered. A local news broadcast went out and was gone. A conversation in a pub was gone. A performance in a theatre. The instinct to document wasn't absent, but it was selective in a way that required effort and expense. A disposable camera had twenty-four shots. You thought about what you pointed it at.

Most of life simply evaporated. Not tragically, not even consciously. It did what life had always done: passed through and left only the traces that chance or intention preserved. Rob Horning, writing in The New Inquiry, observed that ephemerality was once "unremarkable, as virtually everything about our everyday lives was ephemeral: unmonitored, unrecorded, not saved." The archive of 1993 is full of holes, and those holes carry as much meaning as what remains. Most of what happened that year is gone in the same way most of what happened in 1893 is gone: contingently, irreversibly, without remedy.

What's strange is that this had always been true, but 1993 was approximately the last year it would be true as a default condition. Within a decade, the assumption would quietly reverse. Everything would be presumed recordable, searchable, retrievable. The burden of proof shifted from preservation to deletion.

There is a specific sensory world attached to that year and no equivalent exists now. The particular silence of waiting for a letter, for a phone call, for news to travel at the speed a human could carry it. The weight of the Radio Times as a physical object, consulted and annotated, the planning document for a household's entire week. Music arrived in physical form that had to be sought out, bought, carried home. If you missed something on television, you missed it. No catch-up. No clip appearing somewhere online two hours later.

Shops closed on Sundays. The Sunday Trading Act wouldn't arrive until 1994. You couldn't buy anything at midnight. Boredom was structural rather than optional, because the infrastructure of distraction was less total. People spent more time alone with their thoughts not because they were more contemplative by nature but because there was less available to pull them away. I sometimes wonder whether interiority itself was different when it wasn't competing with a feed.

The rupture was invisible as it happened. Nobody framed Mosaic as civilisational change. Netscape didn't arrive with a warning label. People adopted the web for practical reasons, email mostly, looking things up, and only later registered what had been traded. Kevin Driscoll, writing in Flow, has argued convincingly that we misremember the standard narrative of online paradise corrupted by newcomers. The pre-web internet was already hostile, class-stratified by email domain, and the "Eternal September" that supposedly ruined everything actually began in February 1994, not September 1993. The golden age never existed. But what ended wasn't a digital paradise. What ended was a particular mode of being in the world that had been continuous for centuries: living in local time, with local knowledge, at the pace information could physically travel.

By the time anyone thought to mourn the textures of the pre-internet world, those textures were already unreachable. You can't go back and document what information scarcity felt like from the inside, because the very tools you'd use to document it are the tools that ended it. The world that existed in 1993 didn't know it was about to become the past. It forgot itself in the ordinary way, without the archive waiting, without anyone yet thinking to press record.


Yohji, 15ml

Jean Kerleo spent thirty-one years as the in-house perfumer at Jean Patou. He created 1000 in 1972, Sublime in 1992, and co-founded the Osmothèque in Versailles — a physical archive of perfumes that no longer exist. A man who preserved scents for posterity accepted a commission, in 1996, from a designer who once told AnOther Magazine he didn't really like any perfume.

The result was Yohji.

I own the 15ml parfum. Splash format, not spray. This matters more than it should. Spraying distributes a fragrance evenly across skin. Splashing concentrates it. You dab on pulse points and the opening arrives unevenly, galbanum landing sharp and metallic in one spot while the fruit notes bloom somewhere else. This is not a fragrance that announces itself uniformly.

Galbanum was already unfashionable by 1996. The market belonged to aquatics and transparencies: L'Eau d'Issey in 1992, CK One in 1994, all that clinical freshness designed to smell like clean rather than like anything in particular. Kerleo's choice of galbanum works the way Yamamoto chose black as a default palette. Not because it was easy, but because it communicated refusal. One retrospective called it "an act of deliberate counter-programming," and that phrase is exactly right.

Then the heart opens.

Dark fruit, compressed and ink-like, stripped of sugar. Heliotrope and jasmine underneath, structural rather than sweet. The base: vanilla, sandalwood, benzoin, and coumarin at concentrations that pre-IFRA regulations permitted and modern reformulations cannot touch. The dry-down is creamy and melancholic and lasts twelve hours minimum on skin. Longer on fabric. Some collectors insist the parfum reaches its truest expression on a wool scarf, where slower evaporation reveals depths that body heat obscures.

The contradiction is structural. The opening is austere, almost architectural in its precision. The base is intimate and enveloping. The fragrance moves from distance to closeness as it develops, from something that pushes you back to something that draws you in. Yamamoto's collaborator Caroline Fabre-Bazin described his garments as offering "shelter." The parfum operates on the same principle. It does not seduce. It rewards patience. Something comforting lives inside something haunting, and neither quality cancels the other.

The glass column beside its clear acrylic case is the eau de toilette, not the parfum. The 30ml spray, Yamamoto's signature running vertically along the body, the packaging giving nothing away. No gold, no ornamentation, no attempt to signal luxury through conventional codes. The glass itself is the statement. Thin-walled and elegant, the lettering prone to wear on bottles that have actually been handled, which is how collectors distinguish preservation quality. The parfum came wrapped in tissue paper inside the same architectural box. I remember unwrapping mine with the kind of care you reserve for things you suspect you will not find again.

That suspicion proved correct. Patou held the fragrance license, and when P&G acquired the house, the entire Yohji line disappeared by 2005. A reissue surfaced in 2013, reformulated by Givaudan's Olivier Pescheux. The IFRA restrictions on coumarin alone make faithful reproduction structurally impossible. What Kerleo built required ingredients at concentrations modern regulations prohibit.

He died in July 2025, aged ninety-three. The Osmothèque he co-founded now holds more than 4,000 perfumes, including 800 that exist nowhere else. I don't know whether the original Yohji formula is among them.

The parfum concentration has zero reviews on Parfumo. Not one. Not because it is inferior to the EDT, which has hundreds, but because almost nobody owns it. The 15ml splash was always the rarest format. Rarity compounds after discontinuation. What I have is something fewer people will smell with each passing year, as bottles empty or degrade or disappear into collections that never get opened. There is a particular quality to wearing a fragrance that is leaving the world. It shares something with what sealed bottles preserve about time held in suspension, except this bottle is not sealed. I wear it. It diminishes.


Good Enough Is a Strategy

The Information reported last week that DeepSeek's V4 model will run entirely on Huawei's Ascend 950PR chips. No NVIDIA. No CUDA. A trillion parameters trained and deployed on Chinese silicon, with Alibaba, ByteDance, and Tencent ordering hundreds of thousands of units in anticipation.

The reflexive Western reading is that this proves export controls failed. The reflexive Chinese reading is that domestic chips have caught up. Both are wrong, and the actual situation is more interesting than either.

Huawei's 950PR delivers roughly 1.56 petaflops at FP4 and carries 112 GB of proprietary HiBL memory. Real numbers, not aspirational ones. But the memory bandwidth sits at 1.4 TB/s against the H100's 3.35 TB/s, and a Council on Foreign Relations report projects NVIDIA will be seventeen times more powerful by 2027. The gap is not closing. It is widening.

This matters because DeepSeek's entire thesis since V3 has been that architectural efficiency compensates for hardware disadvantage. Mixture-of-experts, multi-token prediction, custom numeric formats designed months in advance for chips that hadn't shipped yet. When DeepSeek shook Silicon Valley last year, the V3 training bill was $5.6 million. The V4 figure, if accurate, is $5.2 million for a trillion parameters.

There is a complication. Reports suggest V4 may have been trained on NVIDIA Blackwell chips, with the Huawei optimization focused on inference and deployment rather than training itself. DeepSeek's own R2 model reportedly suffered persistent training failures on Ascend hardware, forcing a reversion to NVIDIA H800s. The headline says "entirely on Huawei." The footnotes are less certain.

None of this diminishes the strategic signal. DeepSeek spent months with Huawei and Cambricon rewriting core code from CUDA to CANN, Huawei's compute framework. They withheld early V4 access from NVIDIA and AMD entirely. The best analysis piece on this framed it simply: when you restrict access to a tool, the people who need it do not stop working. They build a different tool.

The question was never whether Huawei could match NVIDIA chip for chip. It cannot, and the CFR numbers make that plain for at least the next three years. The question is whether a parallel ecosystem can sustain frontier-class AI development at commercially viable cost, on hardware that is worse but available. DeepSeek's answer, backed by trillion-parameter ambition and bulk orders from every major Chinese cloud provider, is that good enough is a strategy. The circular investment logic of the Western AI stack makes this bet look less absurd every quarter.
