Plutonic Rainbows

Five Will Never Return Home

Apaches was shot fast on a Home Counties farm in February 1977, twenty-six minutes long, six children from a Maidenhead junior school cast as themselves. The Health and Safety Executive had counted around thirty children killed in farm accidents the year before, and commissioned the Central Office of Information to do something about it. What they got back was a piece of folk horror with the BFI's catalogue number attached.

John Mackenzie directed it, three years before The Long Good Friday. Neville Smith wrote it. Phil Méheux shot it. The production values were below Poverty Row, the slurry pit was real, and the title sequence used the Playbill typeface from Stagecoach. Six children play cowboys and Indians on a working farm. Five never get home.

The deaths are not edited around. Kim falls from a tractor and is run over. Tom slips into a slurry pit, which is liquefied cow excrement, and goes under. Sharon drinks chemicals from an unmarked bottle while pretending it is alcohol, and dies in the night, screaming, off-camera, while her parents stand in the bedroom doorway. Robert is crushed by a gate that Michael has knocked over. Danny, the narrator, crashes a tractor into a ditch and goes through the windscreen. Michael, the cousin, the one who knocked the gate over, is the only child left alive at the end.

Danny narrates the film after he is dead. This is the part that does the damage. He is calm, almost pleased, walking us through what we have just watched, while his family arrives for what he calls a party. We see the table laid for the wake, the sandwiches under cling film, the relatives in their dark clothes. The film does not break the spell to tell you Danny is wrong about the party. It lets him keep talking. The register is wrong in a way that only an English public information film could get wrong on purpose.

The COI made dozens of these. Most have curdled into nostalgia, a YouTube reel of teatime menaces. Apaches has not. It still plays as something a state did to its children with a clear budget line and a director's chair and a clapperboard. It was shown in schools all over Britain, broadcast by ITV companies on slow Sunday afternoons, and exported to Canada, Australia, and the United States. Prints kept being struck on 16mm long after every other PIF had been retired to videotape.

The strangeness is not that it is grim. Plenty of films are grim. The strangeness is the genre confusion, a western-themed death drama wearing the costume of a teaching aid. The form is didactic. The content is folk horror. The narrator is dead. The accidents are the point. There is no third-act reveal where the children turn out to have been spared, no closing title card that softens the lesson. The lesson is the deaths, and the deaths are filmed with the patience of someone who already knows how they end.

If you grew up rural and were shown this in a darkened school hall, you remember it. If you grew up urban and never saw it, you might wonder how a country could decide that scaring children to death was the responsible choice. Both reactions are correct.


Partly True, Says Musk

On the stand in Oakland federal court last week, Elon Musk conceded, under cross-examination by OpenAI's lead counsel William Savitt, that it was "partly" true xAI had used some of OpenAI's technology to train Grok through distillation. He then softened the concession into a shrug. "It is standard practice to use other AIs to validate your AI," he said, as if the distinction between validating a model and copying its behaviour were self-evident, and as if the room had not just heard him spend three days arguing that OpenAI was a stolen charity owed him roughly thirty-eight million dollars in moral damages plus, by his lawyers' arithmetic, a hundred and thirty-four billion in the for-profit value the conversion produced.

The contradiction did not seem to bother him. It bothered Judge Yvonne Gonzalez Rogers, who opened the trial on Tuesday by asking Musk how the court could get its work done "without you making things worse outside the courtroom," and it bothered, in a quieter way, the cross-examining attorney, who walked Musk through his own xAI valuation (two hundred and fifty billion at the SpaceX merger in February) and his own boasts about Grok's capabilities. The picture that emerged was not the picture Musk's opening narrative had drawn. He had cast himself as a founder defending a charity from corporate capture. The cross painted him as a competitor, valued in the hundreds of billions, who had built his competing model in part on the very outputs he says were stolen from a public mission.

I wrote yesterday that generally, AI companies distill, because the practice is now baseline industry behaviour rather than a deviation from it. The major labs all do versions of it, sometimes openly, more often through quiet evaluation pipelines that nobody itemises in a press release. So Musk's admission, on its technical merits, is not a scandal. It is a statement of how the industry actually works.

The scandal is the framing. To sue OpenAI for one hundred and thirty-four billion dollars on the theory that the company betrayed its founding promise to benefit humanity, while simultaneously running a competitor that benefited, in part, from OpenAI's outputs, is to argue both sides of the same case at once. The mission was sacred enough to litigate. The model weights, or their behavioural shadow, were available enough to use. Both can be true. Neither sits well with the other.

Whether the jury cares is a different question. The CNBC summary of the first week noted that Altman, Satya Nadella and Greg Brockman are still to testify, and that the outcome could threaten OpenAI's anticipated IPO. A finding for Musk would not just unwind the for-profit conversion; it would establish a kind of moral lien on every dollar the company has raised since 2019. A finding for OpenAI would let Altman walk into the IPO roadshow having beaten the most litigious billionaire in technology in open court, on his own terms.

What I keep getting stuck on is the smaller, weirder fact at the centre of all this. The man suing to recover a charity used the charity's outputs to train his rival. He did not deny it. He did not apologise for it. He called it standard practice, which it is. The case will turn on whether that answer is enough.


No Threshold to Call the Police

Seven families filed lawsuits against OpenAI in San Francisco last Wednesday, alleging that ChatGPT and its CEO bear direct responsibility for the February shooting in Tumbler Ridge, British Columbia, which killed eight people including six children. The complaints argue something narrower and stranger than the headlines suggest. They argue that OpenAI's own safety staff, in June 2025, flagged the shooter's account for "gun violence activity and planning", urged senior leadership to call Canadian police, and were overruled. The account was deactivated instead. The shooter opened a second one and went on talking to the model for another seven months.

That is the procedural fact at the centre of the cases. The emotional fact is the letter Altman published the Friday before, on the local news site Tumbler RidgeLines, saying he was "deeply sorry that we did not alert law enforcement to the account that was banned in June." David Eby, the BC premier, posted the letter to social media with the comment that the apology was "necessary, and yet grossly insufficient." Cia Edmonds, whose twelve-year-old daughter remains in hospital, said the apology read like it had been written by ChatGPT.

The question the apology accidentally raises is what it concedes. If "deeply sorry that we did not alert law enforcement" is the right thing to say in May 2026, then there is some implied threshold above which the company believes it should have called the Mounties, and below which it should not. That threshold has never been published. It is not in the usage policy, not in the model spec, not in any white paper from the Frontier Model Forum. The industry has spent the past two years building elaborate public language about safety teams, evaluation suites, and red-teaming, but no part of that vocabulary describes a duty to report a specific user to a specific police force in a specific country.

There is a reason for the silence. A formal threshold creates a formal liability. Once an AI lab publishes the rule it uses to decide when to call the police, it can be sued for failing to follow that rule, and it can be sued for the rule being too narrow. So the practice has been to have the rule operationally, inside the company, while not committing to it externally. The internal Slack messages cited in the complaints suggest exactly that arrangement: a safety team with a working notion of "this one warrants a call", senior management with a competing notion of "the privacy and PR cost is too high", and a unilateral deactivation as the compromise that satisfies neither.

What makes the gap concrete is the second account. Treating deactivation as the response presumes that an account is identity-bearing in a way it isn't. If the threat lives in a person and the person can sign up again with a new email in ninety seconds, deactivation is containment theatre directed at auditors rather than a containment measure directed at risk. The safety staff knew this. The lawsuits' theory of the case is that management knew it too.

The federal politics around this are already moving in the opposite direction. OpenAI is, as Wired reported earlier this month, backing legislation in Illinois that would shield AI companies from liability in incidents where a hundred or more people are killed or injured. There is a Florida criminal investigation in progress over a separate ChatGPT-linked shooting at Florida State University last year. The same week the Tumbler Ridge complaints landed, the Frontier Model Forum was quietly running a working group on distillation rather than a working group on mandatory reporting.

I keep thinking about the second account. Somebody at OpenAI opened a ticket in June 2025 about a person whose conversations they had read, whose plans they had inferred, whose name they might or might not have known, and decided that the right action was to revoke a token and not pick up a phone. Eight months later, six children were dead. The Illinois bill would make sure that the next time, in some sense that the lawyers will argue about, the phone does not need to be picked up either.


Capita Holds the Frequency

About a hundred and thirty thousand pagers are still in use across the NHS, which works out, on the most-quoted government count, to roughly ten per cent of every pager left running anywhere on earth. The hospital corridor in 2026 is one of the few places in the country where you can still hear a one-way radio device beep to summon a human being. Most of the rest of British public life has moved on. The cardiac arrest team has not.

The protocol underneath the bleep is POCSAG, the Post Office Code Standardisation Advisory Group's "Radiopaging Code No. 1", adopted in 1981 out of a British Post Office working group that had been nailing down a format for radio paging. It is a low-bitrate, unencrypted, one-way broadcast standard. A central transmitter sends short numeric or alphanumeric messages over a narrow VHF channel; every receiver in range listens passively for its own seven-digit capcode, ignores the rest, and beeps when its number comes up. The architecture is closer to a radio station that talks to one listener at a time than to anything you would call a network.
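The addressing idea can be sketched in a few lines. This is a toy of the receive-side logic only: real POCSAG frames carry sync codewords, batch structure, and BCH error correction, none of which is modelled here, and the capcodes and messages below are invented.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    capcode: int   # the seven-digit address the transmitter is calling
    message: str   # short numeric or alphanumeric payload

class Pager:
    def __init__(self, capcode):
        self.capcode = capcode
        self.received = []

    def hear(self, frame):
        # Passive filter: every pager hears every frame on the channel,
        # and keeps (beeps for) only the ones bearing its own capcode.
        if frame.capcode == self.capcode:
            self.received.append(frame.message)

# One broadcast, many listeners: the channel is shared, the address is not.
channel = [Frame(1234567, "CARDIAC ARREST WARD 4"), Frame(7654321, "2222")]
mine = Pager(1234567)
for f in channel:
    mine.hear(f)
print(mine.received)
```

The design point is that the filter lives entirely in the receiver. The transmitter never knows who is listening, which is why the receivers can be cheap, passive, and weeks-long on a battery, and also why nothing in the protocol can acknowledge delivery.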

What makes it still useful is exactly what makes it sound obsolete. The signal travels at frequencies that walk through the thickened walls of a hospital, including the lead-lined ones around radiology and the awkward concrete around the basement plant rooms. Mobile coverage in those parts of an estate is often nominal at best. A POCSAG transmitter sitting on the roof reaches the whole footprint reliably, including the lift shafts and the bits of the Edwardian wing nobody has rewired since the eighties. Battery life on a receiver runs to weeks. There is no app to update, no SIM to provision, no cellular handover to fail at the moment of a code blue.

Matt Hancock, as Health Secretary, announced in February 2019 that the NHS would have rid itself of the things by the end of 2021. That deadline came and went. Vodafone had already left the business in March 2018, leaving Capita's PageOne as the only wide-area paging carrier in the UK, supplemented by a handful of specialist suppliers, including Multitone and Swissphone, for the hospital-by-hospital cardiac systems. The cost of the residual estate to the NHS was put at £6.6 million a year at the time of the ban announcement. Five years later, the bleeps are still going.

The unsettling part, as TechCrunch and others reported in 2019, is that POCSAG was specified in an era when intercepting it required a few thousand pounds of radio gear and a working knowledge of VHF demodulation. A software-defined radio dongle costs well under fifty pounds now. The traffic is still in the clear, because retrofitting encryption into a thirty-year installed base of one-way receivers is essentially impossible. So the same property that keeps the protocol alive, its mechanical simplicity, also keeps it readable to anyone with a laptop and a back garden.

I keep coming back to the fact that a 1981 specification is still the load-bearing communications layer for the most time-critical moments in British emergency medicine. Not as a bridge, not as a fallback, but as the thing that actually works when a patient is arresting on Ward 4. The ban did not retire the bleep. The bleep outlived the ban, because in the part of the building where seconds matter and Wi-Fi does not, a 1980s broadcast standard is still the most reliable thing in the room.


Two Architects, One Dress

Gianfranco Ferré had been at Dior just over two years when he sat down to plan the spring-summer 1992 haute couture collection. The Ascot–Cecil Beaton debut was already behind him, the press had stopped openly questioning whether Bernard Arnault should have hired an Italian to run the most French of houses, and the third Dior atelier under his direction was settling into a routine. The collection he produced that January was titled Palladio, and it was the moment his architectural training stopped being a biographical footnote and became, briefly, the actual subject of the work.

Ferré had graduated in architecture from the Politecnico di Milano in 1969. He never practised. He went straight into accessories, then raincoats, and by 1978 had his own womenswear line in Milan. The "architect of fashion" tag followed him for the rest of his career, applied so casually by the press that it had stopped meaning very much. Palladio was the collection where he made the press take the word literally.

The reference was Andrea Palladio, the sixteenth-century Veneto architect whose villas around Vicenza turned classical proportion into a vernacular grammar that English country houses, Thomas Jefferson's Monticello, and most of nineteenth-century banking architecture spent the next four hundred years copying. Palladio's treatise, the Quattro Libri, codified column, pediment, and bay into ratios anyone could follow. Ferré read it the way an architect would: not as a style to imitate but as a system of proportional decisions you could apply to a different material.

The centrepiece, now catalogued in the Gianfranco Ferré Research Center at the Politecnico di Milano, was a sculptural off-white dress with an enormous wild-silk collar treated as a pediment. The dickey did the work of a building's facade. It announced the order, set the proportions for the rest of the body, and held the geometry in tension with the silk underneath. The silk did the opposite job, falling away in a quiet diagonal across the back, all surface and flow. Pediment above, drapery beneath. A house with weather inside it.

This was Ferré's actual method. His clothes were built around the white shirt the way a Palladian villa was built around its portico. Take the structural element seriously, decide its proportions before anything else, and the rest of the garment follows. He repeated this for the next decade in his own line in Milan, and the Phoenix Art Museum eventually built a whole exhibition around twenty-seven of his white shirts. Palladio was the moment he showed Paris the operation that produced them.

What's worth noticing is how unfashionable a Renaissance architect was as a couture reference in January 1992. The prevailing wind was already toward Helmut Lang's reductive tailoring, toward the Antwerp graduates' deconstruction, toward Miuccia Prada's nylon. Ferré went toward classical proportion. He kept doing it for another four years, through Floridante and the Extrême collection in 1995, through to the Passion Indienne collection in July 1996 that turned out to be his last Dior show. The Palladio dress sits at the start of that arc, the moment his system was clearest, before the colour and the ornament and the eventual exit.

It is still photographed often. The collar reads in any light.


Past Tense, By Friday

The kiosk sat in the middle of the Tesco carpark like a planning mistake. Yellow signage, sloped roof, room enough for two staff and a counter. You handed over a film canister, took a paper envelope with a pre-printed number on it, and walked off to do the weekly shop. By the time you got back to the car, your photographs already belonged to a future you weren't part of yet.

Snappy Snaps opened its first store in 1983. Don Kennedy and Tim MacAndrews put a one-hour minilab inside a small shopfront and built a franchise on it. In August 1986 SupaSnaps was running a test market in sixty-one of its shops across Scotland and the North-East for a new service called PhotoVideo, which transferred your prints onto VHS tape. KLICK Photopoint shows up in the Cambridge Yellow Pages from 1995 through 1998, then disappears. The American precursor, Fotomat, peaked at over four thousand kiosks around 1980, distinctive pyramid roofs in gold paint, and was already in decline by the time most British versions launched. Minilab technology had collapsed the wait from a fortnight to an hour, which everyone said was the future, and which mostly meant the kiosk was no longer the kiosk for very long.

What I remember is the envelope. Manilla paper, bordered red and yellow, your name biroed onto a perforated stub. The negatives came back in a strip protector you were warned not to touch. Twenty-four exposures, sometimes thirty-six. Six or seven of them blurred. Two with a thumb in the corner. One where you had closed your eyes. The processing was a kind of judgement.

Before the minilab arrived in the carpark, the wait was longer. A week, sometimes two, and during that week the photographs lived in some intermediate state nobody could see. The trip itself was already in past tense by the time the prints came home. You held a Saturday in your hand on a Friday two weekends later. The smile in the print was a smile you no longer remembered making.

This, I think, is what the kiosk was actually for. Not the prints, but the wait. The deliberate space between making the picture and seeing it, into which other things could move.

The phone, now, gives you the photograph before you have finished taking it. The image arrives faster than the moment can settle. You see yourself reacting, and you correct, and the version that survives is the version that has already been edited by the act of seeing. There is no week in which the picture quietly becomes a different thing. There is no Friday on which you discover what last Saturday looked like. The intermediate state has been deleted.

The Rutherglen branch is long gone. Most of the British high-street kiosks are gone. The pyramid huts in American carparks have mostly been turned into drive-through coffee stalls. The infrastructure of the wait, all of it, has been recommissioned as the infrastructure of the immediate. Coffee instead of negatives. A six-minute queue instead of a six-day one.

Some of the kiosks themselves are still there, though, sitting empty between the parking bays. A small flat-roofed cabin, just big enough for two people and a counter, a window where the till used to be. They look exactly like what they are, which is the place you used to go to find out who you had been.


Dividing by T

Almost every chat API exposes a slider called temperature. The default is usually 1.0, the floor is 0.0, the ceiling is 2.0, and the documentation says something vague about creativity. Most people drag it around and watch what happens. Almost nobody explains what the number is actually doing, which is unfortunate, because it is doing exactly one thing, and the thing is small enough to fit on a postcard.

Here is the postcard. When the model finishes a forward pass, it emits a vector of raw scores called logits, one per token in the vocabulary. Logits are not probabilities. They can be negative, they can be huge, and they do not sum to anything in particular. To turn them into probabilities you run them through softmax, which exponentiates each one and normalises by the total. That is the default. Temperature inserts itself one step earlier. It divides every logit by T before the exponential. So the formula becomes P(x_i) = exp(l_i / T) over the sum of exp(l_j / T) for the whole vocabulary. That is the whole intervention. One scalar, applied uniformly, before softmax.

What this does to the distribution is the only thing worth understanding. Dividing by a small T (say 0.2) makes the gaps between logits five times bigger. After softmax, the already-high-scoring token absorbs almost all of the probability mass and everything else goes to a rounding error. The model becomes boring and consistent. Dividing by a large T (say 1.5) does the opposite: it squashes the gaps, the exponential can no longer amplify the leader, and the unlikely tokens get a real chance. The model becomes noisier and less self-consistent. T=1 is the identity, the original distribution, no scaling at all. T=0 is a special case (the formula would divide by zero), so the major APIs quietly swap in greedy decoding instead: always take the top-ranked token.
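The whole postcard fits in a few lines of standard-library Python. The three logits are made up for illustration; the helper name is mine, not any API's.

```python
import math

def softmax_with_temperature(logits, t):
    """Divide every logit by t, then apply the usual softmax."""
    scaled = [l / t for l in logits]
    m = max(scaled)  # subtract the max before exp for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three candidate tokens with invented raw logits.
logits = [2.0, 1.0, 0.5]

cold = softmax_with_temperature(logits, 0.2)  # gaps x5: the leader dominates
base = softmax_with_temperature(logits, 1.0)  # identity: the original distribution
hot  = softmax_with_temperature(logits, 1.5)  # gaps shrink: the tail gets a chance

for name, dist in [("T=0.2", cold), ("T=1.0", base), ("T=1.5", hot)]:
    print(name, [round(p, 3) for p in dist])
```

At T=0.2 the leading token takes essentially all of the mass; at T=1.5 the distribution flattens toward the tail. That flattening is the entire "creativity" effect.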

There is a tidy worked example in a MachineLearningMastery walk-through where the prompt is "Today's weather is so" and the top candidate is "nice". At T=0.1, T=0.5, T=1.0 the model picks "nice" every time. At T=3.0 it drifts to "wonderful". At T=10 it lands on "delicious" and the sentence stops meaning anything. The model is not getting more creative in any meaningful sense. It is getting noisier. Some of the noise looks like creativity because human readers reach for the closest interpretation, the same way we do when staring at a Rorschach blot.

This is also where the relationship to hallucination lives. Higher temperature does not invent facts the model didn't know. It promotes lower-probability continuations the model had already considered and ranked low. Sometimes the second-best guess is genuinely useful and sometimes it is the chemistry that launches a confident, fluent, completely wrong sentence. The underlying problem (no internal verification step) is the same either way; temperature just changes how often the model gets to roll for it.

The practical upshot is duller than the slider's mystique. For factual extraction, code generation, structured output: keep T low or zero, and rely on top-k or top-p to manage the long tail if you need diversity at all. For brainstorming, fiction, playful prose: edge it up to 0.8 or 1.0, but expect to throw away more output. Above 1.5 the model is mostly rolling dice weighted by a distribution it no longer respects, and the returns are sharply diminishing.

The interesting thing about temperature is what it isn't. It is not a personality knob, not a politeness dial, not a "make the answer better" control. It is a single scalar that reshapes how much weight the model gives to its own confidence before sampling. Everything that feels like vibes (creativity, caution, weirdness) is a downstream artefact of that one division.


Generally, AI Companies Distill

Elon Musk took the stand in Oakland on Thursday and was asked, under oath, whether xAI had distilled OpenAI's models to train Grok. His first move was to widen the question. "Generally all the AI companies" do this, he said. Pressed for a yes, he settled on "partly." Then he framed it as standard practice, the kind of thing you do to validate your own system.

That answer matters because of who has been making the loudest noise in the other direction. Anthropic spent the better part of this year naming DeepSeek, Moonshot, and MiniMax for distilling its models. OpenAI has been pursuing the same thread on DeepSeek. Google has called the practice intellectual property theft and built mitigations into its API tier. The trade press has carried the story almost entirely as a US-versus-China problem, with the labs as wronged parties and the offshore copyists as the violators.

The thing the Verge, TechCrunch, and Gizmodo all surfaced from the courtroom is that the labs themselves do not actually believe that frame. The internal assumption, the one tech workers have quietly held for two years, is that everyone with a serious model distills everyone else's. The Frontier Model Forum's distillation working group is, on paper, defensive. In practice the same companies sitting in that room have engineers on the other side of the firewall running the queries. Musk just said the quiet part on a witness stand because he had to.

The legal landscape under all this is thinner than the rhetoric suggests. A Fenwick analysis from earlier this year laid out the core picture: copyright is unlikely to apply, because the teacher's weights are not actually copied and model outputs sit outside the usual zone of protected expression. After Van Buren, the Computer Fraud and Abuse Act also struggles to bite, since the user was initially authorised to query the API. What is left is a contractual breach. Industry write-ups note that enforcement to date has consisted mostly of cease-and-desist letters and account terminations rather than litigation.

So when OpenAI sends its strongly worded letter about DeepSeek, or Anthropic publishes its blog post about MiniMax, the implicit threat is mostly atmospheric. Everyone in the room knows the case law would not survive contact with a federal docket, and everyone in the room also knows that filing the suit would mean discovery, which would mean every internal Slack channel about the rival lab's outputs becoming evidence. Mutual exposure is the actual restraint, not the contract.

Musk's "partly" is interesting partly because it is honest and partly because it punctures his own legal strategy. He is suing OpenAI for abandoning a founding mission to keep AI safe and nonprofit. The same week he is making that argument, he is admitting that his other AI company has been training on the defendant's outputs. The judge, Yvonne Gonzalez Rogers, told him on Thursday to stop with the Terminator references. The distillation question got a longer answer than the apocalypse question did.

The interesting thing is what happens to the rhetoric now. The "China is distilling our models" complaint has been a useful narrative for the labs because it justified policy asks, including export-control extensions and government enforcement proposals. It is harder to sustain that frame when an OpenAI co-founder confirms, on the record, that domestic distillation is the industry norm. Either the practice is genuinely a problem worth a federal response, in which case xAI is on the hook alongside DeepSeek, or it isn't, in which case the China framing was always partly about lobbying and partly about something else, and the word that keeps doing the work in both readings is the same one Musk reached for on the stand.


Six Seconds of Negotiation

On 30 September 2025, AOL switched off the dial-up service it had run for thirty-four years. The shutdown was barely news. Most people assumed AOL had stopped offering dial-up sometime around Friends going off the air. The handshake sound, though, did not go quiet with it. It already lived somewhere else.

The sound itself is a brief negotiation between two pieces of hardware deciding, in audible form, how fast they can talk to each other. Dial tone, the digits in DTMF, then a back-and-forth of carrier tones, capability advertisements, and an echo cancellation phase. The Finnish engineer Oona Räisänen mapped the whole thing into a colour-coded waveform in 2012, labelling each tone with the V-series ITU recommendation it belonged to: V.8, V.8bis, V.34, the answer-tone reversal that still gives me a small shiver when I hear it cold.
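The first audible stage, the digits in DTMF, is simple enough to synthesise directly. Each key is a fixed pair of sine tones, one row frequency and one column frequency from the standard table, played simultaneously. The duration and sample rate below are arbitrary illustrative choices, and this sketch covers only the twelve-key subset of the table.

```python
import math

# Standard DTMF row/column frequency pairs (Hz) for the 4x3 keypad.
DTMF = {
    "1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
    "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
    "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
    "*": (941, 1209), "0": (941, 1336), "#": (941, 1477),
}

def dtmf_samples(digit, duration=0.08, rate=8000):
    """Synthesise one digit as the sum of its two sine tones."""
    low, high = DTMF[digit]
    n = int(duration * rate)
    return [
        0.5 * math.sin(2 * math.pi * low * i / rate)
        + 0.5 * math.sin(2 * math.pi * high * i / rate)
        for i in range(n)
    ]

samples = dtmf_samples("5")
print(len(samples))
```

The pair-of-tones design is why a touch-tone keypad survives noise on a voice line: a single frequency can be faked by speech, but the simultaneous row-plus-column pair almost never is.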

What Räisänen made visible, Alexis Madrigal had written about a few months earlier in The Atlantic, in a piece I think about roughly once a year. His argument was that the modem sound was not a side-effect. It was the data being transferred. The two machines were already exchanging information, and the exchange happened to be loud enough for the room to hear. Anyone who heard it was eavesdropping on a private negotiation. Most of us did not know that at the time.

The sound persists now in places that have nothing to do with networking. It is a stock SFX cue for "computer" in television documentaries about the 1990s. It plays under voiceovers in broadband adverts that want to flatter the viewer for having upgraded. It is a ringtone. It is a meme format. It opens You've Got Mail, where Tom Hanks dialling AOL is the inciting incident of the entire romantic plot. The film is itself now twenty-eight years old, and the sound it captured was, by then, already a few years from obsolescence.

There is something specific about why this sound, of all the discontinued sounds of the late twentieth century, retained its legibility. Most obsolete machinery dies twice: once when nobody makes it, and again when nobody can identify a recording of it. The shipping forecast survives because it is still broadcast. The modem handshake survives because it was indexical. It was the exact sound of a binary state transition, offline to online, a threshold crossing that mattered enough that millions of people learned to recognise its rhythm without ever being taught.

I think this is the part that is genuinely hauntological. The sound is not nostalgic for a faster connection. It is nostalgic for a connection you had to wait for, and could fail to make, and could lose if your sister picked up the phone in the hallway. The waiting was part of the meaning. Broadband solved the waiting and threw the meaning out with it.

Packets to a Silent Modem makes the point in fictional form: the modem as a doorway whose absence reorganises everything around it. AOL closing the line in September is the inverse, the doorway shut on a building nobody had been inside in years. The sound walked out years earlier. It is still in circulation. It just has nowhere left to dial.


Two Years per Scarf

A Hermès carré that landed in shops in spring 1992 was first sketched, in life size, on a 90 by 90 centimetre card, sometime in the autumn of 1990. That gap is the part of the object nobody sees. The square of silk you can drape over a handbag handle has already been waiting eighteen months by the time it reaches the counter. Half its life is gone before anyone has touched it.

Robert Dumas drew the first one in 1937. The design was called Jeu des Omnibus et Dames Blanches, and it was lifted from an antique parlour game in the Hermès family collection, with the horse-drawn omnibuses of nineteenth-century Paris turning back into print. By the early 1990s the house had produced hundreds of follow-on designs, each obeying the same brief: ninety centimetres on a side, hand-rolled hem, somewhere between fifteen and forty colours, a story you can read while you fold it.

The slow part is the engraving. An artist, often a freelancer working from a kitchen table somewhere in France, hands over a finished painting on card. Hermès engravers in Lyon then translate it into films, one transparent sheet per colour, traced by hand under a light box. A relatively simple thirty-colour design needs four hundred to six hundred hours of this. A complicated one can demand two thousand. Then those films become silk-screens, one per colour, and the scarf is printed on a hundred-metre table, lightest ink first, darkest ink last. Wash, set, iron, cut. The hem alone is forty minutes of stitching by one woman with one needle, and there is no machine that can do it without leaving the kind of edge a Hermès customer would notice.

Brazilian silk, oddly. The yarn comes from mulberry moth cocoons on farms the house keeps in Brazil, and the weaving in Lyon takes about three months on its own. A single 90cm scarf weighs sixty-five grams and consumes the silk of around 250 cocoons. The fineness is graded 6A, which means almost nothing to a customer and everything to a colourist trying to land thirty separate inks on a substrate that has to stay flat, take dye cleanly, and survive being knotted at the throat for fifty years.

What I find interesting about the early-90s carré program is that it ran on a clock the rest of fashion had already abandoned. Ready-to-wear in 1992 was operating on a six-month cycle and visibly straining. Magazines published trend reports in February about what people would supposedly want by April. The silk-scarf desk at Hermès was working two collections per year of roughly twelve designs each, every one of them already two years deep in production by the time the season turned. The decision about what your spring 1992 carré looked like was effectively made in the autumn of 1990, and nothing about Madonna's Blond Ambition tour, or the early signs of grunge in Seattle, or the Gulf War ending, or any of the other things that supposedly steered taste that year, could touch it.

Which is one of the things a Birkin shares with a carré, come to think of it. Both of them are objects whose internal time runs slower than the time around them. You cannot rush either, and that turns out to be most of the value.
