Plutonic Rainbows

Twice Across From Musk

The lawyer leading Sam Altman's defence in the Oakland federal courtroom this past week is the same one who, four years ago, made Elon Musk buy Twitter. William Savitt, a partner at Wachtell, Lipton, Rosen & Katz, represented Twitter in 2022 when the company sued to force Musk to go through with the forty-four billion dollar purchase he had spent the spring trying to wriggle out of. The case never got to a verdict; Musk capitulated shortly before trial, signed the cheque, and renamed the company. Savitt won by closing the exits.

He is now back across the courtroom from the same person, this time defending OpenAI and Altman against Musk's claim that the 2019 conversion to a for-profit structure betrayed the company's founding charter. According to Business Insider's profile this weekend, Altman picked Savitt specifically because of the prior result. The lore travels with him. The reasoning, told to a journalist by people who work with him, is that Musk responds poorly to opposing counsel who refuse to be impressed.

This is one of the small, structural facts about American high-stakes litigation that does not get talked about much. The top corporate trial bar is small. The same fifteen or twenty people show up across most of the cases that matter, and they remember each other. A defendant who has been across from a particular litigator before knows what that lawyer will do on cross, knows which exhibits they will dwell on, knows the register of voice they use when they are about to spring something. Savitt has had Musk's deposition strategy in his head since 2022. Most of the work was done before he walked into the Oakland building.

It also tells you something about how Altman thinks about risk. He could have hired any of the white-shoe firms in the country. The choice of the lawyer who had specifically beaten Musk in a high-profile commercial case was a tell. It said: this is going to be conducted as a contest of personalities, and we are bringing the personality who won last time. The trial coverage this week, which has spent as much time on Musk's demeanour as on the underlying corporate-governance question, suggests Altman read the room correctly. The judge, Yvonne Gonzalez Rogers, has already told Musk to stop making things worse outside the courtroom. Savitt has barely had to push.

What I find genuinely interesting is what this implies about the shape of the AI industry's coming decade in the courts. There will be many more of these cases. The IP questions around training data, the contractual questions around model distillation that came up on the stand last week, the corporate-governance questions about who actually owns a foundation-model lab when its mission and its valuation pull in opposite directions: all of these will be litigated by the same small bar, in front of the same handful of judges who handle complex commercial disputes in the relevant districts. The matchups will repeat. Reputations will compound. The lawyers who win one of these early cases will be the ones every defendant calls for the next one, and the lawyers on the other side will be the ones the plaintiffs' bar reaches for.

The trial is not over. Brockman and Altman himself are still to testify. The verdict could go either way. But one piece of the dynamic is already settled, which is that the most litigious billionaire in American technology has been put back across the table from the lawyer who got him to write a forty-four billion dollar cheque he did not want to write. That is a small, lawyerly victory in itself, and it happened before opening arguments.

Seven Vendors, One Holdout

On Friday the Department of Defense announced agreements with seven companies to run AI on its highest-classification networks, the IL6 and IL7 tiers where the actual operational data lives. The list is the predictable one and the surprising one at the same time: SpaceX, OpenAI, Google, Microsoft, Nvidia, Amazon Web Services, and the comparatively new Reflection. Notably absent, and absent on purpose, is Anthropic. The company that has spent two years building its brand around frontier-safety commitments just got formally cut out of the most lucrative classified contract round in the sector's short history.

The exclusion is not procedural. Defense Secretary Pete Hegseth has been trying for months to label Anthropic a "supply chain risk," a designation usually reserved for foreign-adversary sabotage threats, after Anthropic insisted on contract language that would prohibit Claude from being used in fully autonomous weapons or for surveillance of Americans. Hegseth's position, as relayed through the Pentagon's CTO Emil Michael, is that vendors must allow any use the department deems lawful, full stop. That sentence does a lot of work. "Lawful" inside an IL7 environment is whatever the executive branch and its lawyers say it is on a given Tuesday, and the whole point of Anthropic's clause was to have something more durable than a memo.

OpenAI got there first. Its March deal was, in the words of people involved, structured to "replace Anthropic with ChatGPT in classified environments." That framing is unusual to see in print. Most procurement language is bloodless. This one names a loser. Read against the earlier blacklist move and the West Wing meeting that followed it, the trajectory is now legible: the administration tested whether it could pressure Anthropic into dropping the clause, found that it couldn't, and built the classified stack around the six companies that didn't push back. SpaceX and Reflection are the bonus picks; the rest of the list is just the hyperscalers.

Inside Google the deal is not landing quietly. Six hundred employees signed a letter to Sundar Pichai before the announcement asking him to keep Gemini out of classified work, and after the news broke a smaller group floated a strike before backing off over retaliation fears. DeepMind staff in particular have been here before: the Pentagon-autonomy budget piece laid out the $13.4 billion that's now flowing toward exactly the applications they were promised in 2018 their work would never support. The promise expired. The compute did not.

What's striking is how cleanly the disagreement has been allowed to surface. Anthropic could have signed and quietly carved exceptions into the statement of work, the way large vendors usually do. They didn't. They sued the administration, they publicly held the line on autonomous-weapons use, and now they're watching seven competitors split a contract pool they are not allowed to bid into. Whether that turns out to be principled or expensive is a question for the next funding round, but it is the first time in this cycle a frontier lab has paid a real commercial cost for a stated safety position rather than just gesturing at one.

The cost is the news. The position was already on the record.

Square Outlived the Drive

There is a recurring joke on social media, the one where a Gen Z kid sees a real 3.5-inch floppy and laughs that someone has 3D-printed the file-save icon. The joke works because the disk and the icon have changed places. The icon is the original now. The plastic square is the artefact, an object that has been back-projected from a glyph most users have only ever seen on a toolbar.

Sony introduced the 3.5-inch floppy in 1981. By the mid-80s it was the dominant portable storage medium for personal computers, which is why the first generation of graphical interface designers reached for it when they needed an image to mean "save." The floppy was the thing in your hand that held the work. You ejected it, you carried it, you put it in a sleeve. The metaphor was direct. The icon was a picture of a real object doing the job the action described.

Then the object stopped doing the job. Through the late 1990s and into the early 2000s the floppy was gradually phased out. USB drives ate the consumer market, optical media handled bulk, and network shares quietly absorbed the rest. The icon stayed exactly where it was.

The Nielsen Norman Group has been running studies on this for years, and the finding holds up across age groups: people still read the floppy as save, even people who have never touched one. The icon shifted, in their terminology, from a resemblance icon (a picture of a thing you use) to a reference icon (a shape that points at a concept). It works because it has been used consistently for forty years, not because anyone recognises the object. The training data is the icon itself.

This is the part that gets interesting. A sign has outlived its referent. The picture is now older than the thing it pictures, in any practical sense, because the thing it pictures is in landfill and the picture is on every desktop. There is no analogy for this in older media. A book illustration of a horse does not stop depicting a horse just because cars arrived. But the floppy icon is no longer a picture of anything. It is a private language between the interface and the user, a sign whose referent has been quietly evacuated.

Microsoft noticed. Windows 11 ships with a save icon that has been abstracted, just enough floppy left to keep the muscle memory, not enough to look like a piece of dead hardware. The Simple Thread piece on this called it anachromorphism, the icon that has detached from its original referent and now simply means itself. Apple has done the same thing, the rounded square, the diagonal line that used to be the metal shutter, all softened into a visual logo that no longer pretends to be a disk.

You could read this as design cowardice, refusing to redesign something that does not work any more. I think it is closer to the inverse. The floppy icon is one of the few stable shapes in software, a fossil that refuses to disintegrate, and replacing it would be the cowardly move. It earned its place by working for a generation, then earned a second tenure by being the only shape that everyone, regardless of age, can read at twelve pixels square.

What the icon really preserves is not the floppy. It is the act of saving as a deliberate, separate gesture, distinct from the work itself. Modern software has largely abolished that gesture. Your phone autosaves. Google Docs autosaves. The cloud assumes it. The floppy icon points at a habit of mind from a period when saving was a thing you remembered to do, and could forget, and could lose work because you forgot. The plastic square is gone. The fear of losing work is still there, just barely, kept alive by a glyph that nobody under twenty has ever held.

Just Like a Christmas Tree

Karl Lagerfeld showed Chanel's Fall 1991 ready-to-wear in Paris in March, and the line he gave the wire reporters that week was that the models were dressed "just like a Christmas tree." He meant the chains. Layered gold chains, stacked CC pendants, cuffs, hoops, dog collars, the camellia turned into hardware hanging off a mesh body stocking. Vogue called it taking the house "to the edge of an abyss of kitsch and funk." Reuters described models tearing off black vinyl trench coats to expose sheer mesh catsuits, to a Madonna soundtrack. The collection has been called Lagerfeld's hip-hop show ever since.

The framing is half right. There is hip-hop in the styling, the CNN retrospective on the show makes the case at length, and the layered gold and baseball caps and oversized chains were unmistakably borrowed from a New York street vocabulary that European luxury had spent the previous decade not noticing. But the Madonna soundtrack and the Boy Toy plaques and the conical-bra-adjacent silhouettes point somewhere else too. This was a "Like a Virgin" turn as much as a hip-hop one, two American provocations folded into the same Cambon set.

What made it work, and what makes it still readable now, was that Lagerfeld did not abandon the Chanel codes in order to do this. He doubled them. Quilted leather is a Chanel signature; the Fall 1991 show paired quilted biker jackets with tulle ball skirts. Tweed is a Chanel signature; Karen Mulder walked in a frayed denim mini and a curve-hugging pink tweed jacket. The chain belt is a Chanel signature; here it became a gladiator belt, gold and oversized, the CHANEL wordmark reading like a nameplate. None of these were new vocabulary. They were existing vocabulary turned up loud enough to refuse the dignified register the house had been trained into.

The cast did the rest. Linda Evangelista, Karen Mulder, Helena Christensen, Kristen McMenamy, the same names that were carrying every other March 1991 Paris show. The supermodel era is sometimes flattened into a single mood, but the Chanel show that month sat in deliberate contrast to what the rest of Paris was doing. Valentino's couture in the same window was about gravitas. Lagerfeld answered with chains over tweed and a Madonna track loud enough to make the front row flinch.

It is easy to read this now as cynical, a luxury house mining a Black American street vocabulary for runway shock value with no intention of returning the favour. That reading is also half right. The same Women's Wear Daily issue that month put a "Rap Attack" cover together, and the racial coding around the trend in mainstream fashion press was uncomfortable then and reads worse now. Lagerfeld was doing what Lagerfeld always did, which was synthesise on top of whatever signal was in the air, with limited interest in the source. The show's afterlife in hip-hop itself, which CNN traces through Dapper Dan and onward, ended up being more generous to the house than the house was to its inputs.

Still, the show stuck. It stuck because it was built out of Chanel codes rather than against them, and because Lagerfeld understood that a luxury house's signatures only stay alive if they are allowed to behave badly in public every few years. The Christmas tree line was throwaway, but it was honest. He had stacked the codes until they sparkled and clinked and looked slightly absurd, and the absurdity is what kept them legible.

Five Will Never Return Home

Apaches was shot fast on a Home Counties farm in February 1977, twenty-six minutes long, six children from a Maidenhead junior school cast as themselves. The Health and Safety Executive had counted around thirty children killed in farm accidents the year before, and commissioned the Central Office of Information to do something about it. What they got back was a piece of folk horror with the BFI's catalogue number attached.

John Mackenzie directed it, three years before The Long Good Friday. Neville Smith wrote it. Phil Méheux shot it. The production values were below Poverty Row, the slurry pit was real, and the title sequence used the Playbill typeface from Stagecoach. Six children play cowboys and Indians on a working farm. Five never get home.

The deaths are not edited around. Kim falls from a tractor and is run over. Tom slips into a slurry pit, which is liquefied cow excrement, and goes under. Sharon drinks chemicals from an unmarked bottle while pretending it is alcohol, and dies in the night, screaming, off-camera, while her parents stand in the bedroom doorway. Robert is crushed by a gate that Michael has knocked over. Danny, the narrator, crashes a tractor into a ditch and goes through the windscreen. Michael, the cousin, the one who knocked the gate over, is the only child left alive at the end.

Danny narrates the film after he is dead. This is the part that does the damage. He is calm, almost pleased, walking us through what we have just watched, while his family arrives for what he calls a party. We see the table laid for the wake, the sandwiches under cling film, the relatives in their dark clothes. The film does not break the spell to tell you Danny is wrong about the party. It lets him keep talking. The register is wrong in a way that only an English public information film could get wrong on purpose.

The COI made dozens of these. Most have curdled into nostalgia, a YouTube reel of teatime menaces. Apaches has not. It still plays as something a state did to its children with a clear budget line and a director's chair and a clapperboard. It was shown in schools all over Britain, broadcast by ITV companies on slow Sunday afternoons, and exported to Canada, Australia, and the United States. Prints kept being struck on 16mm long after every other PIF had been retired to videotape.

The strangeness is not that it is grim. Plenty of films are grim. The strangeness is the genre confusion, a western-themed death drama wearing the costume of a teaching aid. The form is didactic. The content is folk horror. The narrator is dead. The accidents are the point. There is no third-act reveal where the children turn out to have been spared, no closing title card that softens the lesson. The lesson is the deaths, and the deaths are filmed with the patience of someone who already knows how they end.

If you grew up rural and were shown this in a darkened school hall, you remember it. If you grew up urban and never saw it, you might wonder how a country could decide that scaring children to death was the responsible choice. Both reactions are correct.

Partly True, Says Musk

On the stand in Oakland federal court last week, Elon Musk conceded, under cross-examination by OpenAI's lead counsel William Savitt, that it was "partly" true xAI had used some of OpenAI's technology to train Grok through distillation. He then softened the concession into a shrug. "It is standard practice to use other AIs to validate your AI," he said, as if the distinction between validating a model and copying its behaviour were self-evident, and as if the room had not just heard him spend three days arguing that OpenAI was a stolen charity owed him roughly thirty-eight million dollars in moral damages plus, by his lawyers' arithmetic, a hundred and thirty-four billion in the for-profit value the conversion produced.

The contradiction did not seem to bother him. It bothered Judge Yvonne Gonzalez Rogers, who opened the trial on Tuesday by asking Musk how the court could get its work done "without you making things worse outside the courtroom," and it bothered, in a quieter way, the cross-examining attorney, who walked Musk through his own xAI valuation (two hundred and fifty billion at the SpaceX merger in February) and his own boasts about Grok's capabilities. The picture that emerged was not the picture Musk's opening narrative had drawn. He had cast himself as a founder defending a charity from corporate capture. The cross painted him as a competitor, valued in the hundreds of billions, who had built his competing model in part on the very outputs he says were stolen from a public mission.

I wrote yesterday that, generally, AI companies distill, because the practice is now baseline industry behaviour rather than a deviation from it. The major labs all do versions of it, sometimes openly, more often through quiet evaluation pipelines that nobody itemises in a press release. So Musk's admission, on its technical merits, is not a scandal. It is a statement of how the industry actually works.
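The distinction Musk leaned on from the stand, validating a model versus copying its behaviour, is mechanically simple. Validation compares two models' answers; distillation trains a student to reproduce a teacher's output distribution. A toy sketch, with function names of my own invention and a textbook temperature-softened KL loss, nothing drawn from any lab's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution."""
    z = [x / temperature for x in logits]
    m = max(z)                             # subtract max for numerical stability
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    Minimising this over a corpus trains the student to copy the teacher."""
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))

def validates(teacher_logits, student_logits):
    """'Validation' only asks whether the two models agree on the answer."""
    argmax = lambda xs: max(range(len(xs)), key=xs.__getitem__)
    return argmax(teacher_logits) == argmax(student_logits)

teacher = [2.0, 0.5, -1.0]   # teacher strongly prefers answer 0
student = [1.0, 0.9, -0.5]   # student agrees on the answer, less sharply
print(validates(teacher, student))   # → True
```

The two operations touch the same data, which is exactly why the line between them is easy to argue about in court: the difference is what you do with the comparison, not what you compare.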

The scandal is the framing. To sue OpenAI for one hundred and thirty-four billion dollars on the theory that the company betrayed its founding promise to benefit humanity, while simultaneously running a competitor that benefited, in part, from OpenAI's outputs, is to argue both sides of the same case at once. The mission was sacred enough to litigate. The model weights, or their behavioural shadow, were available enough to use. Both can be true. Neither sits well with the other.

Whether the jury cares is a different question. The CNBC summary of the first week noted that Altman, Satya Nadella and Greg Brockman are still to testify, and that the outcome could threaten OpenAI's anticipated IPO. A finding for Musk would not just unwind the for-profit conversion; it would establish a kind of moral lien on every dollar the company has raised since 2019. A finding for OpenAI would let Altman walk into the IPO roadshow having beaten the most litigious billionaire in technology in open court, on his own terms.

What I keep getting stuck on is the smaller, weirder fact at the centre of all this. The man suing to recover a charity used the charity's outputs to train his rival. He did not deny it. He did not apologise for it. He called it standard practice, which it is. The case will turn on whether that answer is enough.

No Threshold to Call the Police

Seven families filed lawsuits against OpenAI in San Francisco last Wednesday, alleging that ChatGPT and its CEO bear direct responsibility for the February shooting in Tumbler Ridge, British Columbia, which killed eight people including six children. The complaints argue something narrower and stranger than the headlines suggest. They argue that OpenAI's own safety staff, in June 2025, flagged the shooter's account for "gun violence activity and planning", urged senior leadership to call Canadian police, and were overruled. The account was deactivated instead. The shooter opened a second one and went on talking to the model for another seven months.

That is the procedural fact at the centre of the cases. The emotional fact is the letter Altman published the Friday before, on the local news site Tumbler RidgeLines, saying he was "deeply sorry that we did not alert law enforcement to the account that was banned in June." David Eby, the BC premier, posted the letter to social media with the comment that the apology was "necessary, and yet grossly insufficient." Cia Edmonds, whose twelve-year-old daughter remains in hospital, said the apology read like it had been written by ChatGPT.

The question the apology accidentally raises is what it concedes. If "deeply sorry that we did not alert law enforcement" is the right thing to say in May 2026, then there is some implied threshold above which the company believes it should have called the Mounties, and below which it should not. That threshold has never been published. It is not in the usage policy, not in the model spec, not in any white paper from the Frontier Model Forum. The industry has spent the past two years building elaborate public language about safety teams, evaluation suites, and red-teaming, but no part of that vocabulary describes a duty to report a specific user to a specific police force in a specific country.

There is a reason for the silence. A formal threshold creates a formal liability. Once an AI lab publishes the rule it uses to decide when to call the police, it can be sued for failing to follow that rule, and it can be sued for the rule being too narrow. So the practice has been to have the rule operationally, inside the company, while not committing to it externally. The internal Slack messages cited in the complaints suggest exactly that arrangement: a safety team with a working notion of "this one warrants a call", senior management with a competing notion of "the privacy and PR cost is too high", and a unilateral deactivation as the compromise that satisfies neither.

What makes the gap concrete is the second account. Treating deactivation as the response presumes that an account is identity-bearing in a way it isn't. If the threat lives in a person and the person can sign up again with a new email in ninety seconds, deactivation is a containment theatre directed at auditors rather than a containment measure directed at risk. The safety staff knew this. The lawsuits' theory of the case is that management knew it too.

The federal politics around this are already moving in the opposite direction. OpenAI is, as Wired reported earlier this month, backing legislation in Illinois that would shield AI companies from liability in incidents where a hundred or more people are killed or injured. There is a Florida criminal investigation in progress over a separate ChatGPT-linked shooting at Florida State University last year. The same week the Tumbler Ridge complaints landed, the Frontier Model Forum was quietly running a working group on distillation rather than a working group on mandatory reporting.

I keep thinking about the second account. Somebody at OpenAI opened a ticket in June 2025 about a person whose conversations they had read, whose plans they had inferred, whose name they might or might not have known, and decided that the right action was to revoke a token and not pick up a phone. Eight months later, six children were dead. The Illinois bill would make sure that the next time, in some sense that the lawyers will argue about, the phone does not need to be picked up either.

Capita Holds the Frequency

About a hundred and thirty thousand pagers are still in use across the NHS, which works out, on the most-quoted government count, to roughly ten per cent of all the pagers left running anywhere on earth. The hospital corridor in 2026 is one of the few places in the country where you can still hear a one-way radio device beep to summon a human being. Most of the rest of British public life has moved on. The cardiac arrest team has not.

The protocol underneath the bleep is POCSAG, the Post Office Code Standardisation Advisory Group's "Radiopaging Code No. 1", adopted in 1981 out of a British Post Office working group that had been nailing down a format for radio paging. It is a low-bitrate, unencrypted, one-way broadcast standard. A central transmitter sends short numeric or alphanumeric messages over a narrow VHF channel; every receiver in range listens passively for its own seven-digit capcode, ignores the rest, and beeps when its number comes up. The architecture is closer to a radio station that talks to one listener at a time than to anything you would call a network.

What makes it still useful is exactly what makes it sound obsolete. The signal travels at frequencies that walk through the thickened walls of a hospital, including the lead-lined ones around radiology and the awkward concrete around the basement plant rooms. Mobile coverage in those parts of an estate is often nominal at best. A POCSAG transmitter sitting on the roof reaches the whole footprint reliably, including the lift shafts and the bits of the Edwardian wing nobody has rewired since the eighties. Battery life on a receiver runs to weeks. There is no app to update, no SIM to provision, no cellular handover to fail at the moment of a code blue.
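The addressing scheme is simple enough to sketch. A pager's capcode (the RIC, up to 21 bits) splits in two: the low three bits select which of the eight frames in each sixteen-codeword batch the receiver listens in, and the remaining eighteen bits travel inside the address codeword itself. A receiver can therefore sleep through seven-eighths of every batch, which is a large part of where those weeks of battery life come from. A toy illustration, with helper names of my own, not taken from any real decoder:

```python
# Toy sketch of POCSAG addressing (illustrative only).
# A capcode splits in two: the low 3 bits pick which of the 8 frames
# in a 16-codeword batch the pager listens in, and the high 18 bits
# are the address actually carried over the air.

def split_capcode(capcode: int) -> tuple[int, int]:
    """Return (frame_number, address_bits) for a capcode."""
    frame = capcode & 0b111        # frame 0-7 within the batch
    address_bits = capcode >> 3    # the 18 bits sent in the codeword
    return frame, address_bits

def wakes_for(capcode: int, frame: int, address_bits: int) -> bool:
    """Would a receiver with this capcode beep for this codeword?
    It only powers its decoder up during its own frame."""
    my_frame, my_bits = split_capcode(capcode)
    return frame == my_frame and address_bits == my_bits

print(split_capcode(1234567))  # → (7, 154320)
```

Note what is absent: no authentication, no encryption, no acknowledgement. Any receiver that computes the same split hears the same message, which is the property the next paragraph turns on.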

Matt Hancock, as Health Secretary, announced in February 2019 that the NHS would have rid itself of the things by the end of 2021. That deadline came and went. Vodafone had already left the business in March 2018, leaving Capita's PageOne as the only wide-area paging carrier in the UK, supplemented by a handful of specialist suppliers, including Multitone and Swissphone, for the hospital-by-hospital cardiac systems. The cost of the residual estate to the NHS was put at £6.6 million a year at the time of the ban announcement. Five years later, the bleeps are still going.

The unsettling part, as TechCrunch and others reported in 2019, is that POCSAG was specified in an era when intercepting it required a few thousand pounds of radio gear and a working knowledge of VHF demodulation. A software-defined radio dongle costs well under fifty pounds now. The traffic is still in clear, because retrofitting encryption into a thirty-year installed base of one-way receivers is essentially impossible. So the same property that keeps the protocol alive, its mechanical simplicity, also keeps it readable to anyone with a laptop and a back garden.

I keep coming back to the fact that a 1981 specification is still the load-bearing communications layer for the most time-critical moments in British emergency medicine. Not as a bridge, not as a fallback, but as the thing that actually works when a patient is arresting on Ward 4. The ban did not retire the bleep. The bleep outlived the ban, because in the part of the building where seconds matter and Wi-Fi does not, a 1980s broadcast standard is still the most reliable thing in the room.

Two Architects, One Dress

Gianfranco Ferré had been at Dior just over two years when he sat down to plan the spring-summer 1992 haute couture collection. The Ascot–Cecil Beaton debut was already behind him, the press had stopped openly questioning whether Bernard Arnault should have hired an Italian to run the most French of houses, and the third Dior atelier under his direction was settling into a routine. The collection he produced that January was titled Palladio, and it was the moment his architectural training stopped being a biographical footnote and became, briefly, the actual subject of the work.

Ferré had graduated in architecture from the Politecnico di Milano in 1969. He never practised. He went straight into accessories, then raincoats, and by 1978 had his own womenswear line in Milan. The "architect of fashion" tag followed him for the rest of his career, applied so casually by the press that it had stopped meaning very much. Palladio was the collection where he made the press take the word literally.

The reference was Andrea Palladio, the sixteenth-century Veneto architect whose villas around Vicenza turned classical proportion into a vernacular grammar that English country houses, Thomas Jefferson's Monticello, and most of nineteenth-century banking architecture spent the next four hundred years copying. Palladio's treatise, the Quattro Libri, codified column, pediment, and bay into ratios anyone could follow. Ferré read it the way an architect would: not as a style to imitate but as a system of proportional decisions you could apply to a different material.

The centrepiece, now catalogued in the Gianfranco Ferré Research Center at the Politecnico di Milano, was a sculptural off-white dress with an enormous wild-silk collar treated as a pediment. The dickey did the work of a building's facade. It announced the order, set the proportions for the rest of the body, and held the geometry in tension with the silk underneath. The silk did the opposite job, falling away in a quiet diagonal across the back, all surface and flow. Pediment above, drapery beneath. A house with weather inside it.

This was Ferré's actual method. His clothes were built around the white shirt the way a Palladian villa was built around its portico. Take the structural element seriously, decide its proportions before anything else, and the rest of the garment follows. He repeated this for the next decade in his own line in Milan, and the Phoenix Art Museum eventually built a whole exhibition around twenty-seven of his white shirts. Palladio was the moment he showed Paris the operation that produced them.

What's worth noticing is how unfashionable a Renaissance architect was as a couture reference in January 1992. The prevailing wind was already toward Helmut Lang's reductive tailoring, toward the Antwerp graduates' deconstruction, toward Miuccia Prada's nylon. Ferré went toward classical proportion. He kept doing it for another four years, through Floridante and the Extrême collection in 1995, through to Passion Indienne in July 1996, which turned out to be his last Dior show. The Palladio dress sits at the start of that arc, the moment his system was clearest, before the colour and the ornament and the eventual exit.

It is still photographed often. The collar reads in any light.

Past Tense, By Friday

The kiosk sat in the middle of the Tesco carpark like a planning mistake. Yellow signage, sloped roof, room enough for two staff and a counter. You handed over a film canister, took a paper envelope with a pre-printed number on it, and walked off to do the weekly shop. By the time you got back to the car, your photographs already belonged to a future you weren't part of yet.

Snappy Snaps opened its first store in 1983. Don Kennedy and Tim MacAndrews put a one-hour minilab inside a small shopfront and built a franchise on it. In August 1986 SupaSnaps was running a test market in sixty-one of its shops across Scotland and the North-East for a new service called PhotoVideo, which transferred your prints onto VHS tape. KLICK Photopoint shows up in the Cambridge Yellow Pages from 1995 through 1998, then disappears. The American precursor, Fotomat, peaked at over four thousand kiosks around 1980, distinctive pyramid roofs in gold paint, and was already in decline by the time most British versions launched. Minilab technology had collapsed the wait from a fortnight to an hour, which everyone said was the future, and which mostly meant the kiosk was no longer the kiosk for very long.

What I remember is the envelope. Manilla paper, bordered red and yellow, your name biroed onto a perforated stub. The negatives came back in a strip protector you were warned not to touch. Twenty-four exposures, sometimes thirty-six. Six or seven of them blurred. Two with a thumb in the corner. One where you had closed your eyes. The processing was a kind of judgement.

Before the minilab arrived in the carpark, the wait was longer. A week, sometimes two, and during that week the photographs lived in some intermediate state nobody could see. The trip itself was already in past tense by the time the prints came home. You held a Saturday in your hand on a Friday two weekends later. The smile in the print was a smile you no longer remembered making.

This, I think, is what the kiosk was actually for. Not the prints, but the wait. The deliberate space between making the picture and seeing it, into which other things could move.

The phone, now, gives you the photograph before you have finished taking it. The image arrives faster than the moment can settle. You see yourself reacting, and you correct, and the version that survives is the version that has already been edited by the act of seeing. There is no week in which the picture quietly becomes a different thing. There is no Friday on which you discover what last Saturday looked like. The intermediate state has been deleted.

The Rutherglen branch is long gone. Most of the British high-street kiosks are gone. The pyramid huts in American carparks have mostly been turned into drive-through coffee stalls. The infrastructure of the wait, all of it, has been recommissioned as the infrastructure of the immediate. Coffee instead of negatives. A six-minute queue instead of a six-day one.

Some of the kiosks themselves are still there, though, sitting empty between the parking bays. A small flat-roofed cabin, just big enough for two people and a counter, a window where the till used to be. They look exactly like what they are, which is the place you used to go to find out who you had been.
