
Plutonic Rainbows

Werbos Wrote It First, 1974

The standard origin story for backpropagation gives the year as 1986 and three names on the byline: David Rumelhart, Geoffrey Hinton, Ronald Williams. The paper ran in Nature on October 9, volume 323, four pages, a tidy little letter about training networks of neuron-like units by adjusting weights to minimise a measure of error. Hidden units, the paper says, would come to represent useful features of the task domain. That sentence is the one that mattered. It promised that a network could learn its own internal vocabulary, instead of being hand-fed one by a human.

The paper is rightly famous. It is also not the first time backpropagation existed.

Twelve years earlier, in November 1974, a Harvard graduate student called Paul Werbos submitted a thesis in applied mathematics that worked the same algorithm out from the direction of optimal control theory. Werbos called it reverse-mode gradient computation, framed it as a way to steer complex dynamic systems toward a goal, and showed how the chain rule, applied backwards through a sequence of differentiable operations, gave you exact derivatives at a cost roughly equal to running the system forward once. Different vocabulary, same mathematics. He had the insight before the field he eventually joined was ready to receive it.
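The mechanics are compact enough to sketch. Below is a minimal reverse-mode example in Python on a one-hidden-unit "network"; the function and weight names are illustrative, not Werbos's notation. The forward pass caches intermediates, the backward pass walks the chain rule from output to input, and a central finite-difference check confirms the derivatives agree:

```python
import math

# Toy model: y = sigmoid(w2 * tanh(w1 * x)).
# Reverse mode gives exact dy/dw1 and dy/dw2 for roughly the
# cost of one extra pass, which is the whole trick.

def forward(x, w1, w2):
    a = w1 * x                        # hidden pre-activation
    h = math.tanh(a)                  # hidden activation
    z = w2 * h                        # output pre-activation
    y = 1.0 / (1.0 + math.exp(-z))    # sigmoid output
    return y, (x, h, w2)              # cache what backward needs

def backward(y, cache):
    x, h, w2 = cache
    dz = y * (1.0 - y)                # sigmoid'(z), applied at the output
    dw2 = dz * h                      # chain rule, output weight
    dh = dz * w2                      # gradient flowing back into the hidden unit
    da = dh * (1.0 - h * h)           # tanh'(a)
    dw1 = da * x                      # chain rule, input weight
    return dw1, dw2

x, w1, w2 = 0.5, 0.8, -1.2
y, cache = forward(x, w1, w2)
dw1, dw2 = backward(y, cache)

# Central finite differences as an independent check.
eps = 1e-6
num_dw1 = (forward(x, w1 + eps, w2)[0] - forward(x, w1 - eps, w2)[0]) / (2 * eps)
num_dw2 = (forward(x, w1, w2 + eps)[0] - forward(x, w1, w2 - eps)[0]) / (2 * eps)
print(abs(dw1 - num_dw1) < 1e-6 and abs(dw2 - num_dw2) < 1e-6)
```

Scaling this from two weights to millions is bookkeeping, not new mathematics, which is why the same derivation could surface independently in a control-theory thesis and a cognitive-science paper twelve years apart.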

The thesis sat. Symbolic AI was the fashionable thing in the 1970s, expert systems were eating the funding, and neural networks were still under the cloud Marvin Minsky and Seymour Papert had thrown over them in 1969 with Perceptrons. A man with a method for training multilayer networks had nowhere useful to publish it, because nobody serious wanted multilayer networks. Werbos worked at the National Science Foundation for most of the next decade. The thesis was cited a handful of times.

What changed in 1986 was not the algorithm. It was the surrounding cast. Rumelhart, working at UC San Diego, had been trying to build a connectionist alternative to symbolic cognition since around 1979 and had spent years convinced that multilayer perceptrons with learned hidden representations were the missing piece. Hinton was at Carnegie Mellon, having shifted from Boltzmann machines back to gradient methods because Boltzmann was punishingly slow. Williams handled much of the implementation work. The three of them rediscovered the procedure independently, ran it on toy problems that produced results an outsider could see and admire, and packaged it inside the much larger Parallel Distributed Processing volumes published the same year. The Nature letter was the four-page advert. The two PDP books were the argument.

Werbos did get credit, eventually. By the late 1980s he was publishing his own extensions, including backpropagation through time for recurrent networks, and the textbooks gradually started naming him in the lineage. There is a small genre of backpropagation-history essays now, and they all reach the same verdict: the algorithm was rediscovered at least three times before 1986 (Werbos, then David Parker at MIT in 1985, then Yann LeCun in a French conference paper the same year), but it was the Rumelhart-Hinton-Williams presentation that broke through.

The instructive part is not who deserves the credit. The instructive part is what the discrepancy tells you about how ideas actually land. Werbos had the maths in 1974 and almost nobody noticed. Rumelhart, Hinton and Williams had the same maths in 1986 and the field reorganised around it inside a decade. The difference was an ecosystem: a community of researchers ready to use the result, a crisp pair of papers that made the result legible, hardware that was finally fast enough to make small networks do interesting things, and worked examples (the Nature letter itself trains a small network on a family-tree relationship task) where the network learned something a person could feel.

Twenty-six years after the Nature paper, two GPUs in Krizhevsky's bedroom took the same algorithm and embarrassed two decades of hand-engineered computer vision in a single afternoon. The maths in those CUDA kernels is still the maths in Werbos's thesis. Compute caught up. Data caught up. The story of backpropagation is not a story about who invented something. It is a story about how long good ideas can sit on a shelf before the rest of the world is ready for them.


Capex In, Headcount Out

Meta is cutting 8,000 jobs starting May 20, roughly ten percent of its workforce, and not filling another 6,000 open roles. Microsoft is offering early retirement and voluntary buyouts to about 8,750 US employees, nearly seven percent of its US headcount. Both announcements landed inside the same 48-hour window. Both companies report earnings on Wednesday. And both are on track to spend a combined sum on AI infrastructure this year that makes those payroll numbers look like rounding.

The framing the companies are using is interesting because it has finally stopped pretending. Janelle Gale, Meta's chief people officer, told staff the cuts are needed to "offset the other investments we're making." She did not say which investments. She did not have to. Zuckerberg has said the quiet part out loud several times this year already: 2026 is the year of "major AI acceleration," with planned spend north of $115 billion on data centres, custom silicon, and the people who can build them. Microsoft's number is comparable. Across Meta, Microsoft, Alphabet, and Amazon the collective AI infrastructure outlay this year clears $700 billion.

The arithmetic, then, is plain. You move money from one column to another. You stop paying a recruiter and start paying for a GB200 rack. You stop paying a layer of middle management at Reality Labs and start paying TSMC for a wafer allocation. The headcount line shrinks because the capex line is eating it. There is no hidden mystery about where the money is going.

What's new is that the labour story is no longer being told through the language of a downturn. There is no recession to blame. Hyperscaler revenue is up. The cuts are not because business is bad. They are because the business has decided that a particular shape of human labour is now optional. Coding, recruiting, ops, mid-tier programme management. The kinds of work where an agent does eighty percent of the task and a smaller team does the cleanup.

I am not sure the agents are quite there yet. The shootouts between GPT-5.5 and Opus 4.7 are still close enough that nobody outside the labs can confidently call a winner on a given task, and the public benchmarks have a known habit of flattering the model that wrote them. But the executives are not waiting for proof. They are pricing the bet now, against this year's salary budget, on the assumption that the gap closes before the next fiscal year begins. If it does, the cuts look prescient. If it does not, the cuts still happened, and the people are still gone, and the inference bill arrives anyway.

There is an honesty to it that I almost respect. For most of the last decade, "efficiency" was the euphemism that companies reached for when they wanted to fire people without saying why. The word still gets used. But the underlying bookkeeping has shifted. Efficiency now means a specific trade: a payroll line exchanged for a compute line, a headcount slot exchanged for a token bill that the CFO can model with a straight face.

Wednesday's earnings will be the test. If the analysts ask about the labour impact in the same breath as they ask about Azure growth, the equation has been accepted. If they ask only about capex, it already has been.


Long After Murray Hill

Pick up any phone made in the last forty years and look at the keypad. The 2 has ABC under it. The 6 has MNO. The 9 has WXYZ. The 1 has nothing, and neither does the 0. Nobody dials letters anymore, not really, and yet the layout is fixed. It survives every redesign of every handset. It survives the move from metal to plastic to glass. It survives the death of the keypad itself, persisting as a virtual grid on a touchscreen that could just as easily render any other arrangement and chooses not to.

The reason is a naming convention that died sixty years ago. Until the early 1960s, telephone numbers were not numbers, they were words attached to numbers. You did not call 685-9975, you called MUrray Hill 5-9975, and the operator routed you on the strength of the first two letters of the exchange name. The Ricardos on I Love Lucy had MU 5-9975 because Murray Hill was the east side of Manhattan. The whole city was a quiet atlas of these prefixes: PEnnsylvania, TRafalgar, YUkon, BUtterfield. London had WHItehall and KENsington and SLOane. San Francisco had KLondike on 55x because there were almost no other words you could build out of the letters available on those digits.

The exchange names existed because the manual-to-automatic transition of the 1920s and 1930s needed a way to make seven-digit numbers memorable in a country where most people had not yet memorised any. AT&T issued a recommended list of exchange names around 1955 in its Notes on Nationwide Dialing, including a short catalogue of neutral words (LIberty, LIncoln, KLondike) for small communities. You looked your number up in the directory and the first two letters were printed in bold. The bold told you which buttons your finger had to find on the rotary dial.

By 1960 the New York Telephone Company had started issuing all-numeric exchanges, and a small, articulate, very furious group of San Franciscans formed the Anti-Digit Dialing League to fight it. They lost. The Committee of Ten Million To Oppose All-Number Calling lost too. By the late 1970s the letter exchanges had been pushed out of the white pages even in New York, where they had clung on longest. The names were gone. The letters stayed.

Once the system did not need them, marketers found them. The toll-free 1-800-FLOWERS line went live in the mid-1980s. 1-800-COLLECT followed. Vanity phonewords became a small industry, supported entirely by a mapping that the phone company had stopped caring about decades earlier. Then text messaging arrived in the late 1990s and, on a generation of feature phones, the 2-9 letter groups became the only way most teenagers wrote anything for about a decade. SMS culture was built on a keypad that had been laid out for a vanished switching system. Before T9 predictive text smoothed it over, you multi-tapped: 7777 to get S, wait, tap 4 to get G. The cadence of an entire pre-iPhone adolescence was metered to a 1920s alphabet.
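The mapping itself is small enough to write down. Here is a sketch in Python of the standard letter groups and a hypothetical dial() helper that turns a phoneword or exchange prefix into the digits the switch actually sees:

```python
# The standard keypad letter groups, unchanged since the exchange-name era
# (Q and Z, historically omitted, now sit on 7 and 9).
KEYPAD = {
    '2': 'ABC', '3': 'DEF', '4': 'GHI', '5': 'JKL',
    '6': 'MNO', '7': 'PQRS', '8': 'TUV', '9': 'WXYZ',
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def dial(phoneword: str) -> str:
    """Translate a vanity number or exchange prefix into dialled digits."""
    out = []
    for ch in phoneword.upper():
        if ch in LETTER_TO_DIGIT:
            out.append(LETTER_TO_DIGIT[ch])
        elif ch.isdigit():
            out.append(ch)
        # hyphens, spaces and punctuation are ignored, as the switch ignores them
    return ''.join(out)

print(dial("1-800-FLOWERS"))   # the digits behind the vanity number
print(dial("MU5-9975"))        # the Ricardos' Murray Hill number
```

Note that for an exchange name only the two bold letters were dialled, so the input is "MU5-9975", not the full "MUrray Hill 5-9975"; the KL of KLondike lands on 55 for the same reason.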

What I find strange is the degree of the persistence. There is no reason a smartphone keypad needs ABC on the 2. The keypad is software. Apple, Google, and every Android OEM could ship a numeric-only dialler tomorrow and almost nobody would notice for a fortnight. They never will. The letters are vestigial but load-bearing: every business with a vanity number (1-800-FLOWERS still answers, still ships), every emergency line that asks you to remember a word, every accessibility feature that lets a blind user dial by letter instead of digit, depends on a mapping that the system that produced it has not used since the Anti-Digit Dialing League gave up.

Hauntology is supposed to be about the future that did not arrive. The keypad is the opposite case. It is a past that refuses to leave because the cost of evicting it is, every year, slightly higher than the cost of letting it stay. Murray Hill is not coming back. The letters that pointed to it are not going anywhere.


Passion Indienne, July 1996

Gianfranco Ferré had been at Dior for seven years when he showed his fifteenth haute couture collection in July 1996. He did not know it was the last one. The Galliano announcement was still three months away, the British press hadn't started speculating, and the fittings inside 30 Avenue Montaigne went on with the usual quiet tension of a house preparing its July couture week. What he produced was the largest, slowest, most ornamental collection of his tenure, and the most personal. He called it Passion Indienne.

Ferré had spent the early 1970s travelling repeatedly to India, designing accessories before he made any clothes at all. The trips became the formative period of his visual education. Twenty-five years later, sitting in the most French of all French houses, he went back to that source. The collection was an act of return rather than appropriation. He named the looks after places he had actually been: Bangalore, Shalimar, Delly, Lalita. The list reads like an atlas of a younger man's notebook.

The centrepiece was the Koh-I-Noor, a peach pleated-tulle dress with lace embroidered in arabesques and strewn with strass, crystal sequins and gold beads. It is the piece every retrospective returns to, partly because Dior Héritage still photographs it beautifully and partly because it carries the conceit of the whole collection in one garment: French couture geometry holding together a surface that wants to behave like a Mughal miniature. The Bangalore suit did the same trick in a different register, a silk jacquard cut to the strict architecture of a couture two-piece, then accessorised at the runway with a draped veil that read as a sari. Elixir was fuchsia pleated tulle with a gold-embroidered bustier. Delly was black silk crêpe and organza embroidered with gold Mughal flowers. Even the wool tailoring got the treatment: the Lalita suit was slate-grey wool, but the cuffs were trimmed in ostrich feather.

What's striking, twenty-nine years later, is how unfashionable the collection was at the time. By July 1996 the prevailing wind in Paris and Milan was already toward the ugly chic vocabulary Miuccia had introduced nine months earlier, toward Helmut Lang's anti-shoulder, toward minimal palettes and deliberate awkwardness. Ferré went the other way. He went toward volume, ornament, gold thread, embroidered flowers, silk worked to within an inch of its life. There was an argument inside the collection: that haute couture had to keep being maximal because nothing else in the system could afford to be.

The argument lost in a way that was almost immediate. The Galliano appointment was announced in October. By the time Galliano showed his first Dior couture in January 1997, the house had committed to exactly the kind of theatrical pop spectacle Ferré had spent seven years working in the opposite direction from. Passion Indienne became, retrospectively, the closing statement of an entire register of nineties couture: ornament without irony, reference without pastiche, a designer drawing on his own biography rather than the season's mood.

Ferré went back to Milan and kept making the white shirts he had always made, until he died in 2007. The Koh-I-Noor is still in the Dior Héritage collection in Paris. The collection that closed his seven years at Avenue Montaigne is, on the evidence, also the most loved of the fifteen. Time has been kinder to it than the room it was first shown in.


Waiting for the Box to Ring

Eight o'clock on a Tuesday. You leave the house ten minutes early because the box is a quarter mile down the lane and you can't risk the last hundred yards if a cyclist is using it. You stand outside in whatever the weather is doing. You watch the handset through the glass. At one minute past, you start to wonder. At three minutes past, you start to imagine which version of the bad news it might be.

The arranged incoming call was a real thing. People don't quite believe me when I describe it now, but it was a real thing. You agreed in advance, by letter, by an earlier call, occasionally by telegram, that someone would dial a specific public kiosk at a specific time, and you would be standing beside it, ready. The phone box had its own number printed on a small label inside the door. You read it out, the other party wrote it down, and from that moment a small fixed point existed in the geography of both your weeks.

This was not a marginal practice. Students phoned home. Servicemen abroad phoned wives. Grown children phoned ageing parents in villages where no one had a private line. Couples in different towns kept the appointment with a precision the rest of their lives never required. The call was scheduled like a bus. If you missed it, you waited a week.

What strikes me, looking back, is the absoluteness of the arrangement. There was no message. There was no notification. There was no possibility of "running ten minutes late, do you mind." If the phone rang and you were not there, the other person had stood in their box for nothing and would not know why. If the phone did not ring and you were there, you could stand for half an hour in the rain and never know whether the line had failed, or the operator had failed, or whether the person at the other end had simply not loved you enough that day to leave the house.

The same kiosk that produced the smell of cast iron and Bakelite produced this other thing too, this discipline of arrival. To wait outside a phone box at a fixed hour was to take part in a ritual that organised time across distance using nothing but trust, a printed kiosk number, and a bus timetable.

I think about it often, standing in queues for trains that text me when they're delayed, watching dots ripple in a chat thread. The coordination problem the phone box solved is the same problem WhatsApp solves, but the solution had a texture. You felt the wait. You knew exactly what minute you were in. The light failed in November and you stood there anyway, because the alternative was a week of not knowing, and a week of not knowing was something people accepted as ordinary.

There is no equivalent today, none. We carry our boxes. They ring constantly, anywhere, and we resent it. What has been lost is not the kiosk and not even the appointment but the specific quality of standing somewhere in the cold, looking at a piece of public infrastructure, and trusting it to do its part of a job that two people had quietly agreed on the week before.

That is gone. The boxes are mostly defibrillators now.


Sovereignty as a Moat

Cohere announced on Friday that it would merge with Aleph Alpha, the German enterprise AI company that pivoted away from frontier model development in 2024. The combined entity will be valued at roughly $20 billion once Cohere's pending Series E closes, with Schwarz Group, the German retail conglomerate that already co-led Aleph Alpha's 2023 Series B, putting in another $600 million. Dual headquarters in Toronto and Heidelberg. Aidan Gomez stays at the helm. The press release uses the phrase "transatlantic AI powerhouse" without flinching.

The strategic logic is more interesting than the dollar figure. Cohere was last valued at around $7 billion. Aleph Alpha at something between €500 million and $3 billion depending on which outlet you trust. Twenty billion for the combined company is a big step up for both, and it is being underwritten by something that is not a model benchmark. Neither company has a frontier LLM. Neither has a consumer surface anyone outside enterprise procurement could name. What they have, jointly, is a passport that is not American.

That passport is the entire pitch. Cohere has always positioned itself for regulated buyers (defence, energy, healthcare, public sector) and chose enterprise from day one rather than fight ChatGPT for retail attention. Aleph Alpha had German government contracts and, more importantly, a chairman who is friends with people who write procurement specifications. Stitch the two together and you have a credible non-US option for a European Commission that has spent the last eighteen months trying to work out what it actually means to have digital sovereignty when the underlying weights, the underlying chips, and the underlying researchers all come from somewhere else.

The Schwarz piece is the part to watch. Schwarz Group owns Lidl and Kaufland and, less famously, a cloud and data-centre arm that has been quietly scaling for European public-sector buyers. They are building the on-premise hosting infrastructure that German federal procurement will demand. A Cohere model running on Schwarz-operated racks inside a German data centre, sold to a Land government that has been told by Berlin to reduce dependence on American hyperscalers, is a genuinely new product category. It is not a better model. It is a model with a different return address.

I am unsure whether this works. The case for it is that European governments will pay a premium for sovereignty, that defence and healthcare and public-sector workloads are sticky enough to fund a real R&D budget, and that being merely good at enterprise plumbing is a viable place to compete when the frontier labs are pricing themselves at GPT-5.5 levels. The case against is that sovereignty is a soft moat. The moment a European cabinet office decides Claude on AWS Frankfurt is good enough, or decides Gemma 4 on Apache is good enough, the premium evaporates.

There is also a thing that I keep noticing about these consolidation announcements. The word "powerhouse" appears in roughly nine out of ten of them. Mistral and a French national champion. Cohere and Aleph Alpha. The unspoken implication is always that the new entity is now big enough to matter, and the unspoken question is always whether being big enough to matter in your home market is the same thing as being competitive in the actual market for intelligence. So far the answer has been no. The bet here is that Brussels makes it yes.


Forty-Five Years of AT

Open the firmware of any cellular modem shipping in 2026 and you'll find it answering to a command language designed for a 300-baud modem in 1981. AT+CREG to ask the network for registration status. AT+COPS to pick an operator. AT+CGDCONT to set up a packet data context on a 5G NR carrier. The "AT" prefix stands for ATtention, a convention Dennis Hayes and Dale Heatherington coded into the Smartmodem 300 because they needed a clean way for a host computer to interrupt an in-progress phone call without ambiguity.

Hayes Microcomputer Products filed for Chapter 11 in 1998 and was liquidated the following year. The standard wasn't.

3GPP TS 27.007 is the document that keeps the language alive. The current revision, V18.6.0, was published in May 2024 and runs to several hundred pages of extensions to the Hayes set, all of them prefixed AT, all of them readable as plain text over a serial connection that no modern phone actually exposes to the user. Every 4G LTE and 5G NR chipset, every IoT cellular module, every car telematics box, every emergency satellite modem, all of them speak it. The IoT industry has built a quiet, very large dependency on a textual interface that originally targeted a serial port on an Apple II.
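The shape of the dialogue is simple enough to sketch. This is a toy parser, not a modem driver: it assumes the TS 27.007 convention of zero or more information lines followed by a final result code, and decodes a +CREG registration report, where a <stat> of 1 means registered on the home network and 5 means registered while roaming:

```python
# Minimal sketch of the AT response grammar. Real modem firmware adds
# unsolicited result codes, echo, and vendor quirks this toy ignores.

def parse_at_response(raw: str):
    """Split a response into information lines and the final result code."""
    lines = [ln.strip() for ln in raw.splitlines() if ln.strip()]
    if not lines or lines[-1] not in ("OK", "ERROR"):
        raise ValueError("response not terminated by a final result code")
    return lines[:-1], lines[-1]

def parse_creg(info_line: str):
    """Decode '+CREG: <n>,<stat>' per TS 27.007 section 7.2."""
    assert info_line.startswith("+CREG:")
    fields = info_line[len("+CREG:"):].split(",")
    n, stat = int(fields[0]), int(fields[1])
    # stat 1 = registered (home), 5 = registered (roaming)
    return {"urc_mode": n, "registered": stat in (1, 5)}

info, result = parse_at_response("+CREG: 0,1\r\n\r\nOK\r\n")
print(result, parse_creg(info[0]))
```

The text-over-serial framing is exactly why the interface never died: any language that can split strings on newlines can drive a modem.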

What's strange about it isn't the survival. Lots of old standards survive. What's strange is that nobody has tried very hard to replace it. There is no AT-Next initiative. No working group is sketching a binary successor. The chipset vendors ship reference firmware with the same command interpreter their predecessors shipped twenty years ago, and the device makers who consume that firmware ship it forward unchanged because rewriting it would buy them nothing and break everything that depends on it.

This is what hauntology in protocol design actually looks like. The thing isn't preserved out of sentiment. It's preserved because the cost of dislodging it always exceeds the cost of one more revision. So the language accretes. +CGDCONT got added for GPRS in the late 1990s. +CEREG for LTE around 2009. +C5GREG for 5G a few years ago. The 1981 prefix never moved.

If you've ever flashed an OpenWrt router, configured a Raspberry Pi LTE hat, or watched a Quectel modem boot, you've seen it. The serial console scrolls past: AT, OK. Then AT+CFUN=1, OK. The handshake grammar of a long-dead manufacturer's late-1970s telephone-line equipment, performed silently inside a device that talks to a satellite.

There's a reasonable argument that this is a problem. The text-based interface is slow, error-prone, and difficult to extend cleanly. Modern modems do expose alternative APIs (QMI, MBIM) for the actual data path, but provisioning, diagnostics, and many control surfaces still go through AT. The replacement layers are bolted on top of, not under, the Hayes layer.

The stranger argument is that this is just what infrastructure does. It calcifies around whichever interface was good enough at the moment of consensus, and it stays calcified for as long as somebody, somewhere, still ships against it. The Smartmodem 300 sold for $279 in 1981. Forty-five years later, several billion new devices a year still type AT before they say anything else.


Super App, Same Engine

OpenAI shipped GPT-5.5 on Thursday. The model is available now to paying ChatGPT and Codex users in standard, thinking, and pro flavours, with API access to follow. The pitch from Greg Brockman in the press briefing was that this one is built for work, coding, computer use, research, and that it can take an unclear problem and decide what needs to happen next without much hand-holding.

Stripped of the briefing-room varnish, the message is that ChatGPT, Codex, and the browser tooling are converging into a single product, and 5.5 is the engine that makes the convergence plausible. Brockman called it the foundation for "how we're going to do computer work going forward." That is super-app language, and it has been the open secret of OpenAI's product strategy for about a year. The model release is the part that gets the headlines; the strategy is the part that decides whether the next twelve months go well.

I am sympathetic to the ambition. The frustration of using six different AI surfaces to get one task done is real, and the seam-stitching tax adds up. A single thing that opens your browser, edits your repo, runs your tests, and writes the PR description is genuinely useful, more useful than another point on a benchmark. The hard part has never been the demo. The hard part has always been getting the model to know when to stop, when to ask, when to fail loudly rather than quietly produce something broken.

5.5 is priced at $5 per million input tokens and $30 per million output, which is GPT-5 territory and roughly an order of magnitude above V4 Pro. That is fine if the agentic capability is genuinely a step up, and a problem if it is not. Computer-use agents burn output tokens prodigiously. A single half-decent coding session can produce tens of thousands of tokens of tool calls, reasoning traces, and revisions. Multiply that by an enterprise rollout and the unit economics get scary fast, particularly when a Chinese open-weight model can run the same loop, less well, for pennies.
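The arithmetic is worth doing once. A back-of-envelope in Python at the quoted prices; the session and rollout sizes below are illustrative guesses, not reported figures:

```python
# Quoted pricing: $5 per million input tokens, $30 per million output.
PRICE_IN = 5.00 / 1_000_000    # dollars per input token
PRICE_OUT = 30.00 / 1_000_000  # dollars per output token

def session_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in dollars of one agentic session at the quoted rates."""
    return input_tokens * PRICE_IN + output_tokens * PRICE_OUT

# One half-decent coding session: assume 200k tokens of context reads
# and 50k tokens of tool calls, reasoning traces, and revisions.
one_session = session_cost(200_000, 50_000)

# A hypothetical rollout: 500 engineers, 10 sessions a day, 250 working days.
annual = one_session * 500 * 10 * 250
print(f"${one_session:.2f} per session, ${annual:,.0f} per year")
```

Under those assumptions a single session is a couple of dollars and the annual bill runs to seven figures, which is exactly the kind of number a CFO weighs directly against a payroll line.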

The other thing worth noticing is the cadence. Anthropic shipped Opus 4.7 the week before. DeepSeek previewed V4 the day after. CNET wrote it up as an "arms race," which is the laziest possible framing but, this week at least, accurate. Three frontier releases inside eight days, all pitching some flavour of agentic coding as the headline capability, all aiming at the same enterprise budget. The dispersion of "best at coding" across labs keeps narrowing. So does the differentiation.

Tom's Guide ran a 5.5 versus Opus 4.7 head-to-head and reported seven wins for Claude on seven impossible tasks. Single-evaluator shootouts are noise more than signal, but the noise is itself informative: nobody outside the labs is sure which model wins on which kind of work right now, and the customer-side answer is increasingly "whichever one we already have a contract with." That is not where OpenAI wants to be when it is asking for super-app trust.

The model probably is good. The strategy probably needs more than a model.


Banal Eccentricity, 1996

The bomb scare came on the final day of Milan Fashion Week, October 1995, just as the editors were heading over to Prada. The headquarters got cleared, the police swept the building, the show went on. Whatever Miuccia put on the runway that afternoon was always going to be the news. What she actually put on the runway was a problem.

Avocado green and sludge brown. Murky 1970s tones a critic later said hovered somewhere between shades of slime and mold. Checked kitchen-tablecloth patterns paired with dirty 1950s florals, hand-drawn in a way that looked like the printer had given up halfway. The shoes were clunky T-bar sandals and unorthodoxly low-heeled sliders, the opposite of the strappy follow-me heels the rest of fashion was selling that season. The collection was called Banal Eccentricity. The press, mostly, called it Ugly Chic.

Robin Givhan ran a piece in the Washington Post the following May titled "Ugly is in." Susannah Frankel later wrote, in the AnOther cover story for S/S17, that the term belle laide could have been invented for Miuccia at that moment. Alexander Fury, in a 2014 essay reissued half a dozen times since, called the brown "faecal." All of these are compliments. Read them in sequence and you start to understand what was happening: a designer had walked onto the most commercial week of the fashion year and committed an act of taste sabotage so calculated that the trade press needed two years to catch up.

The cleverness wasn't the ugliness. The cleverness was that the ugliness was made out of the most expensive materials a luxury house could source. Cashmere, silk, the high-tech nylons Prada had been refining since the 1984 backpack. The kitchen tablecloth was hand-embroidered. The avocado wool was woven to the exact gauge a couture house would demand. None of it looked it. That was the point.

Miuccia had inherited the company from her mother in 1978 and spent the eighties quietly building a reputation for understatement. The black nylon backpack of 1984, the gauzy minimalist suits of the early 1990s, none of that prepared anyone for what S/S 1996 actually did. It threw out the playbook of seduction. It said the female silhouette did not have to flatter, and that taste itself was a kind of laziness. Miu Miu, launched three years earlier and named after the family nickname, had been her sketchbook for this. The mainline collection finally said it out loud.

What followed is the part that's hard to remember now because it became the water everyone swims in. The off-key cool Frankel describes, the ironic 1970s palette, the deliberate awkwardness around proportion and footwear, the willingness to make the model look slightly wrong on purpose, all of that is now a default mode for half the labels showing in Milan and Paris. You see a deliberately bad sandal in a 2026 lookbook and the visual grammar comes from this one show.

Miuccia's ugly-chic vocabulary outlasted the supermodel era it interrupted. It outlasted the boom that bought it. It is still, somehow, the cleverest argument a designer has made against beauty in my lifetime, and the only one that the market eventually agreed with.


Last Drinks at Droitwich

A 106-year-old working men's club in Droitwich shut its doors earlier this month. The committee cited the usual culprits, rising operating costs, building repairs, debt. The same week, in Cleethorpes, another club went down. Monks Road in Lincoln went in 2018 after a century of trade. The Louth Conservative Working Men's Club rebranded in 2023, dropping the "Conservative", dropping "Working Men's", trying to stay alive as Louth Social Club after membership fell from a thousand to three hundred. The Club and Institute Union itself, the federation that has stitched these places together since 1862, has quietly cut "Working Men" from its own name.

Three-quarters of the country's working men's clubs have closed in the last fifty years. In the 1970s the CIU listed about four and a half thousand affiliated clubs and four million members, a tenth of the adult population. The current figure is around eleven hundred, with some recent counts putting it under a thousand.

It would be tidy to blame the 2007 smoking ban, and people do. The ban hurt, of course. But the decline was already a long-running project, sitting underneath the headline cause. The mines went first, then the mills, then the engineering shops that named the clubs they sponsored. Once the works closed, the membership pool dried up, because the membership pool was the works. A Railwaymen's Club without railwaymen is a strange room. The smoking ban only finished a thing that deindustrialisation had already arranged.

The interiors are what people miss without knowing they miss them, and the interiors are what cannot be photographed back into existence. Flock wallpaper. Formica bar tops curving round to a glass-fronted display of crisps and pork scratchings. A concert stage at one end with a curtained backdrop and a small electric organ. A bingo board screwed to the wall behind the bar, numbered cards in a wooden rack. A committee room with leatherette chairs and minutes in a ringbinder. None of this is heritage in any official sense. There is no listing, no fund, no preservation society. When the building goes, the whole grammar of the room goes with it.

What's properly hauntological about the working men's club is that it was the kind of social institution the internet has not replaced and cannot replace, and yet it has not been mourned as a loss. A bounded community, geographically anchored, with a printed membership card that admitted you to two thousand other rooms exactly like this one in towns you'd never visit. You could walk into a CIU club in Wakefield with a card from Workington and be served. The card was a passport into a country that has now closed its borders.

Like the village hall at Balcombe, these were the institutional architecture of working-class self-organisation, built before the welfare state arrived to do some of the same work, and now outliving the world that made them make sense. The buildings persist longer than their function. A shuttered concert room with the bingo board still on the wall is not a ruin yet, only a room waiting to be turned into flats.

Reverend Henry Solly, who founded the CIU in 1862, was a teetotaller who wanted alternatives to the pub. The members, within three years, voted the alcohol back in. That tells you everything about who the institution actually belonged to. It belonged to them, and they ran it, and now it is closing because there are fewer of them left, and the ones who remain are tired.
