
Plutonic Rainbows

Shrunken on Purpose

Rei Kawakubo showed Comme des Garçons in Paris on 6 March 1994 under the title Metamorphosis. The collection ran for autumn–winter 1994–95. Cecilia Chancellor opened. Linda Evangelista closed. Christy Turlington, Kate Moss, Stella Tennant, Shalom Harlow, Amber Valletta, Nadja Auermann and Eve Salvail were in between. By the cast list alone you can read what the show was not, which is a quiet studio exercise. It was a major Paris ready-to-wear at the loudest moment of the supermodel decade, and what it put on those bodies was a series of garments built to look wrong.

The technique was boiled wool. The fabric was knitted or woven to size, then deliberately shrunk after construction. What came back from the wash was a class of garment that no longer fitted the body it had been cut for. Sleeves rode short. Shoulders sat high. Greatcoats lost their length in odd places, kept it in others. Duster coats came out of the process with frayed raw edges and crinkled cotton linings hanging below the wool. Sweaters bobbled in some patches and not in others. The Met's later notes called it abject; the National Gallery of Victoria, which holds a top-and-trousers set from the show as part of the Takamasa Takahashi gift, files it under reframing fashion. Both phrases are reaching for the same thing, which is that the garment had been put through something the wearer's body could not undo.

This matters because of where it sits in the timeline. Three years later Kawakubo did the Body Meets Dress, Dress Meets Body show for spring 1997, the one with the duck-down padding and the bulges and the press conviction that the project had finally tipped into pure provocation. The shrunken-wool collection is the obvious precursor and is rarely cited as one. Metamorphosis is the same argument made with subtraction rather than addition. Kawakubo had been heating the fabric until the garment stopped behaving like a garment. The 1997 show heated nothing and added wadding. The conclusion in both cases is that the body fashion exists for is not the body inside the clothes, and the gap between the two is where the work happens.

There is a second thing the show did that is easier to miss. Boiled wool is a folk technique. It is what the Tyrolean jacket is made of, the loden coat, the heavy military greatcoat that keeps its shape because the felt has already decided what shape it will be. Kawakubo was using a craft method already coded as European, rural, and protective, and turning it on the wearer. The result reads less as deconstruction in the architectural sense, that word she has always disliked, and more as a kind of counter-tailoring, a way to make a coat that has refused the shoulder it was sewn for.

The vintage market still places these pieces. A black boiled-wool tunic dress from the show comes up at Lithe Curation; the grey-lined duster coat surfaces through JHROP; the Homme Plus suit appears at dot COMME with the original lining still hanging out. What you can't reconstruct from the surviving garments is the walk. You have to reach for the Getty image bank and the Yohji aftermath that the same Paris season was still working through to put the show in motion again. The clothes alone tell you everything is wrong. The bodies in them, in March 1994, were the most famous in the world, and the dissonance was the point.


Selfridges Had a Cash Office

Until the 1970s, when you handed money to a sales assistant in Selfridges, the assistant did not give you change. They could not. There was no till at the counter. There was instead a small brass aperture on the wall behind the counter, and the assistant rolled your banknotes and the docket into a wooden or metal canister, screwed it into the tube, pulled a handle, and your money flew off through the building's walls to a centralised cash office somewhere out of sight, where a clerk processed the transaction, signed the receipt, screwed the change back into the canister, and fired it back. You waited at the counter. The whoosh of returning canisters was a constant retail sound, and the tubes that carried them were called Lamsons.

William Stickney Lamson, a Civil War veteran who ran a five-and-dime in Lowell, Massachusetts, patented the first cash-carrier system in 1881. The original was almost comically simple: hollow wooden balls rolling along gently sloping wood-and-leather rails, propelled by gravity from the sales counter to a cashier's loft above. He founded the Lamson Cash Carrier Company in Boston the next year. By 1884 an Irish-American agent, John Magrath Kelly, had set up the British arm in London and secured the European, African, Australian and Middle Eastern rights to the patents. By 1888 the Lamson Store Service Company Ltd was capitalised at £85,000, the equivalent of nearly ten million today.

The technology evolved fast. Wire systems came next, suspended pulleys that fired carriages between counter and office on tensioned cables. Then, in 1899, Lamson absorbed an American rival, the Bostedo Package and Cash Carrier Company, and renamed it the Lamson Pneumatic Tube Company. That was the form the technology took for the next seven decades. By 1911 there was a purpose-built factory at Hythe Road, Willesden Junction, in northwest London, and the tubes were going into Selfridges, Harrods, John Lewis, Whiteley's and the Army & Navy.

What is hauntological about Lamson is not the equipment, which is well-documented and unambiguous. It is the spatial logic. The till did not live where the sale happened. The till was a room. Money was a thing in motion through walls. The cashier was an institution rather than a piece of equipment, and the act of selling something to a customer involved temporarily losing physical possession of their payment to a separate department of the building. This required trust between assistant and customer that has no modern analogue, the till being now the thing that confirms the sale rather than the thing the sale waits on. It also required an architecture. Every counter piped or wired to a central node. Every store designed around the geometry of cash movement. Walk into a flagship interwar department store with the original Lamson layout in mind, and the floor plan suddenly makes sense in a way it cannot if you assume the till has always been a box on a shelf.

The British systems lasted longer than they should have. Lamson Engineering Ltd, formed by merger in 1937, only ceased independent operation in 1976, when it was acquired. By then most stores had moved to electronic tills, but a number of installations stayed running for years afterwards, sometimes for cash, sometimes downgraded to internal mail. A few survive as restored curiosities. The Up-to-Date Store at Coolamon, in rural New South Wales, still has its original ball-and-rail system in working order, the only such installation known anywhere.

There is a particular lesson here for anyone who has worked in modern retail and assumes the till is a kind of natural fact, the place where money meets transaction at the point of contact. It isn't. There was a longer era when the building counted the money for itself, in a single secret room, and you waited politely for the canister to come back.


Hallucinations Down, Surface Area Up

OpenAI replaced GPT-5.3 Instant with GPT-5.5 Instant as the default ChatGPT model today, and the rollout pairs two things that should probably be considered separately. The model hallucinates less, by OpenAI's own measurement: 52.5% fewer hallucinated claims on high-stakes prompts in medicine, law, and finance, and a 37.3% reduction on the conversations users have explicitly flagged as factually wrong. It also draws on much more of your context by default, pulling answers from past chats, uploaded files, and Gmail for paid users on the web.

The accuracy improvement is the easier story. GPT-5.3 Instant already shaved 26.8% off the previous baseline, which I covered in an earlier post about OpenAI's release cadence, so 52.5% on top of that is a real engineering result rather than a marketing one. AIME 2025 climbs from 65.4 to 81.2. MMMU-Pro goes from 69.2 to 76.0. These are the unglamorous benchmarks that actually correlate with whether a model can be trusted to draft a discharge summary or pre-read a contract.
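For scale, the two stated reductions compound: if the 26.8% cut applied to the original baseline and the 52.5% cut applies to what was left after it, the implied cumulative drop is about 65%. A back-of-envelope reading of OpenAI's figures, assuming the two numbers share a baseline:

```python
# Compound the two stated hallucination reductions, assuming the
# second applies to the rate remaining after the first.
after_gpt53 = 1 - 0.268                  # 73.2% of the baseline rate remains
after_gpt55 = after_gpt53 * (1 - 0.525)  # then cut by a further 52.5%
cumulative_cut = 1 - after_gpt55
print(round(cumulative_cut, 3))          # → 0.652, i.e. ~65% below baseline
```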

The personalization side is the part I keep turning over. The default ChatGPT now treats your archive as retrieval material. Ask a question, and the answer can pull from a chat you had two weeks ago, a PDF you uploaded last quarter, or a thread in your Gmail. There is a memory-source list attached to each response so you can see what was used and remove what you do not want quoted. The control surface is real and deliberately exposed. Memory sources are not visible to anyone you share a chat with, which closes the obvious leak.

Still, the cumulative effect is a chatbot that is harder to use casually. You now have to think about what you have told it across months, what is sitting in your Drive, and which of your archived emails it might surface in a quick reply. The Axios writeup made the tradeoff plain: lower hallucination rates can make people trust answers more even when the model is still capable of being wrong, and a personalization layer increases the cost of any wrong answer because you assumed the system had read your situation correctly.

The model is also trying to feel less like a chatbot. OpenAI says it has cut "gratuitous emojis" and reduced unnecessary follow-up questions, so the tone defaults closer to a colleague than to a customer service avatar. After the GPT-4o backlash earlier this year, when users campaigned to keep the model that "affirmed" them, this change is interesting. The new default is calmer and more concise, which is the opposite of what the loudest user segment demanded.

Developers get GPT-5.5 as chat-latest. Paid users keep GPT-5.3 Instant for three months before it is retired. There is no router toggle this time, no two-day rollback, no public scramble. OpenAI appears to have learned at least that part of the lesson.


OpenAI Picks Its Bankers

The same Monday Anthropic announced its Wall Street joint venture, OpenAI announced one of its own. Different consortium, same shape. OpenAI's vehicle is called The Deployment Company, valued at $10 billion, with around $4 billion raised from TPG, Brookfield, Advent, and Bain. OpenAI keeps majority control. The partners between them carry access to more than 2,000 portfolio companies. The point of the structure is to push GPT into the operating layer of those companies, not to sell them seats.

Yesterday's post was about Anthropic doing the same thing with Blackstone, Hellman & Friedman, and Goldman, at a smaller $1.5 billion. I read that as a one-off, a clever move from the lab that has been the more enterprise-flavoured of the two. The fact that OpenAI was running the identical play in parallel changes the reading. This is not Anthropic being unusual. This is the new shape of frontier-lab commercial strategy, and both labs arrived at it at the same time.

What both companies seem to have decided is that API revenue, however large, is not enough to justify what comes next. Capex commitments at this scale need a different kind of revenue. They need integration deals, multi-year transformation contracts, the sort of thing that gets paid for out of operating budgets rather than software budgets. That is consulting work. Business Insider reported on Monday that one insider called the Anthropic vehicle "the McKinsey of AI", which is honest enough to be useful. McKinsey, BCG, Bain and Accenture have spent decades building the infrastructure for this kind of relationship. The labs do not want to spend decades.

So they have rented it. The PE firms are not really investors here, or not only investors. They are introduction layers. Blackstone alone runs about 275 portfolio companies. The four firms behind OpenAI's vehicle collectively touch thousands. None of those companies is going to call up OpenAI cold and ask for a deployment template. They will, however, accept a phone call from their own owner suggesting they try one.

There is a quieter detail underneath. Both labs are heading toward IPOs this year. PitchBook is already warning that OpenAI's might slip into 2027, but the direction is clear. A frontier lab going public needs a story about how its enterprise revenue compounds without requiring every customer to hire prompt engineers. A McKinsey-shaped attachment, with templates and reusable engagements, is exactly that story. The S-1 will look better with it than without.

What I keep noticing is how short the path was. Eighteen months ago the consensus was that the labs would compete for distribution: which one gets into Office, which one gets into Google Workspace, which one wins the chatbot. That is still happening, but it has stopped being the interesting question. The interesting question is which one gets quietly embedded in the close-the-books process of a mid-sized industrial holding in Ohio, and who got paid to put it there.


Nine Million Beige Boxes

In 1982, France Télécom began handing out small beige terminals for free to anyone with a phone line. The terminal had a keyboard, a CRT, a modem, and no microprocessor. It dialled into a national videotex network using the V.23 modem standard, and on the other end of the line sat thousands of services that could be reached by typing short codes. The system was called Minitel, and within a decade it covered nine million households. By the peak in 1993, somewhere around 25 million French citizens were logging more than 90 million hours a month across roughly 26,000 services, more than a decade before most Americans had heard the word "internet".

The thing that gets forgotten is how deliberate the policy was. The programme was conceived under President Valéry Giscard d'Estaing's government, during a period when French elites felt that the dominance of US firms in telephone equipment, computers, databases, and information networks was a threat to national sovereignty, or at least to cultural pride. The terminal was free because the state wanted volume. Usage was billed by the minute, the network paid out to service providers, and nobody needed a credit card or an account. It was a closed garden run by the post office, and for roughly fifteen years it worked better than anything else on earth.

Then the web arrived, and France kept its garden walled. Service providers were making real money on the existing system. Users were comfortable. The government had no political appetite to subsidise a transition to an English-speaking American protocol when the French-speaking national one was still doing what most French people wanted it to do. The country that had been a decade ahead of everyone else on consumer networking spent the back half of the 1990s coasting on the system it already had, while broadband matured elsewhere. By the time it became obvious which side of that bet had aged better, the gap was already wide.

The shutdown came on 30 June 2012. The Orange subsidiary of France Télécom, by then managing what was left of the network, said it had reached its natural death. Around 670,000 terminals were still in circulation when the plug was pulled, mostly used by farmers exchanging cattle data, doctors transmitting patient details to the national health service, and small tradespeople placing orders with suppliers who had never bothered moving online. Janine Galey, an 85-year-old mother of seven in Paris, told the Guardian she had used her Minitel until around 2000 and then gone straight to an iPad, skipping the desktop web entirely. There is a thirty-year window of French daily life in which a meaningful slice of the country transacted online without ever touching a browser.

What persists is the policy instinct. The same logic that built Minitel, that French communications infrastructure should be French and that the state has a legitimate role in shaping it, runs underneath a great deal of contemporary EU digital policy. GDPR, the Digital Markets Act, the AI Act, the recurring French enthusiasm for the phrase "souveraineté numérique" in cabinet briefings: none of that is causally downstream of Minitel in any clean way, but the intellectual furniture is the same. A country that once built its own network and ran it for thirty years is not going to be constitutionally relaxed about Mountain View running the next one.

The terminals themselves are kitsch now. They turn up in flea markets in the 11th arrondissement for thirty euros, beige plastic with the slide-out keyboards that supposedly inspired Steve Jobs's first Macintosh. Most of them still work if you can find a phone line that will carry the V.23 signal, which is harder every year. The ghost is not the hardware. The ghost is the assumption, baked into a generation of French civil servants and now their successors, that the network is a thing the state can have an opinion about. The web, by contrast, has always insisted that it is weather. France was the last country to fully concede the point, and arguably has not conceded it yet.


Bike Shorts at Chanel

A year before the famous hip-hop show, Lagerfeld put bike shorts on a Chanel runway. The Spring-Summer 1991 ready-to-wear was presented in Paris in October 1990, with a beach theme. Cycling shorts turned up under sequined tops, under little structured jackets, under cropped pieces in the saturated colours Chanel had not really worn since Coco was alive. The leggings-and-leotard logic that aerobics had pushed into ordinary wardrobes by the late eighties got promoted, on that runway, into the most expensive ready-to-wear in Paris.

The cast read like an inventory of the moment. Claudia Schiffer, Karen Mulder, Linda Evangelista, Tatjana Patitz, Yasmeen Ghauri, Naomi Campbell. Tim Blanks, looking back at the show twenty-odd years later for Style.com, used it as the marker for when backstage stopped being a back room. He remembered four or five camera crews at the start of the season and four or five hundred by the end of it. Whatever you call the supermodel era, this collection sits inside the moment it became a media phenomenon rather than an industry one.

What the show actually did, on the level of clothes, was harder to read at the time. Chanel in 1990 still meant something specific to the women who bought it: a quilted bag, a chain belt, a tweed suit, a particular kind of older clientele the house had spent the previous decade trying not to lose while also trying not to ossify around. Lagerfeld's job, by then, had been to keep both audiences in the room. The beach collection was an attempt at the second part. Bike shorts under a tweed jacket are not a concession to an existing customer; they are a bet that there is another customer arriving.

The reference points were sport and sportswear, not couture. An American context kept showing through, the cyclist on Venice Beach, the aerobics studio, the pop video. Lagerfeld liked to say his life was based on change, also change of mind. What was right for the next ten minutes might not be right after that. The Spring 1991 show looks now like the moment that change-of-mind became a method rather than a quip, the moment where the house's commercial heritage started getting pushed through a filter from somewhere outside Paris and let back out as something else.

The Fall 1991 show, six months later, ran the same trick at a higher temperature, piled-on chains and the line about Christmas trees. That collection took the headlines, partly because the press had caught up with what was being attempted, partly because hip-hop codes inside Chanel were a more legible provocation than bike shorts inside Chanel. The beach show stayed quieter in the record, even though it was the one that established the frame.

Looking at the Getty stills, the thing that stands out is how unforced the styling reads. The supermodels are not trying to sell you the bike shorts. They are wearing them as if cycling shorts under a jacket were already an ordinary thing for a woman with money to wear in October 1990, which it was not. By the time the next decade arrived, the experiment had hardened into a costume cliché, leggings under everything, athleticwear codes on the high street. The thing the runway did first does not always survive into the version that becomes the rule, but it leaves the imprint.


Lytham St Annes, June 1957

Lytham St Annes is a quiet seaside town between Blackpool and the Ribble estuary. In June 1957 a machine the size of a delivery van was switched on inside a Post Office building there, and Ernest Marples, the postmaster general, pressed a button. ERNIE, the Electronic Random Number Indicator Equipment, generated the first nine-digit Premium Bond numbers in history. Two thousand numbers an hour, drawn from the thermal noise of neon gas tubes, fed to a teleprinter, matched against bond serials that were not yet on a computer because no computer existed at the right scale to hold them.
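The draw logic itself is simple to sketch: generate candidate numbers from an unpredictable source, keep only those matching a bond actually in circulation. A minimal Python sketch of that shape, with an illustrative serial space rather than the real register:

```python
import secrets

def ernie_draw(serials, n_candidates, space=1_000_000_000):
    """Generate candidate nine-digit numbers from an unpredictable
    source and keep only those matching a bond in circulation --
    the shape of ERNIE's monthly draw, not its electronics."""
    held = set(serials)
    winners = []
    for _ in range(n_candidates):
        candidate = secrets.randbelow(space)  # 0 .. space - 1
        if candidate in held:
            winners.append(candidate)
    return winners
```

In 1957 the unpredictability came from neon-tube thermal noise rather than an operating system's entropy pool, and the matching step ran against printed registers, because no computer at the right scale existed to hold them.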

ERNIE was built at the Post Office Research Station at Dollis Hill, in north London, by the same engineers who had built Colossus during the war. Tommy Flowers oversaw the project. Harry Fensom, who had worked under Flowers on Colossus, was chief engineer. Sidney Broadhurst led the build team. None of them could speak about Colossus, which remained classified for thirty more years, but the techniques transferred sideways into a different national project. The state had decided to encourage saving without raising taxes. Premium Bonds were the answer. The lottery needed numbers no human could fix, and the men who had broken the Lorenz cipher knew how to make them.

The Science Museum description puts it plainly: for many people, ERNIE was the first electronic brain they had ever heard of. Not the first they had used. The first they had heard of. The press anthropomorphised the machine immediately. Christmas cards arrived in Lytham St Annes addressed to ERNIE personally, and millions of people who had never used a typewriter, let alone a computer, took for granted that a steel cabinet in Lancashire was deciding their luck once a month.

The line of succession is long. ERNIE 2 arrived in 1973, sixty-five thousand numbers an hour. ERNIE 3 in 1988 ran at three hundred thousand, and produced in April 1994 the first Premium Bonds millionaire: a man from Surrey, ten thousand pounds invested, bond number 29JZ644125. ERNIE 4 in 2004 generated a million numbers an hour, weighed ten kilograms rather than fifteen hundred, and was small enough to retire to the National Museum of Computing at Bletchley Park when ERNIE 5 took over in 2019. ERNIE 5 is a chip the size of a grain of rice, built by a Geneva firm called ID Quantique. It uses the quantum behaviour of light rather than thermal noise. It produces nine million numbers in twelve minutes, replacing a machine that needed nine hours, replacing a machine that needed near enough three days for the first draw.

What persists across all five generations is the name. The technology is unrecognisable; the physical object has shrunk by something like a factor of a hundred thousand; the source of randomness has migrated from the thermal jitter of valves to the irreducible weirdness of photons. ERNIE is still ERNIE. NS&I uses the same nickname they used in 1957. The brand is older than nearly every computing system in continuous use anywhere on earth.

There is a hauntology in this, and it is the inverse of the usual one. The usual hauntological object is a thing whose function has died and whose body remains, leaving a husk. ERNIE is a name whose function has survived through five complete bodily reincarnations. The ghost is the handle, not the cabinet. The cabinet from 1957 sits in the Science Museum collection. ERNIE 4 sits at Bletchley Park, a few rooms from the Colossus rebuild, where it can keep its grandfather company. ERNIE 5 sits inside a server rack in Lancashire and is invisible at the scale of a glance.

At the start of every month, somewhere on that server, the descendant of a wartime code-breaking machine still picks the numbers. The teleprinter has been replaced, the neon tubes have been replaced, the engineers who built the original machine are long dead, and yet the press release that goes out from NS&I still credits the result to ERNIE's draw. Whatever is doing the work, the name does the explaining.


Tables, Not Tokens

SAP announced this morning that it has agreed to acquire Prior Labs, the Freiburg startup behind TabPFN, and will invest more than €1 billion over four years to scale it into what the press release calls a globally leading frontier AI lab in Europe. Terms of the deal itself were not disclosed. The acquisition is expected to close in the second or third quarter of 2026, pending regulatory approval. Prior Labs will keep operating as an independent unit under its three co-founders, Frank Hutter, Noah Hollmann, and Sauraj Gambhir, with Yann LeCun and Bernhard Schölkopf on its scientific advisory board.

The headline number is the easy part. What is interesting is what SAP is buying.

Tabular foundation models are not large language models. They are a different shape of pre-trained network, designed for the kind of structured data that lives in spreadsheets and database rows: customer churn predictions, credit scoring, supply-chain forecasts, the unglamorous numerical workloads that actually run an ERP system. TabPFN, the model series Prior Labs published in Nature, set the state of the art on tabular benchmarks across hundreds of independent academic studies. It has been downloaded over three million times and is open source. SAP started seeding this category itself with SAP-RPT-1, and the Prior Labs deal is the doubling-down.

This matters because almost every public conversation about frontier AI in 2026 still defaults to chat. Whether a model can write code, summarise a meeting, explain a research paper, draft an email. None of that has very much to do with the data SAP customers actually run. Predicting whether a particular invoice will be paid on time is a tabular problem, and an LLM is the wrong tool for it. TabPFN is the right one, and SAP now owns the lab.
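To make the contrast concrete, here is a deliberately tiny stand-in for that kind of workload: a one-nearest-neighbour rule over labelled invoice rows. This is not TabPFN, which learns a far richer prior over tables; the feature columns and figures below are invented for the example. The point is only that "will this invoice be paid on time" is a rows-and-columns problem, not a prompt:

```python
from math import dist

# Invented feature rows: (amount_eur, avg_days_late_history, tenure_years)
# Label: 1 = paid on time, 0 = paid late.
TRAIN = [
    ((1200.0, 0.0, 5.0), 1),
    ((300.0, 2.0, 3.0), 1),
    ((15000.0, 14.0, 0.5), 0),
    ((9800.0, 30.0, 1.0), 0),
]

def predict_on_time(row):
    """Label a new invoice by its nearest labelled neighbour in
    feature space -- the simplest possible tabular classifier."""
    features, label = min(TRAIN, key=lambda ex: dist(ex[0], row))
    return label

print(predict_on_time((350.0, 1.0, 4.0)))  # → 1 (small, rarely late)
```

A tabular foundation model replaces the crude distance rule with a network pre-trained across many tables, but the interface is the same: structured rows in, a prediction out, no chat anywhere.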

The other reading is geopolitical. SAP is the one European company that genuinely matters in enterprise software, and a German-headquartered frontier AI lab anchored in Freiburg is exactly the kind of thing the Cohere–Aleph Alpha merger was supposed to produce in a different architectural lane. It is not yet clear whether European AI sovereignty holds together as a strategy when it depends on private balance-sheet decisions, but the Walldorf cheque does buy a credible counterweight to the US labs in at least one part of the stack.

There is also the timing. SAP announced the Dremio acquisition on the same press-release run, an open-source data-lakehouse buy that fits the same agentic-AI distribution thesis Anthropic and OpenAI were both pricing in this weekend with their own PE-backed enterprise vehicles. The frontier-lab era is starting to look less like a small handful of California labs serving the world through APIs, and more like a set of vertically integrated stacks each glued to a particular distribution channel. SAP's channel happens to be every Fortune 500 finance department.

Whether tabular foundation models scale the way LLMs did is genuinely an open question. The Nature paper showed they work strikingly well at small to medium row counts; pushing them to millions of rows and real-time inference is what the €1 billion is meant to fund. If it does scale, the next decade of enterprise AI starts looking quite different from the chatbot-oriented one currently being marketed.


Embed, Not Subscribe

Anthropic is finalising a $1.5 billion joint venture with Blackstone, Hellman & Friedman, and Goldman Sachs to sell AI tools to the private-equity-backed companies those firms own. Anthropic, Blackstone, and H&F are each putting in roughly $300 million; Goldman is anchoring with about $150 million. The Wall Street Journal broke the story Sunday night and the announcement landed on Monday. The structure is unusual enough to be worth slowing down on. This is not a sales channel. It is a vehicle. Three of the largest pools of capital on Wall Street are co-investing with a frontier-model lab to embed Claude inside their own portfolios.

The thing to notice is not the dollar figure. It is the shape. Anthropic, four years into commercialising a frontier model, has decided that selling API tokens to enterprises is not the business it actually wants to be in. The business it wants to be in is the one Palantir has been quietly building for twenty years, sending engineers into the customer's building, sitting next to the people who actually do the work, writing code against the messy data the customer has, and producing something that runs in production rather than a slide deck about pilots.

OpenAI got there first, in form if not at this scale. Its Forward Deployed Engineering team, led by Colin Jarvis, has been hiring against the Palantir template for most of a year. The group is still small, on the order of dozens of engineers backed by a few hundred in customer success, and the public framing is "zero to one" work at Morgan Stanley, T-Mobile, Klarna, and a handful of other names. The internal target, leaked to The Information last autumn, was fifteen billion dollars in enterprise revenue by the end of 2026, with enterprise share of total revenue moving from forty to fifty percent. Anthropic is doing the same thing with a different financial instrument. Rather than hire several hundred forward-deployed engineers itself and try to build the consulting muscle in-house, it is splitting the joint venture with the people who already own the customers.

This is interesting because it admits a thing the AI industry has not really wanted to admit out loud, which is that the hardest part of enterprise AI is not the model. The hardest part is everything around the model. Data hygiene, evals, guardrails, permissioning, the institutional politics of taking work away from a team that has been doing it for fifteen years and giving it to a system the executive cannot fully explain. The MIT study that has been ricocheting around boardrooms all year, the one that found ninety-five percent of generative-AI pilots fail to move into production, was a market signal. Foundation-model access is a commodity. The integration is the moat.

Once you accept that, the JV looks less like a deal and more like an asset class being constructed in real time. Blackstone and H&F own thousands of companies between them, across healthcare, industrials, financial services, and software. Each of those companies has a backlog of process work that someone has been promising to automate for a decade. Embedding a Claude team inside the portfolio means the AI lab gets a captive distribution channel, the PE firms get an operating-leverage story they can tell limited partners, and Goldman gets to be the banker for whatever rolls up out of the resulting consolidations. Everybody is paid twice.

The thing I keep coming back to, though, is what this means for the model itself. If the most lucrative thing Anthropic can do with Claude is to put humans next to it inside other companies, then the model is no longer the product. The model is the pretext for the engagement. Five years ago that would have sounded like a failure mode. Today it sounds like a strategy deck. The frontier labs are quietly turning into consultancies that happen to own the LLM, and the consultancies that do not own one will spend the rest of the decade trying to buy access to the ones that do.

Whether this is good for anyone outside the deal is a separate question. The PE-portfolio companies that get the embed will move faster than their competitors. The ones that do not will keep paying for API tokens and wonder why their pilots stall in the same place everyone else's do. The forward-deployed engineer, a job title most closely associated with Palantir until recently, will become one of the most sought-after roles in the industry. And the question of who actually owns a foundation-model lab, whether it is a public utility, a product, or a private weapon for a small number of capital allocators, will get answered in the most boring possible way, by the legal structure of the JV.

Nine Billion Faxes a Year

An estimated nine billion faxes still cross the wire every year, mostly in hospitals, law firms, pharmacies, and the kind of government office where the carpet has a pattern that pre-dates the euro. The machines themselves are mostly gone. What's left is a software emulation of the old fax standard, Group 3, running on top of a VoIP trunk, pretending to be a beige plastic box with a thermal-paper roll. The protocol survives. The object it once required has been quietly discarded, then re-summoned in software, because the institutions that depend on it never actually wanted the object. They wanted the legal status the object happened to confer.

This is the part that took me a while to understand. People usually frame the persistence of fax as institutional inertia, old doctors who can't be retrained, old lawyers who won't give up their dedicated line. The inertia is real, but it's not the mechanism. The mechanism is that a faxed document accrued, over about thirty years, a body of case law and regulation treating it as presumptively delivered, presumptively unaltered, and presumptively timely. The transmission confirmation page, with its timestamp and page count, became a kind of evidentiary atom. Courts accepted it. Regulators accepted it. HIPAA explicitly permits fax as an acceptable channel for transmitting protected health information. Email never accreted the same body of presumptions, partly because it lacks the same chain-of-custody artefact and partly because the regulations were written before anybody had thought hard about email at all.

Once that asymmetry hardened, every workflow built downstream of it inherited the dependency. Hospitals could not stop faxing without rewriting their referral procedures, their pharmacy authorisations, their record-release policies, and their malpractice posture all at once. The same is true for law firms filing motions at the close of business and for banks running loan documentation against regulatory clocks. None of those institutions love fax. They love the audit trail it produces and the legal precedent that audit trail invokes, and the cheapest way to keep the audit trail is to keep faxing.

So when the hardware became uneconomic, the protocol did not die with it. It moved into T.38, a real-time fax-over-IP standard that lets a softswitch carry the fax session across packet networks. From the application's point of view, nothing has changed. From the network's point of view, there is no longer a phone line. The dedicated copper pair the fax was always sold as needing has been replaced by SIP trunks running over ordinary internet, which is precisely the medium fax was supposed to be defending against. The compliance argument has quietly inverted. The transmission is now indistinguishable from email at the transport layer. What persists is the paperwork that says it isn't.
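To make the emulation concrete, here is a rough sketch, in Python, of the kind of SDP body a SIP endpoint offers when re-inviting an established voice call into a T.38 fax session. The `m=image ... udptl t38` media line and the `T38*` attributes are the standard's own vocabulary; the host, port, and session identifiers below are made-up illustration values, and a real softswitch would negotiate them rather than hard-code them.

```python
# Illustrative only: builds the SDP offer that swaps a voice stream
# for a T.38 fax stream carried over UDPTL. Origin ids, host, and
# port are placeholder values, not taken from any real deployment.

def t38_reinvite_sdp(host: str, port: int) -> str:
    """Return an SDP body replacing audio with a T.38 image stream."""
    lines = [
        "v=0",                                 # SDP version
        f"o=fax 1234 1235 IN IP4 {host}",      # origin (illustrative ids)
        "s=t38-session",                       # session name
        f"c=IN IP4 {host}",                    # connection address
        "t=0 0",                               # unbounded session time
        f"m=image {port} udptl t38",           # the fax 'image' media line
        "a=T38FaxVersion:0",                   # baseline Group 3 emulation
        "a=T38MaxBitRate:14400",               # classic V.17 top speed
        "a=T38FaxRateManagement:transferredTCF",
        "a=T38FaxUdpEC:t38UDPRedundancy",      # redundancy-based error correction
    ]
    return "\r\n".join(lines) + "\r\n"

print(t38_reinvite_sdp("192.0.2.10", 49170))
```

The telling detail is the media type: the fax session is declared as `image`, a document transfer, even though what actually crosses the network is ordinary UDP on an ordinary internet link, exactly the substrate the dedicated phone line was supposedly protecting against.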

There is a particular kind of haunting in this. The persistence of fax is not the persistence of an old machine. It's the persistence of a legal fiction surviving the substrate it was written about. It's similar to the way the AT command set still answers inside a 5G modem, except that AT survives because nobody could be bothered to replace something that worked, while fax survives because somebody would have had to rewrite the law. The first is software inertia. The second is jurisprudential inertia. They look the same from the outside. Inside, they are very different ghosts.

A nurse in 2026 sending lab results to a referring physician is, in a real sense, operating a piece of 1980s telecoms ritual. The beige box is gone. The dial tone is simulated. The phone line is a software illusion. But the moment of transmission, with its confirmation page, its timestamp and its page count, is still treated as a kind of legally privileged event, distinct in character from the email she might have sent instead. The ritual was always the point. The hardware was a costume the ritual happened to be wearing.