
Plutonic Rainbows

Meta Bets the Headcount on AI

Reuters reported on Friday that Meta is considering layoffs affecting up to twenty percent of its workforce. That is roughly fifteen thousand people. Meta's stock rose three percent on the following Monday.

The math driving this is not subtle. Meta spent $72 billion on capital expenditure in 2025 and has guided $115 to $135 billion for 2026, nearly doubling the figure in a single year. Reality Labs burned through $19.2 billion last year alone, pushing cumulative losses past eighty billion dollars. Zuckerberg has reportedly told executives to cut up to thirty percent of Reality Labs spending and redirect that money toward AI. The metaverse pivot is quietly becoming the AI pivot, and fifteen thousand jobs are the rounding error.

Wall Street loves it. Jefferies slapped a buy rating on the stock. Bank of America projected up to $8 billion in annualized savings. JPMorgan estimated $6 billion. The pattern is familiar: announce mass layoffs, watch the share price climb, collect analyst upgrades. Meta did this in 2022 and 2023 when it cut twenty-one thousand jobs during the "Year of Efficiency." The stock returned 194 percent the following year.

This time the justification has shifted. The 2022 cuts were about unwinding pandemic over-hiring. The 2026 cuts are about funding a bet. Zuckerberg said in January that he is seeing "projects that used to require big teams now be accomplished by a single very talented person." That framing does a lot of work. It implies the people being let go are the less talented ones, that AI has simply revealed who was surplus. Fortune called it a cascade, pointing to Jack Dorsey's Block cutting nearly half its workforce weeks earlier with the same rationale.

I keep returning to the gap between the narrative and the accounting. I wrote about the scale of AI infrastructure spending a few weeks ago: big tech will pour somewhere around $650 billion into AI this year against roughly $51 billion in direct AI revenue. Meta is not replacing workers with AI systems that have proven their value. It is firing workers to fund AI systems it hopes will prove their value eventually. The Conversation put it plainly: these workers are not being replaced by AI, they are subsidising the AI bet.

And then there is the junior hiring collapse. Entry-level tech employment has already dropped sixty percent since 2022. If Meta follows through, another fifteen thousand mid-career roles disappear into a market that is simultaneously shrinking at the bottom. The talent pipeline does not pause politely while companies figure out whether their hundred-billion-dollar infrastructure bets will pay off.

Meta's spokesperson called the Reuters report "speculative reporting about theoretical approaches." Maybe. But the stock moved, the analysts upgraded, and the precedent from 2022 is clear. The market has told Meta exactly what it wants to hear.


The February Before Everything Changed

Patrick Demarchelier shot this against nothing but sand and sky. No props, no elaborate set. Just Cindy Crawford in head-to-toe pink Oscar de la Renta, pulling a satin jacket open to show its chartreuse lining, grinning like she already knew what the next decade had in store.

February 1990. She was twenty-three.

One month before this cover reached newsstands, Peter Lindbergh had gathered Crawford, Naomi Campbell, Linda Evangelista, Tatjana Patitz, and Christy Turlington in New York's Meatpacking District for a group portrait that British Vogue ran in January. That single black-and-white frame is the image most people point to when they talk about the birth of the supermodel era. By the time Demarchelier's pink-drenched cover appeared, the Revlon contract was signed, MTV's House of Style was already on the air, and George Michael was months away from calling five women about a music video that would make fashion history all over again.

But look at that grin and all the pink satin against the empty sky for a second. The palette alone tells you where fashion stood in the transitional window between the shoulder-padded eighties and the stripped-back minimalism that would dominate by mid-decade. Those earrings are enormous, coral and gold and completely unapologetic. The satin jacket screams occasion wear but she's wearing it on a beach with a casual knit top underneath. It shouldn't work. It works.

Demarchelier was known for exactly this kind of frame. Natural light, minimal staging, letting the subject carry the image. He'd shoot three to twenty rolls of film per setup, waiting for the moment when the performance stopped and the person started. Crawford gave him that grin and he had his cover.

The thing I keep coming back to is the ease. Not performative confidence, not the rehearsed poise you see in most editorial work. She's standing on a beach in pastel satin pulling her jacket open with both hands and she looks like she's having the best afternoon of her life. The entire industry was pivoting around her and she's just enjoying it.

That kind of ease doesn't photograph easily. Demarchelier knew it when he saw it.

The Flagship Tax Keeps Shrinking

OpenAI released GPT-5.4 mini and nano today. The mini sits at $0.75 per million input tokens, the nano at $0.20. The full GPT-5.4, announced ten days ago, costs substantially more for both input and output.

The interesting number is 54.4%. That's GPT-5.4 mini on SWE-Bench Pro, the benchmark that tests professional-grade coding on real repositories. The full GPT-5.4 scores 57.7%. Just 3.3 percentage points separate the cheap model from the expensive one on the hardest coding evaluation OpenAI publishes. The context window drops from 1.05 million tokens to 400,000. Still enormous.

OpenAI frames this as a multi-model architecture play: the flagship plans, the mini executes. That's a reasonable pitch for agentic workflows where you're orchestrating dozens of parallel calls and per-token cost actually matters. GitHub Copilot already ships it at a 0.33x premium request multiplier, which tells you where the volume is heading.

The pattern repeats across every model family now. The mid-tier eats the flagship, the mini eats the mid-tier, and within six months the nano handles tasks that needed the flagship a year ago. The real product isn't any single model. It's the pricing curve.


Intelligence by the Kilowatt-Hour

Nick Turley, OpenAI's head of ChatGPT, went on the Bg2 Pod on Sunday and said something that should have been obvious for months: unlimited AI plans probably can't survive. His exact framing was that offering unlimited prompts is "like having an unlimited electricity plan. It just doesn't make sense."

He's right. And the interesting part isn't the admission itself but how long it took to arrive.

The $200/month Pro tier, the one with unlimited prompts, has been acknowledged as unprofitable by Sam Altman himself. OpenAI's inference costs hit $8.4 billion in 2025 and are projected to reach $14.1 billion this year. The company expects to lose $14 billion in 2026 while simultaneously seeking $100 billion in new funding. Those numbers don't describe a business that can afford to let power users hammer GPT-5.4 all day for a flat fee.

Turley described the subscription model as "accidental," which is a revealing word. ChatGPT launched in November 2022 as a demo intended to run for a month. Subscriptions weren't a monetisation strategy; they were a capacity management tool bolted on after the thing went viral. Four years later, that temporary fix is still the core revenue model for a company burning through cash at a rate that makes WeWork look disciplined.

Altman tipped the direction at BlackRock's Infrastructure Summit on March 11 when he said OpenAI sees a future where "intelligence is a utility, like electricity or water, and people buy it from us on a meter." The electricity metaphor keeps appearing. I think they genuinely believe it. Metered intelligence, priced per token, scaled by consumption.

The problem with the utility analogy is that utilities are regulated, commoditised, and operate on thin margins. Nobody gets excited about their electricity provider. If OpenAI wants to be a utility, it needs to accept utility economics, and utility economics don't support the $300 billion valuation or the $200 billion revenue target for 2030.

Meanwhile, 1.5 million users cancelled their subscriptions in March alone. ChatGPT's market share has reportedly slipped from around 60% in early 2025 to under 45% now. Competitors aren't standing still. Claude, Gemini, and a growing constellation of open-weight models are absorbing the users who feel nickel-and-dimed. OpenAI keeps shipping but the goodwill account is overdrawn.

The shift from subscription to metered pricing would be the most honest thing OpenAI has done in years. Flat-rate unlimited access to a resource that costs billions to produce was always a lie, just one that users were happy to believe for $200 a month.


T.E.D. Klein and the Perfection of Disappearing

T.E.D. Klein published two books in the 1980s and then, for all practical purposes, stopped. The Ceremonies arrived in 1984. Dark Gods followed in 1985. A thin collection of shorter pieces, Reassuring Tales, surfaced in 2006 in a limited run of 600 copies that sold out immediately. And that, give or take an expanded reissue, is the complete output of a writer Stephen King once called the most exciting voice in horror fiction.

Four novellas. That's what Dark Gods contains. "Children of the Kingdom," "Petey," "Black Man with a Horn," and "Nadelman's God." The last of these won the World Fantasy Award. The collection has been out of print more often than not, commanding serious prices on the secondhand market, and a 2024 Chiroptera Press edition with a new introduction by S.T. Joshi confirms what collectors already knew: this book refuses to go away.

I wrote briefly about Dark Gods a decade ago and didn't say nearly enough. The collection deserves more than a paragraph and a quote from Joshi, however accurate that quote remains. Klein's achievement towers over his more prolific contemporaries not despite the small body of work but, I think, because of it. Every sentence in Dark Gods earns its place. There's no filler. No coasting.

What separates Klein from most horror writers is where he finds the dread. His settings are aggressively mundane: a nursing home during the 1977 New York blackout, an airport departure lounge, a bungalow colony in the rural northeast. The protagonists are educated, self-absorbed men who think too much and notice too little. When the supernatural arrives, it doesn't crash through windows. It accumulates in the periphery, in details that read as benign on first pass and become unbearable in retrospect. Simon Strantzas identified this technique precisely: individual phrases that seem harmless in isolation weave into a horrible tapestry by each tale's climax. That skill separates the experts from the pretenders, and Klein is an expert.

"Black Man with a Horn" is the one that gets the most critical attention, and rightly so. The narrator is modeled on Frank Belknap Long, a real horror writer who knew Lovecraft personally and spent decades working in his shadow. Klein uses this to do something extraordinary: he writes a Cthulhu Mythos story that is simultaneously a meditation on what it means to write Lovecraftian fiction at all. The cosmic horror is genuine, but so is the inquiry into Lovecraft's racism, the narrator's own prejudices, the way inherited literary traditions carry inherited blind spots. Reactor's analysis remains the best piece written about this novella, and it's worth reading alongside the story itself.

Klein's acknowledged masters are Arthur Machen, M.R. James, Algernon Blackwood, Walter de la Mare. The lineage shows. His horror is atmospheric and restrained, closer to Robert Aickman's unsettling ambiguity than to the explicit violence that dominated 1980s horror publishing. Where Aickman leaves you uncertain about what happened, Klein leaves you certain that something terrible happened and uncertain about its full scope. The effect is different but the discipline is the same: withholding is a form of generosity toward the reader's imagination.

He edited Rod Serling's Twilight Zone Magazine for its first 37 issues, discovering Dan Simmons and Lois McMaster Bujold along the way. He resigned specifically to write a second novel, Nighttown, described as a paranoid horror novel set in New York City. Viking announced it for 1989. Then 1995. In a 2008 Cemetery Dance interview, Klein admitted he'd sold the book without knowing how to execute it. In 2016, following his retirement from Condé Nast, there were reports he'd finally finish it. As of 2026, it hasn't appeared.

I find his silence more interesting than frustrating at this point. There's a version of Klein's career where Nighttown arrives in 1989, he publishes steadily through the nineties, and Dark Gods becomes one strong collection among several. In the version we actually got, four novellas carry the entire weight. They have to be extraordinary, and they are. The scarcity creates a pressure that makes every re-reading feel loaded with consequence.

Thomas Ligotti is the writer Klein gets compared to most often, and the comparison is instructive for how little they share beyond seriousness. Ligotti is abstract, nihilistic, reaching for the philosophical void. Klein is grounded in specific places and social textures. You remember the nursing home in "Children of the Kingdom" as a physical space: the smell, the fluorescent lighting, the particular embarrassment of being the youngest person in the room. Ligotti would never write that scene. Klein's horror lives in the ordinary, in airport lounges and suburban kitchens, and that's exactly why it follows you home.

Alan Moore's Providence attempted something adjacent a few decades later, reinventing Lovecraft through literary self-awareness and graphic novel form. Moore succeeded on his own terms, but Klein got there first in prose, with less machinery and more precision.

Chiroptera Press's 2024 edition runs to 312 pages with new critical apparatus: Joshi's introduction, Dejan Ognjanovic's essay, Paul Romano's cover art. It's the kind of treatment usually reserved for writers with ten times the bibliography. Klein's middle name is Eibon, a deliberate Lovecraftian reference, and the care lavished on this edition suggests the mythology is working in both directions now. The books create the legend. The legend preserves the books.


Nowhere to Hide

Azzedine Alaïa showed his Fall/Winter 1989 collection in November, on his own schedule, inside a half-converted glass-roofed space in Le Marais that reportedly leaked when it rained. The official Paris Fashion Week calendar meant nothing to him. It hadn't since 1988, when he started presenting whenever the work was finished rather than whenever the industry demanded.

The timing was extraordinary. The Berlin Wall came down on November 9th that year. Mugler was sending models out in bodywork-bustiers shaped like 1950s Buicks. Montana had just been tapped for Lanvin couture. The decade's theatricality was reaching terminal velocity, everything louder, bigger, more conceptual.

Alaïa's response was a room full of black.

Not black as absence. Black as argument. He'd said that limiting his palette left nowhere to hide, that stripping away colour forced the purest expression of structure. The collection delivered on that premise with sculptural precision that made everything else feel like costume. Cropped jackets in black lamb suede. Thick velvet knit nearly an inch deep. Varnished leather with cutout guipure lace motifs. Each piece engineered so that the seams and zips weren't just functional but structural, spiralling around the body in ways that simultaneously revealed and supported it.

Naomi Campbell walked. So did Yasmin Le Bon, Elle Macpherson, Nadège du Bospertus. They came because they wanted to, not because of fees. Campbell had known Alaïa since she was sixteen and called him papa. The relationships were real, which made the shows feel different from everything else happening in Paris that season.

He'd trained as a sculptor in Tunis before he ever touched fabric, and it showed in ways the King of Cling nickname never captured. The body-consciousness wasn't about sex appeal, or not only. It was about treating a garment as a three-dimensional object with its own internal logic. Every bandage strip cut to a specific width. Every seam placed to map the body underneath rather than impose a silhouette over it.

Thirty-seven years on, most of what hit the Paris runways in November 1989 looks dated. The Mugler Buick collection has become a curiosity. Montana's Lanvin tenure is a footnote. Alaïa's black suede jacket still looks like something you'd want to wear tomorrow.

Sous Le Soleil Exactement

Jacques Olivar shot Heather Stewart-Whyte for Marie Claire Bis in September 1991, and what emerged reads closer to short film than fashion editorial. The title borrows from the 1967 Serge Gainsbourg song written for Anna Karina, face of French New Wave cinema. That single reference tells you where Olivar's head was. Not selling clothes. Telling a story in which clothes happen to appear.

Olivar came to fashion sideways. Born in Casablanca in 1941, he trained as an airline pilot, then spent years in advertising photography before switching to editorial work in 1987. He arrived without the usual apprenticeship debts, and it shows. His images carry a narrative weight that pure fashion photographers rarely bother with. Models said few other photographers made them look so beautiful. That's a specific kind of compliment. Not flattering. Seeing.

The shoot runs through six looks across what feels like a sun-scorched Mediterranean village. Stewart-Whyte opens in a tied Cacharel blouse against crumbling stone, one hand raised to her mouth, then shifts into a cropped shirt and dark shorts on a dusty road that could have been lifted from a Visconti film. The textures multiply from there: Colonna's sheer mousseline catching the coastal light, Chantal Thomass lace in devastating close-up, a deep neckline framed by peeling revolutionary posters, and Michel Klein's polka-dot skirt photographed mid-stride. Claire Dupont styled the whole editorial, and the designer range is deliberate: romantic Cacharel beside Colonna's raw deconstruction beside Dolce & Gabbana's Sicilian edge. The full spectrum of where fashion was heading, compressed into twelve pages.

Stewart-Whyte was twenty-two and operating at altitude. That same month she had the September cover of Vogue Paris, shot by Dominique Issermann. Prada and Gucci campaigns were running simultaneously. She walked for Versace, Armani, Saint Laurent. The farm girl from East Sussex who'd answered an Elite Premier recruitment ad at seventeen had become one of the most-booked faces in Europe.

What holds these images together is her particular quality of stillness. She doesn't perform for the camera. She inhabits a space, and the camera finds her there. Olivar understood that distinction. So did she.

A Million Tokens, No Asterisk

Anthropic quietly dropped the long-context pricing premium today. When Opus 4.6 launched five weeks ago, the million-token context window carried a surcharge — anything over 200K tokens was billed at double the input rate and 1.5x on output. That caveat is gone. A full million-token prompt to Opus 4.6 now costs exactly what a 50K prompt costs per token: $5 input, $25 output per million. Sonnet 4.6 follows the same pattern at $3/$15.
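The flat pricing makes the arithmetic trivial, which is part of the point. A minimal sketch of the math, using only the per-million rates quoted above (the `RATES` table and `call_cost` helper are my own illustration, not anything from Anthropic's SDK):

```python
# Back-of-envelope cost for a single request at the flat per-token rates
# quoted above, with no long-context surcharge over 200K tokens.

RATES = {
    "opus-4.6": {"input": 5.00, "output": 25.00},    # USD per million tokens
    "sonnet-4.6": {"input": 3.00, "output": 15.00},
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request at flat per-token pricing."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# A full million-token prompt to Opus 4.6 with a 4K-token response:
print(round(call_cost("opus-4.6", 1_000_000, 4_000), 2))  # 5.1
```

Five dollars and change for a prompt that holds an entire mid-size codebase. Under the old surcharge, the same request would have billed the 800K tokens above the threshold at double rate.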

The beta header is gone too. If your code was setting anthropic-beta: long-context on every request, it still works — the API just ignores it now. No migration, no breaking change. That's the kind of rollout I wish happened more often.

A million tokens is roughly 750,000 words. Thirty thousand lines of code. An entire mid-size codebase with room to spare. For anyone using Claude Code on Max or Team plans, this is where it gets tangible — sessions that previously hit compaction walls after an hour of deep work can now hold the full conversation history. I've been running Opus 4.6 with the extended context since February and the difference isn't subtle. Fewer moments where the model forgets a file you discussed twenty minutes ago. Fewer re-explanations.

The less obvious addition is context compaction. When conversations approach the token ceiling, the API automatically compresses earlier turns into a summary and continues from there. It's lossy — you lose verbatim recall of compressed sections — but it means long-running agentic workflows don't just crash into a wall. They degrade gracefully instead of failing. For coding agents that accumulate tool calls, observations, and intermediate reasoning over hours of work, this matters more than the raw number.
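The mechanism is easy to picture client-side. A minimal sketch of the compaction idea under stated assumptions: `count_tokens` and `summarize` here are hypothetical stand-ins (a real implementation would use a tokenizer and a model call), and the thresholds are invented for illustration:

```python
# Sketch of context compaction: when the running transcript nears a token
# budget, replace the oldest turns with a lossy summary and continue.

TOKEN_BUDGET = 1_000_000
COMPACT_AT = 0.8  # start compacting at 80% of the window

def count_tokens(turns):
    # stand-in heuristic (~4 chars per token); real code would tokenize
    return sum(len(t) // 4 for t in turns)

def summarize(turns):
    # stand-in; real code would ask the model for a summary of these turns
    return f"[summary of {len(turns)} earlier turns]"

def maybe_compact(turns):
    """Replace the oldest half of the transcript with a summary once
    the transcript crosses the compaction threshold."""
    if count_tokens(turns) < TOKEN_BUDGET * COMPACT_AT:
        return turns
    cut = len(turns) // 2
    return [summarize(turns[:cut])] + turns[cut:]
```

The trade-off is exactly the one described above: verbatim recall of the compressed half is gone, but the session keeps running instead of hitting a wall.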

I should be honest about the limitations. Research on retrieval accuracy at these context lengths consistently shows a U-shaped curve — the model handles information at the beginning and end of the window reliably, but accuracy dips in the middle. Effective capacity is probably closer to 600-700K tokens of dependable recall. The window is a million tokens. The model's attention isn't uniformly distributed across all of them.

However. The competitive framing matters here. GPT-5 tops out at 400K tokens. Gemini offers larger windows but at significantly higher per-token cost for comparable output quality. Anthropic removing the premium essentially says: this is standard now, not a luxury feature. That's the right move. Context shouldn't be metered like roaming charges.

The media limit bumped to 600 images or PDF pages per request, up from 100. I haven't stress-tested that yet, but for anyone doing document analysis or legal review workflows, six hundred pages in a single pass changes the architecture of those pipelines entirely.


Le Touquet in Monochrome

Peter Lindbergh spent much of 1986 on the beaches of Le Touquet with Azzedine Alaïa, shooting what would become some of the most enduring fashion photographs of the decade. The pairing made a strange kind of sense. Lindbergh — six feet tall, German, obsessed with cinema — and Alaïa — five-two, Tunisian-born, obsessed with the body beneath the fabric. They reportedly worked together so naturally that they barely needed to speak. Lindbergh's phrase for it was blunt: "hand in glove."

The Le Touquet sessions produced a body of work that Taschen eventually published in 2021, alongside an exhibition at the Fondation Azzedine Alaïa in Paris. Linda Spierings featured heavily. So did Tatjana Patitz, who was twenty years old and already possessed of the kind of face that made photographers forget their shot lists.

There's a particular frame from that Le Touquet session that I keep returning to. Patitz in a dark fur coat pulled up around her head like a hood, leather gloves held loosely against her chest, eyes locked on the lens. No backdrop except overcast sky. No props. The coat becomes architecture. The gloves become punctuation. Patitz just stands there and lets the whole composition resolve around her face.

What strikes me about it — and about Lindbergh's work generally from this period — is how little it relies on beauty in the conventional fashion-photography sense. He wasn't lighting Patitz to look flawless. He was lighting her to look present. The grain is visible. The tonal range is compressed into a narrow band of silver and charcoal. There's no retouching softness, no diffusion. Just skin and fur and the flat northern light of the Pas-de-Calais coast.

Alaïa understood something similar about clothing. His designs weren't meant to distract from the wearer — they were meant to disappear into the wearer's silhouette. A black coat on a grey beach in a black-and-white photograph. Three layers of reduction, each one stripping away something unnecessary until what remains is a woman looking directly at you from forty years ago with an expression that hasn't dated at all.

Lindbergh, Alaïa, and Patitz are all gone now. Patitz died in January 2023, at fifty-six. The Fondation keeps Alaïa's archive on Rue de la Verrerie. Lindbergh's son Benjamin curated the 2021 exhibition. The work outlasts everyone involved in making it, which is the only kind of permanence fashion ever really gets.

The Coordination Tax Nobody Budgeted For

The math is simple and merciless. If each agent in a pipeline performs at 95% accuracy — generous, honestly — a ten-step chain succeeds about 60% of the time. That's the textbook version. The production version is worse, because errors don't merely accumulate. They cascade destructively. An agent that misinterprets its task at step three feeds corrupted context to step four, which amplifies the distortion at step five, and by step eight you're debugging entropy.
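The textbook version of that math takes three lines to verify, under the (optimistic) assumption that step failures are independent:

```python
# Per-step reliability p across an n-step chain, assuming independent
# failures. Cascading errors, as described above, make reality worse.

def chain_success(p: float, n: int) -> float:
    return p ** n

print(f"{chain_success(0.95, 10):.1%}")  # 59.9%
print(f"{chain_success(0.95, 20):.1%}")  # 35.8%
print(f"{chain_success(0.99, 10):.1%}")  # even 99% per step only yields 90.4%
```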

This is the coordination tax. Everyone building multi-agent systems pays it, and almost nobody budgets for it.

Anthropic published a detailed account of building their own multi-agent research system earlier this year, and the failure modes they describe are instructive. Their agents spawned fifty subagents for simple queries. They scoured the web endlessly for nonexistent sources. They duplicated each other's work. Token consumption hit 15x normal chat interactions — not because the system was doing fifteen times more useful work, but because coordination overhead grows faster than capability.

The numbers from production deployments are stark. Between 41% and 87% of multi-agent LLM systems fail in production, depending on whose survey you trust. Nearly 80% of those failures stem from specification ambiguity and coordination breakdowns. Not infrastructure. Not hallucination. Organizational problems — who does what, who talks to whom, who has authority to override.

I've written before about the orchestra-without-a-conductor problem in enterprise multi-agent deployments. The metaphor holds up uncomfortably well. Individual agents can be brilliant. The ensemble performance still falls apart without clear governance.

The framework landscape reflects this confusion. LangGraph offers deterministic graph execution but can't prevent runaway loops — one engineer burned four dollars in API costs on eleven revision cycles before adding a manual counter. CrewAI hits a ceiling the moment you need anything beyond straightforward sequential handoffs. AutoGen's auto speaker selection makes arbitrary decisions about which agent acts next, sometimes skipping critical steps entirely. These aren't bugs. They're design consequences of a problem space nobody has cleanly solved.

A growing body of research suggests the answer might be simpler than we want it to be. As frontier models improve at long-context reasoning and tool use, the gap between single-agent and multi-agent performance narrows. One study found that a hybrid approach — try single-agent first, escalate to multi-agent only when needed — improved accuracy while cutting costs by up to 20%. Deloitte's 2026 survey is blunter: over 40% of agentic AI projects could be cancelled by 2027.
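The hybrid approach is simple enough to express in a few lines. A sketch of the pattern, not any particular paper's implementation: `run_single_agent`, `run_multi_agent`, and the `looks_complete` validator are all hypothetical stand-ins for your own components.

```python
# Hybrid escalation: attempt the task with a single agent first, and pay
# the multi-agent coordination tax only when a cheap validation check fails.

def looks_complete(result: str) -> bool:
    # stand-in validator; real code might check a schema, run tests,
    # or ask a grader model
    return bool(result) and "TODO" not in result

def solve(task, run_single_agent, run_multi_agent):
    result = run_single_agent(task)
    if looks_complete(result):
        return result, "single"            # cheap path succeeded
    return run_multi_agent(task), "multi"  # escalate only when needed
```

The design choice doing the work is the validator: if you can check completeness cheaply, most tasks never touch the expensive pipeline, which is where the reported cost savings come from.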

I keep returning to a line from Anthropic's engineering post: "agent-tool interfaces are as critical as human-computer interfaces." The hard part of multi-agent orchestration isn't writing agents. It's writing the contracts between them — the handoff protocols, the error boundaries, the authority hierarchies. Subagent architecture is contract law, not software engineering.

The compounding error math doesn't care how clever your agents are individually. It only cares how well they coordinate. And coordination, it turns out, is exactly the thing large language models are worst at faking.
