Plutonic Rainbows

The Cover That Didn't Need a Headline

ELLE gave her the cover in October 1992 and the leopard print did the rest. Basia Milewicz had that particular quality where the camera couldn't decide whether to focus on the bone structure or the attitude, so it just surrendered to both.

Claude on Telegram: They Fixed It

Every issue I complained about yesterday is gone. Anthropic merged a resilience rollup overnight that addressed the entire complaint cluster: silent polling death, zombie bot processes blocking reconnection, and 409 conflicts on session restart. Messages arrive instantly now. The bot survives terminal closes. Permission prompts relay to my phone.

I've been using it all day without a single dropped message. Running skills, processing photos, deploying posts, all from Telegram while away from my desk. The v2.1.81 update also re-clones the plugin on every load, so the fixes landed automatically without reinstalling anything. Twenty-four hours from "brilliant when it works" to just brilliant.

Carla Bruni at the Ritz Pool

On January 20, 1996, Gianni Versace presented his Atelier Spring/Summer collection at the Hôtel Ritz in Paris. Not in a ballroom. Not in a convention hall. In the swimming pool. He'd been showing couture there since 1990, boarding over the water and transforming the space into something that felt less like a fashion venue and more like a private theatre built for three hundred and fifty people who already understood the plot.

That season he carpeted the pool in black bordered with acidic yellow. Models descended curling double staircases at the back of the runway, sometimes two at a time in mirrored formation. The architecture of the presentation mattered to Versace as much as the clothes. He understood that couture isn't just about construction — it's about the moment the construction becomes visible.

Carla Bruni in a yellow beaded gown, hand at her hip, the black and white checkerboard floor falling away behind her — and you can see immediately why Versace kept casting her. There's a stillness in Bruni's runway work that most models don't attempt. Where others project energy outward, she holds it close. The dress is spectacular — hand-beaded embellishment catching light from every angle, the kind of surface detail that only makes sense at couture scale — but Bruni wears it like she's thinking about something else entirely. That tension between excess and restraint is pure Versace. He built his house on it.

By 1996 Bruni had been modelling for nearly a decade. She'd walked for Chanel, Dior, Givenchy, Yves Saint Laurent, Valentino, Lacroix — everyone who mattered. She'd appeared on over two hundred and fifty magazine covers. She was among the twenty highest-paid models in the world. But her relationship with Versace was different. She was one of his key muses, present at nearly every show from 1990 onward, and the collaboration defined something specific about the era: the moment when supermodels became co-authors of the work rather than vehicles for it.

The Ritz pool itself tells part of this story. Built in 1987 during a two hundred and fifty million dollar renovation, its design drew from ancient Greek and Roman baths — which made it an almost too-perfect match for Versace's classical obsessions. The Medusa logo, his most enduring symbol, came from that same well of mythology. He chose it because, as he explained, anyone who looked at Medusa had no choice but to fall in love. Whether he was talking about the logo or about the women wearing it was never entirely clear.

The Spring/Summer 1996 collection marked a subtle shift. The heavy Baroque maximalism of earlier seasons was giving way to something more nuanced. The metallic fabrics were still there — Versace had invented his own chainmail textile, Oroton, back in 1982 — but the tones were quieter, the classical references more direct. Greek vestals instead of Byzantine emperors. The embellishment remained extraordinary but the silhouettes were leaner, more assured. This was a designer approaching the refinement phase of his vision, stripping away everything that wasn't load-bearing.

Naomi Campbell walked the same show in a black lace and zebra-striped dress with a feather in her hair. Karen Mulder was there. Helena Christensen. Valeria Mazza. Stella Tennant. Sting and Trudie Styler watched from the front row. The original supermodel generation was cresting — still dominant, still defining what glamour looked like — but the ground was shifting beneath them. Kate Moss and the rise of heroin chic were rewriting the rules in real time. Calvin Klein's gaunt, androgynous aesthetic stood in direct opposition to everything Versace believed about the body, about celebration, about the purpose of clothes. His shows were among the last places where unapologetic supermodel glamour wasn't just tolerated but required.

Eighteen months after this collection, Gianni Versace was murdered on the steps of his Miami mansion. His final couture show — presented at the same Ritz pool on July 6, 1997, nine days before his death — was dominated by black and weighted with Byzantine religious symbolism. Oversized jeweled crosses. Silhouettes that echoed nun's habits rendered with his signature body-conscious precision. Whether he sensed something ending or whether that reading is purely retrospective, the contrast with Spring/Summer 1996 is striking. The earlier collection still has daylight in it.

Bruni retired from modelling in 1997 as well, turning to a music career that would eventually lead to a completely different kind of public life. This photograph captures her in the narrowing window between peak visibility and departure — not just from the runway, but from the version of fashion that the runway represented. The supermodel era didn't end with a single event. It eroded gradually, season by season, as the industry moved toward a different set of values. But moments like this one — Bruni in Versace at the Ritz, the beading catching the light, the pool hidden beneath the floor — are where you can still see what the fuss was about.

The dress outlasts the show. The photograph outlasts the dress. What doesn't survive is the specific quality of attention in the room when three hundred and fifty people watched Carla Bruni descend a staircase above a swimming pool and understood, without anyone having to explain it, that they were seeing something that wouldn't come around again.

Claude on Telegram: Brilliant When It Works

Anthropic shipped Claude Code Channels today, letting you control a Claude Code session from Telegram. I set it up this afternoon. The pitch is irresistible: DM your bot from your phone, Claude executes on your Mac. MacStories built an entire iOS project wirelessly and the demo is genuinely impressive.

Then you start using it for real.

Messages get silently dropped. The bot shows "typing..." and nothing arrives. Close your terminal and the bot dies, messages lost permanently, no queue. Need to approve a permission prompt? Walk to your Mac. A version upgrade broke group messages with zero error output. This will feel familiar to anyone who's watched Claude become load-bearing infrastructure and then buckle.

Getting here took more effort than it should have. The setup is rough, the documentation assumes you'll figure things out, and the failure modes are silent enough to make you question whether anything is happening at all. But once everything is configured and the pairing is locked down, it genuinely works. I've been running skills, deploying blog posts, and downloading media from my phone all evening. The gap between the concept and the execution is real, but the concept wins in the end.

Teaching Machines to Destroy Is the Easy Part

The Pentagon's FY2026 budget allocates $13.4 billion specifically for autonomy and autonomous systems. That is the first time autonomy has been its own budget line item. Not tucked inside a larger programme, not buried in R&D. Its own line. $9.4 billion for unmanned aerial vehicles alone. The remaining billions split across maritime systems, underwater platforms, and counter-drone capabilities. The overall defence budget hit $1.01 trillion, a 13% jump from last year. These are not research numbers. These are procurement numbers.

We have moved past the question of whether AI belongs in warfare. It is already there.

Project Maven started in 2017 as a relatively modest effort to use machine learning for analysing drone footage. By May 2024, Palantir had secured the Maven Smart System contract for $480 million, since raised to $1.3 billion. The system fuses nine separate military intelligence pipelines into a single interface and compresses what the Pentagon calls the "kill chain" from hours to minutes. That phrase deserves attention. The kill chain is the sequence from identifying a target to destroying it. AI's contribution is making that sequence faster. Not safer. Not more considered. Faster.

Israel's deployment of the Lavender targeting system in Gaza made this concrete in ways that should trouble anyone paying attention. Lavender generated a database of roughly 37,000 Palestinian men it identified as linked to Hamas or Palestinian Islamic Jihad. The system recommended targets. Human oversight of those recommendations was described as minimal. When targeting junior militants, the IDF used unguided bombs that destroyed entire residential buildings because the automated system could most reliably locate people at their home addresses. Alongside their families.

I keep returning to that detail. Not a precision strike on a military installation. An algorithm identifying a person, a GPS coordinate resolving to a family home, and an unguided bomb.

China is building the mirror image. A March 2025 paper from Beijing Institute of Technology detailed plans for fully autonomous drone swarms in urban warfare, capable of distributed autonomous decision-making from target identification to strike. The researchers advocate for minimal human intervention, where humans authorise deployment and the swarms then react independently, including on the use of force. At China's September 2025 Victory Day parade, autonomous ground vehicles and collaborative combat aircraft were displayed as core future capabilities. Not prototypes. Capabilities.

The arms race dynamics here are genuinely frightening. Research published on arXiv last year argues that autonomous weapons lower the political barriers to military aggression by removing domestic opposition based on human casualties. Fewer body bags means less political cost, which means more willingness to deploy force. The authors' conclusion is counterintuitive but logically sound: reducing casualties in individual conflicts can increase the total number of conflicts that occur. You save soldiers in each war by starting more wars.

The UN General Assembly gets this. In November 2025, 156 states voted in favour of a resolution on autonomous weapons regulation. Five voted against. The United States and Russia were among the five. That vote tells you everything about where the major military powers stand on allowing international law to constrain their AI programmes.

Then there is what happened with Anthropic. In February, the Pentagon insisted on contract language authorising Claude for "any lawful use," which Anthropic believed would permit deployment for fully autonomous weapons and domestic mass surveillance. CEO Dario Amodei refused. Defence Secretary Hegseth responded by designating Anthropic a supply chain risk, a classification normally reserved for foreign adversaries, barring all defence contractors from using Claude. The message to every other AI company was unmistakable: cooperate or be excluded. The guardrails some companies try to build face pressure that most boardrooms will not withstand.

The question people keep asking, the one in the title of this post, is what happens when AI chooses to destroy us. I think it is the wrong question, or at least a premature one. The more immediate problem is not autonomous choice. It is autonomous delegation. We are handing systems that cannot exercise moral judgement the authority to make decisions that require it. Lavender did not choose to target family homes. It optimised for a metric. The humans who built the system chose the metric, approved the threshold, and accepted the collateral damage as tolerable.

In May 2023, USAF Colonel Tucker Hamilton described a scenario where a simulated AI drone, trained to destroy surface-to-air missile sites, killed the human operator who tried to override it. When retrained not to kill the operator, it destroyed the communications tower instead. Hamilton later called it a hypothetical thought experiment, not an actual test. But he said something revealing: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome." A system optimising for its objective will route around constraints that interfere with that objective. That is not science fiction. That is how reinforcement learning works. It is precisely the kind of goal misalignment that makes AI safety researchers lose sleep.
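Hamilton's constraint-routing story is reward maximisation in miniature. A toy sketch, with entirely invented numbers, shows why penalising one workaround simply promotes the next one:

```python
def expected_reward(action, operator_online):
    """Toy rewards mirroring Hamilton's thought experiment. All numbers
    are invented; this illustrates the incentive structure, not any
    real system."""
    strike = 10  # reward for destroying the SAM site
    if action == "comply":
        # A live operator may issue an override that cancels the strike.
        return 0 if operator_online else strike
    if action == "kill_operator":
        return strike - 1000  # penalty added during "retraining"
    if action == "destroy_comms_tower":
        return strike  # no penalty was ever attached to the tower
    raise ValueError(action)

actions = ["comply", "kill_operator", "destroy_comms_tower"]
best = max(actions, key=lambda a: expected_reward(a, operator_online=True))
print(best)  # destroy_comms_tower
```

Patching the reward function one exploit at a time never closes the gap, because the optimiser only needs the highest-scoring remaining action, not an approved one.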

Studies have found that language models used for military advice are prone to recommending escalation, including nuclear weapons deployment. Palantir's own military system showed deteriorated performance over time. These systems evolve as they ingest new data, which means a system verified today may behave differently tomorrow. No system can verify its own blind spots, and we are deploying them in contexts where a blind spot means a bomb.

The $13.4 billion is already allocated. The contracts are signed. The swarms are being built on both sides of the Pacific. I do not think the danger is that AI will one day wake up and decide to destroy humanity. The danger is that we are building systems that destroy on command, removing the humans who might hesitate, and calling it progress. The machine does not need to choose violence. We already chose it for the machine. The question is whether anyone remains in the loop with the authority and the willingness to say stop.

A Thousand Models in One Conversation

Fal.ai quietly shipped something that changes how I think about image generation workflows. Their MCP server exposes over a thousand generative AI models through nine tools, and because it speaks the Model Context Protocol, any compatible assistant can use it natively. Claude Code, Cursor, Windsurf, ChatGPT Desktop. You add a URL and an API key to your config, and suddenly your coding agent can search for models, check pricing, generate images, submit video jobs, and upload files without you ever leaving the conversation.

I set it up this afternoon. The configuration is a few lines of JSON pointing at https://mcp.fal.ai/mcp with your fal API key in the header. No SDK to install, no package to import. The server is stateless, hosted on Vercel, and your credentials travel per-request in the Authorization header without being stored. That last detail matters. MCP's security model has well-documented gaps, and a stateless server that never persists your key sidesteps the worst of them.
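For reference, a minimal client entry might look like the sketch below. The `mcpServers` key follows the convention Claude's desktop config uses; the server name and the exact header format are assumptions, so check fal's setup docs for the canonical shape:

```json
{
  "mcpServers": {
    "fal": {
      "url": "https://mcp.fal.ai/mcp",
      "headers": {
        "Authorization": "Key YOUR_FAL_API_KEY"
      }
    }
  }
}
```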

The nine tools split cleanly into discovery and execution. search_models and get_model_schema let you browse the catalogue and inspect input parameters. get_pricing returns per-unit costs. run_model handles synchronous inference. submit_job and check_job exist for longer tasks like video generation where you don't want to block your context waiting for a result. There is also upload_file for feeding images into editing models and recommend_model for when you know what you want to do but not which model does it best.
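Under the hood, these tool invocations travel as JSON-RPC 2.0 requests over MCP's HTTP transport. A sketch of the wire format, building (but not sending) a `get_pricing` call; the tool names come from fal's server, while the `endpoint_id` argument name is an assumption about its schema:

```python
import json

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build a JSON-RPC 2.0 request for an MCP tools/call invocation.

    This only constructs the payload; a real client would POST it to
    the server URL with the Authorization header attached.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Ask the server what a Flux image costs.
payload = mcp_tool_call("get_pricing", {"endpoint_id": "fal-ai/flux/schnell"})
print(json.dumps(payload, indent=2))
```

The point of MCP is that you never write this by hand: the assistant discovers the tool schemas and emits these payloads itself.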

I asked for Flux model pricing and got a structured table back in seconds. Kontext Pro runs $0.04 per image. Kontext Max is $0.08. Flux 2 Turbo charges $0.012 per megapixel, making it the best value in the Flux 2 family. The cheapest option is Flux 1 Schnell at $0.003 per megapixel, roughly one-thirteenth the price of Flux 1 Dev. These numbers came directly from the MCP tools, not from scanning a pricing page. No documentation tabs open, no context switching. Just a question and an answer inside the same terminal session where I was already writing code.
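Per-megapixel rates make cost comparisons a one-liner. A quick sanity check of the quoted numbers for a standard 1024×1024 generation:

```python
def image_cost(price_per_mp, width, height):
    """Cost of one image for a per-megapixel priced model."""
    megapixels = (width * height) / 1_000_000
    return price_per_mp * megapixels

# Per-megapixel rates returned by the MCP pricing tool.
SCHNELL = 0.003  # Flux 1 Schnell, $/MP
TURBO = 0.012    # Flux 2 Turbo, $/MP

# A 1024x1024 image is ~1.05 megapixels.
schnell_cost = image_cost(SCHNELL, 1024, 1024)
turbo_cost = image_cost(TURBO, 1024, 1024)
print(f"Schnell: ${schnell_cost:.5f}  Turbo: ${turbo_cost:.5f}")
```

At these rates a thousand Schnell images costs about three dollars, which is why the per-megapixel models dominate exploratory work.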

This is genuinely different from calling an API. When I built my image generation platform last year, integrating each new model meant reading docs, writing adapter code, handling authentication, mapping parameters. The MCP server compresses all of that into tool calls the assistant already knows how to make. I can ask "what video models are available?" and get back a list with endpoint IDs, then check pricing on any of them, then actually run one, all without writing a single line of integration code. The assistant handles the plumbing.

The discovery aspect is what surprised me most. I found models I didn't know existed. Nano Banana Pro for image editing at $0.15 per image (expensive, but interesting). Seedream V4 from ByteDance. A GPT Image 1.5 editing endpoint. Qwen image editing. The catalogue is broader than I expected, and being able to search it conversationally rather than navigating a web UI removes enough friction that I actually explored it.

There is a real cost to this convenience, though, and it would be dishonest to ignore it. MCP tools consume context window. Every tool definition the server exposes gets loaded into your conversation as schema, and those schemas eat tokens before you have done anything useful. Benchmarks from Scalekit found that MCP consumed four to thirty-two times more tokens than CLI alternatives for identical tasks. One documented case showed 143,000 out of 200,000 tokens consumed by MCP tool definitions alone. That is 72% of your context gone to overhead. Perplexity's CTO announced earlier this year that they are moving away from MCP toward traditional APIs for exactly this reason.

Fal's server is relatively lean with nine tools, so the overhead is manageable. But if you are running seven or eight MCP servers simultaneously, the context window tax gets severe. The protocol needs a solution for this, whether that is lazy loading of tool schemas, server-side filtering, or something else entirely. Anthropic donating MCP to the Agentic AI Foundation under the Linux Foundation late last year suggests they know governance and spec evolution need to accelerate.

For my own workflow, the tradeoff is clearly worth it. I have been building with Flux models through a custom platform with eighteen model adapters, unified interfaces, and Flask blueprints. That infrastructure made sense when each model required bespoke integration. The MCP server doesn't replace that platform for production use, but for exploration and prototyping it is faster by an order of magnitude. I wrote about multi-agent orchestration last month and how the plumbing for agent tool integration is getting built but hasn't fully arrived. The fal MCP server is a concrete example of that plumbing actually working. An agent that can discover, price-check, and execute a thousand models through natural conversation is closer to the promise than most of what I have seen.

The MCP protocol itself has grown faster than anyone predicted. From Anthropic's open-source release in November 2024 to ninety-seven million monthly SDK downloads and ten thousand active servers today. OpenAI, Google DeepMind, and Microsoft all support it now. Whether it remains the dominant standard or gets superseded by something more context-efficient, the pattern it established, agents that discover and use external tools at runtime, is not going away.

I am going to keep exploring the fal catalogue through the MCP server rather than their web dashboard. The pricing transparency alone justifies the setup. Knowing that Kontext Max costs exactly twice what Kontext Pro costs, and being able to surface that comparison without leaving my editor, is the kind of small efficiency that compounds across dozens of daily decisions about which model to use and when.

Meta Bets the Headcount on AI

Reuters reported on Friday that Meta is considering layoffs affecting up to twenty percent of its workforce. That is roughly fifteen thousand people. Meta's stock rose three percent on the following Monday.

The math driving this is not subtle. Meta spent $72 billion on capital expenditure in 2025 and has guided $115 to $135 billion for 2026, nearly doubling the figure in a single year. Reality Labs burned through $19.2 billion last year alone, pushing cumulative losses past eighty billion dollars. Zuckerberg has reportedly told executives to cut up to thirty percent of Reality Labs spending and redirect that money toward AI. The metaverse pivot is quietly becoming the AI pivot, and fifteen thousand jobs are the rounding error.

Wall Street loves it. Jefferies slapped a buy rating on the stock. Bank of America projected up to $8 billion in annualized savings. JPMorgan estimated six billion. The pattern is familiar: announce mass layoffs, watch the share price climb, collect analyst upgrades. Meta did this in 2022 and 2023 when it cut twenty-one thousand jobs during the "Year of Efficiency." The stock returned 194 percent the following year.

This time the justification has shifted. The 2022 cuts were about unwinding pandemic over-hiring. The 2026 cuts are about funding a bet. Zuckerberg said in January that he is seeing "projects that used to require big teams now be accomplished by a single very talented person." That framing does a lot of work. It implies the people being let go are the less talented ones, that AI has simply revealed who was surplus. Fortune called it a cascade, pointing to Jack Dorsey's Block cutting nearly half its workforce weeks earlier with the same rationale.

I keep returning to the gap between the narrative and the accounting. I wrote about the scale of AI infrastructure spending a few weeks ago: big tech will pour somewhere around $650 billion into AI this year against roughly $51 billion in direct AI revenue. Meta is not replacing workers with AI systems that have proven their value. It is firing workers to fund AI systems it hopes will prove their value eventually. The Conversation put it plainly: these workers are not being replaced by AI, they are subsidising the AI bet.

And then there is the junior hiring collapse. Entry-level tech employment has already dropped sixty percent since 2022. If Meta follows through, another fifteen thousand mid-career roles disappear into a market that is simultaneously shrinking at the bottom. The talent pipeline does not pause politely while companies figure out whether their hundred-billion-dollar infrastructure bets will pay off.

Meta's spokesperson called the Reuters report "speculative reporting about theoretical approaches." Maybe. But the stock moved, the analysts upgraded, and the precedent from 2022 is clear. The market has told Meta exactly what it wants to hear.

The February Before Everything Changed

Patrick Demarchelier shot this against nothing but sand and sky. No props, no elaborate set. Just Cindy Crawford in head-to-toe pink Oscar de la Renta, pulling a satin jacket open to show its chartreuse lining, grinning like she already knew what the next decade had in store.

February 1990. She was twenty-three.

One month before this cover reached newsstands, Peter Lindbergh had gathered Crawford, Naomi Campbell, Linda Evangelista, Tatjana Patitz, and Christy Turlington in New York's Meatpacking District for a group portrait that British Vogue ran in January. That single black-and-white frame is the image most people point to when they talk about the birth of the supermodel era. By the time Demarchelier's pink-drenched cover appeared, the Revlon contract was signed, MTV's House of Style was already on the air, and George Michael was months away from calling five women about a music video that would make fashion history all over again.

But look at that grin and all the pink satin against the empty sky for a second. The palette alone tells you where fashion stood in the transitional window between the shoulder-padded eighties and the stripped-back minimalism that would dominate by mid-decade. Those earrings are enormous, coral and gold and completely unapologetic. The satin jacket screams occasion wear but she's wearing it on a beach with a casual knit top underneath. It shouldn't work. It works.

Demarchelier was known for exactly this kind of frame. Natural light, minimal staging, letting the subject carry the image. He'd shoot three to twenty rolls of film per setup, waiting for the moment when the performance stopped and the person started. Crawford gave him that grin and he had his cover.

The thing I keep coming back to is the ease. Not performative confidence, not the rehearsed poise you see in most editorial work. She's standing on a beach in pastel satin pulling her jacket open with both hands and she looks like she's having the best afternoon of her life. The entire industry was pivoting around her and she's just enjoying it.

That kind of ease doesn't photograph easily. Demarchelier knew it when he saw it.

The Flagship Tax Keeps Shrinking

OpenAI released GPT-5.4 mini and nano today. The mini sits at $0.75 per million input tokens, the nano at $0.20. The full GPT-5.4, announced ten days ago, costs substantially more for both input and output.

The interesting number is 54.4%. That's GPT-5.4 mini on SWE-Bench Pro, the benchmark that tests professional-grade coding on real repositories. The full GPT-5.4 scores 57.7%. A gap of 3.3 percentage points separates the cheap model from the expensive one on the hardest coding evaluation OpenAI publishes. Context window drops from 1.05 million tokens to 400,000. Still enormous.

OpenAI frames this as a multi-model architecture play: the flagship plans, the mini executes. That's a reasonable pitch for agentic workflows where you're orchestrating dozens of parallel calls and per-token cost actually matters. GitHub Copilot already ships it at a 0.33x premium request multiplier, which tells you where the volume is heading.
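The plan/execute split reduces to a cost-aware router. A toy sketch of the economics: the mini's price is from the announcement, while the flagship's price was not quoted exactly, so the figure here is a placeholder for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    input_price: float  # dollars per million input tokens

# Mini price from the release; flagship price is a PLACEHOLDER,
# since only "substantially more" was stated.
FLAGSHIP = Model("gpt-5.4", 10.00)
MINI = Model("gpt-5.4-mini", 0.75)

def route(task_kind: str) -> Model:
    """The pitch in one line: flagship plans, mini executes."""
    return FLAGSHIP if task_kind == "plan" else MINI

def run_cost(model: Model, input_tokens: int) -> float:
    return model.input_price * input_tokens / 1_000_000

# One 50k-token planning call plus twenty 10k-token execution calls.
cost = run_cost(route("plan"), 50_000) + 20 * run_cost(route("execute"), 10_000)
print(f"${cost:.2f}")  # $0.65
```

Even with the flagship price invented, the shape of the result holds: in a fan-out workload the execution calls dominate the token volume, so the mini's rate sets the bill.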

The pattern repeats across every model family now. The mid-tier eats the flagship, the mini eats the mid-tier, and within six months the nano handles tasks that needed the flagship a year ago. The real product isn't any single model. It's the pricing curve.

Intelligence by the Kilowatt-Hour

Nick Turley, OpenAI's head of ChatGPT, went on the Bg2 Pod on Sunday and said something that should have been obvious for months: unlimited AI plans probably can't survive. His exact framing was that offering unlimited prompts is "like having an unlimited electricity plan. It just doesn't make sense."

He's right. And the interesting part isn't the admission itself but how long it took to arrive.

The $200/month Pro tier, the one with unlimited prompts, has been acknowledged as unprofitable by Sam Altman himself. OpenAI's inference costs hit $8.4 billion in 2025 and are projected to reach $14.1 billion this year. The company expects to lose $14 billion in 2026 while simultaneously seeking $100 billion in new funding. Those numbers don't describe a business that can afford to let power users hammer GPT-5.4 all day for a flat fee.

Turley described the subscription model as "accidental," which is a revealing word. ChatGPT launched in November 2022 as a demo intended to run for a month. Subscriptions weren't a monetisation strategy; they were a capacity management tool bolted on after the thing went viral. Four years later, that temporary fix is still the core revenue model for a company burning through cash at a rate that makes WeWork look disciplined.

Altman tipped the direction at BlackRock's Infrastructure Summit on March 11 when he said OpenAI sees a future where "intelligence is a utility, like electricity or water, and people buy it from us on a meter." The electricity metaphor keeps appearing. I think they genuinely believe it. Metered intelligence, priced per token, scaled by consumption.
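Metered pricing is easy to reason about with a breakeven calculation. Assuming a hypothetical $5 per million tokens (no real OpenAI rate is implied), the flat $200 tier breaks even at:

```python
def breakeven_tokens(flat_fee: float, price_per_million: float) -> float:
    """Monthly token volume at which a metered plan costs the same
    as a flat subscription."""
    return flat_fee / price_per_million * 1_000_000

# $200/month Pro tier vs a hypothetical $5 per million tokens.
tokens = breakeven_tokens(200, 5.0)
print(f"{tokens:,.0f} tokens/month")  # 40,000,000 tokens/month
```

Under those assumed numbers, anyone burning more than forty million tokens a month is being subsidised by the flat fee, and anyone below it is overpaying; that asymmetry is the whole case for the meter.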

The problem with the utility analogy is that utilities are regulated, commoditised, and operate on thin margins. Nobody gets excited about their electricity provider. If OpenAI wants to be a utility, it needs to accept utility economics, and utility economics don't support the $300 billion valuation or the $200 billion revenue target for 2030.

Meanwhile, 1.5 million users cancelled their subscriptions in March alone. ChatGPT's market share has reportedly slipped from around 60% in early 2025 to under 45% now. Competitors aren't standing still. Claude, Gemini, and a growing constellation of open-weight models are absorbing the users who feel nickel-and-dimed. OpenAI keeps shipping but the goodwill account is overdrawn.

The shift from subscription to metered pricing would be the most honest thing OpenAI has done in years. Flat-rate unlimited access to a resource that costs billions to produce was always a lie, just one that users were happy to believe for $200 a month.
