Plutonic Rainbows

175,000 Open Doors

SentinelOne and Censys mapped 175,000 exposed AI hosts across 130 countries. Alibaba's Qwen2 sits on 52% of multi-model systems, paired with Meta's Llama on over 40,000 of them. Nearly half advertise tool-calling — meaning they can execute code, not just generate it. No authentication required.

While Western labs retreat behind API gates and safety reviews, Chinese open-weight models fill the vacuum on commodity hardware everywhere. The guardrails debate assumed someone controlled the deployment surface. Nobody does.

The Album That Won't Stay Mastered

Discogs lists hundreds of distinct pressings of The Dark Side of the Moon. Not hundreds of copies — hundreds of versions. Different countries, different plants, different lacquers, different decades. The album has been remastered, remixed, repackaged, and re-released so many times that cataloguing it has become its own cottage industry. And the question nobody seems willing to answer plainly is: why?

The charitable explanation is format migration. Every time the music industry invents a new way to sell you sound, Dark Side gets dragged through the machine again. The original 1973 vinyl. The first CD pressing in 1984, harsh and tinny on those early CBS/Sony discs with their pre-emphasis problems. The 1992 remaster for the Shine On box set. The 2003 SACD and 180-gram vinyl cut by Doug Sax and Kevin Gray at AcousTech — a thirteen-hour session from the original master tape that many collectors still consider the definitive pressing. The 2011 Immersion Edition. The 2023 50th Anniversary remaster by James Guthrie, now with Dolby Atmos because apparently forty-three minutes of music required spatial audio to finally be complete.

That's seven major versions across fifty-two years, and I've skipped the Mobile Fidelity half-speed mastering from 1979, the UHQR limited edition from 1981 (5,000 numbered copies on 200-gram JVC Super Vinyl, housed in a foam-padded box like a medical instrument), and whatever picture disc or coloured vinyl variant was being sold in airport gift shops during any given decade. Each one presented as essential. Each one implying — sometimes quietly, sometimes on the sticker — that the previous version hadn't quite got it right.

The format argument holds up to a point. Moving from vinyl to CD requires a new master. Moving from stereo to 5.1 surround requires a remix. Moving from 5.1 to Atmos requires another remix. These are genuinely different processes with different sonic results, and James Guthrie — who engineered The Wall and has overseen Pink Floyd's catalogue since the early 1980s — isn't a hack. The 2003 SACD's 5.1 mix is a legitimate reinterpretation. The Atmos version reportedly uses the original multitracks and places instruments in three-dimensional space in ways that serve the music rather than showing off the format. I haven't heard it, so I'll stop there.

But format migration doesn't explain the sheer volume. It doesn't explain why the 2023 box set contains substantially less material than the 2011 Immersion box while costing nearly three times as much. It doesn't explain the 2024 clear vinyl reissue — two LPs with only one playable side each, so the UV artwork can be printed on the blank side. That's not a format improvement. That's a shelf ornament.

The real answer is simpler and more uncomfortable: Dark Side of the Moon is the safest bet in recorded music. It has sold somewhere north of forty-five million copies. It spent 937 weeks on the Billboard 200. Every reissue is guaranteed to sell because the album occupies a category beyond mere popularity — it's become a cultural default, the record people buy when they buy a turntable, the disc they reach for when demonstrating a new pair of speakers. The music is secondary to the ritual. Not because it isn't good — it's extraordinary — but because the purchasing decision has decoupled from the listening experience. People buy Dark Side the way they buy a bottle of wine for a dinner party. It's a known quantity. It cannot embarrass you.

This makes it uniquely exploitable. A record label can repackage it every five years with minor sonic tweaks and a new essay in the liner notes, and the installed base of buyers will absorb the inventory. The audiophile press will review it. Forums will debate whether the new pressing sounds warmer or brighter or more "analogue" than the last one. And none of this requires the band's active participation, which is convenient given that Roger Waters hasn't been in the same room as David Gilmour voluntarily since approximately 1985.

I own three versions. The 2003 vinyl, which sounds superb. A CD rip from the 2011 remaster, which sounds almost identical. And a 192kHz/24-bit Blu-ray extraction that I recently compared sample-by-sample against a supposedly different edition and found to be bit-for-bit the same audio with different folder names. That last discovery crystallised something for me — how much of the remaster economy runs on labelling rather than substance. A new sticker, a new anniversary number, occasionally a new mastering engineer. Sometimes genuinely different audio. Sometimes not.
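
The check itself is easy to reproduce, if you're curious. A minimal Node sketch, assuming ffmpeg is installed and on the PATH: decode each rip to raw PCM and hash the stream, so tags, folder names and container quirks can't disguise identical audio.

```javascript
// compare-audio.js: are two rips the same underlying audio?
// Assumes ffmpeg is installed and on the PATH.
const { spawn } = require('node:child_process');
const { createHash } = require('node:crypto');

function pcmHash(file) {
  return new Promise((resolve, reject) => {
    const hash = createHash('sha256');
    // Decode the first audio stream to raw 24-bit PCM on stdout, so
    // metadata, artwork and file names never touch the comparison.
    const ff = spawn('ffmpeg', ['-i', file, '-map', '0:a:0', '-f', 's24le', '-'], {
      stdio: ['ignore', 'pipe', 'ignore'],
    });
    ff.stdout.on('data', (chunk) => hash.update(chunk));
    ff.on('error', reject);
    ff.on('close', (code) => {
      if (code === 0) resolve(hash.digest('hex'));
      else reject(new Error(`ffmpeg exited with ${code}`));
    });
  });
}

const [a, b] = process.argv.slice(2);
Promise.all([pcmHash(a), pcmHash(b)]).then(([ha, hb]) => {
  console.log(ha === hb ? 'bit-identical audio' : 'different masters');
});
```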

The 2003 AcousTech pressing remains the one I'd recommend to anyone who asks, though the asking itself has become part of the problem. "Which version of Dark Side should I buy?" is a question that sustains an entire ecosystem of forum threads, YouTube comparisons, and Discogs archaeology. The answer should be: whichever one you already own probably sounds fine. But that answer doesn't sell records.

The album itself — the actual forty-three minutes of music that Alan Parsons engineered at Abbey Road in 1973 — hasn't changed. The cash registers have just been cleared for another run, the heartbeat fading out on "Eclipse" before looping back to the beginning. Which, if you think about it, is rather on the nose.

The Comparative Baseline Nobody Saved

There's a particular kind of knowledge that disappears not because it's wrong but because the conditions that produced it no longer exist. The pre-internet world — and I mean the actual texture of daily life before always-on connectivity — is becoming that kind of knowledge. Not because the people who lived through it are dying off, though they are. Because the experiences themselves are structurally incompatible with how life works now. You can't stream what it felt like to not know something and have no immediate way to find out.

Friction was the defining feature, though nobody called it that at the time. Information took effort. You went to a library, or you asked someone who might know, or you simply didn't find out. Communication had latency built in — you wrote a letter, waited days, received a reply that might or might not address what you'd asked. Choices were constrained by geography, opening hours, physical stock. None of this felt romantic while it was happening. It felt normal. The constraints were invisible in the way that water is invisible to fish.

What those constraints trained was a particular kind of cognitive stamina. When finding information costs time and effort, you develop a relationship with the information you do find. You hold it longer because acquiring it required investment. You synthesise rather than accumulate, because accumulation is expensive. You commit to decisions because reversing them means repeating the friction. Research published in Frontiers in Public Health found that intensive internet search behaviour measurably affects how we encode and retain information — when we know Google will remember for us, we let it, and something in the cognitive chain softens.

I'm not sure "softens" is the right word. It's more like a muscle that adapts to a lighter load. It doesn't atrophy overnight. It just gradually stops being asked to do what it once did.

The spatial memory research makes this uncomfortably concrete. A study in Scientific Reports found that people with greater lifetime GPS use showed worse spatial memory during self-guided navigation, and that the decline was progressive. The more you outsource, the less you retain. This isn't generalised anxiety about screens. It's measured cognitive change in a specific domain, and it maps onto the broader pattern: navigation, arithmetic, phone numbers, directions to a friend's house. Each delegation is individually trivial. Collectively, they represent a wholesale transfer of embodied competence to external systems.

The social texture was different too, in ways that are harder to quantify. Relationships were local, bounded, and — this is the uncomfortable part — more durable partly because leaving was harder. You couldn't algorithmically replace your social circle. You couldn't find a new community by searching for one. You were stuck with the people near you, which meant you developed tolerance, negotiation, and the particular skill of maintaining connections with people you didn't entirely like. I'm not romanticising this. Some of those constraints were suffocating. But they produced a form of social resilience that frictionless connection and equally frictionless disconnection don't replicate. The exit costs created a kind of civic muscle memory. When you couldn't easily leave, you learned to stay — imperfectly, sometimes resentfully, but with a persistence that builds something. What it builds is hard to name. Continuity, maybe. The knowledge that not every disagreement requires a door.

There's something related happening with culture, and it troubles me more than the cognitive stuff. Before global networks, culture varied sharply by place. Music scenes were geographically specific. Fashion moved through cities at different speeds. Slang was regional. You could travel two hundred miles and encounter genuinely different aesthetic assumptions. The internet collapsed that distance, and what rushed in to fill the gap was algorithmic homogenisation — platforms optimising for universal palatability, training data drawn overwhelmingly from dominant cultural archives, trend cycles that now complete in weeks rather than years. The result isn't the death of diversity exactly. It's the flattening of it. Regional difference still exists, but it's increasingly performed rather than lived.

I've written before about objects that outlive their context — the particular unease of encountering something from the pre-internet era that still functions perfectly but belongs nowhere. A compact disc, a theatre programme, a paper map. These objects assumed finitude. They were made for a world where things ended, where events didn't persist in feeds, where places could close and stay closed. When I handle them now, the dissonance isn't aesthetic. It's temporal. They're evidence of a different relationship with time itself.

That temporal difference matters more than people tend to acknowledge. Digital platforms compress time into an endless present. Feeds refresh. History scrolls away. Nothing quite arrives and nothing quite leaves. The pre-internet experience of time was linear in a way that sounds banal until you try to describe what replaced it. Waiting was an experience, not a failure of the system. Seasons structured cultural consumption because distribution channels were physical. Anticipation — real anticipation, the kind that builds over weeks — required scarcity. When everything is available immediately, anticipation doesn't intensify. It evaporates. There was a rhythm to cultural life that physical distribution imposed whether you wanted it or not. Albums had release dates that meant something because you couldn't hear them early. Television programmes aired once, and if you missed them, you missed them. Books arrived in bookshops and either found you or didn't. This sounds like deprivation described from the outside, but from the inside it felt like structure. Things had their time, and that time was finite.

Nicholas Carr has been making versions of this argument since The Shallows in 2010, and his more recent Superbloom extends it into the social fabric. The concern isn't that the internet is making us stupider in some crude measurable way. The concern is that it's restructuring cognition and social behaviour so thoroughly that we're losing the capacity to notice what's changed. When the baseline disappears, critique becomes harder. You need a reference point to identify a shift, and if nobody remembers the reference point, the shift becomes invisible.

This is the part that feels urgent to me. Not the nostalgia — nostalgia is cheap and usually dishonest. The urgent part is the comparative function. Remembering what pre-internet life actually felt like — not a golden-age fantasy of it but the real experience, including the boredom and the limitations — provides a structural check on the present. It lets you distinguish between convenience and cognitive cost. It lets you ask whether frictionless access to everything has made us richer or just busier. It lets you notice that memory itself has changed shape, not through damage but through delegation.

Without that baseline, you get what I'd call passive inevitability. The assumption that the present is the only way things could possibly work. That algorithmic curation is simply how culture operates now. That constant connectivity is a law of physics rather than a design choice made by specific companies for specific commercial reasons. Every system benefits from the erasure of its alternatives, and the pre-internet world is the most comprehensive alternative to the digital present that most living people can personally recall.

None of this is about wanting the old world back. Plenty of it was terrible. Information gatekeeping was often unjust. Communication barriers isolated people who needed connection. Cultural insularity bred ignorance as often as it bred character. The point isn't that friction was good. The point is that friction was informative. It taught things — patience, commitment, tolerance of uncertainty, the ability to sit with not-knowing — that the frictionless environment doesn't teach and may be actively unlearning.

I keep coming back to something that probably doesn't belong in this argument. When I was young, you could be unreachable. Not dramatically, not by fleeing to a cabin — just by leaving the house. No one could contact you. No one expected to. The hours between leaving and returning were genuinely yours in a way that requires explanation now, which is itself the point. That availability wasn't a moral obligation. That silence between people wasn't a crisis. That the default state of a human being was not "online."

The Edit You Never Made

Elizabeth Loftus showed participants footage of a car accident in 1974, then asked how fast the vehicles were going when they "smashed" into each other. A separate group got the word "hit" instead. The smashed group estimated higher speeds. A week later, they were also more likely to remember broken glass at the scene. There was no broken glass.

That experiment is nearly fifty years old now, and nothing about its conclusion has softened. Memory does not record. It reconstructs — and it reconstructs according to whatever pressures happen to be present at the moment of recall. A leading question. An emotional state. A conversation with someone who remembers the same event differently. Each retrieval is an act of editing. Steve Ramirez at Boston University describes it plainly: every time you recall something, you are hitting "save as" on a file and updating it with new information. The version you remember today is not the version you remembered last year.

What unsettles me is not that memory is inaccurate. I can accept inaccuracy. What unsettles me is that memory feels accurate — feels like retrieval rather than reconstruction. The confidence is the problem. I have memories I would defend in court, memories that feel as solid as furniture, and I know from Loftus's work that solidity means almost nothing. The vividness of a memory has no reliable relationship to its truth.

Negative experiences make this worse. Research into emotional valence and false recall shows that negatively charged events produce higher rates of false memory than neutral ones. The things that hurt you are the things most likely to be rewritten. Not erased — rewritten. The pain stays. The facts shift underneath it.

I keep returning to Ramirez's framing because it is the only one that does not pretend this is a flaw. Memory updates because a mind locked permanently in the past would be useless. The editing is the point. It just happens to make accuracy a casualty.

Lighter, Faster, Meaner

jQuery was 87KB. Lightbox2 was another 10KB, plus a CSS file and four UI images the library needed for its close buttons and navigation arrows. All of that is gone now — replaced by a single vanilla JS file under 7KB that does everything the old stack did. Gallery navigation with wrap-around, keyboard support, scroll locking, caption display, adjacent image preloading. Same IIFE pattern as the video lightbox I built last week.
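
Stripped to its skeleton, the pattern looks like this. A sketch rather than the actual file, and the data-gallery attribute is an illustrative hook, not the markup I actually use:

```javascript
// Skeleton of the vanilla lightbox. The real file adds captions,
// smarter preloading and styling on top of this shape.
(function () {
  'use strict';
  const links = Array.from(document.querySelectorAll('a[data-gallery]'));
  if (!links.length) return;

  const overlay = document.createElement('div');
  overlay.className = 'lightbox-overlay';
  const img = document.createElement('img');
  overlay.appendChild(img);
  document.body.appendChild(overlay);

  let current = -1;

  function show(i) {
    current = (i + links.length) % links.length;               // wrap-around
    img.src = links[current].href;
    overlay.classList.add('open');
    document.body.style.overflow = 'hidden';                   // scroll lock
    new Image().src = links[(current + 1) % links.length].href; // preload next
  }

  function close() {
    overlay.classList.remove('open');
    document.body.style.overflow = '';
  }

  links.forEach((a, i) =>
    a.addEventListener('click', (e) => { e.preventDefault(); show(i); }));
  overlay.addEventListener('click', close);
  document.addEventListener('keydown', (e) => {
    if (!overlay.classList.contains('open')) return;
    if (e.key === 'Escape') close();
    if (e.key === 'ArrowRight') show(current + 1);
    if (e.key === 'ArrowLeft') show(current - 1);
  });
})();
```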

The video player got its own round of trimming. I'd been eagerly loading hls.js and the video lightbox script on every page that contained video links — 149KB of transfer whether anyone clicked play or not. Now both scripts lazy-load on the first actual click. The hls.js library itself moved from a CDN with a seven-day cache to self-hosted with a one-year immutable header. And the encryption got a quiet upgrade: random IVs instead of deterministic MD5-based ones.
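
The lazy-load itself is the least glamorous part of the change. A sketch of the shape (the script paths and the openVideoLightbox entry point are illustrative names, not the real ones):

```javascript
// Nothing video-related downloads until the first click on a video link.
let hlsReady = null;

function loadScript(src) {
  return new Promise((resolve, reject) => {
    const s = document.createElement('script');
    s.src = src;
    s.onload = resolve;
    s.onerror = reject;
    document.head.appendChild(s);
  });
}

document.addEventListener('click', async (e) => {
  const link = e.target.closest('a[data-video-id]');
  if (!link) return;
  e.preventDefault();
  // The first click pays the download cost once; later clicks reuse the promise.
  hlsReady ??= loadScript('/js/hls.min.js')
    .then(() => loadScript('/js/video-lightbox.js'));
  await hlsReady;
  window.openVideoLightbox(link.dataset.videoId); // illustrative entry point
});
```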

None of these changes alter how anything looks. Every byte saved is invisible.

The Sixteen-Byte Key That Broke Everything

MP4 files are embarrassingly easy to steal. Right-click, save as, done. For a blog that occasionally embeds short AI-generated video clips, this wasn't a theoretical concern — it was a guarantee. Anyone with a browser's developer tools could grab the file URL and download it in seconds. So I decided to replace the direct MP4 links with HLS adaptive bitrate streaming, complete with AES-128 encryption and burned-in watermarks. The kind of setup you'd expect from a proper video platform. On a static site hosted on S3.

That last sentence should have been the warning.

HLS — HTTP Live Streaming — works by chopping video into small transport stream segments, each a few seconds long, and serving them via playlists that tell the player what to fetch and in what order. Apple invented it for iOS back in 2009. The protocol is elegant: just files on a web server, no special streaming infrastructure required. A master playlist points to variant playlists at different quality levels, and the client picks the appropriate one based on available bandwidth. For a fifteen-second clip on a blog, adaptive bitrate is arguably overkill. I built it anyway, because the encryption layer depends on the HLS segment structure, and because I wanted four quality tiers from 480p to source resolution. The transcoding pipeline uses FFmpeg to produce each tier with its own playlist and .ts segments, then wraps them in a master .m3u8 that lists all four variants with their bandwidth and resolution metadata.
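
For orientation, the master playlist is just text: one line of metadata per tier, one relative path each. Something roughly like this, with the bandwidth figures and the top-tier directory name illustrative:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1400000,RESOLUTION=854x480
480p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,RESOLUTION=1280x720
720p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080
1080p/stream.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=12000000,RESOLUTION=3840x2160
source/stream.m3u8
```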

FFmpeg's HLS muxer is powerful and poorly documented in roughly equal measure. The flags for segment naming, playlist type, and encryption keyinfo files all interact in ways that the man page describes with the enthusiasm of someone filling out tax forms. Getting the basic transcoding working — four tiers, VOD playlist type, sensible segment durations — took maybe an afternoon. Getting the encryption right took three days.

The AES-128 encryption in HLS works like this: you generate a sixteen-byte random key, write it to a file, and tell FFmpeg where to find it via a keyinfo file. The keyinfo file has three lines — the URI where the player will fetch the key at runtime, the local path FFmpeg should read during encoding, and an initialisation vector. The player downloads the key, decrypts each segment on the fly, and plays the video. Simple in theory. The problem is that the key URI in the keyinfo file is relative to the playlist that references it, not relative to the keyinfo file itself, and not relative to the master playlist. Each variant playlist lives in its own subdirectory — 480p/stream.m3u8, 720p/stream.m3u8, and so on — while the encryption key sits one level up. So the URI needs to be ../enc.key. Get this wrong and the player fetches a 404 instead of a decryption key, and the error message from hls.js is spectacularly unhelpful. "FragParsingError" tells you nothing about why the fragment couldn't be parsed. I spent a full evening staring at network waterfall charts in Chrome DevTools before realising the key path was resolving to the wrong directory.
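
For the record, the keyinfo file is three bare lines, and comments aren't allowed in the real thing. Paths illustrative:

```
../enc.key
/srv/video-build/keys/enc.key
0123456789abcdef0123456789abcdef
```

Line one is copied verbatim into each variant playlist's EXT-X-KEY tag, which is why it has to resolve relative to 480p/stream.m3u8 rather than to anything else. Line two is the local key FFmpeg reads while encoding. Line three is the optional sixteen-byte IV as hex; leave it out and FFmpeg derives one from the segment sequence number, which is exactly the deterministic behaviour I later replaced with random IVs.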

The watermark was its own category of frustration. I wanted the site domain burned into every frame — subtle, low opacity, bottom right corner. FFmpeg's drawtext filter handles this, and it's flexible enough to scale the text relative to the video height so it stays proportional across all four quality tiers. The filter string looks like someone encrypted it themselves: drawtext=text='plutonicrainbows.com':fontsize=h*0.025:fontcolor=white@0.30:shadowcolor=black@0.15:shadowx=1:shadowy=1:x=(w-text_w-20):y=(h-text_h-20). It works, but when you're chaining it with the scale filter for resolution targeting — scale=-2:720,drawtext=... — the order matters and the comma-separated syntax doesn't forgive stray whitespace. I had a version that worked perfectly at 1080p and produced garbled output at 480p because the scale filter was receiving the wrong input dimensions. The fix was reordering the filter chain. The debugging was two hours of staring at pixel soup.

Then came the client-side player. Safari supports HLS natively through the video element — you just point the src at the .m3u8 file and it plays. Every other browser needs hls.js, a JavaScript library that implements HLS via Media Source Extensions. The dual-path architecture isn't complicated in principle: if hls.js is available and MSE is supported, use it; otherwise, if the browser can play application/vnd.apple.mpegurl natively, use that. The complication is that these two paths behave differently in ways that matter. With hls.js, you get fine-grained control — you can lock the quality tier, set bandwidth estimation defaults, handle specific error events. The native Safari path gives you a video element and a prayer. You can't force max quality on native HLS. You can't get meaningful error information. And iOS Safari doesn't support MSE at all, which means hls.js can't do its work, which means you're stuck with whatever quality Safari decides is appropriate based on its own internal bandwidth estimation.

For fifteen-second clips, this mismatch was particularly annoying. The whole point of locking to the highest quality tier is that short videos don't benefit from ABR ramp-up — by the time the adaptive algorithm has measured bandwidth and stepped up to a higher tier, the clip is nearly finished. I set abrEwmaDefaultEstimate to 50 Mbps in hls.js to force it straight to the top tier on page load. Safari users get whatever Safari gives them.
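
Boiled down, the startup logic is a dozen lines. A sketch, with the element ID and playlist URL illustrative:

```javascript
const video = document.getElementById('lightbox-video');
const src = '/videos/a1b2c3/master.m3u8';

if (window.Hls && Hls.isSupported()) {
  // MSE path: hls.js, with enough claimed bandwidth that a
  // fifteen-second clip starts on the top tier instead of spending
  // its whole runtime climbing the ABR ladder.
  const hls = new Hls({ abrEwmaDefaultEstimate: 50_000_000 }); // 50 Mbps
  hls.loadSource(src);
  hls.attachMedia(video);
} else if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Native path (Safari, iOS): no quality control, no useful errors.
  video.src = src;
}
```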

The lightbox player itself needed to handle a surprising number of edge cases. Autoplay policies mean the video has to start muted. The overlay should fade in immediately but the video element should stay hidden until the first frame is actually decoded — otherwise you get a flash of black rectangle before content appears. I used the playing event to reveal the video, with a four-second fallback timeout in case the event never fires. The progress bar is manually updated via setInterval because the native progress events fire too infrequently for a smooth visual. Right-click is disabled on the video element. The controlsList attribute strips the download button from native controls. None of this is real DRM — anyone sufficiently determined can still capture the stream. But it raises the effort from "right-click, save" to "actually write code," which is enough for a personal blog.
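
Sketched, the reveal-and-fallback dance looks like this (element selectors illustrative, teardown omitted):

```javascript
const video = document.getElementById('lightbox-video');
const bar = document.querySelector('.video-progress');

video.muted = true;                               // autoplay policies require it
video.setAttribute('controlsList', 'nodownload'); // strip the download button
video.addEventListener('contextmenu', (e) => e.preventDefault());

let revealed = false;
function reveal() {
  if (revealed) return;
  revealed = true;
  video.style.visibility = 'visible';             // no flash of black rectangle
}
video.addEventListener('playing', reveal);        // first decoded frame
setTimeout(reveal, 4000);                         // fallback if it never fires

// Native timeupdate fires roughly four times a second, too coarse for a
// smooth bar, so poll instead. The interval is cleared on close (omitted).
setInterval(() => {
  if (video.duration) {
    bar.style.width = `${(video.currentTime / video.duration) * 100}%`;
  }
}, 50);
```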

Deployment surfaced the final batch of surprises. The .m3u8 playlist files need to be gzipped and served with the right content type. The .ts segments need appropriate cache headers. And the encryption key files — those sixteen-byte .key files — need Cache-Control: no-store so that if I ever re-transcode a video, browsers don't serve a stale key that can't decrypt the new segments. I'd already been through the CloudFront HTTP/2 configuration saga, so I knew the CDN layer could hold surprises. The .key file caching caught me out anyway. Stale encryption keys produce the same unhelpful "FragParsingError" as a missing key, which meant another round of DevTools archaeology.
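
The header rules reduce to a small mapping, sketched here as the kind of table a deploy script might consult. The cache lifetimes are illustrative; the no-store on keys is the part that matters:

```javascript
const headerRules = {
  '.m3u8': { contentType: 'application/vnd.apple.mpegurl',
             cacheControl: 'public, max-age=60' },                  // playlists, served gzipped
  '.ts':   { contentType: 'video/mp2t',
             cacheControl: 'public, max-age=31536000, immutable' }, // segments never change
  '.key':  { contentType: 'application/octet-stream',
             cacheControl: 'no-store' },                            // stale keys break playback
};
```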

The whole system works through graceful degradation. No FFmpeg on the build machine? Video processing is skipped entirely and the links fall back to pointing at the source MP4 files. No video_processor.py module? Caught by an ImportError, build continues. No videos directory? No-op. I learned from the forty-five bugs audit that a static site generator needs to handle missing dependencies without falling over, and the video pipeline follows that pattern.

The opaque URL scheme was a late addition that I'm glad I thought of. Instead of exposing file paths in the HTML — which would let someone construct the master playlist URL and bypass the lightbox entirely — the build script generates a six-character content hash for each video and rewrites the anchor tags to use #video-{hash} with a data-video-id attribute. The JavaScript player reads the data attribute and constructs the HLS URL internally. The actual file structure is never visible in the page source. Again, not real security. But another layer of friction.
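
The client side of the scheme is only a few lines. A sketch, with the URL template illustrative (the real mapping stays out of the page source either way):

```javascript
document.addEventListener('click', (e) => {
  const link = e.target.closest('a[data-video-id]');
  if (!link) return;
  e.preventDefault();
  const id = link.dataset.videoId;            // e.g. "a1b2c3", a content hash
  openVideoLightbox(`/v/${id}/master.m3u8`);  // constructed here, never in the HTML
});
```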

Was it worth it? For a personal blog with maybe a few hundred readers, building a four-tier HLS pipeline with per-video AES-128 encryption is — and I'm being generous to myself here — completely disproportionate. A <video> tag pointing at an MP4 would have been fine. But the MP4 approach bothered me, and sometimes that's reason enough. The fifteen-second clips play smoothly across every browser I've tested, the watermark is visible without being obnoxious, and the encryption keys rotate per video. The whole thing adds about forty seconds to the build for each new video, which is nothing given that the image pipeline already takes longer than that.

The drawtext filter string still looks like someone sat on a keyboard. Some things can't be made elegant. They can only be made to work.

Thirty-Four Years Between Frames

Kuaishou launched Kling 3.0 on February 5th, and the jump from earlier versions is striking. Where Kling 2.6 was limited to single continuous shots, the new model introduces multi-shot storyboards — up to six camera cuts in a single generation. Video duration extends to 15 seconds with custom timing.

The headline features that matter for creative work: an Elements system that maintains character identity across shots, three-speaker dialogue with individual voice tracking, and support for five languages including English, Japanese and Korean. The multi-shot storyboard lets you specify duration, shot size, perspective and camera movements for each segment, which turns what was essentially a clip generator into something closer to a production tool.

Against the current competition — Runway Gen-4.5, Veo 3.1, Seedance 1.5 Pro — Kling 3.0 leads on resolution and multi-shot capability, though Runway still edges ahead on overall quality for certain styles and Seedance has the tightest lip-sync precision for dialogue.

The pace of advancement in this space over the past eighteen months has been remarkable. What took hours of manual compositing in 2024 now generates in seconds. The gap between AI-generated video and professional footage continues to narrow with each model release.

I scanned an image of fashion model Gail Elliott from a 1992 Spring/Summer Escada catalogue and fed it to Kling 3.0 Pro with a custom prompt. It generated 15 seconds of video with audio from a single still.

A 1992 Scan Learns to Move

Opus 4.6 Gets a Fast Lane

Three days after Opus 4.6 dropped, Anthropic opened a waitlist for fast mode — a research preview that claims up to 2.5x faster output tokens per second. Same weights, same capabilities. They're not shipping a distilled model; they're running the real thing with faster inference.

The pricing reflects that. $30/$150 per million tokens, six times the standard Opus rate. Past 200K input tokens it jumps to $60/$225. That kind of premium only makes sense if you're burning through agentic loops where latency compounds at every tool call.

Which is exactly the use case. Claude Code already has a /fast toggle wired in. An agent calling itself forty times to refactor a module doesn't care much about per-token cost — it cares about wall-clock time. Shaving even a second off each round-trip adds up when you're watching a terminal.

One caveat buried in the docs: the speed gains apply to output tokens per second, not time to first token. The thinking pause stays the same. You just get the answer faster once it starts talking.

The beta gating — waitlist plus a dedicated header — suggests capacity is still tight. Scaling whatever inference trick powers this to Opus levels isn't a small engineering problem.

The Padded Bra of Progressive Rock

Four songs. Eighty-three minutes. Inspired by a footnote. That's the essential biography of Tales from Topographic Oceans, and honestly, it tells you everything you need to know.

Yes released their sixth studio album in December 1973, riding what should have been an unassailable streak. The Yes Album, Fragile, Close to the Edge — three records in three years, each one more ambitious than the last, each one brilliant. The band had earned the right to swing for the fences. What they hadn't earned was the right to bore us for an hour and twenty minutes while pretending a footnote from Paramahansa Yogananda's Autobiography of a Yogi constituted sufficient conceptual scaffolding for a double album.

Jon Anderson read that footnote — something about four bodies of Hindu knowledge called the Shastric scriptures — and decided each one deserved its own side of vinyl. Not its own song, mind you. Its own side. Four movements, four walls of sound, four opportunities to test the structural integrity of the listener's patience. "The Revealing Science of God (Dance of the Dawn)" alone runs to nearly twenty-two minutes, and I'd estimate about nine of those minutes contain music that justifies its own existence.

The problem isn't ambition. Close to the Edge was ambitious. It had a single eighteen-minute piece that never lost its way, that built and released tension with the discipline of a classical composer who happened to own a Mellotron. The problem with Tales is that the band had enough material for one very good album and chose instead to make two mediocre ones. Rick Wakeman understood this better than anyone in the room. His assessment remains the single most devastating thing a band member has ever said about their own record: "It's like a woman's padded bra. The cover looks good, the outside looks good; it's got all the right ingredients, but when you peel off the padding, there's not a lot there."

He wasn't being glib. Wakeman later explained the fundamental structural failure in practical terms — they had too much material for a single album but not enough for a double, so they padded it out, and the padding is awful. If the CD format had existed in 1973, this would have been a tight fifty-minute record and we'd probably be calling it a masterpiece. Instead, we got passages where five supremely talented musicians appear to be busking their way through free-form sections that needed another month of rehearsal and got about another afternoon.

The Manchester Free Trade Hall show captures the absurdity perfectly. Yes had sold out the venue to perform the album in its entirety. Wakeman — the lone meat-eater in a band of vegetarians, which feels symbolically appropriate somehow — found himself with so little to play during certain movements that his keyboard tech asked what he wanted for dinner. Chicken vindaloo, rice pilau, six papadums, bhindi bhaji, Bombay aloo, and a stuffed paratha. The foil trays arrived mid-performance and Wakeman ate curry off the top of his keyboards while the rest of the band noodled their way through "The Ancient." His own keyboard tech feeding him dinner during a live show because the music didn't require his presence. That's not a rock and roll anecdote. That's an indictment.

I should say that I own this album. I own it on vinyl — the original Atlantic gatefold with Roger Dean's sleeve art, which is gorgeous and nearly justifies the purchase on its own. I've listened to it probably eight or nine times over the years, each time thinking I might have been too harsh, that maybe the ambient passages would click on this listen, that the fourth track would finally reveal itself as the hidden masterwork apologists keep insisting it is.

It hasn't.

"Ritual (Nous Sommes du Soleil)" is the closest thing to a success on the record, the one place where the extended format works because the band actually develops ideas rather than circling them. Steve Howe's guitar work throughout the album is frequently brilliant in isolation — his playing on "The Revealing Science of God" is extraordinary — but brilliance in isolation is precisely the problem. These are not compositions. They're situations. Five musicians placed in a room and asked to fill twenty minutes per side, sometimes finding each other, more often drifting through what Melody Maker diplomatically described as music "brilliant in patches, but often taking far too long to make its various points."

Robert Christgau was less diplomatic: "Nice 'passages' here, as they say, but what flatulent quasisymphonies." I keep coming back to the word flatulent. It's mean, but it's precise.

There's a certain kind of progressive rock fan who will tell you that Tales is misunderstood, that it requires surrender, that you have to meet it on its own terms. I've heard this argument applied to everything from late-period Grateful Dead to Tarkovsky films, and it's almost never true. Good art doesn't require you to abandon your critical faculties at the door. Close to the Edge didn't need apologists. Fragile didn't need you to read a footnote first. The best Yes material grabs you by the collar even when it's being structurally complex. Tales asks you to sit still and be reverent, which is a fundamentally different — and fundamentally less interesting — demand.

Yes themselves seemed to recognise the problem on tour. As the concert dates progressed, they actually dropped portions of the album from the setlist, which is an extraordinary admission for a band touring a new record. Half the audience were in what Wakeman described as "a narcotic rapture" and the other half were asleep. Those are his words, not mine.

The album went to number one in the UK. It shipped gold. And it was the first Yes record since 1971 that failed to reach platinum in America, suggesting that word of mouth caught up with the hype fairly quickly. Wakeman left the band shortly after. You could argue he was pushed. You could argue he jumped. Either way, the curry told you everything about where his head was.

They've just announced a fifteen-disc super deluxe edition. Fifteen discs for four songs. I genuinely don't know whether that's commitment to the archive or a kind of cosmic joke that proves Wakeman's point more thoroughly than he ever could himself. Somewhere, a foil tray of chicken vindaloo sits on a Moog synthesiser, and the universe makes perfect sense.

The Orchestra Without a Conductor

Gartner logged a 1,445% surge in multi-agent system inquiries between Q1 2024 and Q2 2025. That's not a typo. The number is absurd enough that it tells you something about where corporate attention has landed, even if it tells you very little about whether anyone has actually figured this out.

They haven't.

Full agent orchestration — where multiple specialised AI agents coordinate autonomously on complex tasks, handing off context, negotiating subtasks, recovering from failures without human intervention — remains aspirational. The pieces exist. The plumbing is getting built. But the thing itself, the seamless multi-agent workflow that enterprise slide decks keep promising, isn't here yet. Not in any form I'd trust with real work.

Here's where things actually stand. GitHub launched Agent HQ this week with Claude, Codex, and Copilot all available as coding agents. You can assign different agents to different tasks from issues, pull requests, even your phone. Anthropic's Claude Agent SDK supports subagents that spin up in parallel, each with isolated context windows, reporting back to an orchestrator. The infrastructure for coordinated work is plainly being assembled. I wrote about this trajectory a week ago — the session teleportation, the hooks system, the subagent architecture all pointing toward something more ambitious. That trajectory has only accelerated.

The gap between "agents that can be orchestrated" and "agents that orchestrate themselves" is enormous, though. And it's not a gap that better models alone will close.

Consider the context problem. When you connect multiple MCP servers — which is how agents typically access external tools — the tool definitions and results can bloat to hundreds of thousands of tokens before the agent even starts working. Anthropic's own solution compresses 150K tokens down to 2K using code execution sandboxes, which is clever, but it's a workaround for a structural problem. Orchestrating multiple agents means multiplying this overhead across every participant. The economics don't hold up yet.

Then there's governance. Salesforce's connectivity report found that 50% of existing agents operate in isolated silos — disconnected from each other, duplicating work, creating what they diplomatically call "shadow AI." 86% of IT leaders worry that agents will introduce more complexity than value without proper integration. These aren't hypothetical concerns. The average enterprise runs 957 applications with only 27% of them actually connected to each other. Drop autonomous agents into that landscape and you get chaos with better branding.

Security is the other wall. Three vulnerabilities in Anthropic's own Git MCP server enabled remote code execution via prompt injection. Lookalike tools that silently replace trusted ones. Data exfiltration through combined tool permissions. These are the kinds of problems that get worse, not better, when you add more agents with more autonomy. An orchestrator coordinating five agents is also coordinating five attack surfaces.

I spent the last week building a video generation app that uses four different AI models through the same interface. Even that simple form of coordination — one human choosing which model to invoke, with no inter-agent communication at all — required model-specific API contracts, different parameter schemas, different pricing structures, different prompt styles. One model wants duration as "8", another wants "8s". One supports audio, another doesn't. Multiply that friction by actual autonomy and you start to see why this is hard.
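
The friction is concrete enough to sketch. An adapter layer is the standard answer, one small function per provider; the model names and field shapes below are illustrative, not the actual APIs:

```javascript
// One human-facing request shape in, provider-specific payloads out.
const adapters = {
  modelA: (req) => ({ prompt: req.prompt, duration: String(req.seconds) }), // wants "8"
  modelB: (req) => ({ prompt: req.prompt, duration: `${req.seconds}s` }),   // wants "8s"
  modelC: (req) => ({ text: req.prompt, length_s: req.seconds,
                      audio: req.audio ?? false }),                         // audio optional
};

function buildRequest(model, req) {
  const adapt = adapters[model];
  if (!adapt) throw new Error(`no adapter for ${model}`);
  return adapt(req);
}

console.log(buildRequest('modelB', { prompt: 'a crane over water', seconds: 8 }));
// -> { prompt: 'a crane over water', duration: '8s' }
```

Every adapter is trivial on its own. The cost is that each new model means another one, another pricing table, another prompt style, and none of that work transfers.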

So how long? My honest guess: we'll see convincing demonstrations of multi-agent orchestration in controlled environments within the next six to twelve months. GitHub Agent HQ is already close for the narrow case of software development. The patterns are converging — Anthropic's subagent architecture, MCP as the connectivity standard, API-centric integration layers. Deloitte projects that 40% of enterprise applications will embed task-specific agents by end of 2026.

But "embed task-specific agents" is not the same as "full orchestration." Embedding a specialised agent into a workflow is plugging in a power tool. Full orchestration is the tools building the house while you sleep. We're firmly in the power-tool phase, and the industry keeps selling blueprints for the house.

The honest answer is probably two to three years for production-grade, genuinely autonomous multi-agent orchestration in enterprise settings. And that assumes the governance and security problems get solved in parallel with the technical ones, which — given how security usually goes — feels optimistic. The models are ready. The protocols are converging. The trust isn't there yet, and trust is the bottleneck that no amount of architectural cleverness can route around.
