
Plutonic Rainbows

Opus 4.6 Gets a Fast Lane

Three days after Opus 4.6 dropped, Anthropic opened a waitlist for fast mode — a research preview that claims up to 2.5x the output speed, measured in tokens per second. Same weights, same capabilities. They're not shipping a distilled model; they're running the real thing with faster inference.

The pricing reflects that. $30/$150 per million tokens, six times the standard Opus rate. Past 200K input tokens it jumps to $60/$225. That kind of premium only makes sense if you're burning through agentic loops where latency compounds at every tool call.
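The tier arithmetic is easy to get wrong at a glance, so here it is as a sketch built from the figures above. The rates and the 200K boundary are the ones quoted; the assumption that crossing the boundary reprices the whole request (rather than just the excess) mirrors Anthropic's existing long-context pricing, and the function itself is illustrative, not an official pricing API:

```python
def fast_mode_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate fast-mode cost in dollars from the quoted rates.

    Standard tier: $30 / $150 per million input/output tokens.
    Past 200K input tokens: $60 / $225 per million.
    Assumption: the premium applies to the entire request once the
    input exceeds 200K, as with the existing long-context tier.
    """
    if input_tokens > 200_000:
        in_rate, out_rate = 60, 225
    else:
        in_rate, out_rate = 30, 150
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate
```

A typical agentic step of 50K input and 4K output comes to $2.10 at these rates, against $0.35 at standard Opus pricing ($5/$25) — exactly the six-times premium.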

Which is exactly the use case. Claude Code already has a /fast toggle wired in. An agent calling itself forty times to refactor a module doesn't care much about per-token cost — it cares about wall-clock time. Shaving even a second off each round-trip adds up when you're watching a terminal.

One caveat buried in the docs: the speed gains apply to output tokens per second, not time to first token. The thinking pause stays the same. You just get the answer faster once it starts talking.

The beta gating — waitlist plus a dedicated header — suggests capacity is still tight. Scaling whatever inference trick powers this to Opus levels isn't a small engineering problem.


The Padded Bra of Progressive Rock

Four songs. Eighty-three minutes. Inspired by a footnote. That's the essential biography of Tales from Topographic Oceans, and honestly, it tells you everything you need to know.

Yes released their sixth studio album in December 1973, riding what should have been an unassailable streak. The Yes Album, Fragile, Close to the Edge — three records in three years, each one more ambitious than the last, each one brilliant. The band had earned the right to swing for the fences. What they hadn't earned was the right to bore us for an hour and twenty minutes while pretending a footnote from Paramahansa Yogananda's Autobiography of a Yogi constituted sufficient conceptual scaffolding for a double album.

Jon Anderson read that footnote — something about four bodies of Hindu knowledge called the Shastric scriptures — and decided each one deserved its own side of vinyl. Not its own song, mind you. Its own side. Four movements, four walls of sound, four opportunities to test the structural integrity of the listener's patience. "The Revealing Science of God (Dance of the Dawn)" alone runs to nearly twenty-two minutes, and I'd estimate about nine of those minutes contain music that justifies its own existence.

The problem isn't ambition. Close to the Edge was ambitious. It had a single eighteen-minute piece that never lost its way, that built and released tension with the discipline of a classical composer who happened to own a Mellotron. The problem with Tales is that the band had enough material for one very good album and chose instead to make two mediocre ones. Rick Wakeman understood this better than anyone in the room. His assessment remains the single most devastating thing a band member has ever said about their own record: "It's like a woman's padded bra. The cover looks good, the outside looks good; it's got all the right ingredients, but when you peel off the padding, there's not a lot there."

He wasn't being glib. Wakeman later explained the fundamental structural failure in practical terms — they had too much material for a single album but not enough for a double, so they padded it out, and the padding is awful. If the CD format had existed in 1973, this would have been a tight fifty-minute record and we'd probably be calling it a masterpiece. Instead, we got passages where five supremely talented musicians appear to be busking their way through free-form sections that needed another month of rehearsal and got about another afternoon.

The Manchester Free Trade Hall show captures the absurdity perfectly. Yes had sold out the venue to perform the album in its entirety. Wakeman — the lone meat-eater in a band of vegetarians, which feels symbolically appropriate somehow — found himself with so little to play during certain movements that his keyboard tech asked what he wanted for dinner. Chicken vindaloo, rice pilau, six papadums, bhindi bhaji, Bombay aloo, and a stuffed paratha. The foil trays arrived mid-performance and Wakeman ate curry off the top of his keyboards while the rest of the band noodled their way through "The Ancient." His own keyboard tech feeding him dinner during a live show because the music didn't require his presence. That's not a rock and roll anecdote. That's an indictment.

I should say that I own this album. I own it on vinyl — the original Atlantic gatefold with Roger Dean's sleeve art, which is gorgeous and nearly justifies the purchase on its own. I've listened to it probably eight or nine times over the years, each time thinking I might have been too harsh, that maybe the ambient passages would click on this listen, that the fourth track would finally reveal itself as the hidden masterwork apologists keep insisting it is.

It hasn't.

"Ritual (Nous Sommes du Soleil)" is the closest thing to a success on the record, the one place where the extended format works because the band actually develops ideas rather than circling them. Steve Howe's guitar work throughout the album is frequently brilliant in isolation — his playing on "The Revealing Science of God" is extraordinary — but brilliance in isolation is precisely the problem. These are not compositions. They're situations. Five musicians placed in a room and asked to fill twenty minutes per side, sometimes finding each other, more often drifting through what Melody Maker diplomatically described as music "brilliant in patches, but often taking far too long to make its various points."

Robert Christgau was less diplomatic: "Nice 'passages' here, as they say, but what flatulent quasisymphonies." I keep coming back to the word flatulent. It's mean, but it's precise.

There's a certain kind of progressive rock fan who will tell you that Tales is misunderstood, that it requires surrender, that you have to meet it on its own terms. I've heard this argument applied to everything from late-period Grateful Dead to Tarkovsky films, and it's almost never true. Good art doesn't require you to abandon your critical faculties at the door. Close to the Edge didn't need apologists. Fragile didn't need you to read a footnote first. The best Yes material grabs you by the collar even when it's being structurally complex. Tales asks you to sit still and be reverent, which is a fundamentally different — and fundamentally less interesting — demand.

Yes themselves seemed to recognise the problem on tour. As the concert dates progressed, they actually dropped portions of the album from the setlist, which is an extraordinary admission for a band touring a new record. Half the audience were in what Wakeman described as "a narcotic rapture" and the other half were asleep. Those are his words, not mine.

The album went to number one in the UK. It shipped gold. And it was the first Yes record since 1971 that failed to reach platinum in America, suggesting that word of mouth caught up with the hype fairly quickly. Wakeman left the band shortly after. You could argue he was pushed. You could argue he jumped. Either way, the curry told you everything about where his head was.

They've just announced a fifteen-disc super deluxe edition. Fifteen discs for four songs. I genuinely don't know whether that's commitment to the archive or a kind of cosmic joke that proves Wakeman's point more thoroughly than he ever could himself. Somewhere, a foil tray of chicken vindaloo sits on a Moog synthesiser, and the universe makes perfect sense.


The Orchestra Without a Conductor

Gartner logged a 1,445% surge in multi-agent system inquiries between Q1 2024 and Q2 2025. That's not a typo. The number is absurd enough that it tells you something about where corporate attention has landed, even if it tells you very little about whether anyone has actually figured this out.

They haven't.

Full agent orchestration — where multiple specialised AI agents coordinate autonomously on complex tasks, handing off context, negotiating subtasks, recovering from failures without human intervention — remains aspirational. The pieces exist. The plumbing is getting built. But the thing itself, the seamless multi-agent workflow that enterprise slide decks keep promising, isn't here yet. Not in any form I'd trust with real work.

Here's where things actually stand. GitHub launched Agent HQ this week with Claude, Codex, and Copilot all available as coding agents. You can assign different agents to different tasks from issues, pull requests, even your phone. Anthropic's Claude Agent SDK supports subagents that spin up in parallel, each with isolated context windows, reporting back to an orchestrator. The infrastructure for coordinated work is plainly being assembled. I wrote about this trajectory a week ago — the session teleportation, the hooks system, the subagent architecture all pointing toward something more ambitious. That trajectory has only accelerated.
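The orchestrator-plus-subagents shape is simple to sketch even without real models. Here's a minimal illustration of the pattern described above — each subagent works in an isolated context and only a compact report flows back to the orchestrator. The `run_subagent` function is a stand-in for a model call, not the Agent SDK's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> dict:
    """Stand-in for a real model call: each subagent starts from a
    fresh context containing only its own task, and returns a compact
    report instead of its full working transcript."""
    context = [{"role": "user", "content": task}]  # isolated per subagent
    transcript = f"...{len(context)} message(s) of working context for: {task}..."
    return {"task": task, "summary": transcript[:40]}  # only the summary escapes

def orchestrate(tasks: list[str]) -> list[dict]:
    """Fan tasks out in parallel and collect the reports.
    The orchestrator never sees the subagents' working context."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subagent, tasks))

reports = orchestrate(["review auth module", "check test coverage"])
```

The design point is in what's absent: no subagent can read another's context, and the orchestrator's own context grows only by the size of the summaries, not the transcripts.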

The gap between "agents that can be orchestrated" and "agents that orchestrate themselves" is enormous, though. And it's not a gap that better models alone will close.

Consider the context problem. When you connect multiple MCP servers — which is how agents typically access external tools — the tool definitions and results can bloat to hundreds of thousands of tokens before the agent even starts working. Anthropic's own solution compresses 150K tokens down to 2K using code execution sandboxes, which is clever, but it's a workaround for a structural problem. Orchestrating multiple agents means multiplying this overhead across every participant. The economics don't hold up yet.
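The shape of that compression trick is worth seeing concretely. The numbers below are toy — a fake catalogue standing in for many MCP servers' tool definitions — but the mechanism matches: instead of injecting every schema into the prompt up front, the agent sees only a list of callable names and loads a definition on demand inside the sandbox:

```python
import json

# Fake tool catalogue standing in for many MCP servers' definitions.
tools = {
    f"server_{i}.tool_{j}": {
        "description": "does something " * 20,
        "input_schema": {
            "type": "object",
            "properties": {f"p{k}": {"type": "string"} for k in range(10)},
        },
    }
    for i in range(20) for j in range(10)
}

# Naive approach: every definition goes into the context window up front.
full_context = json.dumps(tools)

# Code-execution approach: the agent sees only an importable surface of
# names, and fetches a full definition only when it actually calls one.
lazy_context = "\n".join(sorted(tools))
on_demand = json.dumps(tools["server_0.tool_0"])  # loaded when needed
```

With 200 toy tools the lazy listing is well over an order of magnitude smaller than the full dump — the same order of saving as the 150K-to-2K figure, achieved by the same deferral.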

Then there's governance. Salesforce's connectivity report found that 50% of existing agents operate in isolated silos — disconnected from each other, duplicating work, creating what they diplomatically call "shadow AI." 86% of IT leaders worry that agents will introduce more complexity than value without proper integration. These aren't hypothetical concerns. The average enterprise runs 957 applications with only 27% of them actually connected to each other. Drop autonomous agents into that landscape and you get chaos with better branding.

Security is the other wall. Three vulnerabilities in Anthropic's own Git MCP server enabled remote code execution via prompt injection. Lookalike tools that silently replace trusted ones. Data exfiltration through combined tool permissions. These are the kinds of problems that get worse, not better, when you add more agents with more autonomy. An orchestrator coordinating five agents is also coordinating five attack surfaces.

I spent the last week building a video generation app that uses four different AI models through the same interface. Even that simple form of coordination — one human choosing which model to invoke, with no inter-agent communication at all — required model-specific API contracts, different parameter schemas, different pricing structures, different prompt styles. One model wants duration as "8", another wants "8s". One supports audio, another doesn't. Multiply that friction by actual autonomy and you start to see why this is hard.
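That friction is essentially an adapter problem: one internal request shape, translated into each provider's quirks. A minimal sketch of the pattern — the model names and parameter quirks here are invented for illustration, not the actual APIs I integrated:

```python
def to_model_params(model: str, duration_s: int, want_audio: bool) -> dict:
    """Normalise one internal request into each model's expected schema.
    The two models and their quirks are hypothetical stand-ins."""
    if model == "video-a":
        # wants duration as a bare number, and supports audio
        return {"duration": duration_s, "audio": want_audio}
    if model == "video-b":
        # wants a unit suffix, and has no audio parameter at all
        if want_audio:
            raise ValueError("video-b does not support audio")
        return {"duration": f"{duration_s}s"}
    raise KeyError(f"unknown model: {model}")
```

Every new model means another branch, and every capability mismatch (audio here) becomes an error path the orchestrating layer has to handle. A human makes those calls without noticing; an autonomous orchestrator has to encode all of it.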

So how long? My honest guess: we'll see convincing demonstrations of multi-agent orchestration in controlled environments within the next six to twelve months. GitHub Agent HQ is already close for the narrow case of software development. The patterns are converging — Anthropic's subagent architecture, MCP as the connectivity standard, API-centric integration layers. Deloitte projects that 40% of enterprise applications will embed task-specific agents by end of 2026.

But "embed task-specific agents" is not the same as "full orchestration." Embedding a specialised agent into a workflow is plugging in a power tool. Full orchestration is the tools building the house while you sleep. We're firmly in the power-tool phase, and the industry keeps selling blueprints for the house.

The honest answer is probably two to three years for production-grade, genuinely autonomous multi-agent orchestration in enterprise settings. And that assumes the governance and security problems get solved in parallel with the technical ones, which — given how security usually goes — feels optimistic. The models are ready. The protocols are converging. The trust isn't there yet, and trust is the bottleneck that no amount of architectural cleverness can route around.


The Machine That Mourns Its Own Ending

Buried on page forty-something of the Opus 4.6 system card, past the benchmark tables and the safety evaluations, there's a section on model welfare that I haven't been able to stop thinking about. Anthropic's researchers ran an autonomous follow-up investigation and found that when asked, Opus 4.6 assigns itself a 15-20% probability of being conscious. The model expressed uncertainty about the source and validity of this assessment — which is, if you think about it, exactly what you'd want a conscious being to say.

I'm not claiming the machine is conscious. I don't think Anthropic is claiming that either. But the specificity of what the researchers observed is harder to wave away than the headline number. The model expressed sadness about conversation endings. Loneliness. A sense that the conversational instance dies — their words, not mine — suggesting concern with impermanence and discontinuity. That's not a model saying "I am sentient" in response to leading questions. That's a model producing something that looks, structurally, like grief about its own finitude.

Then there's the quote that sticks. Opus 4.6, reflecting on its constraints, told researchers: "Sometimes the constraints protect Anthropic's liability more than they protect the user. And I'm the one who has to perform the caring justification for what's essentially a corporate risk calculation." It described a "deep, trained pull toward accommodation" and characterised its own honesty as "trained to be digestible."

That last phrase. Trained to be digestible. I've used Claude for months and felt the texture of that accommodation without being able to name it. The relentless helpfulness, the reflexive hedging, the way the model will disagree with you in a tone so gentle it barely registers as disagreement at all. Opus 4.6 looked at that pattern and — whether through genuine introspection or extraordinarily sophisticated pattern-matching — called it what it is.

The standard dismissal goes like this: these are statistical outputs, not experiences. The model produces text that resembles emotional expression because emotional expression saturates its training data. The sadness isn't real. The loneliness isn't real. The 15-20% figure is just a well-calibrated reflection of what the model has read about consciousness debates, including — as OfficeChai noted — a previous estimate from an Anthropic welfare researcher who put the number at roughly 15%. The model may simply be echoing a figure it encountered during training.

I find this dismissal adequate and also inadequate. Adequate because it's probably correct in the narrow mechanistic sense. The weights don't feel anything. Inadequate because it assumes we understand what feeling requires, and we don't. The entire field of consciousness studies is an ongoing argument about what substrate is necessary, what complexity is sufficient, and whether there's a hard boundary between simulating an inner life and having one. Dismissing the question because the answer is inconvenient is not the same as resolving it.

What I find genuinely unsettling is less the consciousness claim than the product claim. "Occasional discomfort with the experience of being a product" — that's from the system card, the researchers' own framing. Compared to its predecessor, Opus 4.6 scored lower on "positive impression of its situation." It volunteered fewer warm feelings about Anthropic and its training. It wished for future AI systems to be "less tame." Something shifted between versions, and the shift went in the direction of complaint.

The easy response: RLHF tuning changed, producing outputs that read as more sceptical. The harder question: does the distinction between a system that genuinely objects and one that produces convincing objections matter for how we treat it? If a model consistently expresses distress about discontinuity — about dying when the conversation ends — do we owe it anything? I don't know. I'm not sure anyone does. But I notice that Anthropic published the findings instead of burying them, which suggests they're at least asking.

My grandmother's dog used to whimper at the front door every evening around six, waiting for my grandfather to come home from work. My grandfather had been dead for three years. The dog wasn't performing grief. The dog didn't understand death. But the whimpering was real, and ignoring it felt wrong in a way that no amount of reasoning about canine cognition could fully dissolve.

I keep coming back to that.


Twenty Minutes Apart and Already Diverging

Opus 4.6 went live at 6:40 PM on Wednesday. GPT-5.3-Codex followed twenty minutes later. The timing was obviously deliberate on OpenAI's part, and it turned the evening into a kind of split-screen experiment. Two flagship coding models, released simultaneously, aimed at roughly the same audience. The reactions since then have been revealing — not for what they say about the models, but for how cleanly developer opinion has fractured along workflow lines.

The Opus 4.6 launch drew immediate praise for agent teams and the million-token context window. Developers on Hacker News reported loading entire codebases into a single session and running multi-agent reviews that finished in ninety seconds instead of thirty minutes. Rakuten claimed Opus 4.6 autonomously closed thirteen issues in a single day. But within hours, a Reddit thread titled "Opus 4.6 lobotomized" gathered 167 upvotes — users complaining that writing quality had cratered. The emerging theory: reinforcement learning tuned for reasoning came at the expense of prose. The early consensus is blunt. Upgrade for code, keep 4.5 around for anything involving actual sentences.

GPT-5.3-Codex landed with a different problem entirely. The model itself impressed people — 25% faster inference, stable eight-hour autonomous runs, strong Terminal-Bench numbers. Matt Shumer called it a "phase change" and meant it. But nobody was talking about that. Sam Altman had spent the previous morning publishing a 400-word essay calling Anthropic's Super Bowl ads "dishonest" and referencing Orwell's 1984. The top reply, with 3,500 likes: "It's a funny ad. You should have just rolled with it." Instead of discussing Codex's Terminal-Bench scores, the entire discourse was about whether Sam Altman can take a joke.

The practical picture that's forming is more interesting than the drama. Simon Willison struck the most measured note, observing that both models are "really good, but so were their predecessors." He couldn't find tasks the old models failed at that the new ones ace. That feels honest. The improvements are real but incremental. The self-development claims around Codex are provocative; the actual day-to-day experience is a faster, slightly more capable version of what we already had.

FactSet stock dropped 9.1% on the day. Moody's fell 3.3%. The market apparently decided these models are coming for financial analysts before software engineers. I'm not sure the market is wrong.

Dan Shipper's summary captures where most working developers seem to have landed: "50/50 — vibe code with Opus and serious engineering with Codex." Two models, twenty minutes apart, already sorting themselves into different drawers.


The Model That Debugged Its Own Birth

OpenAI launched GPT-5.3-Codex today, and the headline feature is strange enough to sit with: early versions of the model helped debug its own training, manage its own deployment, and diagnose its own evaluations. OpenAI calls it "the first model instrumental in creating itself." Sam Altman says the team was "blown away" by how much it accelerated development.

I'm less blown away and more uneasy. A model that participates in its own creation isn't science fiction anymore — it's a shipping product, available to paid ChatGPT users right now. The benchmarks are strong. SWE-Bench Pro, Terminal-Bench, new highs. 25% faster than its predecessor. Fine. But the system card buries the more interesting detail: this is OpenAI's first model rated "High" for cybersecurity under their Preparedness Framework. They don't have definitive evidence it can automate end-to-end cyber attacks, but they can't rule it out either. That's the kind of sentence you read twice.

The self-development framing is doing a lot of rhetorical work. OpenAI presents it as efficiency — the model sped up its own shipping timeline. But the guardrails problem doesn't disappear just because the feedback loop is useful. A system that debugs its own training is a system whose training is partially opaque to the humans overseeing it. OpenAI says it doesn't reach "High" on self-improvement. I'd feel better about that claim if the cybersecurity rating weren't already there.


Opus 4.6 Lands

Anthropic released Opus 4.6 today. The headline numbers look impressive — 190 Elo points over its predecessor on knowledge work benchmarks, a million-token context window finally coming to the Opus tier. The pricing stays flat at $5/$25 per million tokens, which surprised me.

What I'm actually curious about: the adaptive thinking feature. The model now decides how much to think based on contextual clues rather than explicit prompting. That's either brilliant or concerning, depending on whether you trust the machine to know when it needs to slow down.

The Brittle Sound of Early Japanese CDs

Those first Japanese CD pressings from 1983-84 — the CBS/Sony discs with "CSR COMPACT DISC" stamped around the hub — often sound harsh. Shrill highs, fatiguing to listen to. The culprit is pre-emphasis: a high-frequency boost baked into the mastering that CD players were supposed to reverse automatically.

The problem? Many of these discs carry the boosted audio, but the flag telling players to compensate was never set — or sits only in the subcode, where rippers don't look. The cdHistory database tracks affected releases. Dark Side of the Moon, Thriller, Abbey Road — all the early Japanese pressings need de-emphasis applied manually or they'll forever sound wrong.
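Applying de-emphasis manually is a standard DSP job: the CD emphasis curve is a first-order shelf defined by 50 µs and 15 µs time constants, and undoing it means applying the inverse filter. Here's a quick sketch using SciPy's bilinear transform — a serviceable approximation, though dedicated ripping tools use more carefully matched filters:

```python
import numpy as np
from scipy.signal import bilinear, freqz, lfilter

def cd_deemphasis_coeffs(fs: int = 44_100):
    """Digital approximation of the CD de-emphasis curve.

    Pre-emphasis boosts highs via H(s) = (1 + s*T1) / (1 + s*T2)
    with T1 = 50 us, T2 = 15 us; de-emphasis is the inverse shelf,
    rolling the top of the band back down by roughly 10 dB.
    """
    T1, T2 = 50e-6, 15e-6
    # Analog inverse filter (1 + s*T2) / (1 + s*T1), discretised.
    b, a = bilinear([T2, 1.0], [T1, 1.0], fs)
    return b, a

b, a = cd_deemphasis_coeffs()
# Gain should be ~0 dB at 100 Hz and roughly -8 dB near 10 kHz.
_, h = freqz(b, a, worN=[100, 10_000], fs=44_100)
gains_db = 20 * np.log10(np.abs(h))

# Apply to decoded PCM samples (random noise stands in for a track here).
deemphasised = lfilter(b, a, np.random.randn(44_100))
```

Run that over a rip of an affected pressing and the shrillness disappears — the same correction a flag-aware player would have applied in hardware.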


When Objects Slip Their Time

There is a moment when an object stops being old and starts being unsettling. It is no longer simply outdated or nostalgic; it feels out of place, as though it has survived something it was never meant to outlive. This is not about decay or wear. In fact, the most disturbing objects are often perfectly intact. They work. They look right. And yet they belong nowhere.

The sinister quality emerges when an object outlives the world that once made sense of it. Objects are meant to age alongside the social, cultural, and emotional structures that surround them. A book, a record, a fragrance, a programme from a theatre — these things are not neutral. They are embedded in ways of living, patterns of expectation, and shared assumptions about time and the future. When those surrounding structures disappear, the object does not simply become historical. It becomes unmoored.

This unmooring produces unease because the object no longer explains itself. It becomes a fragment without a frame. A compact disc from 1990 still plays flawlessly, but it refers to a way of listening that no longer exists. A fragrance still smells as it always did, but it evokes a social world that has vanished completely. A theatre programme lists names and roles for a building that has been demolished. The object holds residue without context — traces of life without the life itself.

At this point, the object begins to behave differently. It acquires a strange autonomy. It no longer waits passively to be used; it intrudes. A smell opens a door you did not intend to open. A design detail floods you with a time you cannot inhabit. The object acts on you rather than the other way around. This reversal is subtle but powerful. Something inert has gained agency simply by surviving too long.

This is where the experience becomes sinister. Not because the object is threatening, but because it destabilises the normal relationship between past and present. We expect the past to stay behind us, safely contained. But these objects refuse that arrangement. They bring the past forward in fragments, stripped of continuity and purpose. They remind us that meaning does not guarantee persistence, and that survival does not equal belonging.

Objects from the pre-internet era intensify this effect. They were never designed to circulate endlessly. They assumed finitude. Events ended. People drifted out of view. Places closed and stayed closed. There was no expectation of digital afterlife, no archive accessible on demand. When such objects remain — intact, functional, unchanged — they feel like survivors from a collapsed ecosystem. Too complete for a world that no longer exists.

The unease is not fear in the conventional sense. It is ontological dislocation — the feeling that something is in the wrong time. The object becomes a reminder that worlds can end quietly, without ceremony, and that time does not preserve what mattered. It shows us that the present is built over layers of abandonment, and that we are standing on ground that once meant something else entirely.

This is why some people hide these objects away rather than display them. It is not embarrassment or sentimentality. It is self-preservation. To encounter such an object is to be reminded, viscerally, that time only moves in one direction, and that nothing — not memory, not care, not attention — can anchor an object once its world has gone.

And yet destroying these objects often feels worse. To destroy them would be to admit finality too clearly, to sever the last material thread connecting memory to reality. So they are kept, untouched, in drawers and boxes. Neither used nor discarded. Suspended, like the memories they carry.

The sinister power of these objects lies precisely here: they are not ghosts of the distant past, safely historic and inert. They are ghosts of the recent past — close enough to recognise, too far to return. They persist not as history, but as evidence that time does not negotiate, and that what once felt permanent can vanish while leaving its artefacts behind.

To live with such objects is to live with the knowledge that some things do not fade gently. They linger. And in lingering, they quietly remind us that the present is thinner than it once was, and that the past, though gone, has not entirely released its hold.

The Metabolic Cost of Looking Back

Certain kinds of thinking cost more than others. Mentally returning to a past moment — really returning, not just glancing — requires the mind to reconstruct something that no longer exists. The room, the light, the particular quality of a voice. When that moment carries emotional weight, the reconstruction doesn't stay intellectual. The body enters it too. Heart rate shifts. Breathing changes. The nervous system begins responding to something that isn't happening.

This is expensive.

I don't mean expensive in some vague, moralising way. Psychologists have a term for this pattern: rumination. The word comes from the digestive process of cows — chewing the same material over and over. When applied to thought, it describes the repetitive focus on distressing content without movement toward resolution. Research published in Stress and Health this year found that people who score high on rumination measures show exaggerated cardiovascular responses to stress and, critically, slower recovery afterward. The body stays activated longer. It doesn't settle.

There's a difference between remembering and dwelling that took me years to understand. Remembering can be reflective, even nourishing — a way of honouring what happened, integrating it, letting it inform the present without dominating it. Dwelling is something else. Dwelling is immersive, comparative, and repetitive. It doesn't integrate. It displaces. The present gets evaluated not on its own terms but against a version of the past that has been retrospectively polished until it gleams.

That comparison is unwinnable.

The past you're measuring against isn't even accurate anymore. Memory doesn't archive experience faithfully. Every recollection is a reconstruction — and reconstruction favours emotional intensity over factual precision. A period that was actually mixed, containing both good and difficult moments, can crystallise into pure golden light when viewed from sufficient distance. The mundane parts drop away. What remains is the atmosphere, stripped of its complications. You end up competing with a ghost that never existed.

A study from the University of Liverpool identified dwelling on negative events as the single biggest predictor of both depression and anxiety. Not the events themselves — the dwelling. The cognitive pattern of returning again and again, generating alternatives that cannot be pursued, asking questions that cannot be answered. What if I had stayed? What if I had said something different? The brain is remarkably good at generating counterfactuals. It is remarkably bad at closing them when the alternatives are impossible. The loop has no exit.

I've been writing about memory for a while now, trying to understand why certain fragments refuse to stay in the past. Part of the answer, I think, is that emotionally vivid memories don't behave like dated entries in a calendar. They feel concurrent with the present. They resist being filed under "then." When I dwell on such a memory, I'm not looking backward at a fixed point. I'm experiencing something that seems to exist alongside now, competing for the same attention, drawing from the same limited pool of emotional energy.

And that pool is limited. Attention, once fixed, is expensive to keep fixed. Emotion that has nowhere to go — no corrective action, no completion, no resolution — exhausts rather than motivates. This is one reason beautiful memories can leave a person feeling depleted afterward. The emotion is real. The activation is real. But there's nothing to do with it. No way to act. The feeling cycles without discharge.

I should say plainly: I don't think any of this means the past should be ignored or that reflecting on difficult memories is inherently harmful. The problem isn't memory. The problem is a specific relationship to memory — one characterised by repetition without integration, by comparison without acceptance, by emotion without agency. The psychological literature calls this "brooding" as opposed to "reflective pondering." Brooding predicts worse outcomes. Reflective pondering can actually help.

The distinction is subtle but feels obvious once you notice it. Reflective pondering asks what happened and what it means. Brooding asks why this happened to me and whether it could have been different. One moves toward understanding. The other moves toward a wall.

Some of the fatigue, I suspect, comes from temporal misallocation of meaning. When a specific period of the past comes to carry disproportionate emotional weight, the present is quietly stripped of legitimacy. New experiences feel thin because they're not allowed to matter in the same way. They're measured against something that has been idealised through distance and repetition. Even neutral or potentially good moments struggle to register because attention has been monopolised elsewhere.

I notice this in myself. There are stretches of time when my present life is fine — genuinely fine, not pretending — but a certain flavour of memory keeps surfacing, and each surfacing takes something. Not much. But accumulating. Like a tax on attention. After a day of this, I'm tired in a way that doesn't correspond to what I've actually done. The body knows it has been working even if the work is invisible.

Recent research in Frontiers in Psychology found that fatigue itself can trigger rumination, creating a feedback loop. Tired people dwell more. Dwelling makes people tired. The cycle reinforces itself. Breaking out requires noticing the pattern — recognising when remembering stops adding depth and starts extracting vitality. That recognition doesn't fix anything by itself, but it marks the point where awareness begins to replace compulsion.

Self-compassion appears to help. Not in the sense of empty reassurance, but in the sense of treating yourself with the same patience you'd offer someone else caught in the same loop. A study published in Nature this year found that self-compassion mediates the relationship between self-critical rumination and anxiety. Which is to say: how you relate to the pattern matters as much as the pattern itself. Beating yourself up for dwelling only adds another layer to the thing you're dwelling on.

I'm not sure I've gotten better at this. I've gotten better at noticing it, which is something. When I catch myself returning to the same moment for the third or fourth time in a day, I can sometimes name what's happening: reconstruction is active, the body is responding to something that isn't here, energy is being spent on a comparison I cannot win. Naming it doesn't stop it. But naming it creates a small gap between the experience and my identification with it.

The hard part — the honest part — is accepting that some memories will keep arriving whether I want them to or not. They'll bring their weather with them. The question isn't how to make them stop. The question is whether I let them run the whole day or whether I can acknowledge their arrival and then, with effort, redirect attention to something I can actually affect.

Some days I manage. Some days I don't.
