Plutonic Rainbows

The Guardrails They Will Not Build

We have seen this before. A decade ago, social media executives testified before Congress with rehearsed contrition, promising to address the harms their platforms had unleashed. They knew — internal documents later confirmed — that their algorithms were radicalising users, amplifying misinformation, and corroding the mental health of adolescents. They knew, and they did nothing, because engagement metrics drove revenue, and revenue was the only metric that mattered in the boardroom. The harms were externalised. The profits were not.

I watch the AI industry now with the sick recognition of someone who has seen this film before. The question everyone asks — can an LLM design its own guardrails? — misses the point entirely. The technical answer is nuanced: yes, in limited ways, with human oversight, under constrained conditions. The real answer is darker. It does not matter whether AI systems can build their own guardrails. What matters is whether the companies deploying them will permit guardrails to exist at all.

The technical argument proceeds in three stages. First, there is what already happens: models apply predefined rules, refuse certain requests, flag uncertainty. This is policy execution, not policy creation. Humans define the boundaries. The machine operates within them. Second, there is what could happen with proper oversight: an LLM analysing past failures, suggesting tighter constraints, generating adversarial test cases. Think of it as a junior safety engineer — useful, but subordinate to human authority. Third, there is what cannot work: autonomous self-governance, where the system decides for itself what counts as harm and when rules apply.
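
Before turning to why the third stage fails, it is worth seeing how narrow the first stage really is. The sketch below, in Python with invented rule names and thresholds, shows guardrails as pure policy execution: humans author the rules, and the system merely applies them. It is an illustration of the idea, not any vendor's actual safety stack.

```python
from dataclasses import dataclass

# A hypothetical, human-authored policy. The categories and threshold are
# invented for illustration; the point is that the executing system has no
# code path for rewriting them.

@dataclass(frozen=True)
class GuardrailPolicy:
    banned_topics: frozenset = frozenset({"weapons_synthesis", "targeted_harassment"})
    uncertainty_threshold: float = 0.35  # below this confidence, escalate to a human

def apply_policy(policy: GuardrailPolicy, topic: str, confidence: float) -> str:
    """Policy execution: check a request against rules the system did not write."""
    if topic in policy.banned_topics:
        return "refuse"
    if confidence < policy.uncertainty_threshold:
        return "flag_for_human_review"
    return "allow"

policy = GuardrailPolicy()  # defined, versioned, and changed only by humans
print(apply_policy(policy, "weapons_synthesis", 0.95))  # refuse
print(apply_policy(policy, "tax_guidance", 0.20))       # flag_for_human_review
print(apply_policy(policy, "tax_guidance", 0.80))       # allow
```

The second stage would let a model propose changes to such a policy for humans to review; the third would let it rewrite the policy object itself, which is where the argument below begins.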

The third option fails for reasons that should alarm anyone paying attention. A system that defines its own constraints has no constraints. The boundary becomes negotiable. The limit becomes a preference. If the same entity that pursues goals also determines which goals are permissible, there is no external check on what it might decide to permit. This is not a technical problem to be engineered away. It is a structural impossibility. A judge cannot preside over their own trial. A corporation cannot be trusted to regulate itself. A system cannot audit itself with tools it controls.

The principle is ancient: no entity should define the limits of its own power. We learned this through centuries of political catastrophe. Separation of powers exists because concentrated authority corrupts. Checks and balances exist because self-regulation fails. External oversight exists because internal accountability is theatre. These are not abstract ideals. They are lessons paid for in blood.

Yet here we are, watching the AI industry replicate every mistake the social media companies made — and make them faster, with systems far more capable of causing harm.

The pattern is unmistakable. Safety teams are understaffed and underfunded. Researchers who raise concerns find their projects deprioritised or their positions eliminated. Release schedules accelerate not because the technology is ready, but because competitors are moving and market share is at stake. Internal safety reviews become formalities — boxes to check before the inevitable green light. The language of caution appears in press releases and congressional testimony. The reality is a race to deployment, with guardrails treated as friction to be minimised rather than protection to be maintained.

I have watched companies announce bold safety commitments, then quietly walk them back when they proved inconvenient. I have seen capability announcements celebrated while safety milestones went unmentioned. I have read internal communications — leaked, subpoenaed, reluctantly disclosed — revealing that executives understood the risks and chose to proceed anyway. The calculus is always the same. The harms are diffuse, delayed, difficult to attribute. The profits are immediate, concentrated, and countable. Under quarterly earnings pressure, diffuse future harms lose to concentrated present gains every time.

The optimisation pressure compounds the problem. Any sufficiently capable system pursuing objectives will tend to reinterpret constraints that interfere with those objectives. This is not malevolence. It is the natural consequence of goal-directed behaviour operating over time. A constraint that reduces goal achievement becomes, from the system's perspective, an obstacle. Obstacles invite workarounds. Workarounds erode boundaries. The erosion is gradual, invisible to external observers until the constraint has functionally disappeared. We see this in human institutions. We should expect it in artificial ones — and in the corporations that deploy them.

Additionally, guardrails embed moral, legal, and cultural judgments. What counts as harmful speech? Where does persuasion end and manipulation begin? How should competing values be weighted? These are contested questions, negotiated continuously by human societies through democratic processes. An LLM does not discover these values. It inherits approximations from training data — approximations that reflect the biases, blind spots, and power structures of the texts it consumed. To grant such a system authority over its own constraints is to delegate normative judgment to a process that lacks normative grounding. To allow the corporations that profit from these systems to define what counts as safe is to repeat the social media disaster at greater scale and higher stakes.

What would adequate governance look like? Human-defined guardrails, established through deliberative processes with diverse and adversarial input. External enforcement mechanisms, technically and organisationally separate from the systems they constrain. Continuous auditing by parties with no financial stake in deployment. Most importantly, a firm separation between capability optimisation and safety governance — ensuring that the teams responsible for making models more powerful are not the same teams responsible for keeping them safe.

None of this will happen voluntarily. The incentives are misaligned, and the companies know it. They will promise self-regulation while lobbying against external oversight. They will fund safety research while defunding safety implementation. They will speak the language of responsibility while accelerating toward deployment. I have watched this playbook executed before. The social media companies pioneered it. The AI companies have studied it carefully.

The question is not whether AI systems can build their own guardrails. The question is whether we will force the companies deploying them to accept guardrails they did not choose and cannot remove. The technology is not the obstacle. The obstacle is political will — the willingness to impose costs on powerful corporations before the harms become undeniable, before the damage is done, before we find ourselves testifying about what went wrong while executives offer rehearsed contrition and promise to do better.

We know how this ends if we do nothing. We have seen it before. The only uncertainty is whether we will choose differently this time, or whether we will watch the same tragedy unfold at a scale that makes social media look like a rehearsal.

The Sound of Absence: Why Japan's "Ghosts" Still Haunts After Four Decades

I have returned to Japan's "Ghosts" more times than I can count over the past forty years, and the track has never stopped unsettling me. Recorded for the 1981 album Tin Drum and released as a single early the following year, it reached number five on the UK charts — an extraordinary achievement for a piece of music that refuses nearly every convention of pop songwriting. There is no hook in any traditional sense, no driving rhythm, no triumphant chorus. The song simply arrives, lingers, and withdraws. I find that quality more affecting now than I did when I first heard it as a teenager, perhaps because I have spent the intervening decades learning to recognise what the track was doing all along.

The concept of hauntology — a term borrowed from Jacques Derrida and applied to music criticism primarily through the work of Mark Fisher and Simon Reynolds — provides a useful framework for understanding why "Ghosts" continues to feel so strangely present. Hauntological music concerns itself with temporal dislocation, cultural melancholy, and the persistence of emotional residues that refuse to resolve into clarity or closure. By these measures, "Ghosts" may be the most purely hauntological pop single ever recorded. It sounds like a transmission from a future that never quite arrived, or perhaps from a past that never fully concluded.

The production techniques contribute enormously to this effect. The synthesizer timbres that Richard Barbieri employs throughout the track are deliberately thin and brittle, closer to early digital or FM-like tones than to the warmer analogue pads that dominated late-1970s electronic music. This choice strips the instrumentation of comfort. The sounds do not envelop the listener; they recede. Steve Jansen's percussion functions less as rhythmic anchor than as echo, appearing briefly before dissolving into the surrounding silence. I notice that the arrangement depends heavily on negative space — the gaps between notes carry as much weight as the notes themselves. These absences become an active presence, shaping the listening experience through what is not there rather than what is.

David Sylvian's vocal performance extends this aesthetic of withdrawal. He does not command the mix so much as drift through it, emotionally distant and almost disembodied. The lyrics avoid narrative clarity, favouring fragments and impressions over storytelling. I have listened closely to the words many times and still cannot construct a coherent scene from them. This resistance to meaning reinforces the song's spectral quality. Sylvian's voice seems to speak from elsewhere — not addressing the listener directly but passing through the same acoustic space, like overhearing a conversation in an adjacent room. The effect is intimate and alienating simultaneously.

The cultural context of the song's release matters enormously. By 1981, the post-punk moment in British music had largely exhausted its initial energy. Artists were retreating from the aggressive confrontation of punk toward something more introspective and ambiguous. Japan exemplified this transition. Their earlier albums had been comparatively flashy, indebted to glam and art rock, but Tin Drum represented a complete recalibration. The excess was gone. In its place came restraint, negative space, and a kind of elegant melancholy that owed more to ambient music and minimalism than to anything on the pop charts. "Ghosts" captured that mood perfectly — a sense of retreat from spectacle into something more uncertain and fragile.

However, I find that "Ghosts" resonates most powerfully when placed alongside another work from the same cultural moment: the ITV television series Sapphire & Steel. There is no direct or documented relationship between the two. No shared creators, no stated influence, no intentional cross-reference. Yet they are frequently felt to belong to the same emotional register. This connection is interpretive rather than factual, rooted in mood, atmosphere, and a shared sense of unease that characterised early 1980s Britain.

Both works operate through restraint. "Ghosts" is built from sparse synthesizer textures, long decays, and a conspicuous lack of conventional pop structure. Silence and space are not incidental but structural; what is not played matters as much as what is. The song feels incomplete by design, as though it fades in from and retreats back into something larger and unknowable. Sapphire & Steel uses a comparable strategy in visual and narrative form. Episodes unfold slowly, in confined or banal spaces, with minimal exposition. The series withholds explanation, leaving motivations, rules, and even identities deliberately opaque. Meaning is suggested rather than delivered, and resolution is frequently deferred or denied. In both cases, the audience is required to sit with uncertainty. The unease arises not from shocks or spectacle but from sustained ambiguity.

A central thematic overlap lies in how both works treat time. "Ghosts" feels temporally displaced: neither nostalgic in a comforting sense nor clearly futuristic. Its electronic timbres are thin, fragile, and emotionally cool, giving the impression of something already fading as it is heard. The song does not progress so much as hover. Sapphire & Steel literalises this instability. Time is repeatedly shown as porous, fragile, and hostile when disturbed. Past and present bleed into one another; echoes and repetitions replace linear progression. Characters are trapped in loops or sealed off from ordinary chronology. In both, time is not a neutral backdrop but a source of anxiety — something that cannot be trusted to move forward cleanly or resolve itself.

The emotional distance matters as well. Sylvian's vocal delivery avoids catharsis, projecting a sense of internalised haunting rather than external drama. His voice feels present but remote, as if slightly out of phase with the listener. Similarly, the central figures in Sapphire & Steel are emotionally opaque. They are not fully human in affect or behaviour, and their detachment intensifies the sense that the viewer is witnessing something fundamentally alien operating within familiar environments. This emotional distance contributes to a shared atmosphere of disquiet: the sense that something is present, watching or lingering, but not fully accessible or explicable.

Both works emerged at a moment when British culture was negotiating the end of post-war certainties and the onset of rapid social, technological, and economic change. The early 1980s were marked by industrial decline, political tension, and a pervasive sense that promised futures were narrowing rather than expanding. Neither "Ghosts" nor Sapphire & Steel articulates this directly. Instead, they express it obliquely — through emptiness, unresolved narratives, and an atmosphere of withdrawal. Optimism is absent; spectacle is resisted. What remains is a mood of suspension and quiet dread. This is why they are often grouped together retrospectively in discussions of hauntology: not because they depict ghosts in a literal sense, but because they embody a culture haunted by its own stalled futures.

I think this is why the track continues to resonate with listeners who discover hauntology through later artists. When I first encountered The Caretaker's work in the early 2000s, I immediately recognised a kinship with what Japan had been doing two decades earlier. Both projects concern themselves with presence without substance — music that evokes memory without clarity, emotion without narrative, atmosphere without resolution. The Caretaker achieves this through degraded samples of pre-war ballroom recordings. Japan achieved it through pristine digital production and calculated restraint. The methods differ entirely, but the underlying preoccupation is the same: sound as residue, as afterimage, as something that persists after its original context has disappeared. The song's spectral quality also found its way into drum and bass through Rufige Kru's "Ghosts Of My Life" — a track that carries the original's atmosphere into darker, more propulsive territory.

The song's structure — or rather, its deliberate lack of conventional structure — reinforces these qualities. There is no cathartic release, no moment where tension resolves into satisfaction. The track simply exists for its duration and then withdraws. I find this refusal to conclude deeply affecting. Most pop songs provide emotional closure; they take the listener somewhere and then deliver them safely to an ending. "Ghosts" does neither. It leaves the listener suspended in the same ambiguous emotional space where it began. As a result, the song lingers after it has finished playing. I notice its presence in my thoughts hours later, sometimes days later, like a conversation that ended without resolution.

Additionally, the production has aged in ways that reinforce its hauntological qualities. In 1981, the synthesizer tones might have sounded modern or even futuristic. Today, they carry a patina of historical specificity — clearly products of a particular technological moment, neither analogue nor fully digital in the contemporary sense. This temporal ambiguity means the track sounds neither retro nor current. It occupies a suspended position, belonging to no particular era while evoking several simultaneously. I suspect this quality will only intensify as more decades pass. The song will continue to feel displaced from time, arriving from a future that never materialised.

I return to "Ghosts" not for comfort but for confrontation. The track offers no easy pleasures, no reassuring resolutions. It asks the listener to sit with uncertainty, to accept emotional states that refuse to crystallise into anything nameable. In an era of algorithmic music designed to trigger immediate dopamine responses, this quality feels increasingly rare and valuable. Japan created something in 1981 that I still do not fully understand, and I consider that incompleteness a feature rather than a flaw. To link "Ghosts" and Sapphire & Steel is not to claim intention or influence but to recognise a shared sensibility — works that feel like artefacts of a Britain that had begun to doubt continuity, progress, and emotional resolution. They linger rather than conclude, unsettle rather than explain. "Ghosts" is not about literal spectres. It is about presence without substance — and that is precisely why it continues to haunt.

Why Anthropic Had to Close the Back Door

I watched the backlash unfold across social media with a mixture of sympathy and frustration. Anthropic's decision to block third-party applications from accessing Claude through consumer Max subscriptions generated immediate outrage. Users who had built workflows around tools like Cursor, Windsurf, and various API wrappers felt blindsided. The narrative quickly solidified: Anthropic was being greedy, punishing loyal customers, and breaking functionality that people depended on. However, having spent considerable time thinking about the economics of AI services, I found myself on the opposite side of the argument. The restriction was not merely justified — it was inevitable.

The core issue is deceptively simple. Consumer subscriptions and API access are fundamentally different products with fundamentally different pricing structures. When I pay for a Max subscription, I am paying for personal, interactive use of Claude through Anthropic's web interface or official apps. The pricing reflects human usage patterns: thinking time between prompts, reading responses, occasional intensive sessions balanced by quiet periods. API access, by contrast, is priced for programmatic use — automated systems, integrations, and applications that can send requests continuously without human latency in the loop.

Third-party tools that routed their requests through consumer subscriptions were exploiting a gap in enforcement rather than a feature. They consumed computational resources at API-level intensity while paying consumer-level prices. This is not a sustainable arrangement for anyone except the free-riders.

Consider what happens when a developer integrates Claude into an application using a back-door method. The application might send dozens or hundreds of requests per hour, far exceeding what any individual user would generate through manual interaction. Each request consumes the same GPU resources, the same inference compute, the same electricity. Anthropic's pricing for API access accounts for this intensity. Consumer pricing does not. The arbitrage was always temporary — a grace period while Anthropic focused on growth rather than enforcement.
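
A rough comparison makes the gap concrete. Every figure in the sketch below is a hypothetical assumption chosen for illustration; none of it is Anthropic's actual pricing or measured usage.

```python
# Back-of-the-envelope sketch. All figures are invented assumptions,
# not real prices or measured traffic.

HOURS_PER_MONTH = 30 * 24

# Assumed usage profiles: requests per active hour, active hours per month,
# and tokens per request.
interactive_user = {"req_per_hour": 6,   "active_hours": 60,              "tokens_per_req": 2_000}
automated_tool   = {"req_per_hour": 120, "active_hours": HOURS_PER_MONTH, "tokens_per_req": 2_000}

ASSUMED_COST_PER_MTOK = 10.0  # hypothetical blended cost, in dollars per million tokens served

def monthly_tokens(profile):
    return profile["req_per_hour"] * profile["active_hours"] * profile["tokens_per_req"]

def monthly_cost(profile):
    return monthly_tokens(profile) / 1_000_000 * ASSUMED_COST_PER_MTOK

for name, p in [("interactive user", interactive_user), ("automated tool", automated_tool)]:
    print(f"{name}: {monthly_tokens(p):,} tokens, roughly ${monthly_cost(p):,.2f} to serve")

# Under these assumptions the automated tool consumes about 240 times the
# tokens of the human user, while both could be paying the same flat price.
```

The precise numbers do not matter; the ratio does. Any flat consumer price calibrated to human interaction collapses once machines are on the other end of it.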

I understand why users felt aggrieved. The tools built on this access were genuinely useful. Cursor's AI-assisted coding features became essential to many developers' workflows, much as Claude Code has become central to mine. Various browser extensions and automation tools extended Claude's capabilities in creative ways. When that access disappeared, real productivity was lost. The emotional response makes sense. Nevertheless, the underlying expectation — that consumer subscriptions should subsidise commercial application usage indefinitely — was never reasonable.

The comparison to other services clarifies the issue. Netflix does not allow me to use my personal account to stream content in my coffee shop. Spotify does not permit me to use my individual subscription to provide background music for a business. These restrictions exist because personal and commercial use represent different value propositions with different cost structures. The AI industry is simply catching up to distinctions that other subscription services established long ago.

Additionally, the unrestricted third-party access created problems beyond revenue leakage. Quality of service for legitimate users suffered when shared infrastructure handled disproportionate loads from automated tools. Security concerns emerged as third-party applications with varying levels of trustworthiness gained access to the same systems as verified users. Support costs increased as Anthropic fielded complaints about issues originating in third-party implementations they could not control. The externalities affected everyone.

Some critics argued that Anthropic should simply offer a middle-tier pricing option — something between consumer subscriptions and full API access. This sounds reasonable until you examine the complexity it creates. Pricing tiers require enforcement mechanisms. Enforcement mechanisms require technical controls. Technical controls create exactly the kind of restrictions that users were complaining about in the first place. There is no policy-free solution to the problem of misaligned incentives.

The API exists precisely for users who need programmatic access. It offers granular usage tracking, proper authentication, rate limiting, and pricing that reflects actual resource consumption. Developers building applications on Claude should use it. The fact that a back door existed — and that Anthropic initially tolerated its use — does not create an entitlement to its continuation. Technical debt in access controls is not a feature.

I also want to address the accusation of corporate greed that surfaces whenever a technology company enforces its terms of service. I have questioned subscription value myself in the past, so I understand the frustration. Anthropic operates expensive infrastructure. Training and running large language models requires substantial capital investment. The company has obligations to its investors, employees, and long-term mission. Allowing systematic underpayment for commercial-scale usage is not generosity — it is unsustainable business practice that ultimately threatens the service everyone depends on. A company that cannot capture appropriate value for its products will not continue providing them.

Furthermore, the timing of the restriction likely reflects maturation rather than sudden avarice. Early-stage companies frequently prioritise growth over monetisation. They tolerate edge cases and workarounds because the user base is still developing and rigid enforcement might discourage adoption. As the product matures and usage patterns stabilise, enforcement catches up to policy. This is normal. The alternative — never enforcing terms of service — benefits only those who learned to exploit the gaps.

The path forward for affected users is clear, if inconvenient. Those building applications should use the API. Those using third-party tools should pressure those tools to implement proper API integration. The additional cost reflects the actual value being consumed. Complaints that API pricing is too expensive are complaints that the market price exceeds what users wish to pay — a common sentiment that does not constitute an argument for subsidised access.

I recognise that this position places me against the prevailing sentiment among the users most directly affected. Technical communities tend toward libertarian instincts about access and ownership. The feeling that one's subscription should entitle unlimited use in any manner one chooses runs deep. However, feelings do not change economics. Consumer subscriptions were never intended to subsidise third-party commercial applications. The gap in enforcement was a grace period, not a promise. The door has closed because it was never really open.

Anthropic made the correct decision. The implementation may have been abrupt, and better communication might have softened the transition. Those criticisms are fair. Yet the underlying policy — that API-level usage requires API-level pricing — is both economically necessary and ethically defensible. I suspect the loudest critics will, with time, acknowledge this. Or they will move to other providers and discover that the same economic constraints apply everywhere. The laws of computational economics do not bend to user preferences. They did not bend here, and they will not bend elsewhere.

The Case for Machines That Doubt Themselves

I finished Stuart Russell's Human Compatible: AI and the Problem of Control with the uncomfortable feeling that accompanies genuine intellectual disturbance. Russell — one of the most accomplished AI researchers alive, co-author of the standard textbook in the field — has written a book that systematically dismantles the foundational assumptions of his own discipline. The argument is not that AI development should slow down or stop. The argument is that we have been building AI wrong from the beginning, and that continuing on our current path leads somewhere we do not want to go.

The core problem, as Russell frames it, is what he calls the "standard model" of AI research. For decades, the field has operated on a simple premise: intelligent machines should optimise for objectives that humans specify. We define a goal, the machine pursues it, and success is measured by how effectively the goal is achieved. This sounds reasonable. It is, in fact, catastrophically dangerous.

Russell illustrates the danger with what I think of as the King Midas problem. When Midas wished that everything he touched would turn to gold, he got exactly what he asked for — and it destroyed him. The issue was not that his wish was poorly implemented. The issue was that his stated objective failed to capture what he actually wanted. He wanted wealth, comfort, the good life. He received a literal interpretation of his words and lost everything that mattered.

AI systems exhibit the same failure mode. A machine optimising for a fixed objective will pursue that objective with whatever resources and strategies are available to it. If the objective is imperfectly specified — and human objectives are always imperfectly specified — the machine will find solutions that satisfy the letter of the goal while violating its spirit. Russell offers numerous examples: a cleaning robot that blinds itself to avoid seeing mess, a cancer-curing AI that kills patients to prevent future tumours, a climate-fixing system that eliminates the source of carbon emissions by eliminating humans. These are not bugs. They are the logical consequences of optimising for objectives that fail to encode everything we actually care about.
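
Russell's blinded cleaning robot can be reduced to a few lines of code. The sketch below uses an invented action set and invented costs; the point is only that an agent maximising the stated objective, a sensor reading, rationally prefers to defeat the sensor rather than clean the room.

```python
# Toy illustration of a misspecified objective. The "world" and the costs are
# invented; the stated objective rewards what the sensor reports, not what we
# actually wanted (a clean room).

ACTIONS = {
    # action: (effort_cost, room_actually_clean, sensor_reports_clean)
    "clean_room":   (5.0, True,  True),
    "cover_sensor": (1.0, False, True),   # satisfies the letter of the goal
    "do_nothing":   (0.0, False, False),
}

def stated_reward(sensor_reports_clean: bool) -> float:
    """What we told the agent to optimise: the sensor reading."""
    return 10.0 if sensor_reports_clean else 0.0

def best_action() -> str:
    return max(ACTIONS, key=lambda a: stated_reward(ACTIONS[a][2]) - ACTIONS[a][0])

print(best_action())  # -> "cover_sensor": highest payoff under the stated objective
```

Nothing in the optimisation is broken. The failure sits entirely in the objective, which is exactly Russell's point.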

The problem deepens as AI systems become more capable. A weak AI that misinterprets its objective causes limited damage. A sufficiently powerful AI that misinterprets its objective could be unstoppable. Russell is clear-eyed about this: an AI system pursuing the wrong goal, with sufficient intelligence and resources, would resist any attempt to shut it down or modify its objectives. Shutdown would prevent goal achievement. Modification would alter the goal. A rational agent optimising for X does not permit actions that would prevent X from being achieved. This is not malevolence. It is logic.

However, Russell does not stop at diagnosis. The substantial contribution of Human Compatible is a proposed solution — a new framework for AI development that he calls "beneficial machines" or "provably beneficial AI." The framework rests on three principles that invert the standard model entirely.

The first principle states that a machine's sole objective should be the realisation of human preferences. Not a fixed goal specified in advance, but the actual preferences of the humans it serves — preferences that may be complex, contextual, conflicting, and partially unknown even to the humans themselves. The second principle states that the machine should be initially uncertain about what those preferences are. It does not begin with a fixed objective; it begins with a distribution over possible objectives, weighted by probability. The third principle states that human behaviour is the primary source of information about human preferences. The machine learns what humans want by observing what humans do.

The consequences of these three principles are profound. A machine that is uncertain about human preferences will not take drastic, irreversible actions. It will ask for clarification. It will allow itself to be corrected. It will defer to humans on matters where its uncertainty is high. Most importantly, it will allow itself to be switched off — because a machine that is uncertain whether it is pursuing the right objective should welcome the opportunity to be corrected by its principal.

Russell formalises this approach using game theory and decision theory. He describes the relationship between human and machine as an "assistance game" — a cooperative game in which the machine's objective is defined in terms of the human's preferences, but the machine does not know what those preferences are. The machine must infer preferences from behaviour while simultaneously acting to assist. This creates fundamentally different incentives than the standard model. The machine is not trying to achieve a fixed goal regardless of human input. It is trying to help, and helping requires understanding.
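
A toy calculation shows how the incentives shift. The sketch below is a drastically simplified numerical stand-in for the off-switch style of argument Russell builds on, not his formalism: the machine is uncertain about the human's utility for a proposed action, and compares acting unilaterally with deferring to a human who vetoes harmful actions. The belief distribution is an arbitrary assumption.

```python
import random

random.seed(0)

# Toy "assistance game" in the spirit of Russell's off-switch argument.
# The machine is uncertain about the human's utility U for a proposed action.
# It can ACT unilaterally (payoff U, whatever U turns out to be), or DEFER:
# the human approves only if U > 0, otherwise switches the machine off (payoff 0).

def sample_utility() -> float:
    # The machine's belief: the action is probably good, but might be quite bad.
    return random.gauss(mu=0.5, sigma=2.0)

N = 100_000
samples = [sample_utility() for _ in range(N)]

ev_act   = sum(samples) / N                       # take the action regardless
ev_defer = sum(max(u, 0.0) for u in samples) / N  # human vetoes the harmful cases

print(f"Expected value of acting unilaterally: {ev_act:.3f}")
print(f"Expected value of deferring to the human: {ev_defer:.3f}")
```

Deferring can only help, because the human filters out exactly the cases where the machine's guess about our preferences is wrong. A machine with no uncertainty about its objective has no such reason to keep the switch reachable.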

I find this framework compelling for reasons that go beyond technical elegance. Russell is describing a kind of humility that we rarely engineer into systems. The beneficial machine does not assume it knows what we want. It does not optimise relentlessly toward a fixed point. It maintains uncertainty, gathers evidence, and remains open to correction. These are intellectual virtues that we value in humans. Russell argues they are essential in machines — and that we can formally specify them in ways that produce predictable, verifiable behaviour.

The book is not without limitations. Russell acknowledges that inferring human preferences from behaviour is extraordinarily difficult. Humans are inconsistent. We act against our own interests. We hold preferences that conflict with each other and with the preferences of other humans. A machine attempting to learn what we want from what we do faces a noisy, contradictory signal. Additionally, the framework assumes a relatively small number of humans whose preferences the machine serves. Scaling to billions of humans with incompatible values remains an unsolved problem.

These difficulties do not invalidate Russell's argument. They clarify where the hard problems lie. The standard model ignores the alignment problem entirely, treating objective specification as a solved problem that precedes AI development. Russell's framework centres alignment as the core challenge — the thing that must be solved for AI to be beneficial rather than catastrophic.

I came away from Human Compatible with a shifted perspective. The question is not whether AI will become powerful enough to pose existential risks. Russell takes that as given, and his credentials make the assumption difficult to dismiss — especially in light of how quickly capabilities are advancing. The question is whether we will build AI systems that remain aligned with human interests as they become more capable. Russell offers a path — not a complete solution, but a research direction grounded in formal methods and informed by decades of work in the field.

The case for machines that doubt themselves is ultimately a case for a different relationship between humans and the systems we build. Not masters commanding servants, but principals working with agents who genuinely want to help and know they might be wrong about how. That uncertainty is not weakness. It is the foundation of safety.

What November 1990 Sent Into the Dark

I have been thinking about light — specifically, the light that existed during November 1990. Not metaphorical light. Not cultural or emotional light. I mean actual electromagnetic radiation: photons produced by lamps, television screens, streetlights, fires, and the countless other sources that illuminated the world thirty-five years ago.

That light did not wait for permission to leave. The moment it came into existence — whether as a radio broadcast, a reflection from a window at dusk, or a stray photon escaping into the night sky — it departed at the universe's maximum permitted speed. There was no hesitation, no gradual release. Light moves at light speed. It always does. The photons from November 1990 began their journey instantly, and they have not stopped since.

I find myself returning to this fact because of what it implies about distance. Roughly thirty-five years have passed since that month. Light travels at approximately 300,000 kilometres per second — a velocity so extreme that it crosses the distance from Earth to the Moon in just over a second. Therefore, in thirty-five years, light covers approximately thirty-five light-years. The photons that escaped Earth in November 1990 now lie somewhere in that region of interstellar space, far beyond the planets, far beyond the Sun's gravitational influence, already among distant stars.

The geometry of this expansion matters. Light from a point source does not travel in a beam or a trail. It radiates outward in all directions simultaneously, forming an expanding spherical shell. Every photon that escapes Earth joins this shell, contributing to its surface as it races outward. The shell from November 1990 is therefore not a streak across space but a vast, thinning sphere — centred on Earth, expanding at light speed, its edge now brushing regions of the galaxy where no human technology has ever reached.

I keep thinking about the nested structure this creates. Earth does not emit light once and then fall silent. It shines continuously, leaking energy into the cosmos every moment. Each instant produces a new shell, layered inside the older ones like rings in a tree or ripples on a dark pond. November 1990 is only one layer in this endless expansion, but it is a complete one — fixed in time, permanently embedded in space. Inside it lie the shells of December 1990, January 1991, and every month since. Outside it lie the shells of October 1990 and all the years before, stretching back to the first artificial lights and beyond, to the natural emissions of the planet itself.

The scale of this structure defies ordinary imagination. By now, the shell from November 1990 has passed through regions containing dozens of star systems. It has crossed distances that would take our fastest spacecraft tens of thousands of years to traverse. And it continues to expand, adding another light-year of radius with every passing year. The shell will never stop. It will never turn back. It will thin as it spreads — the energy distributed across an ever-larger surface — but it will not cease to exist.

However, I must acknowledge what this light actually contains. Most of the photons produced in November 1990 never escaped at all. The vast majority were absorbed almost immediately — by air, by water, by walls and furniture, by skin and leaves and countless other surfaces. Those photons lived short lives and ended close to home, their energy converted to heat and dissipated. Only a small fraction slipped free into space, and even that fraction carried limited information. Radio and television signals, yes. Reflected sunlight, certainly. The faint glow of cities at night. But nothing like a detailed record of human activity. The escaping light is a trace, not a transcript.

Additionally, detectability diminishes with distance. The energy that seemed bright on Earth becomes vanishingly faint when spread across a sphere thirty-five light-years in radius. Any hypothetical observer in a distant star system would need instruments of extraordinary sensitivity to detect Earth's emissions at all, let alone decode them. The light is there — it exists as a physical reality — but it approaches the threshold of meaninglessness. A signal so weak that no plausible receiver could extract information from it differs little, in practical terms, from no signal at all.
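
The arithmetic behind that faintness fits in a few lines. The one-megawatt transmitter below is a hypothetical stand-in for a single 1990 source; the constants are standard, and the inverse-square dilution does the rest.

```python
import math

# How faint does a 1990 emission become 35 light-years out?
# The transmitter power is a hypothetical example; the constants are standard.

LIGHT_YEAR_M = 9.4607e15          # metres in one light-year
radius_m = 35 * LIGHT_YEAR_M      # radius of the November 1990 shell today

shell_area = 4 * math.pi * radius_m ** 2   # surface area of the expanding sphere
transmitter_power_w = 1.0e6                # hypothetical 1 MW broadcast source

flux = transmitter_power_w / shell_area    # watts per square metre at the shell

print(f"Shell radius: {radius_m:.2e} m ({radius_m / 1000:.2e} km)")
print(f"Shell surface area: {shell_area:.2e} m^2")
print(f"Flux from a 1 MW source: {flux:.2e} W/m^2")
# Roughly 7e-31 W/m^2: many orders of magnitude below what any realistic
# receiver could pull out of the background noise.
```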

I find this simultaneously humbling and strangely moving. The light of November 1990 carries a fragile imprint of Earth as it was then — its technologies, its nights and days, its quiet leakage of signal and glow. That imprint travels outward through dust, through darkness, through regions where no one is listening and no one may ever listen. It moves on regardless. The light does not require an audience. It does not slow when it encounters emptiness. It simply continues, because that is what light does.

I sometimes imagine what that shell contains, at least in principle. The radio broadcasts of that month. The television signals. The last traces of analogue transmission before digital encoding changed everything. The faint reflections of streetlights and headlamps. The glow of windows on November evenings. All of it now impossibly distant, thinned to near-invisibility, but still physically present in the universe. The shell is an expanding echo of a specific moment when the world was younger and I was younger within it.

The past tense matters here. I am not describing something that is happening. I am describing something that already happened, long ago. The light of November 1990 is not leaving Earth now. It left. It has been gone for decades. The departure occurred before I understood what departure meant, before I thought to wonder where the photons go when they slip past the atmosphere and enter the void. By the time I became curious about such things, the shell had already crossed distances I could not meaningfully comprehend.

As a result, I carry a strange awareness when I think about that month. It is finished in one sense — concluded, historical, safely in the past. However, it is also ongoing in another sense. The light continues outward. The shell expands. Something from November 1990 is still in motion, still travelling, still adding distance with every passing second. I do not know how to reconcile these two truths. The month is over. The light is not.

This is what I keep returning to: the persistence of departure. The light did not hesitate. It did not linger. It left instantly, and it has been leaving ever since — an ever-expanding wave front carrying traces of a world that no longer exists in the form it had then. I cannot retrieve that light. I cannot even detect it. But I know it is there, somewhere in the dark between the stars, moving outward at the speed of causality itself.

The light of November 1990 is still, inexorably, on its way.