Plutonic Rainbows

The Phantom on the Charts

Selena Gomez used an AI-generated neo-soul track on her Golden Globes Instagram post, then quietly deleted it. The song, "Where Your Warmth Begins" by Sienna Rose, had fooled her — and the more than 2.6 million monthly listeners streaming Rose's music on Spotify. The revelation that Rose is almost certainly not a real person triggered a minor crisis in music circles this week. However, the controversy reveals something larger than one fake artist slipping through algorithmic cracks. It demonstrates how completely unprepared streaming platforms are for the synthetic media era.

The evidence against Sienna Rose's authenticity is overwhelming. Between September and December 2025, Rose uploaded at least 45 tracks to streaming services — a pace that would exhaust any human artist. Rose has no social media presence whatsoever. No Instagram, no TikTok, no Twitter. Rose has never performed live. The biography describes Rose as an "anonymous neo-soul singer," which strikes me as absurd framing for an artist in 2026, when visibility drives streaming success and social media presence is essentially mandatory for breakout artists.

Additionally, Deezer confirmed that many of Rose's tracks are flagged as AI on their platform. The music itself sounds competent but generic — derivative of artists like Olivia Dean and Alicia Keys without the distinctive qualities that make those artists compelling. Listeners who pay attention describe the songs as smooth and pleasant but ultimately forgettable. This is precisely what you would expect from AI-generated content trained on neo-soul: technically proficient mimicry without artistic vision.

What troubles me is not that AI-generated music exists. The technology has been inevitable for years. What troubles me is how easily this phantom artist accumulated millions of streams, landed three songs on Spotify's Viral 50 playlist, and fooled a major celebrity into using the music for promotional content. The systems that are supposed to connect listeners with artists have no meaningful safeguards against synthetic performers colonizing the charts.

Spotify's position on AI-generated content is revealing. The platform officially permits such content but encourages proper labeling. This policy sounds reasonable until you examine its enforcement mechanisms — which appear to be nonexistent. Sienna Rose was not labeled as AI-generated. The profile presented Rose as a real artist. Spotify's algorithms promoted the music just as aggressively as they promote human musicians. The company essentially outsourced detection to listeners and journalists, waiting for public outcry before acknowledging the problem.

The economic implications are more concerning than the technical questions. Streaming platforms pay royalties based on play counts. Every stream of Sienna Rose's tracks transfers money from Spotify's royalty pool to whoever operates the Rose account. Assuming the 2.6 million monthly listeners generate conservative streaming numbers, that represents tens of thousands of dollars monthly flowing to a synthetic artist. This is not speculative future economics. This is happening now, at scale, with platform complicity.
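The "tens of thousands of dollars" claim can be sanity-checked with a back-of-envelope calculation. A minimal sketch follows; the per-stream payout and streams-per-listener figures are assumptions chosen for illustration, not reported numbers, and real payouts vary by market, platform, and subscription tier:

```python
# Back-of-envelope estimate of monthly royalties for a synthetic artist.
# Only monthly_listeners comes from the reporting; the other two inputs
# are assumptions, since payouts vary widely by market and platform.
monthly_listeners = 2_600_000    # figure reported for Sienna Rose
streams_per_listener = 3         # assumed: casual, playlist-driven listening
payout_per_stream = 0.004        # assumed: within the commonly cited $0.003-0.005 range

monthly_streams = monthly_listeners * streams_per_listener
monthly_royalties = monthly_streams * payout_per_stream
print(f"~{monthly_streams:,} streams -> ~${monthly_royalties:,.0f}/month")
```

Even with these conservative assumptions, the estimate lands squarely in the tens of thousands of dollars per month.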

The displacement effect accelerates as AI-generated artists proliferate. Consider the playlist dynamics. Spotify's Viral 50 has finite slots. Three of them currently belong to Sienna Rose. Those are three positions that real artists — people who spent years developing craft, building audiences, sacrificing financial stability to make music — did not get. The zero-sum nature of playlist placement means synthetic artists directly compete with humans for attention and revenue.

I recognize the counterargument: listeners do not care about authenticity if the music sounds good, and market dynamics will sort this out. If people enjoy Sienna Rose's tracks, why does it matter whether Rose is real? This argument misses the essential context. Listeners were not given a choice. They were not informed that they were streaming AI-generated content. The deception was built into the presentation. You cannot claim market efficiency when the market operates on false information.

The parallel with visual art is instructive. When AI-generated images flooded stock photo marketplaces and art platforms, the initial response was similar permissiveness. Platforms allowed AI content but recommended labeling. Predictably, most uploaders ignored the recommendations. The platforms responded with increasingly strict requirements: mandatory AI disclosure, separate categories, different royalty structures. Music streaming is now facing the same progression but starting from a weaker position because audio generation has advanced further than most listeners realize.

The technical challenge of detecting AI-generated music is significant but not insurmountable. Deezer apparently has functional detection systems. The limitation is not technological — it is institutional. Platforms have little incentive to aggressively police AI content when that content generates engagement and streams. The business model rewards volume, not verification. As a result, we get situations like Sienna Rose: obvious synthetic content operating openly until external pressure forces acknowledgment.

What happens when this scales? Sienna Rose is likely not unique, just the first to attract attention. The barrier to creating similar operations is minimal. Any entity with access to music generation models and basic knowledge of streaming platform mechanics can replicate this. We are probably looking at dozens or hundreds of similar projects already active, operating below the threshold of public notice. The economic incentives are clear. The risks are minimal. The platforms are passive.

The downstream effects on real artists range from concerning to catastrophic. Emerging musicians already struggle to break through algorithmic noise and playlist gatekeepers. Adding a layer of AI-generated competition that can produce unlimited content at near-zero marginal cost fundamentally alters the economics of music creation. If playlist slots and streaming revenue increasingly flow to synthetic artists, the financial foundation for human musicians erodes further. We risk creating a system where making music becomes economically irrational for all but the most successful human artists.

I want platforms to implement mandatory labeling for AI-generated content. Not recommended, not encouraged — mandatory, with enforcement. Separate playlist categories. Transparent disclosure in artist profiles. Different royalty structures that reflect the reduced production costs. These measures would not ban AI music, which is likely impossible and arguably undesirable. They would simply require honesty about what listeners are consuming.

The broader question is whether we want streaming platforms to be neutral conduits for any content that generates engagement, or whether we expect them to maintain distinctions between human creativity and machine output. The current trajectory points toward the former. Platforms will optimize for streams and engagement regardless of source. If synthetic artists outperform humans in algorithmic systems, those systems will promote synthetic content. The logic is perfectly consistent with platform incentives. It is also perfectly corrosive to human artistic culture.

Sienna Rose will likely disappear from Spotify in the coming weeks as pressure mounts. The account operator will probably launch similar projects under different names, having learned which patterns trigger detection. The cycle will repeat. Each iteration will be more sophisticated, harder to identify, more deeply embedded in platform infrastructure. We are watching the first stages of a transition that most of the music industry has not yet processed.

The phantom is on the charts. That should alarm everyone who cares about music as a human endeavor rather than an algorithmic optimization problem. The platforms know this is happening. They have chosen passivity. The only question now is how far we let this progress before demanding they choose differently.

The Revenue Panic That Reveals Everything

OpenAI's announcement that ChatGPT will begin showing ads represents more than a monetization pivot. It reveals a company in crisis mode, making decisions that directly contradict its founding principles at precisely the moment when trust and differentiation matter most. The timing could not be worse.

Sam Altman told the Financial Times in 2024 that he "hates" advertising and called combining ads with AI "uniquely unsettling." Those words were spoken less than two years ago. The CEO who built his reputation on thoughtful concerns about AI safety and alignment is now implementing exactly the business model he publicly condemned. This is not a gradual evolution of strategy. This is panic.

The revenue pressures driving this decision are well documented. OpenAI has committed to $1.4 trillion in AI infrastructure spending over the next eight years. The company expects to generate only "low billions" in revenue this year from 800 million weekly users. Additionally, despite astronomical user growth, the unit economics remain problematic. Free users generate costs without corresponding revenue. Subscription uptake has not scaled as hoped. The math forces uncomfortable choices.
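The scale of the mismatch is worth spelling out. A rough sketch, using an assumed midpoint of $3 billion for the vague "low billions" revenue figure:

```python
# Rough ratio of committed infrastructure spend to current revenue.
# committed_spend and years come from the reported commitment;
# annual_revenue is an assumed midpoint for "low billions".
committed_spend = 1.4e12     # $1.4 trillion over the commitment period
years = 8
annual_revenue = 3e9         # assumed: "low billions" this year

annualized_spend = committed_spend / years
ratio = annualized_spend / annual_revenue
print(f"~${annualized_spend / 1e9:.0f}B/year committed vs "
      f"~${annual_revenue / 1e9:.0f}B revenue (~{ratio:.0f}x gap)")
```

On these assumptions, annualized commitments exceed current revenue by more than fifty-fold, which explains why the company is reaching for the largest untapped revenue source available.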

However, advertising does not solve OpenAI's fundamental problems. It creates new ones while accelerating existing vulnerabilities. The company faces intense competition from Anthropic, Google, and others who can credibly claim higher standards for user trust. Claude explicitly positions itself on careful alignment and transparent limitations. Anthropic's subscription model means users know exactly what they are paying for and why. OpenAI just surrendered that high ground.

The competitive damage extends beyond marketing claims. Developers and enterprise customers — the segments where actual revenue concentrates — care deeply about model reliability and trustworthiness. If ChatGPT responses might be subtly influenced by advertising relationships, even through second-order effects, that calls into question the integrity of the entire platform. And paying customers have clear alternatives that do not carry this compromise. OpenAI is risking its premium positioning to chase advertising revenue that will primarily come from free-tier users who were never going to convert anyway.

The precedent OpenAI sets here will define the industry's trajectory. If the leading AI company monetizes through advertising, others will follow. The question is whether OpenAI wants to be the company that normalizes ads in AI or the company that demonstrates alternatives exist. The current choice suggests the former. This damages not just OpenAI but the broader perception of AI assistants as neutral tools rather than attention-monetization systems.

I recognize the appeal of the expansion narrative. Ads enable free access. More users get AI capabilities. The barrier to entry drops. Democratic access increases. This framing treats advertising as a necessary trade-off for broader distribution. However, the framing ignores what gets traded away. When the oracle starts selling ad space, the nature of what it tells us changes. Users learn to doubt. Trust erodes. The cognitive overhead of evaluating whether responses serve users or advertisers becomes constant background noise.

The timing makes this particularly self-destructive. OpenAI is currently fighting perception battles on multiple fronts. The company faces questions about governance after last year's board drama. It confronts skepticism about whether AGI development can be safely managed by a profit-driven entity. It deals with regulatory scrutiny in multiple jurisdictions. Adding advertising to this mix does not expand the narrative options. It confirms the worst interpretations.

Specifically, the move signals that revenue pressure has overwhelmed mission considerations. OpenAI claimed it needed to transition from nonprofit to capped-profit structure to raise capital for AI safety research. Critics argued this was simply about money. The company insisted alignment remained central. Then it introduced the exact monetization method its CEO previously called uniquely problematic for AI systems. The pattern speaks for itself.

OpenAI had alternatives. The company could have focused on enterprise services where customers pay substantial fees for reliable capabilities. It could have offered educational discounts funded by commercial revenue. It could have maintained free tiers with reduced capacity instead of introducing advertising incentives. These paths are harder. They generate less total revenue. They require saying no to growth opportunities. However, they preserve what made OpenAI distinctive in the first place.

The decision reveals how thoroughly commercial logic has displaced the safety-first rhetoric. An organization genuinely concerned about AI alignment would recognize that advertising creates misalignment by design. The system must serve two masters — users seeking information and advertisers seeking attention. Those interests conflict. No amount of separation between ad display and model responses changes the underlying economic reality. OpenAI is deliberately introducing the exact dynamic it claims to want to prevent in more sophisticated future systems.

I expect the implementation will be gradual and careful. The initial ads will be clearly labeled. They will appear only at the end of responses. OpenAI will publish guidelines about prohibited categories. The company will emphasize user privacy protections. None of this addresses the core problem. Advertising businesses always expand. Revenue targets increase. Growth slows. Pressure builds to make ads more prominent, more targeted, more integrated. The trajectory is consistent enough across companies that treating OpenAI as an exception requires ignoring decades of evidence.

The reputational cost extends beyond users. Researchers who believed OpenAI represented a different approach to AI development now have evidence otherwise. Policymakers who gave the company benefit of the doubt have one less reason to do so. Employees who joined because they believed in the mission must reconcile that belief with leadership decisions that contradict stated values. The damage accumulates across stakeholder groups.

Additionally, the move undermines OpenAI's lobbying position. The company advocates for AI regulation that emphasizes safety and responsible deployment. It argues that leading AI developers should self-regulate before governments impose heavy-handed rules. Then it implements a monetization strategy that prioritizes revenue over user interests at exactly the moment when demonstrating responsibility would strengthen the self-regulation argument. The timing is politically tone-deaf.

This is not a disaster because advertising is inherently evil. It is a disaster because OpenAI specifically, at this specific moment, needed to demonstrate that AI development can follow different incentives than the ad-supported internet. The company had the resources, the positioning, and the stated mission to be that example. Instead, it chose the path of least resistance and maximum short-term revenue. That choice reveals more about OpenAI's actual priorities than any mission statement.

The company will survive this decision. ChatGPT has enough momentum that ads will not immediately destroy usage. Some free-tier users will accept the trade-off. Revenue will increase. Quarterly metrics will improve. However, OpenAI just accelerated its transformation from the company that might build AGI safely to the company that builds engagement optimization systems with sophisticated language capabilities. The distinction matters. The timing of abandoning that distinction could not have been worse.

When Talent Returns to Where the Compute Lives

The news from Thinking Machines Lab landed this week with a thud that reverberated across the AI industry. Barret Zoph, the startup's co-founder and chief technology officer, has departed — reportedly dismissed after Mira Murati discovered he had shared confidential company information with competitors. Shortly afterward, OpenAI confirmed that Zoph, along with fellow co-founders Luke Metz and Sam Schoenholz, would be returning to the company they left barely a year ago. Additional departures followed: researcher Lia Guy heading to OpenAI, and at least one other senior staff member, Ian O'Connell, also leaving. The exodus comes just six months after Thinking Machines closed a record-breaking $2 billion funding round that valued the company at $12 billion.

I have watched this pattern before. A star executive leaves a dominant incumbent to start something new. They raise enormous sums on the strength of their reputation and the promise of a different approach. They recruit top talent with equity stakes and the allure of building from scratch. Then reality intrudes. The resources that seemed abundant prove insufficient. The freedom that attracted them becomes indistinguishable from the absence of infrastructure. The gravitational pull of the incumbents — with their data, their compute, their distribution — proves difficult to escape. Talent returns to where the leverage lives.

The circumstances of Zoph's departure are murky and contested. WIRED reported allegations of confidential information being shared with competitors. OpenAI's statement claimed they "do not share these concerns" about the conduct in question. The truth likely lies somewhere in the middle, obscured by competing narratives and legal considerations. However, the specific reasons matter less than what the broader departure pattern reveals about the structural challenges facing AI startups in the current moment.

Thinking Machines was supposed to be different. Murati brought impeccable credentials — former CTO of OpenAI during its most transformative period, architect of the GPT-4 launch, experienced navigator of the complex terrain where research meets product. The founding team combined deep technical expertise with operational experience at the frontier. The funding — $2 billion in a seed round led by Andreessen Horowitz, with participation from Nvidia, AMD, and Jane Street — should have provided runway measured in years, not months. If any startup could challenge the incumbents, this one had the pedigree.

What went wrong remains subject to speculation, but the Fortune reporting offers clues: concerns about compute constraints, uncertainty about product direction, questions about business model clarity. These are not idiosyncratic failures. They are the predictable challenges that emerge when you attempt to build a frontier AI lab from scratch in an industry where the moat is measured in data centre capacity and the cost of a training run can exceed the GDP of small nations.

The compute problem deserves particular attention. Modern AI capabilities emerge from scale — vast datasets processed through enormous models on clusters of specialised hardware that cost hundreds of millions of dollars to build and operate. The incumbents have spent years and billions securing this infrastructure. They have negotiated long-term contracts with cloud providers, built their own data centres, and cultivated relationships with chip manufacturers that give them privileged access to scarce supply. A startup with $2 billion can rent compute. It cannot replicate a decade of infrastructure investment.

This creates a dynamic where the most talented researchers face a stark choice. They can join a startup and spend their time waiting for training runs that never quite have enough capacity, debugging infrastructure that more established labs solved years ago, and watching their equity stakes lose value as funding conditions tighten. Or they can return to the incumbents, where the compute is plentiful, the infrastructure is mature, and the work can proceed at pace. The choice is not about loyalty or courage. It is about where one can have the most impact with limited time.

Additionally, the talent dynamics compound the resource constraints. Each departure from a startup makes subsequent departures more likely. When senior researchers leave, the remaining team inherits their responsibilities without inheriting their expertise. Projects stall. Institutional knowledge evaporates. The researchers who remain watch their colleagues depart for better-resourced environments and wonder whether they should follow. The startup that loses its CTO must either promote from within — elevating someone who now lacks the team they were supposed to lead — or recruit externally into a situation that looks increasingly precarious. Soumith Chintala, the PyTorch co-creator appointed as Thinking Machines' new CTO, inherits a formidable challenge.

I find myself thinking about what Murati must be experiencing. She left OpenAI at the peak of her influence to build something independent. She assembled a team of people she had worked with, people she trusted. She raised more money in a seed round than most companies raise in their entire existence. Yet here she is, less than eighteen months later, watching the founding team scatter back to the place they left together. The personal dimension of this — the sense of a shared vision unravelling — must be acute.

However, I resist the temptation to read this as a story of individual failure. The structural forces arrayed against AI startups are formidable. The incumbents have compounding advantages that grow with each passing quarter. They have the compute, the data, the distribution channels, the customer relationships, and the regulatory relationships that startups must build from nothing. They have the ability to hire talent at compensation levels that would destroy a startup's cap table. They have the patience that comes from diversified revenue streams and patient capital.

The implications extend beyond Thinking Machines. Every AI startup must now confront the question of whether the independent path remains viable. The investors who funded Murati's venture will scrutinise future pitches more carefully. The researchers contemplating startup opportunities will weight the risks more heavily. The narrative that talented people can leave incumbents and build competitive alternatives — a narrative that sustained much of the tech industry's dynamism over the past decades — will face renewed scepticism.

Perhaps this is simply the maturation of a young industry. In the early days of any technology, garage-scale innovation can compete with established players because the technology itself is immature and advantage accrues to insight rather than infrastructure. As the technology matures, scale becomes decisive. The semiconductor industry consolidated. The cloud computing industry consolidated. The AI industry may be following the same trajectory, compressing a decades-long pattern into a handful of years.

The talent will go where it can be most effective. The compute will remain where it has already been built. The startups that survive will be those that find niches the incumbents cannot easily address — vertical applications, specialised domains, markets too small to attract attention from companies optimising for billion-user scale. The era of challenging OpenAI and Anthropic and Google head-on may already be closing. Thinking Machines' struggles suggest the window was narrower than anyone wanted to believe.

I watch the departures from Thinking Machines Lab and I see not failure but physics. Talent flows toward leverage. Leverage concentrates where resources accumulate. Resources accumulate where previous advantages compound. The gravity is real. The escape velocity is higher than anyone expected.

When Speed Becomes the Only Moat

I have watched the AI industry obsess over latency for the past eighteen months with growing unease. Every product announcement now leads with response time. Every benchmark comparison highlights milliseconds saved. Every funding pitch emphasizes infrastructure speed above all else. This fixation on velocity has calcified into something more concerning than a mere trend — it has become the primary competitive moat that companies believe will protect them from disruption.

The logic seems straightforward at first. Users prefer faster responses. Developers build applications around snappy interactions. Products that feel instant create better experiences than those that lag. Therefore, the reasoning goes, the company with the lowest latency wins the market. However, this reasoning collapses when you examine what gets sacrificed in pursuit of pure speed.

I find myself increasingly troubled by how latency optimization crowds out other forms of innovation. When a company invests billions in custom silicon and global edge networks to shave milliseconds off response times, those resources cannot simultaneously fund research into more capable models or better reasoning architectures. The opportunity cost becomes staggering. We optimize for speed at the expense of depth, reliability, and genuine capability improvements.

The infrastructure arms race this creates benefits nobody except hardware vendors and cloud providers. Smaller companies cannot compete on latency alone. They lack the capital to build worldwide inference networks or manufacture specialized chips. As a result, the entire competitive landscape narrows to a handful of well-funded players who can afford the infrastructure. This consolidation stifles the diversity of approaches that drives meaningful progress in any technical field.

Additionally, the emphasis on latency moats encourages companies to optimize for metrics that users care about least. When I use an AI system, I rarely notice whether it responds in 200 milliseconds versus 400 milliseconds. The difference feels imperceptible in practice. What I do notice — what genuinely affects my experience — is whether the system understands my intent, provides accurate information, and handles edge cases gracefully. These qualities have nothing to do with infrastructure speed and everything to do with model quality and system design.

The pursuit of latency advantages also creates technical debt that compounds over time. Companies optimize their inference pipelines so aggressively that they become brittle and difficult to modify. They lock themselves into specific hardware platforms or network architectures. When better modeling approaches emerge, these companies find themselves unable to adopt them because their entire system has been fine-tuned for speed above flexibility. The moat they built to keep competitors out also walls them in.

I have seen this pattern before in other industries. Database companies once competed primarily on query speed. Web hosting providers marketed themselves on page load times. Content delivery networks built entire businesses around millisecond improvements. In each case, the performance advantage proved temporary. Competitors eventually caught up, and the companies that survived were those that had invested in differentiated value beyond raw speed.

The danger becomes more acute when companies mistake infrastructure advantages for product advantages. A fast inference engine is not a product — it is merely infrastructure. Users do not purchase infrastructure; they purchase solutions to problems. A system that responds instantly but provides mediocre answers loses to one that thinks for three seconds but gets things right. Yet the obsession with latency moats pushes companies to prioritize the former over the latter.

Furthermore, the latency focus creates perverse incentives around model development. If your primary competitive advantage stems from fast inference, you naturally gravitate toward smaller, simpler models that run quickly. You avoid complex reasoning approaches that might improve accuracy but add latency. You resist architectures that could unlock new capabilities but require more compute. The entire research agenda becomes constrained by infrastructure considerations rather than driven by what would make the systems genuinely more useful.

I worry particularly about how this affects the trajectory of AI development broadly. When the industry's most successful companies anchor their competitive strategy on infrastructure speed, they signal to everyone else that this is where value lives. Startups mimic the approach. Investors reward it. Researchers orient their work around it. The entire field converges on a narrow definition of progress that may not align with what we actually need from these systems.

The environmental cost also deserves consideration. Building global inference networks and manufacturing custom silicon at scale consumes enormous energy and resources. When companies compete primarily on latency, they must continuously expand this infrastructure to maintain their advantage. This creates an escalating resource consumption cycle that seems divorced from any proportional increase in actual utility delivered to users. We optimize for milliseconds while burning through electricity and rare earth metals.

I have also observed how latency moats affect the talent market in troubling ways. The most capable engineers get funneled into infrastructure optimization rather than fundamental advances in AI capabilities. Companies hire brilliant researchers and set them to work on CUDA kernel optimization and network topology refinement. These are valuable skills, but they represent a misallocation when we still have so many unsolved problems in making AI systems reliable, truthful, and genuinely helpful.

The alternative approach seems obvious yet gets surprisingly little attention. Companies could compete on the quality of their outputs, the reliability of their systems, their ability to handle complex tasks, their transparency about limitations, or their success at solving real user problems. These dimensions of competition would drive innovation toward making AI systems actually better rather than merely faster.

I recognize that latency matters for certain applications. Real-time systems legitimately require quick responses. Interactive experiences benefit from snappiness. However, the current industry dynamic has elevated latency from one consideration among many to the primary basis for competitive differentiation. This represents a fundamental misalignment between what companies optimize for and what users need.

The path forward requires consciously resisting the latency moat trap. We need companies willing to compete on dimensions other than pure speed. We need investors who reward sustainable advantages built on genuine capability improvements. We need users who demand quality over quickness. Most importantly, we need industry leaders who recognize that the race to zero latency is ultimately a race to nowhere — a competition that consumes enormous resources while delivering diminishing returns.

I remain cautiously optimistic that this phase will pass. As infrastructure commoditizes and latency advantages narrow, companies will have no choice but to compete on other dimensions. The question is how much time, money, and talent we waste before reaching that inevitable conclusion. The longer we remain fixated on speed as the primary moat, the longer we delay building AI systems that genuinely serve human needs rather than just serving them quickly.

Claude Pro Subscription

I’m really struggling with the Pro subscription because it runs out far too quickly to be genuinely useful for my workflow. As a result, my project tasks are now backing up — I’ve already hit the usage limits with more than a week still to go before the monthly reset. At this rate, I’m going to have to seriously consider moving back up to the £90 tier so I have enough capacity to keep work moving without frequent interruptions.