When Talent Returns to Where the Compute Lives
January 16, 2026
The news from Thinking Machines Lab landed this week with a thud that reverberated across the AI industry. Barret Zoph, the startup's co-founder and chief technology officer, has departed — reportedly dismissed after Mira Murati discovered he had shared confidential company information with competitors. Shortly afterward, OpenAI confirmed that Zoph, along with fellow co-founders Luke Metz and Sam Schoenholz, would be returning to the company they left barely a year ago. Additional departures followed: researcher Lia Guy heading to OpenAI, and at least one other senior staff member, Ian O'Connell, also leaving. The exodus comes just six months after Thinking Machines closed a record-breaking $2 billion funding round that valued the company at $12 billion.
I have watched this pattern before. A star executive leaves a dominant incumbent to start something new. They raise enormous sums on the strength of their reputation and the promise of a different approach. They recruit top talent with equity stakes and the allure of building from scratch. Then reality intrudes. The resources that seemed abundant prove insufficient. The freedom that attracted them becomes indistinguishable from the absence of infrastructure. The gravitational pull of the incumbents — with their data, their compute, their distribution — proves difficult to escape. Talent returns to where the leverage lives.
The circumstances of Zoph's departure are murky and contested. WIRED reported allegations of confidential information being shared with competitors. OpenAI's statement claimed they "do not share these concerns" about the conduct in question. The truth likely lies somewhere in the middle, obscured by competing narratives and legal considerations. However, the specific reasons matter less than what the broader departure pattern reveals about the structural challenges facing AI startups in the current moment.
Thinking Machines was supposed to be different. Murati brought impeccable credentials — former CTO of OpenAI during its most transformative period, architect of the GPT-4 launch, experienced navigator of the complex terrain where research meets product. The founding team combined deep technical expertise with operational experience at the frontier. The funding — $2 billion in a seed round led by Andreessen Horowitz, with participation from Nvidia, AMD, and Jane Street — should have provided runway measured in years, not months. If any startup could challenge the incumbents, this one had the pedigree.
What went wrong remains subject to speculation, but reporting from Fortune offers clues: concerns about compute constraints, uncertainty about product direction, questions about business model clarity. These are not idiosyncratic failures. They are the predictable challenges that emerge when you attempt to build a frontier AI lab from scratch in an industry where the moat is measured in data centre capacity and the cost of a training run can exceed the GDP of a small nation.
The compute problem deserves particular attention. Modern AI capabilities emerge from scale — vast datasets processed through enormous models on clusters of specialised hardware that cost hundreds of millions of dollars to build and operate. The incumbents have spent years and billions securing this infrastructure. They have negotiated long-term contracts with cloud providers, built their own data centres, and cultivated relationships with chip manufacturers that give them privileged access to scarce supply. A startup with $2 billion can rent compute. It cannot replicate a decade of infrastructure investment.
This creates a dynamic where the most talented researchers face a stark choice. They can join a startup and spend their time waiting for training runs that never quite get the capacity they need, debugging infrastructure problems that more established labs solved years ago, and watching their equity stakes lose value as funding conditions tighten. Or they can return to the incumbents, where the compute is plentiful, the infrastructure is mature, and the work can proceed at pace. The choice is not about loyalty or courage. It is about where one can have the most impact with limited time.
Additionally, the talent dynamics compound the resource constraints. Each departure from a startup makes subsequent departures more likely. When senior researchers leave, the remaining team inherits their responsibilities without inheriting their expertise. Projects stall. Institutional knowledge evaporates. The researchers who remain watch their colleagues depart for better-resourced environments and wonder whether they should follow. The startup that loses its CTO must either promote from within — elevating someone who now lacks the team they were supposed to lead — or recruit externally into a situation that looks increasingly precarious. Soumith Chintala, the PyTorch co-creator appointed as Thinking Machines' new CTO, inherits a formidable challenge.
I find myself thinking about what Murati must be experiencing. She left OpenAI at the peak of her influence to build something independent. She assembled a team of people she had worked with, people she trusted. She raised more money in a seed round than most companies raise in their entire existence. Yet here she is, less than eighteen months later, watching the founding team scatter back to the place they left together. The personal dimension of this — the sense of a shared vision unravelling — must be acute.
However, I resist the temptation to read this as a story of individual failure. The structural forces arrayed against AI startups are formidable. The incumbents have compounding advantages that grow with each passing quarter. They have the compute, the data, the distribution channels, the customer relationships, and the regulatory relationships that startups must build from nothing. They have the ability to hire talent at compensation levels that would destroy a startup's cap table. They have the staying power that comes from diversified revenue streams and patient capital.
The implications extend beyond Thinking Machines. Every AI startup must now confront the question of whether the independent path remains viable. The investors who funded Murati's venture will scrutinise future pitches more carefully. The researchers contemplating startup opportunities will weight the risks more heavily. The narrative that talented people can leave incumbents and build competitive alternatives — a narrative that sustained much of the tech industry's dynamism over the past few decades — will face renewed scepticism.
Perhaps this is simply the maturation of a young industry. In the early days of any technology, garage-scale innovation can compete with established players because the technology itself is immature and advantage accrues to insight rather than infrastructure. As the technology matures, scale becomes decisive. The semiconductor industry consolidated. The cloud computing industry consolidated. The AI industry may be following the same trajectory, compressing a decades-long pattern into a handful of years.
The talent will go where it can be most effective. The compute will remain where it has already been built. The startups that survive will be those that find niches the incumbents cannot easily address — vertical applications, specialised domains, markets too small to attract attention from companies optimising for billion-user scale. The era of challenging OpenAI and Anthropic and Google head-on may already be closing. Thinking Machines' struggles suggest the window was narrower than anyone wanted to believe.
I watch the departures from Thinking Machines Lab and I see not failure but physics. Talent flows toward leverage. Leverage concentrates where resources accumulate. Resources accumulate where previous advantages compound. The gravity is real. The escape velocity is higher than anyone expected.