What Cursor Forgot to Mention About Composer 2
March 23, 2026 · uneasy.in/9bb87ec
Cursor launched Composer 2 on March 19 with the kind of language companies use when they want you to feel something big just happened. "Frontier-level coding intelligence." "Our first continued pretraining run." The blog post read like a declaration of independence from foundation model providers. Within 24 hours, a developer named Fynn caught an internal model identifier leaking through Cursor's OpenAI-compatible base URL: kimi-k2p5-rl-0317-s515-fast.
That is not subtle. Kimi K2.5 is an open-source model from Moonshot AI, a Beijing-based company backed by Alibaba. One trillion parameters total, 32 billion active per request. Cursor took it, applied reinforcement learning on coding tasks, and shipped it as their own breakthrough. Yulun Du, Head of Pretraining at Moonshot, confirmed the tokenizer was "completely identical." The base model was never mentioned in the announcement.
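The leak is unremarkable plumbing: OpenAI-compatible chat completion responses carry a server-filled "model" field, so the checkpoint actually serving the request can surface even when the client asked for an alias. A minimal sketch of reading that field; the sample payload below is illustrative, not Cursor's actual response:

```python
import json

def served_model(response_json: str) -> str:
    """Return the server-reported model identifier from an
    OpenAI-compatible chat completion response body."""
    return json.loads(response_json)["model"]

# Illustrative response shaped like the OpenAI chat completions
# schema: the request asked for an alias, but the server reports
# the internal checkpoint it actually ran.
sample = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "kimi-k2p5-rl-0317-s515-fast",
    "choices": [{"message": {"role": "assistant", "content": "..."}}],
})

print(served_model(sample))  # kimi-k2p5-rl-0317-s515-fast
```

Nothing about this requires privileged access; anyone pointing a client at the base URL and reading response metadata sees the same field.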
Cursor co-founder Aman Sanger eventually acknowledged the omission on X: "It was a miss to not mention the Kimi base in our blog from the start." He claimed roughly a quarter of the compute came from the base model, with the rest from Cursor's own training. That ratio is debatable, but the transparency failure is not.
The licensing angle makes it worse. Kimi K2.5 ships under a Modified MIT License requiring prominent UI attribution for any product exceeding 100 million monthly active users or $20 million in monthly revenue. Cursor's annualized revenue exceeds $2 billion. Their interface displayed "Composer 2" and nothing else. Moonshot employees initially questioned in public whether the use was authorized. Those posts disappeared, replaced by an official statement calling it an "authorized commercial partnership" through inference provider Fireworks AI.
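The license math is easy to check: $2 billion annualized is roughly $167 million a month, more than eight times the attribution threshold. A quick sketch using the figures above:

```python
# Figures from the post: Cursor's annualized revenue exceeds
# $2 billion; the Modified MIT License's revenue trigger is
# $20 million per month.
annualized_revenue_usd = 2_000_000_000
monthly_revenue_usd = annualized_revenue_usd / 12   # ~$166.7M
threshold_monthly_usd = 20_000_000

print(monthly_revenue_usd > threshold_monthly_usd)  # True
```

So on the revenue trigger alone, the attribution clause would apply regardless of user counts.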
This matters beyond one company and one model. Cursor built a $29 billion valuation partly on the perception that they were doing novel AI research. The Hacker News thread captured the tension: one user wrote that "the entire company is built on packaging open source and reselling it," while others countered that serious engineering goes into fine-tuning and RLHF pipelines. Both things can be true. But the omission transforms a legitimate technical contribution into something that feels like sleight of hand.
The broader pattern is familiar. When I wrote about why Anthropic had to close the back door, the underlying question was the same: who controls access to what, and does the user actually know? AI tools increasingly operate as routing layers, assembling capabilities from various foundation models. The label on the box tells you almost nothing about what is inside.
Cursor still makes a good product. The editor is fast, and Tab predictions feel nearly telepathic some days. None of that required hiding the provenance of the model doing the heavy lifting. The fix was always simple: one line in the blog post, one badge in the UI. They chose not to, and a developer reading API metadata had to do it for them.