OraCore Editors

Cursor’s Composer 2 started from Kimi

Cursor says Composer 2 began from Moonshot AI’s Kimi base and then received substantially more training. The company says the final model is very different.

Cursor launched Composer 2 this week with a bold pitch: “frontier-level coding intelligence.” Within hours, the company was dealing with a much less polished story. An X user claimed the model was basically Moonshot AI’s Kimi 2.5 with extra reinforcement learning layered on top.

The reaction mattered because Cursor is not a small side project. The company raised $2.3 billion last fall at a $29.3 billion valuation, and it has been reported to be running at more than $2 billion in annualized revenue. When a company that big ships a new model, people expect a clean origin story.

Instead, Cursor ended up confirming that the story was more complicated. The company says Composer 2 started from an open-source base, and that base was Kimi.

What Cursor actually admitted

The first public clue came from a post by an X user going by Fynn, who pointed to code that seemed to identify Kimi as the model underneath Composer 2. Their jab was simple: if the model is built on Kimi, why hide it?

Cursor’s vice president of developer education, Lee Robinson, responded directly. “Yep, Composer 2 started from an open-source base!” he wrote. He added that “Only ~1/4 of the compute spent on the final model came from the base, the rest is from our training.”

That detail matters. Cursor is not saying it copied Kimi and slapped on a new label. It is saying the company used Kimi as a starting point, then spent most of the compute on its own training work. Robinson also said Composer 2’s benchmark results are “very different” from Kimi’s.

  • Cursor says about 25% of final training compute came from the base model
  • About 75% came from Cursor’s own training pipeline
  • The company says benchmark behavior differs materially from Kimi
  • Cursor says the use fits Kimi’s license terms

Why the omission raised eyebrows

The biggest issue was not technical. It was messaging. Cursor’s launch post did not mention Moonshot AI or Kimi at all, even though the model depended on that base. That left the company open to a pretty obvious charge: if the foundation mattered this much, why not say so on day one?

Cursor co-founder Aman Sanger later acknowledged the omission. “It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model,” he wrote.

That is a useful correction, but it also shows how sensitive model provenance has become. Developers care about training data, base models, and licensing because those details shape trust. If a company is vague about the starting point, people start asking what else is being left out.

There is also a branding problem here. Cursor sells itself to developers as a tool that helps them work faster and think clearly. When the company’s own model announcement needs cleanup after launch, that creates friction with the exact audience it wants to impress.

The licensing and partnership angle

Cursor did not stop at saying the model was based on Kimi. Robinson also said the use was consistent with Kimi’s license. The Kimi account on X later echoed that point and said Cursor used Kimi “as part of an authorized commercial partnership” with Fireworks AI.

Moonshot’s account even framed the situation positively: “We are proud to see Kimi-k2.5 provide the foundation,” it wrote, adding that Cursor’s continued pretraining and high-compute RL training fit the open model ecosystem it wants to support.

That is the key distinction here. Open models are meant to be reused, modified, and improved. The issue is not that Cursor built on Kimi. The issue is that the company introduced Composer 2 like it had come from nowhere, when the base model was part of the story from the start.

  • Kimi 2.5 is open source
  • Moonshot AI is backed by Alibaba and HongShan
  • Cursor says its final model came from heavy additional training
  • Fireworks AI was named in the partnership explanation

How this compares with other AI model launches

Cursor’s situation is easier to understand when you compare it with how other AI companies talk about model lineage. OpenAI, Anthropic, and Google usually keep the base model story fairly clear, even if they are vague on training data. When they release a new system, they typically explain whether it is a fresh model, a fine-tune, or an iteration built on earlier work.

That matters because model origin affects how users interpret benchmark claims. If you say a model is new, people assume the gains come from your own work. If you say it started from an existing open model, then the conversation shifts to how much you changed and whether the improvements are meaningful.

Cursor’s own numbers make the point. If roughly a quarter of the compute came from the base and the rest came from Cursor’s training, then Composer 2 is closer to a heavily reworked derivative than a clean-room model. That may be perfectly valid, but it is not the same thing as training from scratch.

  • OpenAI usually frames releases around model generations and fine-tunes
  • Anthropic tends to describe model families and capability tiers clearly
  • Google AI often separates base models from productized systems
  • Cursor disclosed the base only after users spotted it

The other comparison is geopolitical. Building on a Chinese model is not automatically a problem, but it lands differently in 2026 than it might have a few years ago. U.S. AI startups are under pressure to look independent, especially when the public conversation keeps framing AI progress as a U.S.-China race. That makes transparency more important, not less.

Cursor’s response suggests it understands that now. The company says it will correct the omission in future launches, which is the right move if it wants developers to keep trusting its model claims.

What this means for developers and buyers

If you are choosing a coding assistant, the main lesson is simple: ask what the model is built on. The name on the product page is only part of the story. The base model, the amount of extra training, and the license terms all matter if you care about reliability, provenance, or long-term vendor risk.

This also tells us something about how AI products are evolving. A lot of the best systems are no longer born as single, monolithic models. They are assembled from open bases, proprietary training, reinforcement learning, and product-specific tuning. That is normal now. What is not normal is pretending the base layer does not exist.

Cursor has now corrected the record, but the first impression still counts. My guess is that the next wave of model launches from developer tools will include a much more explicit “built on X, trained with Y” section near the top of the announcement. If they do not, users will keep finding the missing pieces themselves.

For now, the practical question is whether Cursor can turn this into a trust win. If it is more transparent in the next release, developers may see this as a messy but honest correction. If not, every future benchmark claim will come with the same annoying follow-up: what was the base model this time?

Related reading: AI coding tools are starting to look like model companies.