Cursor’s Kimi K2.5 Disclosure Miss, Explained
Cursor’s Composer 2 launched without disclosing its Kimi K2.5 base model. That disclosure gap matters for trust, licensing, and code-data handling.

Cursor said Composer 2 hit 61.7% on Terminal-Bench 2.0 and undercut Claude Opus 4.6 on price. Then a developer inspecting API traffic found a model ID that pointed straight to Moonshot AI's Kimi K2.5 base model. That gap between marketing and provenance is why this story matters.
This is about more than one blog post missing a credit line. It touches licensing, vendor trust, and the question every team should ask before shipping code through an AI assistant: what model is actually handling our data?
What Cursor announced, and what the API revealed
On March 19, 2026, Cursor launched Composer 2 with language that made it sound like a new proprietary model built in-house. The post talked about “continued pretraining,” “reinforcement learning,” and “frontier-level coding intelligence.” Those are real techniques, and the benchmark numbers were real too.

What users did not get upfront was the model lineage. A developer named Fynn inspected Cursor’s API traffic and found a model identifier that read like a breadcrumb trail: kimi-k2p5-rl-0317-s515-fast. That naming pattern pointed to Kimi K2.5, reinforcement learning fine-tuning, a March 17 training date, and a fast serving setup.
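The identifier itself can be decoded mechanically. Here is a minimal sketch of that reading; the field meanings are inferences from the naming pattern described above, not a documented schema:

```python
# Decode a provider model ID like the one found in Cursor's API traffic.
# Field meanings are inferred from the naming convention, not a published spec.
MODEL_ID = "accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast"

def decode(model_id: str) -> dict:
    name = model_id.rsplit("/", 1)[-1]       # "kimi-k2p5-rl-0317-s515-fast"
    parts = name.split("-")
    return {
        "base_family": "-".join(parts[:2]),  # "kimi-k2p5" -> Kimi K2.5
        "training": parts[2],                # "rl" -> reinforcement learning
        "date": parts[3],                    # "0317" -> March 17
        "variant": "-".join(parts[4:]),      # "s515-fast" -> serving setup
    }

print(decode(MODEL_ID))
```

Nothing here is hidden; the lineage was sitting in plain text for anyone who looked at the wire.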
Cursor later acknowledged the omission. Aman Sanger, Cursor’s co-founder, said the company should have mentioned the Kimi base model from the start. That admission matters because it changes the story from “we built this from scratch” to “we built on top of an open model and trained further.” Those are very different claims.
- Composer 2 launch date: March 19, 2026
- Terminal-Bench 2.0 score: 61.7%
- Claimed pricing: about one-tenth of Claude Opus 4.6
- Discovered model ID: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast
- Public attention: the disclosure thread reached millions of views
Why the licensing clause matters
The legal wrinkle comes from Kimi K2.5’s license. Moonshot AI released it under a modified MIT license, which is permissive for most use cases, but it includes a specific attribution rule: products above 100 million monthly active users or above $20 million in monthly revenue must prominently display “Kimi K2.5” in the interface.
Cursor’s reported annual recurring revenue is above $2 billion. That works out to roughly $167 million per month, far beyond the threshold. On paper, that makes attribution more than a courtesy. It becomes a condition tied to commercial use.
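The arithmetic is simple enough to check directly. This back-of-the-envelope sketch uses the reported ARR figure and the license threshold from the article:

```python
# Back-of-the-envelope check of the Kimi K2.5 attribution threshold.
# ARR is the figure reported for Cursor; the $20M/month trigger comes
# from the modified MIT license.
annual_recurring_revenue = 2_000_000_000    # reported ARR, USD
monthly_revenue = annual_recurring_revenue / 12
license_threshold = 20_000_000              # attribution trigger, USD/month

print(f"monthly revenue ~= ${monthly_revenue / 1e6:.0f}M")            # prints ~= $167M
print(f"over threshold by {monthly_revenue / license_threshold:.1f}x")  # prints 8.3x
```

By this reading, Cursor is not near the line; it is roughly eight times over it.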
There was some back-and-forth after the discovery. Moonshot AI employees initially flagged a violation, then deleted those posts. Moonshot’s official account later described the relationship as an authorized commercial partnership through Fireworks AI. That may resolve the technical licensing question, but it does not erase the disclosure issue in the launch post.
“It was a miss to not mention the Kimi base in our blog from the start.” — Aman Sanger, Cursor co-founder
Why teams should care even if they never use Cursor
The bigger lesson is that AI products are layered systems. The model in the marketing page is often only the visible top layer. Underneath it you may have an open base model, vendor fine-tuning, a separate inference provider, and a UI wrapper. Each layer has its own license and data path.

That matters for security, procurement, and compliance. If your company has GDPR obligations, HIPAA concerns, or data residency rules, “the vendor says it’s compliant” is too vague. You need to know where prompts go, which model processes them, and which infrastructure handles inference.
There is also a trust issue. A vendor that clearly lists its base model and training approach gives you something you can verify. A vendor that describes a system as “self-developed” while omitting the foundation model asks you to trust a marketing claim instead of the technical record.
- Ask which base model powers the tool
- Ask which inference provider handles your prompts
- Ask where training and logging data are stored
- Ask what attribution the license requires
How Kimi K2.5 compares with Western options
Kimi K2.5 is not a small experimental release. It is a 1-trillion-parameter mixture-of-experts model with 32 billion active parameters and a 256,000-token context window. That is enough headroom for long coding sessions, large repositories, and agent-style workflows that need to keep many files in memory at once.
That scale matters because the Western open-source field has been uneven. Meta’s Llama 4 Scout and Maverick shipped, but they did not land with the same strength many expected. Llama 4 Behemoth has been delayed without a public release date. In practice, that leaves a gap for companies that want a strong open base model right now.
Cursor’s choice also shows how global model sourcing has become. A product from a U.S. company, valued at $50 billion, was built on a Beijing-based model because that model was the best fit for the job. That is not a side note. It is how the market works when performance matters more than branding.
- Kimi K2.5: 1 trillion parameters, 32 billion active, 256k context
- Llama 4 Behemoth: delayed, no public release date
- Cursor Composer 2: built on a Kimi K2.5 base with additional RL training
- Cursor’s own claim: about 25% of compute came from the base model, 75% from its training
What developers should do next
Developers do not need to panic, but they do need better habits. If an AI tool helps write production code, treat its model lineage like dependency metadata. Know the base model. Know the host. Know the license. If the vendor cannot answer those questions clearly, that is a signal in itself.
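One way to make that habit concrete is to record the answers as structured metadata, the same way a lockfile records package versions. A hypothetical sketch; the record type and field values below are illustrative, not any vendor’s actual data:

```python
# A hypothetical lineage record for an AI coding tool, kept alongside
# other dependency metadata. All field values are placeholders.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelLineage:
    product: str
    base_model: str          # which foundation model underlies the product
    base_license: str        # license of the base model, incl. attribution terms
    inference_provider: str  # who actually serves your prompts
    data_region: str         # where prompts and logs are stored

record = ModelLineage(
    product="example-assistant",  # placeholder, not a real product
    base_model="unknown",         # "unknown" is itself a procurement signal
    base_license="unknown",
    inference_provider="unknown",
    data_region="unknown",
)
print(json.dumps(asdict(record), indent=2))
```

A field stuck at "unknown" after a vendor call is exactly the signal the paragraph above describes.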
Vendors should do the same thing software teams already do with packages and containers: disclose what they built on. Model cards, training notes, and clear product announcements are not optional niceties anymore. They are part of the trust contract.
My take is simple: the next buying decision for an AI coding tool should include one question before benchmark charts or pricing tables. What model is under the hood, and who gets credit for it? If the answer is fuzzy, your team should treat the product as a black box until it is not.
For more on how AI tooling choices affect engineering teams, see our coverage of AI coding tools and trust. The Cursor episode is a useful reminder that the best-performing model is not the whole story. The provenance matters too, and teams that ignore it will keep making the same procurement mistakes.