Cursor, Kimi, and the Open Source Bet
Cursor’s new coding model hit near-state-of-the-art performance at one-eighth the price, and its stack points straight to open source.

Cursor shipped Composer 2 to more than one million daily active users. Then the internet noticed something interesting: the model behind it was built on Moonshot AI’s Kimi K2.5, an open model from China. The pricing angle is even sharper: Cursor’s model is reportedly near state-of-the-art quality at about one-eighth the cost.
That combination matters because coding assistants live or die on unit economics. If a company can get close to top-tier performance while paying a fraction of the usual bill, it can ship faster, price lower, and keep more room for product iteration.
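To make the unit-economics point concrete, here is a back-of-envelope sketch of how a roughly one-eighth inference price changes monthly serving cost at Cursor's reported user scale. Every number except the one-million-DAU figure is a hypothetical assumption for illustration, not a figure from Cursor or Moonshot AI.

```python
# Hypothetical back-of-envelope: how a ~1/8 inference price changes
# the monthly cost of serving a coding assistant. Prices and per-user
# token volume are illustrative assumptions.

FRONTIER_PRICE_PER_MTOK = 8.00  # assumed $/million tokens, closed frontier model
OPEN_PRICE_PER_MTOK = FRONTIER_PRICE_PER_MTOK / 8  # the "one-eighth" claim

def monthly_inference_cost(daily_users, tokens_per_user_per_day, price_per_mtok):
    """Rough monthly spend given a price per million tokens."""
    monthly_tokens = daily_users * tokens_per_user_per_day * 30
    return monthly_tokens / 1_000_000 * price_per_mtok

users = 1_000_000   # Cursor's reported DAU scale
tokens = 50_000     # assumed tokens generated per user per day

frontier = monthly_inference_cost(users, tokens, FRONTIER_PRICE_PER_MTOK)
open_model = monthly_inference_cost(users, tokens, OPEN_PRICE_PER_MTOK)
print(f"frontier: ${frontier:,.0f}/mo  open: ${open_model:,.0f}/mo  "
      f"savings: ${frontier - open_model:,.0f}/mo")
```

Under these assumptions the gap is seven-eighths of the inference bill, which is exactly the margin a startup can redirect into lower pricing or faster product iteration.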
Cursor’s bet was practical, not philosophical
The headline here is not that Cursor used an open model. It is that Cursor used the best open model it could find for the job, even if that model came from a Chinese lab. That is a very different story from the usual AI branding talk, where companies describe their stack in vague terms and hope nobody asks what is actually under the hood.

Tomasz Tunguz’s point is simple: startups win when open source lowers the cost of competing with incumbents. In this case, the editor powering Cursor is Visual Studio Code, which is open source. The model layer now follows the same logic.
- Cursor says Composer 2 reached more than 1 million daily active users.
- Kimi K2.5 is open source and built by Moonshot AI.
- The model was described as near parity with state-of-the-art systems at roughly one-eighth the price.
- The editor layer is VS Code, one of the most widely used open-source code editors in the world.
The economics are the real story. If your model is good enough and cheap enough, you do not need to own every layer of the stack to build a strong business. You need the right layer at the right cost.
The age gap between open models is getting hard to ignore
Tunguz highlights a striking comparison: American open-source frontier models average about eight months old, while Chinese open-source models average about seven weeks old. That is not a minor gap. In AI, eight months can mean several major model generations, especially in a category where training recipes, inference tricks, and post-training methods evolve quickly.
That gap explains why Cursor picked Kimi K2.5 instead of GPT-OSS, which he describes as roughly eight months old. The choice was about freshness, capability, and cost, not national pride or platform loyalty.
- American open-source frontier models: about 8 months old on average.
- Chinese open-source models: about 7 weeks old on average.
- That creates an age gap of roughly 5x.
- GPT-OSS was cited as 8 months old, while Kimi K2.5 was about 8 weeks old.
There is also a strategic wrinkle here. Meta, which once held the open-source crown with Llama, shifted toward closed-source development in 2025, according to Bloomberg’s reporting. That leaves a vacuum on the U.S. side just as open models become more commercially useful.
“This is the open model ecosystem we love to support.” — Moonshot AI, posting on X after Cursor’s use of Kimi K2.5 was noticed.
China’s open models are winning attention, but not trust
The usage numbers are hard to dismiss. Tunguz cites OpenRouter data showing Chinese open-source models rising from 1.2% of global AI usage in late 2024 to nearly 30% by the end of 2025. He also notes that Hugging Face downloads for Qwen passed Llama by October 2025, reaching 700 million downloads.

Those are huge numbers, and they tell us something useful: developers care about performance, price, and availability more than geography when they are prototyping. But trust is a separate issue. A model can be popular and still be blocked in enterprise and government settings.
- Chinese open-source models grew from 1.2% to nearly 30% of global AI usage in about a year.
- Qwen reached 700 million downloads on Hugging Face.
- NIST found Chinese models 12x more susceptible to agent hijacking attacks in its CAISI evaluation.
- Microsoft and News Corp have banned their use entirely in some environments.
That security gap matters because coding assistants are not passive chatbots. They read files, write code, call tools, and can be manipulated through prompt injection or agent hijacking. If a model can be steered into unsafe behavior more easily, the cost savings can disappear fast once a company has to add layers of review, filtering, and policy enforcement.
The U.S. response is finally getting more serious
The American answer is taking shape, but it is still catching up. NVIDIA announced a $26 billion commitment over five years to open-source AI through its Nemotron Coalition. Google is pushing Gemma, OpenAI released GPT-OSS, and the Allen Institute for AI is building OLMo 3.
One of the more interesting data points in Tunguz’s piece is that OLMo 3 matches Qwen 3 on math benchmarks with six times less training data. That does not mean the U.S. has solved the open-model race, but it does show that smarter training can narrow the gap without brute-force spending.
- NVIDIA: $26 billion commitment over five years to open-source AI efforts.
- Gemma is Google’s open model family for on-device and developer use.
- GPT-OSS is OpenAI’s open-weight move into this market.
- OLMo 3 reportedly matches Qwen 3 on math with 6x less training data.
The real takeaway is that open source is no longer a side story in AI. It is the supply chain. If the best open model is Chinese today, startups will use it. If American labs want that traffic back, they need models that are current, cheap, and safe enough for real deployment.
What Cursor’s choice says about the next wave
Cursor did not make a manifesto. It made a product decision. That is why this story matters. The company picked the model that gave it the best mix of quality and cost, and that decision landed on a Chinese open model built on open-source tooling.
My read: the next wave of AI coding tools will look less like a loyalty contest between labs and more like a procurement race. Whoever ships the best open model at the best price will win a lot of default traffic from startups, indie developers, and enterprise teams that want control over their stack.
The question now is whether U.S. open models can close the freshness gap before Chinese open models become the default choice for serious developer tools. If the answer is no, more companies will quietly follow Cursor’s lead and pick the best open foundation available, wherever it was built.
For builders, the takeaway is simple: watch model age, benchmark quality, and inference cost together. If one of those moves in the wrong direction, your product economics can change fast.
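One way to operationalize "watch these together" is to fold the three signals into a single procurement score and re-rank candidates whenever any input moves. The weights, model names, and numbers below are hypothetical placeholders, not real benchmark data; the point is the shape of the comparison, not the values.

```python
# Illustrative sketch: score candidate models on freshness, benchmark
# quality, and inference cost. Weights and inputs are hypothetical.

def procurement_score(age_weeks, benchmark, price_per_mtok,
                      w_age=0.3, w_bench=0.5, w_cost=0.2):
    """Higher is better: fresh, strong on benchmarks, and cheap."""
    freshness = max(0.0, 1.0 - age_weeks / 52)  # decays to zero over ~a year
    cheapness = 1.0 / (1.0 + price_per_mtok)    # diminishing penalty for price
    return w_age * freshness + w_bench * benchmark + w_cost * cheapness

candidates = {
    # name: (age in weeks, normalized benchmark score 0-1, $/million tokens)
    "open-model-a": (8, 0.90, 1.0),   # fresh, cheap, slightly weaker
    "open-model-b": (34, 0.92, 1.0),  # stale but cheap
    "closed-model": (4, 0.95, 8.0),   # strongest, most expensive
}

ranked = sorted(candidates.items(),
                key=lambda kv: -procurement_score(*kv[1]))
for name, (age, bench, price) in ranked:
    print(f"{name}: {procurement_score(age, bench, price):.3f}")
```

With these placeholder weights, a fresh and cheap open model can outrank a stronger but pricier closed one, which is the procurement dynamic the article describes.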