[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-cursor-composer-2-started-from-kimi-en":3,"tags-cursor-composer-2-started-from-kimi-en":29,"related-lang-cursor-composer-2-started-from-kimi-en":40,"related-posts-cursor-composer-2-started-from-kimi-en":44,"series-model-release-a1379e9a-6785-4ff5-9b0a-8cff55f8264f":81},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":10,"keywords":11,"language":17,"translated_content":10,"views":18,"is_premium":19,"created_at":20,"updated_at":20,"cover_image":21,"published_at":20,"rewrite_status":22,"rewrite_error":10,"rewritten_from_id":23,"slug":24,"category":25,"related_article_id":26,"status":27,"google_indexed_at":28,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":19},"a1379e9a-6785-4ff5-9b0a-8cff55f8264f","Cursor’s Composer 2 started from Kimi","\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.cursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa> launched \u003Ca href=\"https:\u002F\u002Fwww.cursor.com\u002Fchangelog\u002Fcomposer-2\" target=\"_blank\" rel=\"noopener\">Composer 2\u003C\u002Fa> this week with a bold pitch: “frontier-level coding intelligence.” Within hours, the company was dealing with a much less polished story. An X user claimed the model was basically \u003Ca href=\"https:\u002F\u002Fmoonshot.ai\" target=\"_blank\" rel=\"noopener\">Moonshot AI\u003C\u002Fa>’s \u003Ca href=\"https:\u002F\u002Fmoonshot.ai\u002Fkimi\" target=\"_blank\" rel=\"noopener\">Kimi 2.5\u003C\u002Fa> with extra reinforcement learning layered on top.\u003C\u002Fp>\u003Cp>The reaction mattered because Cursor is not a small side project. The company raised $2.3 billion last fall at a $29.3 billion valuation, and it has been reported to be running at more than $2 billion in annualized revenue. 
When a company that big ships a new model, people expect a clean origin story.\u003C\u002Fp>\u003Cp>Instead, Cursor ended up confirming that the story was more complicated. The company says Composer 2 started from an open-source base, and that base was Kimi.\u003C\u002Fp>\u003Ch2>What Cursor actually admitted\u003C\u002Fh2>\u003Cp>The first public clue came from a post by an X user going by Fynn, who pointed to code that seemed to identify Kimi as the model underneath Composer 2. Their jab was simple: if the model is built on Kimi, why hide it?\u003C\u002Fp>\u003Cp>Cursor’s vice president of developer education, \u003Ca href=\"https:\u002F\u002Fx.com\u002Fleeerob\u002Fstatus\u002F1903549488336351570\" target=\"_blank\" rel=\"noopener\">Lee Robinson\u003C\u002Fa>, responded directly. “Yep, Composer 2 started from an open-source base!” he wrote. He added that “Only ~1\u002F4 of the compute spent on the final model came from the base, the rest is from our training.”\u003C\u002Fp>\u003Cp>That detail matters. Cursor is not saying it copied Kimi and slapped on a new label. It is saying the company used Kimi as a starting point, then spent most of the compute on its own training work. Robinson also said Composer 2’s benchmark results are “very different” from Kimi’s.\u003C\u002Fp>\u003Cul>\u003Cli>Cursor says about 25% of final training compute came from the base model\u003C\u002Fli>\u003Cli>About 75% came from Cursor’s own training pipeline\u003C\u002Fli>\u003Cli>The company says benchmark behavior differs materially from Kimi\u003C\u002Fli>\u003Cli>Cursor says the use fits Kimi’s license terms\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Why the omission raised eyebrows\u003C\u002Fh2>\u003Cp>The biggest issue was not technical. It was messaging. Cursor’s launch post did not mention Moonshot AI or Kimi at all, even though the model depended on that base. 
That left the company open to a pretty obvious charge: if the foundation mattered this much, why not say so on day one?\u003C\u002Fp>\u003Cp>Cursor co-founder \u003Ca href=\"https:\u002F\u002Fx.com\u002Famanrsanger\u002Fstatus\u002F1903570494395662418\" target=\"_blank\" rel=\"noopener\">Aman Sanger\u003C\u002Fa> later admitted that omission. “It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model,” he wrote.\u003C\u002Fp>\u003Cp>That is a useful correction, but it also shows how sensitive model provenance has become. Developers care about training data, base models, and licensing because those details shape trust. If a company is vague about the starting point, people start asking what else is being left out.\u003C\u002Fp>\u003Cblockquote>“It was a miss to not mention the Kimi base in our blog from the start. We’ll fix that for the next model.” — Aman Sanger, Cursor co-founder\u003C\u002Fblockquote>\u003Cp>There is also a branding problem here. Cursor sells itself to developers as a tool that helps them work faster and think clearly. When the company’s own model announcement needs cleanup after launch, that creates friction with the exact audience it wants to impress.\u003C\u002Fp>\u003Ch2>The licensing and partnership angle\u003C\u002Fh2>\u003Cp>Cursor did not stop at saying the model was based on Kimi. Robinson also said the use was consistent with Kimi’s license. 
The Kimi account on X later echoed that point and said Cursor used Kimi “as part of an authorized commercial partnership” with \u003Ca href=\"https:\u002F\u002Fwww.fireworks.ai\" target=\"_blank\" rel=\"noopener\">Fireworks AI\u003C\u002Fa>.\u003C\u002Fp>\u003Cp>Moonshot’s account even framed the situation positively: “We are proud to see Kimi-k2.5 provide the foundation,” it wrote, adding that Cursor’s continued pretraining and high-compute RL training fit the open model ecosystem it wants to support.\u003C\u002Fp>\u003Cp>That is the key distinction here. Open models are meant to be reused, modified, and improved. The issue is not that Cursor built on Kimi. The issue is that the company introduced Composer 2 like it had come from nowhere, when the base model was part of the story from the start.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fmoonshot.ai\u002Fkimi\" target=\"_blank\" rel=\"noopener\">Kimi 2.5\u003C\u002Fa> is open source\u003C\u002Fli>\u003Cli>Moonshot AI is backed by Alibaba and HongShan\u003C\u002Fli>\u003Cli>Cursor says its final model came from heavy additional training\u003C\u002Fli>\u003Cli>Fireworks AI was named in the partnership explanation\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>How this compares with other AI model launches\u003C\u002Fh2>\u003Cp>Cursor’s situation is easier to understand when you compare it with how other AI companies talk about model lineage. OpenAI, Anthropic, and Google usually keep the base model story fairly clear, even if they are vague on training data. When they release a new system, they usually explain whether it is a fresh model, a fine-tune, or an iteration built on earlier work.\u003C\u002Fp>\u003Cp>That matters because model origin affects how users interpret benchmark claims. If you say a model is new, people assume the gains come from your own work. 
If you say it started from an existing open model, then the conversation shifts to how much you changed and whether the improvements are meaningful.\u003C\u002Fp>\u003Cp>Cursor’s own numbers make the point. If roughly a quarter of the compute came from the base and the rest came from Cursor’s training, then Composer 2 is closer to a heavily reworked derivative than a clean-room model. That may be perfectly valid, but it is not the same thing as training from scratch.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fopenai.com\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> usually frames releases around model generations and fine-tunes\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa> tends to describe model families and capability tiers clearly\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fai.google.dev\" target=\"_blank\" rel=\"noopener\">Google AI\u003C\u002Fa> often separates base models from productized systems\u003C\u002Fli>\u003Cli>Cursor disclosed the base only after users spotted it\u003C\u002Fli>\u003C\u002Ful>\u003Cp>The other comparison is geopolitical. Building on a Chinese model is not automatically a problem, but it lands differently in 2026 than it might have a few years ago. U.S. AI startups are under pressure to look independent, especially when the public conversation keeps framing AI progress as a U.S.-China race. That makes transparency more important, not less.\u003C\u002Fp>\u003Cp>Cursor’s response suggests it understands that now. The company says it will correct the omission in future launches, which is the right move if it wants developers to keep trusting its model claims.\u003C\u002Fp>\u003Ch2>What this means for developers and buyers\u003C\u002Fh2>\u003Cp>If you are choosing a coding assistant, the main lesson is simple: ask what the model is built on. The name on the product page is only part of the story. 
The base model, the amount of extra training, and the license terms all matter if you care about reliability, provenance, or long-term vendor risk.\u003C\u002Fp>\u003Cp>This also tells us something about how AI products are evolving. A lot of the best systems are no longer born as single, monolithic models. They are assembled from open bases, proprietary training, reinforcement learning, and product-specific tuning. That is normal now. What is not normal is pretending the base layer does not exist.\u003C\u002Fp>\u003Cp>Cursor has now corrected the record, but the first impression still counts. My guess is that the next wave of model launches from developer tools will include a much more explicit “built on X, trained with Y” section near the top of the announcement. If they do not, users will keep finding the missing pieces themselves.\u003C\u002Fp>\u003Cp>For now, the practical question is whether Cursor can turn this into a trust win. If it is more transparent in the next release, developers may see this as a messy but honest correction. If not, every future benchmark claim will come with the same annoying follow-up: what was the base model this time?\u003C\u002Fp>\u003Cp>Related reading: \u003Ca href=\"\u002Fnews\u002Fai-coding-tools-are-starting-to-look-like-model-companies\" target=\"_blank\" rel=\"noopener\">AI coding tools are starting to look like model companies\u003C\u002Fa>.\u003C\u002Fp>","Cursor says Composer 2 began on Moonshot AI’s Kimi base, then added more training. 
The company says the final model is very different.","techcrunch.com","https:\u002F\u002Ftechcrunch.com\u002F2026\u002F03\u002F22\u002Fcursor-admits-its-new-coding-model-was-built-on-top-of-moonshot-ais-kimi\u002F",null,[12,13,14,15,16],"Cursor","Composer 2","Kimi 2.5","Moonshot AI","coding model","en",1,false,"2026-03-28T03:11:59.132398+00:00","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Fcover-1774595866498-425oei.png","done","890016bf-67fd-434a-8a4c-53b2166fb729","cursor-composer-2-started-from-kimi-en","model-release","d68e59a2-55eb-4a8f-95d6-edc8fcbff581","published","2026-04-09T09:00:59.103+00:00",[30,32,34,36,38],{"name":12,"slug":31},"cursor",{"name":15,"slug":33},"moonshot-ai",{"name":14,"slug":35},"kimi-2-5",{"name":16,"slug":37},"coding-model",{"name":13,"slug":39},"composer-2",{"id":26,"slug":41,"title":42,"language":43},"cursor-composer-2-started-from-kimi-zh","Cursor Composer 2 其實從 Kimi 起步","zh",[45,51,57,63,69,75],{"id":46,"slug":47,"title":48,"cover_image":49,"image_url":49,"created_at":50,"category":25},"ebd0ef7f-f14d-4e25-a54e-073b49f9d4b9","why-googles-hidden-gemini-live-models-matter-en","Why Google’s Hidden Gemini Live Models Matter More Than the Demo","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778869237748-4rqx.png","2026-05-15T18:20:23.999239+00:00",{"id":52,"slug":53,"title":54,"cover_image":55,"image_url":55,"created_at":56,"category":25},"6c57f6bf-1023-4a22-a6c0-013bd88ac3d1","minimax-m1-open-hybrid-attention-reasoning-model-en","MiniMax-M1 brings 1M-token open reasoning 
model","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797872005-z8uk.png","2026-05-14T22:30:39.599473+00:00",{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":25},"68a2ba2e-f07a-4f28-a69c-24bf66652d2e","gemini-omni-video-review-text-rendering-en","Gemini Omni Video Review: Text Rendering Beats Rivals","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778779286834-fy35.png","2026-05-14T17:20:44.524502+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":25},"1d5fc6b1-a87f-48ae-89ee-e5f0da86eb2d","why-xiaomi-mimo-v25-pro-changes-coding-agents-en","Why Xiaomi’s MiMo-V2.5-Pro Changes Coding Agents More Than Chatbots","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689848027-ocpw.png","2026-05-13T16:30:29.661993+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":25},"cb3eac19-4b8d-4ee0-8f7e-d3c2f0b50af5","openai-realtime-audio-models-live-voice-en","OpenAI’s Realtime Audio Models Target Live Voice","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778451653257-dsnq.png","2026-05-10T22:20:33.31082+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":25},"84c630af-a060-4b6b-9af2-1b16de0c8f06","anthropic-10-finance-ai-agents-en","Anthropic Releases 10 Finance AI Agents","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778389841959-ktkf.png","2026-05-10T05:10:23.345141+00:00",[82,87,92,97,102,107,112,117,122,127],{"id":83,"slug":84,"title":85,"created_at":86},"d4cffde7-9b50-4cc7-bb68-8bc9e3b15477","nvidia-rubin-ai-supercomputer-en","NVIDIA Unveils Rubin: A 
Leap in AI Supercomputing","2026-03-25T16:24:35.155565+00:00",{"id":88,"slug":89,"title":90,"created_at":91},"eab919b9-fbac-4048-89fc-afad6749ccef","google-gemini-ai-innovations-2026-en","Google's AI Leap with Gemini Innovations in 2026","2026-03-25T16:27:18.841838+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"5f5cfc67-3384-4816-a8f6-19e44d90113d","gap-google-gemini-ai-checkout-en","Gap Teams Up with Google Gemini for AI-Driven Checkout","2026-03-25T16:27:46.483272+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"f6d04567-47f6-49ec-804c-52e61ab91225","ai-model-release-wave-march-2026-en","Navigating the AI Model Release Wave of March 2026","2026-03-25T16:28:45.409716+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"895c150c-569e-4fdf-939d-dade785c990e","small-language-models-transform-ai-en","Small Language Models: Llama 3.2 and Phi-3 Transform AI","2026-03-25T16:30:26.688313+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"38eb1d26-d961-4fd3-ae12-9c4089680f5f","midjourney-v8-alpha-features-pricing-en","Midjourney V8 Alpha: A Deep Dive into Its Features and Pricing","2026-03-26T01:25:36.387587+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"bf36bb9e-3444-4fb8-ab19-0df6bc9d8271","rag-2026-indispensable-ai-bridge-en","RAG in 2026: The Indispensable AI Bridge","2026-03-26T01:28:34.472046+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"60881d6d-2310-44ef-b1fb-7f98e9dd2f0e","xiaomi-mimo-trio-agents-robots-voice-en","Xiaomi’s MiMo trio targets agents, robots, and voice","2026-03-28T03:05:08.899895+00:00",{"id":123,"slug":124,"title":125,"created_at":126},"f063d8d1-41d1-4de4-8ebc-6c40511b9369","xiaomi-mimo-v2-pro-1t-moe-agents-en","Xiaomi MiMo-V2-Pro: 1T MoE Model for Agents","2026-03-28T03:06:19.238032+00:00",{"id":4,"slug":24,"title":5,"created_at":20}]