[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-aime-2026-leaderboard-qwen-leads-math-tests-zh":3,"tags-aime-2026-leaderboard-qwen-leads-math-tests-zh":33,"related-lang-aime-2026-leaderboard-qwen-leads-math-tests-zh":48,"related-posts-aime-2026-leaderboard-qwen-leads-math-tests-zh":52,"series-research-5f593215-e1e5-4ea1-92f8-0a08d0ab97a8":89},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":23},"5f593215-e1e5-4ea1-92f8-0a08d0ab97a8","AIME 2026 排行榜：Qwen 先拿下數學測試","\u003Cp>說真的，這份榜單很小，訊號卻很清楚。\u003Ca href=\"https:\u002F\u002Fllm-stats.com\u002Fbenchmarks\u002Faime-2026\" target=\"_blank\" rel=\"noopener\">AIME 2026\u003C\u002Fa> 只有 8 個模型上榜。最高分 0.953，最低分 0.375，差距到 0.578。\u003C\u002Fp>\u003Cp>這不是聊天測試。它用的是 2026 年 \u003Ca href=\"https:\u002F\u002Fwww.maa.org\u002Fmath-competitions\u002Faime\" target=\"_blank\" rel=\"noopener\">American Invitational Mathematics Examination\u003C\u002Fa> 的 30 題。答案只有 000 到 999。講白了，對就是對，錯就是錯，沒什麼模糊空間。\u003C\u002Fp>\u003Cp>對台灣開發者來說，這種榜單很有參考價值。因為它測的不是文采，是推理。你如果在做 \u003Ca href=\"https:\u002F\u002Fqwenlm.github.io\u002F\" target=\"_blank\" rel=\"noopener\">Qwen\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude\" target=\"_blank\" rel=\"noopener\">Claude\u003C\u002Fa> 或 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fgpt\" target=\"_blank\" rel=\"noopener\">GPT\u003C\u002Fa> 類產品，這種數字會直接影響你選哪個模型。\u003C\u002Fp>\u003Ch2>AIME 2026 
在測什麼\u003C\u002Fh2>\u003Cp>AIME 不是考常識。它在測多步驟推理。模型要先拆題，再追蹤條件，最後還不能算錯。少一步，答案就飛了。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179306019-0ww7.png\" alt=\"AIME 2026 排行榜：Qwen 先拿下數學測試\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這類題目很適合抓出 LLM 的弱點。很多模型看起來很會講。可是一碰到代數、組合、模數運算，就開始亂飄。你會發現，它不是不會講，是不會穩定算。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Fllm-stats.com\" target=\"_blank\" rel=\"noopener\">LLM Stats\u003C\u002Fa> 把這個榜單標成數學與推理基準。語言是英文，滿分是 1。規則簡單，難度很硬。這種測法很殘酷，但也很乾淨。\u003C\u002Fp>\u003Cul>\u003Cli>共 30 題，來自 AIME I 與 AIME II\u003C\u002Fli>\u003Cli>答案只接受 000 到 999\u003C\u002Fli>\u003Cli>純文字評測，不靠圖片\u003C\u002Fli>\u003Cli>目前只有 8 個模型\u003C\u002Fli>\u003Cli>8 筆都是自報結果，還沒有驗證結果\u003C\u002Fli>\u003C\u002Ful>\u003Cp>最後一點很重要。自報分數能看趨勢，不能當終局。你可以把它當成供應商的成績單草稿。正式採購前，還是要自己跑一輪。\u003C\u002Fp>\u003Cp>我覺得這種榜單最有用的地方，是把「會講」和「會算」切開。這件事很實際。因為很多產品 demo 很順，真進到 production，錯一題就可能炸掉整條流程。\u003C\u002Fp>\u003Ch2>誰現在領先\u003C\u002Fh2>\u003Cp>目前第一名是 \u003Ca href=\"https:\u002F\u002Fqwenlm.github.io\u002Fblog\u002Fqwen3\u002F\" target=\"_blank\" rel=\"noopener\">Qwen3.6 Plus\u003C\u002Fa>，分數 0.953。這個成績很猛。第二名是 \u003Ca href=\"https:\u002F\u002Fwww.bytedance.com\u002F\" target=\"_blank\" rel=\"noopener\">ByteDance\u003C\u002Fa> 的 Seed 2.0 Pro，分數 0.942。兩者只差 0.011。\u003C\u002Fp>\u003Cp>這種差距很小。可是在高階推理榜上，小數點後兩位常常就代表一個世代的訓練策略差異。不是單純誰大誰贏。還牽涉資料配方、後訓練、解題策略，甚至推理時的採樣方式。\u003C\u002Fp>\u003Cp>第三名是 Qwen3.5-397B-A17B，分數 0.913。再往下看，\u003Ca href=\"https:\u002F\u002Fblog.google\u002Ftechnology\u002Fai\u002F\" target=\"_blank\" rel=\"noopener\">Google\u003C\u002Fa> 的 \u003Ca href=\"https:\u002F\u002Fai.google.dev\u002Fgemma\" target=\"_blank\" rel=\"noopener\">Gemma 4\u003C\u002Fa> 系列分布就很分裂。大模型能打，小模型掉得很快。\u003C\u002Fp>\u003Cblockquote>“The problem with math is not that it is hard, but that it is easy to be wrong in a way that looks right.” — 
\u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTerence_Tao\" target=\"_blank\" rel=\"noopener\">Terence Tao\u003C\u002Fa>\u003C\u002Fblockquote>\u003Cp>這句話很貼切。數學題最煩的地方，就是錯得很像對。模型如果只會產生漂亮解釋，卻沒辦法穩定落在正確答案，那就只是會寫作文，不是會解題。\u003C\u002Fp>\u003Cp>你可能會想問，0.953 到底算不算高？以這種題型來看，算很高。可是一旦你看整個榜單，就知道頂端和中段的差距還不小。這不是全體一起進步，而是少數模型先衝上去。\u003C\u002Fp>\u003Ch2>數字怎麼看才有感\u003C\u002Fh2>\u003Cp>這 8 個模型的平均分數是 0.783。標準差是 0.238。白話一點說，大家不是擠在一起，而是明顯分成幾個層級。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179311406-l3xa.png\" alt=\"AIME 2026 排行榜：Qwen 先拿下數學測試\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>榜單可以直接拆開看。第一梯隊很穩。第二梯隊還能用。第三梯隊就開始明顯失真。這對企業選模很重要，因為你不能只看平均值。\u003C\u002Fp>\u003Cp>如果你的工作是解數學題、做規劃、跑規則推理，模型差 0.05 可能就是能用和不能用的分界。尤其在 agent 流程裡，前面一個步驟算錯，後面再多補救都很難救回來。\u003C\u002Fp>\u003Cul>\u003Cli>Qwen3.6 Plus：0.953\u003C\u002Fli>\u003Cli>Seed 2.0 Pro：0.942\u003C\u002Fli>\u003Cli>Qwen3.5-397B-A17B：0.913\u003C\u002Fli>\u003Cli>Gemma 4 31B：0.892\u003C\u002Fli>\u003Cli>Gemma 4 26B-A4B：0.883\u003C\u002Fli>\u003Cli>Seed 2.0 Lite：0.883\u003C\u002Fli>\u003Cli>Gemma 4 E4B：0.425\u003C\u002Fli>\u003Cli>Gemma 4 E2B：0.375\u003C\u002Fli>\u003C\u002Ful>\u003Cp>最刺眼的，是 Gemma 小模型的掉速。31B 還在前段班，E4B 和 E2B 卻直接掉到 0.4 左右。這表示縮小參數量，不只是少一點分數，是整體推理能力一起滑坡。\u003C\u002Fp>\u003Cp>這也呼應很多團隊的實戰經驗。你以為小模型比較省錢、比較快，結果它在難題上亂掉，最後人工重工成本更高。算下來，未必比較划算。\u003C\u002Fp>\u003Ch2>跟其他基準比起來差在哪\u003C\u002Fh2>\u003Cp>AIME 跟 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fmmlu\u002F\" target=\"_blank\" rel=\"noopener\">MMLU\u003C\u002Fa> 這種廣泛知識測試不一樣。它不太在乎百科知識。它更在乎你能不能一路把推理做完。\u003C\u002Fp>\u003Cp>它也跟 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fopenai\u002Fhuman-eval\" target=\"_blank\" rel=\"noopener\">HumanEval\u003C\u002Fa> 這種程式題不同。寫 
code 時，模型可以靠模板和常見套路撐一下。AIME 沒這麼好混。每一步都要精準。\u003C\u002Fp>\u003Cp>所以 AIME 很適合拿來看「高階推理」到底有沒有進步。很多模型在一般聊天裡看起來很會。可是一碰到競賽數學，短板就直接露出來。這種落差，產品團隊最該先知道。\u003C\u002Fp>\u003Cul>\u003Cli>MMLU 偏廣泛知識\u003C\u002Fli>\u003Cli>HumanEval 偏程式能力\u003C\u002Fli>\u003Cli>AIME 偏多步驟數學推理\u003C\u002Fli>\u003Cli>分數差距更能反映模型穩定度\u003C\u002Fli>\u003C\u002Ful>\u003Cp>如果你在選 API，我會建議你別只看公開 demo。你要自己跑題庫。尤其是跟規則、金流、排程、風控有關的軟體。這些場景很怕模型「說得像對的」。\u003C\u002Fp>\u003Cp>另外，這份榜單目前全是自報結果。這代表它有參考價值，但還不是鐵證。等有更多第三方驗證，排名才會更有說服力。\u003C\u002Fp>\u003Ch2>為什麼這種榜單越來越重要\u003C\u002Fh2>\u003Cp>現在很多公司都在談 AI 助理。可是助理要真的能上線，不能只會聊天。它要能算、能推、還不能亂編。AIME 就是在戳這個痛點。\u003C\u002Fp>\u003Cp>這幾年模型更新很快。可是真正拉開差距的，常常不是會不會說話，而是會不會做對。對台灣團隊來說，這很現實。因為你要面對的是成本、延遲、準確率三個一起來。\u003C\u002Fp>\u003Cp>如果你是做教育科技、金融分析、供應鏈規劃，這種數學推理榜單就不只是新聞。它是選型工具。你可以先用它篩掉不穩的模型，再進一步做自己的資料測試。\u003C\u002Fp>\u003Cp>我也會提醒一件事。榜單高，不代表你的場景就一定高。因為真實產品裡還有提示詞、工具呼叫、檢索、上下文長度，這些都會拉低表現。基準只是起點，不是終點。\u003C\u002Fp>\u003Ch2>接下來該看什麼\u003C\u002Fh2>\u003Cp>我猜下一輪大家會更在意兩件事。第一，這些分數能不能被驗證。第二，小模型能不能縮小落差。只要這兩件事沒解，選模還是會很吃經驗。\u003C\u002Fp>\u003Cp>如果你現在正在挑模型，我的建議很直接。先拿你自己的 20 到 50 題核心題目跑一輪。再對照 AIME 這種公開榜單。兩邊都看，才不會被 demo 騙到。\u003C\u002Fp>\u003Cp>最後丟一個問題給你：你現在用的模型，真遇到 30 題數學題，能拿幾分？如果答案你自己都沒把握，那就該開始測了。\u003C\u002Fp>","AIME 2026 排行榜只有 8 個模型，但分數差很大。Qwen3.6 Plus 以 0.953 領先，最低只有 0.375。這份數學基準很適合看 LLM 的推理穩定度。","llm-stats.com","https:\u002F\u002Fllm-stats.com\u002Fbenchmarks\u002Faime-2026",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179306019-0ww7.png",[13,14,15,16,17,18,19,20],"AIME 2026","Qwen","LLM","數學基準","推理模型","AI 排行榜","Gemma 4","Seed 2.0 
Pro","zh",0,false,"2026-04-03T01:21:30.728823+00:00","2026-04-03T01:21:30.624+00:00","done","91d6d55b-732a-4198-8135-1a3b12a8cee1","aime-2026-leaderboard-qwen-leads-math-tests-zh","research","1433056d-0745-485f-9501-b6ce042e5516","published","2026-04-07T07:41:13.301+00:00",[34,36,38,40,42,43,44,46],{"name":19,"slug":35},"gemma-4",{"name":14,"slug":37},"qwen",{"name":13,"slug":39},"aime-2026",{"name":20,"slug":41},"seed-20-pro",{"name":16,"slug":16},{"name":17,"slug":17},{"name":15,"slug":45},"llm",{"name":18,"slug":47},"ai-排行榜",{"id":30,"slug":49,"title":50,"language":51},"aime-2026-leaderboard-qwen-leads-math-tests-en","AIME 2026 leaderboard: Qwen leads math tests","en",[53,59,65,71,77,83],{"id":54,"slug":55,"title":56,"cover_image":57,"image_url":57,"created_at":58,"category":29},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":60,"slug":61,"title":62,"cover_image":63,"image_url":63,"created_at":64,"category":29},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":66,"slug":67,"title":68,"cover_image":69,"image_url":69,"created_at":70,"category":29},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 
代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":72,"slug":73,"title":74,"cover_image":75,"image_url":75,"created_at":76,"category":29},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":78,"slug":79,"title":80,"cover_image":81,"image_url":81,"created_at":82,"category":29},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":84,"slug":85,"title":86,"cover_image":87,"image_url":87,"created_at":88,"category":29},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[90,95,100,105,110,115,120,125,130,135],{"id":91,"slug":92,"title":93,"created_at":94},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":96,"slug":97,"title":98,"created_at":99},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":101,"slug":102,"title":103,"created_at":104},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 
研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":106,"slug":107,"title":108,"created_at":109},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":111,"slug":112,"title":113,"created_at":114},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":116,"slug":117,"title":118,"created_at":119},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":121,"slug":122,"title":123,"created_at":124},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":126,"slug":127,"title":128,"created_at":129},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":131,"slug":132,"title":133,"created_at":134},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":136,"slug":137,"title":138,"created_at":139},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]