[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-swe-bench-verified":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"aeb14571-0546-4407-8ad9-01785c371c34","SWE-Bench Verified","swe-bench-verified",8,"SWE-bench Verified 是用真實 GitHub issue 與測試來評估模型修補程式碼能力的基準，常用來看 agentic coding、除錯與工具使用表現。它之所以重要，在於分數背後還牽涉 token 成本、上下文長度與部署可行性。","SWE-bench Verified is a benchmark for measuring how well models fix real GitHub issues against real tests, making it a useful signal for agentic coding, debugging, and tool use. It also exposes practical tradeoffs in token cost, context length, and deployment.",[12,21,29,37,45,52,59,66,73],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"9852e8e5-0ed0-47de-a7cc-f29508bf7e2a","why-llm-leaderboards-are-wrong-about-model-quality-zh","為什麼 LLM 排行榜常常選錯模型品質","LLM 排行榜有參考價值，但不適合拿來決定生產環境要用哪個模型。","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778743869534-q8ae.png","zh","2026-05-14T07:30:23.663726+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"8d3e404f-589a-477a-8457-2c27bbfb7038","kimi-k26-qwen-36-open-source-frontier-gap-zh","Kimi K2.6 與 Qwen 3.6 拉近差距","Kimi K2.6 和 Qwen 3.6 這兩個 open-weight 模型，已經在 coding 和 agent 任務上逼近閉源模型。","model-release","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777901476372-7tf9.png","2026-05-04T13:30:40.486692+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":34,"image_url":35,"cover_image":35,"language":19,"created_at":36},"b2725e14-d169-4ef3-9b57-0cc23a7e9338","ai-agents-token-spending-coding-tasks-zh","AI 代理寫程式：token 比 chat 多燒 1000 倍","這篇研究看 SWE-bench Verified 上的代理式寫程式，發現 token 花費可比一般 code chat 高出 1000 倍，且多半是 input 在燒錢，成本還很難預測。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777270011749-e4p3.png","2026-04-27T06:06:37.206415+00:00",{"id":38,"slug":39,"title":40,"summary":41,"category":26,"image_url":42,"cover_image":43,"language":19,"created_at":44},"14d41e89-8fff-4e3a-b021-2a64f29279ca","qwen36-27b-open-source-coding-model-zh","Qwen3.6-27B：更小卻更準的寫碼路線","Qwen3.6-27B 是 27B dense multimodal 模型，在 SWE-bench Verified 拿到 77.2，還贏過更大的 Qwen3.5-397B-A17B。對開發團隊來說，這代表更好部署，也更適合 agentic coding。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777260630350-1mxe.png","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Fskill-cover-qwen36-27b-zh_-1777263004.png","2026-04-27T00:12:38.326898+00:00",{"id":46,"slug":47,"title":48,"summary":49,"category":34,"image_url":50,"cover_image":50,"language":19,"created_at":51},"57fe6457-4c90-4c0d-84a2-c062d87421f8","stanford-2026-ai-index-charts-explained-zh","史丹佛 2026 AI Index 圖表解讀","史丹佛 2026 AI Index 用圖表拆解 AI 現況：模型變快、成本變高、美中差距縮小，但評測和治理都追不上。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427444966-lec9.png","2026-04-17T12:03:47.109964+00:00",{"id":53,"slug":54,"title":55,"summary":56,"category":26,"image_url":57,"cover_image":57,"language":19,"created_at":58},"5a3c6417-77a9-4526-bee5-c355979576f2","gemini-3-1-pro-googles-top-model-in-numbers-zh","Gemini 3.1 Pro 數字看真實力","Gemini 3.1 Pro 以 77.1% ARC-AGI-2、94.3% GPQA Diamond、1M token 上下文登場，價格仍維持 Gemini 3。這次重點不是噱頭，而是長文檔、程式碼與 agent 工作流的實戰成本。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775153580311-vv9w.png","2026-04-02T18:12:41.777858+00:00",{"id":60,"slug":61,"title":62,"summary":63,"category":26,"image_url":64,"cover_image":64,"language":19,"created_at":65},"57576af6-0bf2-4616-ac89-8435e39a8aa7","glm-5-zai-flagship-coding-agents-zh","GLM-5 登場：Z.AI 的寫程式旗艦","GLM-5 是 Z.AI 的新旗艦模型。744B 總參數、200K context、SWE-bench Verified 77.8、Terminal Bench 2.0 56.2，直接挑戰頂級 coding 模型。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775135063109-o1yh.png","2026-04-02T13:03:42.135022+00:00",{"id":67,"slug":68,"title":69,"summary":70,"category":26,"image_url":71,"cover_image":71,"language":19,"created_at":72},"710ff4cc-d333-4bd8-b50a-e5522d430161","open-source-llm-comparison-2026-zh","2026 開源 LLM 誰領先","Qwen 3.5、GLM-5、DeepSeek R1、Llama 4 讓開源 LLM 進入實戰。這篇整理 2026 年主流模型的 benchmark、上下文長度、授權條款與自架表現。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775131800331-8pqc.png","2026-04-02T12:09:39.445524+00:00",{"id":74,"slug":75,"title":76,"summary":77,"category":26,"image_url":78,"cover_image":78,"language":19,"created_at":79},"2478aa0c-2f56-447c-8fff-419d35183405","claude-mythos-vs-opus-46-capability-jump-zh","Claude Mythos 跟 Opus 4.6 差多少","Anthropic 傳出 Mythos 測試分數高於 Claude Opus 4.6。若 SWE-bench、推理與資安數字屬實，開發者會感受到明顯差距。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775125819094-xhdz.png","2026-04-02T09:09:38.488815+00:00"]