[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-benchmark":3},{"tag":4,"articles":10},{"id":5,"name":6,"slug":6,"article_count":7,"description_zh":8,"description_en":9},"736c4d52-f7e2-4456-a45f-50aae8402b4e","benchmark",6,"Benchmark 不只是比誰分數高，而是用固定任務檢查模型、代理與編譯器在真實條件下的穩定性。從長鏈推理、資料視覺化工作流到程式碼安全與效能，基準測試也在考驗方法是否可信。","Benchmarking is how teams check whether models, agents, and compilers hold up under fixed tasks and real constraints. It covers long-horizon reasoning, data-viz workflows, code safety, and performance, while also exposing how much a score can be distorted by the test itself.",[11,20,27,34,41,48,56],{"id":12,"slug":13,"title":14,"summary":15,"category":16,"image_url":17,"cover_image":17,"language":18,"created_at":19},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","AI 資安基準的進展已顯示自主攻擊能力正在追上防守方的規劃速度，這不是實驗室新聞，而是防線時間被壓縮的警訊。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","zh","2026-05-15T01:10:29.379041+00:00",{"id":21,"slug":22,"title":23,"summary":24,"category":16,"image_url":25,"cover_image":25,"language":18,"created_at":26},"3195f998-ce04-402b-9e87-e4b7579de296","why-gpt-5-5-should-be-default-coding-llm-2026-zh","為什麼 GPT-5.5 應該成為 2026 年的預設寫碼 LLM","GPT-5.5 應該成為 2026 年的預設寫碼 LLM，因為它在公開基準的綜合表現領先，最適合作為團隊的能力上限。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778577040199-5z21.png","2026-05-12T09:10:25.144952+00:00",{"id":28,"slug":29,"title":30,"summary":31,"category":16,"image_url":32,"cover_image":32,"language":18,"created_at":33},"519b0e2e-4287-42bc-b749-1fd42664f57b","deeptest-2026-llm-car-manual-assistant-zh","DeepTest 2026 首辦車主手冊 LLM 評測","DeepTest 2026 首度把 LLM 車主手冊問答拉進競賽式評測，讓四個工具在同一任務下比對檢索能力。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778048449427-tnji.png","2026-05-06T06:20:31.717618+00:00",{"id":35,"slug":36,"title":37,"summary":38,"category":16,"image_url":39,"cover_image":39,"language":18,"created_at":40},"d898c232-8ae5-4bae-9476-738f2e5786db","dv-world-tests-chart-agents-real-workflows-zh","DV-World 測試圖表代理真實工作流","DV-World 用試算表、視覺演化與意圖對齊三類任務，檢驗資料視覺化代理在更接近企業工作流的表現。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777442820403-xlbs.png","2026-04-29T06:06:44.930537+00:00",{"id":42,"slug":43,"title":44,"summary":45,"category":16,"image_url":46,"cover_image":46,"language":18,"created_at":47},"2468c20a-c3cf-4004-8981-44934691673a","longcot-long-horizon-chain-of-thought-benchmark-zh","LongCoT：測長鏈推理，不只看答案","LongCoT 用 2,500 題測試模型能否在長鏈、互相依賴的推理步驟中保持一致。GPT 5.2 與 Gemini 3 Pro 仍低於 10%。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776319784084-uldi.png","2026-04-16T06:09:22.856744+00:00",{"id":49,"slug":50,"title":51,"summary":52,"category":53,"image_url":54,"cover_image":54,"language":18,"created_at":55},"920762f8-7d82-488d-8e94-7ee1423c98aa","claudes-c-compiler-benchmarks-analysis-zh","Claude 的 C 編譯器把基準測試搞砸了","Claude 寫的 C compiler 能編 Linux kernel，卻在 SPEC CPU2017 把效能打到只剩 GCC 的 23.6% 到 27.1%，還有一組直接當掉。","tools","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775294153812-5l9f.png","2026-04-04T09:15:35.14438+00:00",{"id":57,"slug":58,"title":59,"summary":60,"category":61,"image_url":62,"cover_image":62,"language":18,"created_at":63},"e660d801-2421-4529-8fa9-86b82b066990","metas-llama-4-benchmark-scandal-gets-worse-zh","Meta Llama 4 分數風波又擴大","Meta 的 Llama 4 原本要延續開放模型聲勢，結果卻陷入評測分數爭議。最新報導指出，Meta 在發布前可能用不同模型跑不同 benchmark，讓分數看起來更好，信任問題也跟著擴大。","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1774516531283-08x2.png","2026-03-26T07:34:21.156421+00:00"]