[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-ai-benchmarks":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"bc621db1-f621-43ab-9821-f832ef6ceff5","AI benchmarks","ai-benchmarks",3,"AI 基準測試用來比較模型在推理、知識問答、程式能力與長上下文等面向的表現，像 ARC-AGI-2、GPQA、MMLU 這類分數常被拿來判斷新模型是否真的進步，也能看出各家在成本與能力之間的取捨。","AI benchmarks measure how models perform on reasoning, knowledge QA, coding, and long-context tasks. Scores from tests like ARC-AGI-2, GPQA, and MMLU help compare new releases, track real progress, and expose trade-offs between capability, cost, and reliability.",[12,21,29],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"ca152f29-641a-4c5b-8ca6-47a9a95b5d77","stanford-2026-ai-index-charts-explained-en","Stanford’s 2026 AI Index, explained with charts","Stanford’s 2026 AI Index shows faster adoption, rising costs, and thin US-China gaps. The charts tell a messier story than the hype.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776427445810-u5bp.png","en","2026-04-17T12:03:47.703137+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"04e78fe1-7f49-40db-bfb2-7bb4b3579276","gemini-3-1-pro-googles-top-model-in-numbers-en","Gemini 3.1 Pro: Google’s new top model in numbers","Gemini 3.1 Pro posts 77.1% on ARC-AGI-2, 94.3% on GPQA Diamond, and a 1M-token context window, while keeping Gemini 3 pricing.","model-release","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775153582956-qese.png","2026-04-02T18:12:42.161483+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":26,"image_url":34,"cover_image":34,"language":19,"created_at":35},"61ed1d6b-505f-4cf5-b132-2d57964ca4c2","gpt-5-4-vs-claude-opus-4-6-ai-benchmark-en","GPT-5.4 vs Claude Opus 4.6: 75% Win Rate","We tested GPT-5.4, Claude Opus 4.6, DeepSeek V4, and Gemini 3.1 across 12 benchmarks. One model won 9 of them.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775127830823-xco3.png","2026-04-02T09:12:38.725884+00:00"]