[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-ai-benchmark":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"d3eec64f-9a00-4779-9191-ba3d01cd8a14","AI benchmark","ai-benchmark",3,"AI benchmark 是用來比較模型能力、成本與可靠性的評測方法，從 ARC Prize 這類把分數與算力攤開的排行榜，到語言、推理與互動任務，都影響模型選型、部署成本與研究方向。","AI benchmarks compare model quality, cost, and reliability across tasks, from score-vs-compute leaderboards like ARC Prize to language, reasoning, and interactive evaluations. They shape model selection, deployment budgets, and research priorities.",[12],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"7a6580cb-935a-456c-a22d-45bab79f41c9","arc-prize-leaderboard-cost-performance-en","ARC Prize leaderboard shows cost still matters","ARC Prize’s leaderboard tracks how AI systems trade cost for score, and ARC-AGI-3 pushes agents into interactive tasks.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775143857511-5rjv.png","en","2026-04-02T15:30:39.888984+00:00"]