[
  { "data": 1, "prerenderedAt": -1 },
  ["ShallowReactive", 2],
  { "tag-llm-evaluation": 3 },
  { "tag": 4, "articles": 11 },
  { "id": 5, "name": 6, "slug": 7, "article_count": 8, "description_zh": 9, "description_en": 10 },
  "01e6c9b3-37d1-4d59-962e-34209b71a5cb",
  "LLM evaluation",
  "llm-evaluation",
  3,
  "LLM 評估關注模型是否真的理解與推理，而不只是答對單題。常見面向包括長鏈推理、ASR 轉寫品質判定、與人類標註一致性，以及在多步驟任務中維持穩定表現的能力。",
  "LLM evaluation examines whether models reason, judge, and stay consistent beyond producing a plausible answer. It spans long-horizon benchmarks like LongCoT, ASR quality assessment, and agreement with human labels on tasks where accuracy alone misses real failure modes.",
  [12],
  { "id": 13, "slug": 14, "title": 15, "summary": 16, "category": 17, "image_url": 18, "cover_image": 18, "language": 19, "created_at": 20 },
  "2468c20a-c3cf-4004-8981-44934691673a",
  "longcot-long-horizon-chain-of-thought-benchmark-zh",
  "LongCoT：測長鏈推理，不只看答案",
  "LongCoT 用 2,500 題測試模型能否在長鏈、互相依賴的推理步驟中保持一致。GPT 5.2 與 Gemini 3 Pro 仍低於 10%。",
  "research",
  "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776319784084-uldi.png",
  "zh",
  "2026-04-16T06:09:22.856744+00:00"
]