[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-llm-evaluation":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"01e6c9b3-37d1-4d59-962e-34209b71a5cb","LLM evaluation","llm-evaluation",3,"LLM 評估關注模型是否真的理解與推理，而不只是答對單題。常見面向包括長鏈推理、ASR 轉寫品質判定、與人類標註一致性，以及在多步驟任務中維持穩定表現的能力。","LLM evaluation examines whether models reason, judge, and stay consistent beyond producing a plausible answer. It spans long-horizon benchmarks like LongCoT, ASR quality assessment, and agreement with human labels on tasks where accuracy alone misses real failure modes.",[12,21,29,36],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"7ac3d870-d844-4d95-a287-81b22dfa9eca","deeptest-2026-llm-car-manual-assistant-en","DeepTest 2026 benchmarks an LLM car manual assistant","DeepTest’s first LLM testing competition compared four tools on car manual retrieval, showing how to benchmark automotive assistants.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778048468789-e7sx.png","en","2026-05-06T06:20:33.071908+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"b2450abd-b108-4e4d-b1d7-1b02c17db850","why-databricks-rag-is-platform-play-not-feature-en","Why Databricks RAG Is a Platform Play, Not a Feature","Databricks treats RAG as an end-to-end platform problem, and that is the right way to build it.","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777959651374-avrm.png","2026-05-05T05:40:30.329823+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":17,"image_url":34,"cover_image":34,"language":19,"created_at":35},"32cc2350-8bcf-4970-9bcd-900a05441f2f","llms-for-asr-evaluation-beyond-wer-en","LLMs for ASR Evaluation: Beyond WER","This paper tests decoder-based LLMs as ASR evaluators and finds they beat WER on human agreement, with 92–94% on one task.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777010993439-cjdi.png","2026-04-24T06:09:38.008767+00:00",{"id":37,"slug":38,"title":39,"summary":40,"category":17,"image_url":41,"cover_image":41,"language":19,"created_at":42},"9f62add5-cae5-47eb-abd5-2e56d0d5698c","longcot-long-horizon-chain-of-thought-benchmark-en","LongCoT Benchmark: 2,500-Probl. Long-Horizon Reasoning","LongCoT is a 2,500-problem benchmark for measuring whether frontier models can sustain long, interdependent reasoning chains.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776319782523-s0wz.png","2026-04-16T06:09:23.265233+00:00"]