[
  {"data":1,"prerenderedAt":-1},
  ["ShallowReactive",2],
  {"tag-deepseek-r1":3},
  {"tag":4,"articles":11},
  {"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},
  "38984539-f894-45d4-b254-31bdb9ad5d86",
  "DeepSeek-R1",
  "deepseek-r1",
  3,
  "DeepSeek-R1 是以推理能力為核心的開源大型語言模型，常被拿來和 Qwen、GLM、Llama 等模型比較。這個主題聚焦 benchmark、授權、自架部署與伺服器推論效能，對評估開源模型是否能進入實際生產很重要。",
  "DeepSeek-R1 is an open large language model built around reasoning, often compared with Qwen, GLM, and Llama in benchmark-driven evaluations. This tag covers licensing, self-hosting, and server inference performance, all of which shape whether open models are practical for production use.",
  [12,21],
  {"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},
  "710ff4cc-d333-4bd8-b50a-e5522d430161",
  "open-source-llm-comparison-2026-zh",
  "2026 開源 LLM 誰領先",
  "Qwen 3.5、GLM-5、DeepSeek R1、Llama 4 讓開源 LLM 進入實戰。這篇整理 2026 年主流模型的 benchmark、上下文長度、授權條款與自架表現。",
  "model-release",
  "https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775131800331-8pqc.png",
  "zh",
  "2026-04-02T12:09:39.445524+00:00",
  {"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},
  "d9fda242-d695-4ea4-a0e0-c6c64ad72965",
  "nvidia-sets-new-mlperf-inference-records-zh",
  "NVIDIA 再刷 MLPerf 推論紀錄",
  "NVIDIA 在 MLPerf Inference v6.0 再交出新成績，GB300 NVL72 對 DeepSeek-R1 伺服器推論提升 2.7x，Llama 3.1 405B 也提升 1.5x。",
  "industry",
  "https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775122496881-vxz0.png",
  "2026-04-02T08:48:38.43437+00:00"
]