[
  { "data": 1, "prerenderedAt": -1 },
  ["ShallowReactive", 2],
  { "tag-blackwell-ultra": 3 },
  { "tag": 4, "articles": 11 },
  {
    "id": 5,
    "name": 6,
    "slug": 7,
    "article_count": 8,
    "description_zh": 9,
    "description_en": 10
  },
  "5cdd393c-a202-43b2-8778-178964510dce",
  "Blackwell Ultra",
  "blackwell-ultra",
  4,
  "Blackwell Ultra 是 NVIDIA Blackwell 架構的高階推論平台，主打 B300、GB300 NVL72 與更大 HBM3e 容量、頻寬和機架級擴充能力。它影響大型模型推論、KV cache 配置、雲端成本與資料中心部署選型。",
  "Blackwell Ultra is NVIDIA’s high-end inference platform built on the Blackwell architecture, centered on B300 and GB300 NVL72 systems with larger HBM3e capacity, higher bandwidth, and rack-scale scaling. It matters for LLM inference, KV cache sizing, cloud cost, and datacenter deployment choices.",
  [12, 21],
  {
    "id": 13,
    "slug": 14,
    "title": 15,
    "summary": 16,
    "category": 17,
    "image_url": 18,
    "cover_image": 18,
    "language": 19,
    "created_at": 20
  },
  "c701c93e-a74b-49a7-ac72-40ed577a6e92",
  "nvidia-b300-vs-h200-deepseek-perf-zh",
  "NVIDIA B300 對 H200：DeepSeek 實…",
  "B300 有 288GB HBM3e 和 8TB/s 頻寬。這篇直接比 H200，拆解 DeepSeek 推論、KV cache、雲端成本與部署取捨。",
  "industry",
  "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1775161680437-1ibz.png",
  "zh",
  "2026-04-02T20:27:38.70665+00:00",
  {
    "id": 22,
    "slug": 23,
    "title": 24,
    "summary": 25,
    "category": 17,
    "image_url": 26,
    "cover_image": 26,
    "language": 19,
    "created_at": 27
  },
  "d9fda242-d695-4ea4-a0e0-c6c64ad72965",
  "nvidia-sets-new-mlperf-inference-records-zh",
  "NVIDIA 再刷 MLPerf 推論紀錄",
  "NVIDIA 在 MLPerf Inference v6.0 再交出新成績，GB300 NVL72 對 DeepSeek-R1 伺服器推論提升 2.7x，Llama 3.1 405B 也提升 1.5x。",
  "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1775122496881-vxz0.png",
  "2026-04-02T08:48:38.43437+00:00"
]