[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-tensorrt-llm":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"9634621c-ff83-44fd-a73f-8941397f5465","TensorRT-LLM","tensorrt-llm",4,"TensorRT-LLM 是 NVIDIA 針對大型語言模型推論的最佳化框架，重點在降低延遲、提升吞吐量與硬體利用率。它常與 MLPerf、Blackwell\u002FGB300、Dynamo 等軟體堆疊一起出現，反映 LLM 伺服器效能不只看晶片，也看編譯與排程。","TensorRT-LLM is NVIDIA’s optimization stack for LLM inference, focused on lower latency, higher throughput, and better GPU utilization. It often shows up alongside MLPerf, Blackwell\u002FGB300, and Dynamo, highlighting how server performance depends on compilation, scheduling, and runtime software as much as hardware.",[12,21],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"0b5979a7-dbb3-438f-b8a1-68de0f838df0","nvidia-mlperf-software-inference-benchmarks-zh","Nvidia MLPerf 成績證明軟體還很重要","Nvidia 在 MLPerf v6.0 交出最高 2.77x 推論提升。GB300 NVL72 的成績顯示，Dynamo、TensorRT-LLM 這類軟體優化，已經和 GPU 硬體同樣重要。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775185790112-2r4u.png","zh","2026-04-03T03:09:34.300263+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"d9fda242-d695-4ea4-a0e0-c6c64ad72965","nvidia-sets-new-mlperf-inference-records-zh","NVIDIA 再刷 MLPerf 推論紀錄","NVIDIA 在 MLPerf Inference v6.0 再交出新成績，GB300 NVL72 對 DeepSeek-R1 伺服器推論提升 2.7x，Llama 3.1 405B 也提升 1.5x。","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775122496881-vxz0.png","2026-04-02T08:48:38.43437+00:00"]