[
  {"data": 1, "prerenderedAt": -1},
  ["ShallowReactive", 2],
  {"tag-mlperf": 3},
  {"tag": 4, "articles": 11},
  {"id": 5, "name": 6, "slug": 7, "article_count": 8, "description_zh": 9, "description_en": 10},
  "04e80913-2de0-4f2d-a44f-44feede215a5",
  "MLPerf",
  "mlperf",
  3,
  "MLPerf 是用來衡量機器學習訓練與推論效能的公開基準，常被拿來比較 GPU、伺服器與軟體堆疊的實際差異。這個標籤聚焦最新成績、模型類型與最佳化手法，尤其是推論延遲、吞吐量與系統調校。",
  "MLPerf is the public benchmark used to compare machine learning training and inference across GPUs, servers, and software stacks. This tag tracks new results, model classes, and optimization methods, especially latency, throughput, and system tuning.",
  [12],
  {"id": 13, "slug": 14, "title": 15, "summary": 16, "category": 17, "image_url": 18, "cover_image": 18, "language": 19, "created_at": 20},
  "a15782d7-4678-4415-9a0b-4c642e46b022",
  "nvidia-mlperf-software-inference-benchmarks-en",
  "Nvidia’s MLPerf Gains Show Software Still Matters",
  "Nvidia posted up to 2.77x MLPerf gains on GB300 NVL72, with software tricks like Dynamo and TensorRT-LLM doing heavy lifting.",
  "research",
  "https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1775185791842-obyu.png",
  "en",
  "2026-04-03T03:09:35.154603+00:00"
]