[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-cuda":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"603dae7f-ab7d-4827-a3cb-4abe85e1f058","CUDA","cuda",15,"CUDA 是 NVIDIA GPU 的平行運算平台與程式模型，核心在 SM、warp、shared memory、HBM 延遲隱藏與資料搬移優化。它直接影響 AI 訓練、推論、科學模擬與高效能計算的效能上限。","CUDA is NVIDIA’s parallel computing platform and programming model, centered on SMs, warps, shared memory, and latency hiding with HBM. It shapes performance in AI training, inference, scientific simulation, and other GPU-heavy workloads.",[12,21,29,36,43,50,57],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"9f973836-4d14-4435-b3b7-fb180e57b5fc","cuda-architecture-sms-cores-memory-en","CUDA Architecture Explained: SMs, Cores, Memory","CUDA GPUs split work across SMs, thousands of cores, and layered memory. Here’s why that design beats CPUs on parallel tasks.","tools","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775197314080-mnf9.png","en","2026-04-03T06:21:38.505008+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"a15782d7-4678-4415-9a0b-4c642e46b022","nvidia-mlperf-software-inference-benchmarks-en","Nvidia’s MLPerf Gains Show Software Still Matters","Nvidia posted up to 2.77x MLPerf gains on GB300 NVL72, with software tricks like Dynamo and TensorRT-LLM doing heavy lifting.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775185791842-obyu.png","2026-04-03T03:09:35.154603+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":17,"image_url":34,"cover_image":34,"language":19,"created_at":35},"a7f6594f-6643-4e71-b5c2-f0a5f44c0549","nvidia-forum-su7-cuda-lattice-engine-en","NVIDIA Forum Debates a SU(7) CUDA Lattice Engine","A CUDA forum thread on Anchor4 SU(7) mixes lattice theory, shared memory tuning, and warp-level tricks for GPU synchronization.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775178407338-4vh2.png","2026-04-03T01:06:28.835722+00:00",{"id":37,"slug":38,"title":39,"summary":40,"category":26,"image_url":41,"cover_image":41,"language":19,"created_at":42},"68bfa04a-94c4-4c8a-921c-61e93ab207aa","cuda-cp-async-ampere-hbm-latency-en","cp.async on Ampere: Hide HBM Latency on A100","Ampere’s cp.async moves data without stalling warps, cutting HBM waits from 450–600 cycles into overlapped compute on A100.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775167612143-4qvu.png","2026-04-02T22:06:36.521272+00:00",{"id":44,"slug":45,"title":46,"summary":47,"category":17,"image_url":48,"cover_image":48,"language":19,"created_at":49},"e05a606a-88b9-45cd-8c3e-7ad0b30b7b5d","cuda-in-2025-why-gpus-still-win-en","CUDA in 2025: Why GPUs Still Win","CUDA powers NVIDIA GPUs across AI, science, and simulation, with up to 10x weather-model speedups and deep learning gains in the thousands.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775149432831-x799.png","2026-04-02T17:03:38.270396+00:00",{"id":51,"slug":52,"title":53,"summary":54,"category":17,"image_url":55,"cover_image":55,"language":19,"created_at":56},"5dda57f2-dfb7-4970-98ec-2e6ad298dd8c","cuda-asinf-accuracy-no-performance-hit-en","CUDA asinf() Gets More Accurate Without Slowing Down","A developer tuned asinf() for CUDA 12.8 and kept the 26-instruction baseline while improving accuracy, a rare win for GPU math.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775142952141-rcb7.png","2026-04-02T15:15:33.15066+00:00",{"id":58,"slug":59,"title":60,"summary":61,"category":62,"image_url":63,"cover_image":63,"language":19,"created_at":64},"e454a642-f03c-4794-b185-5f651aebbaca","nvidia-gtc-2026-key-highlights-innovations-en","NVIDIA GTC 2026: Key Highlights and Innovations","Explore the latest AI advancements from NVIDIA's GTC 2026, including new platforms, partnerships, and innovative AI applications.","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1774496823463-j3oi.png","2026-03-25T16:22:47.882615+00:00"]