[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-cuda-cp-async-ampere-hbm-latency-zh":3,"tags-cuda-cp-async-ampere-hbm-latency-zh":33,"related-lang-cuda-cp-async-ampere-hbm-latency-zh":50,"related-posts-cuda-cp-async-ampere-hbm-latency-zh":54,"series-research-d458f7db-1e28-4cf1-9bd8-ad9c95dee997":91},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":23},"d458f7db-1e28-4cf1-9bd8-ad9c95dee997","Ampere 的 cp.async 怎麼藏 HBM 延遲","\u003Cp>在 \u003Ca href=\"https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdata-center\u002Fa100\u002F\" target=\"_blank\" rel=\"noopener\">NVIDIA A100\u003C\u002Fa> 上，HBM2e 一次載入大約要 450 到 600 cycles。這個數字很殘酷。你如果什麼都不做，一個 warp 可能就卡在那邊發呆。\u003C\u002Fp>\u003Cp>Ampere 的 \u003Ccode>cp.async\u003C\u002Fcode> 很有意思。它把資料直接搬進 shared memory。它不先佔住 register，也不會把 long scoreboard 拉滿。講白了，就是讓你先做別的事，再回頭收資料。\u003C\u002Fp>\u003Cp>這篇文章要談的，不只是指令本身。重點是思維切換。你不再想「先 load，再 compute」。你要想的是「資料先飛，算力先跑」。\u003C\u002Fp>\u003Ch2>A100 的記憶體階層，才是主角\u003C\u002Fh2>\u003Cp>A100 的效能，不是只看 CUDA core 數量。真正決定速度的，常常是記憶體階層。Registers 很快，但每個 thread 只有 255 個上限。Shared memory 和 L1 共享 192 KB。L2 cache 有 40 MB。HBM2e 理論頻寬可到 2 TB\u002Fs，但實戰通常沒那麼漂亮。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775167621432-n9fo.png\" alt=\"Ampere 的 cp.async 怎麼藏 HBM 延遲\" class=\"rounded-xl w-full\" 
loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這些數字不是規格表裝飾品。它們直接決定 kernel 會不會翻車。Register spill 會掉到 local memory。那就是 global memory 等級的痛。Shared memory 如果打到 bank conflict，warp 也會被迫排隊。L2 miss 太多，延遲就會飆上去。\u003C\u002Fp>\u003Cp>所以你看到 source code 很順，不代表跑起來就順。GPU 最愛在這種地方打臉人。尤其是資料路徑一長，問題就會被放大。\u003C\u002Fp>\u003Cul>\u003Cli>Register file：每個 SM 約 256 KB\u003C\u002Fli>\u003Cli>Shared memory bank：32 個 bank，每個 4 bytes 寬\u003C\u002Fli>\u003Cli>L2 cache：A100 上是 40 MB\u003C\u002Fli>\u003Cli>HBM2e 理論頻寬：2 TB\u002Fs\u003C\u002Fli>\u003Cli>HBM2e 延遲：大約 450 到 600 cycles\u003C\u002Fli>\u003C\u002Ful>\u003Cp>所以 \u003Ccode>cp.async\u003C\u002Fcode> 的目的很明確。它不是消滅延遲。它是把延遲藏起來。這兩件事差很多。\u003C\u002Fp>\u003Ch2>\u003Ca href=\"https:\u002F\u002Fdocs.nvidia.com\u002Fcuda\u002Fparallel-thread-execution\u002Findex.html\" target=\"_blank\" rel=\"noopener\">cp.async\u003C\u002Fa> 到底改了什麼\u003C\u002Fh2>\u003Cp>傳統 global load 會先進 register。這代表 warp 要等資料回來，才能繼續用那些目的暫存器。硬體會把這段等待算進 long scoreboard。你就只能乾等。\u003C\u002Fp>\u003Cp>\u003Ccode>cp.async\u003C\u002Fcode> 不一樣。它把資料直接從 global memory 搬到 shared memory。中間不經過目的 register。這樣一來，warp 發出指令後，可以立刻去做其他運算。\u003C\u002Fp>\u003Cp>這個差異看起來很小，實際上很兇。因為它把 load 和 compute 拆開了。你可以在算上一批 tile 的時候，讓下一批資料自己飛進來。這就是 overlap。\u003C\u002Fp>\u003Cblockquote>“Latency hiding is the name of the game.” — Mark Harris\u003C\u002Fblockquote>\u003Cp>這句話很老派，但一直有效。\u003Ca href=\"https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fauthor\u002Fmarkharris\u002F\" target=\"_blank\" rel=\"noopener\">Mark Harris\u003C\u002Fa> 一直在講同一件事。GPU 程式設計的核心，不是讓記憶體變快。是讓算力不要閒著。\u003C\u002Fp>\u003Cp>我覺得 \u003Ccode>cp.async\u003C\u002Fcode> 厲害的地方，就在這裡。它不是魔法。它是把原本硬碰硬的等待，改成排程問題。\u003C\u002Fp>\u003Ch2>commit、wait、double buffer 才是實戰\u003C\u002Fh2>\u003Cp>\u003Ccode>cp.async.commit_group\u003C\u002Fcode> 和 \u003Ccode>cp.async.wait_group\u003C\u002Fcode> 這組搭配，才是實戰重點。前者只是做分組記帳。後者則是等到最多只剩指定的 N 組還在飛。你如果設成 \u003Ccode>wait_group 
1\u003C\u002Fcode>，就代表允許一組還在路上。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775167618039-xf30.png\" alt=\"Ampere 的 cp.async 怎麼藏 HBM 延遲\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這樣就能做 double buffer。A buffer 在算，B buffer 在載。下一輪再交換。Kernel 不需要把 memory 變快。它只要讓 machine 一直忙。\u003C\u002Fp>\u003Cp>這種做法很像工廠產線。不是每個工人都等同一個零件。是把流程拆開，讓每個站都不空轉。GPU 很吃這套。\u003C\u002Fp>\u003Cul>\u003Cli>傳統路徑：load 到 register，再等資料回來\u003C\u002Fli>\u003Cli>\u003Ccode>cp.async\u003C\u002Fcode> 路徑：直接進 shared memory\u003C\u002Fli>\u003Cli>\u003Ccode>commit_group\u003C\u002Fcode>：把一批 async copy 分組\u003C\u002Fli>\u003Cli>\u003Ccode>wait_group 1\u003C\u002Fcode>：保留一組在飛，其他先算\u003C\u002Fli>\u003C\u002Ful>\u003Cp>但這裡有代價。Shared memory 佔用會增加。stage 數一多，occupancy 可能掉。這不是免費午餐。你如果 kernel 算術密度不夠，可能反而賠。\u003C\u002Fp>\u003Cp>所以像 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fcutlass\" target=\"_blank\" rel=\"noopener\">CUTLASS\u003C\u002Fa> 這種 library，才會把 pipeline depth 當成可調參數。不是每個 kernel 都適合同一種 stage 數。這種事真的只能實測。\u003C\u002Fp>\u003Ch2>Profiler 看到的差別很直接\u003C\u002Fh2>\u003Cp>你如果想知道 kernel 有沒有吃到 \u003Ccode>cp.async\u003C\u002Fcode> 的好處，別先看感覺。直接看 profiler。傳統 load-heavy kernel，常常是 long scoreboard stall 佔大頭。你會看到 warp 很多時間都在等資料。\u003C\u002Fp>\u003Cp>改成好的 pipelined 版本後，情況會變。long scoreboard 會明顯下降。FMA pipe 會更忙。這才是你要的畫面。不是「理論頻寬很高」，而是「實際有在算」。\u003C\u002Fp>\u003Cp>這裡有個常見誤區。很多人只盯著 bandwidth。其實 kernel 快不快，不只看搬多少 GB\u002Fs。更重要的是，搬資料的時候，有沒有順便把 compute 填滿。\u003C\u002Fp>\u003Cul>\u003Cli>改前：\u003Ccode>smsp__warp_issue_stalled_long_scoreboard\u003C\u002Fcode> 常見 40% 到 70%\u003C\u002Fli>\u003Cli>改後：long scoreboard 可能降到 5% 以下\u003C\u002Fli>\u003Cli>調好後：\u003Ccode>smsp__pipe_fma_cycles_active\u003C\u002Fcode> 可到 70% 到 90%\u003C\u002Fli>\u003Cli>A100 L2 頻寬：約 4 TB\u002Fs aggregate\u003C\u002Fli>\u003C\u002Ful>\u003Cp>如果你想自己看，\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fcuda-samples\" target=\"_blank\" rel=\"noopener\">NVIDIA CUDA Samples\u003C\u002Fa> 是很好的起點。先看原版 kernel，再做一版 tiled + async copy。差異通常很明顯。\u003C\u002Fp>\u003Cp>工具面也別省。\u003Ca href=\"https:\u002F\u002Fdocs.nvidia.com\u002Fcuda\u002Fprofiler-users-guide\u002Findex.html\" target=\"_blank\" rel=\"noopener\">NVIDIA Nsight Compute\u003C\u002Fa> 的 stall reason 和 issue activity，真的值得看。沒有這些數據，很多優化都只是猜。\u003C\u002Fp>\u003Ch2>跟 Hopper 比，Ampere 還差哪裡\u003C\u002Fh2>\u003Cp>如果你把視野拉大，Ampere 只是中繼站。Hopper 又往前推了一步。它有 \u003Ca href=\"https:\u002F\u002Fwww.nvidia.com\u002Fen-us\u002Fdata-center\u002Fhopper-gpu-architecture\u002F\" target=\"_blank\" rel=\"noopener\">Tensor Memory Accelerator\u003C\u002Fa>，也就是 TMA。這東西把資料搬運再往硬體化推進。\u003C\u002Fp>\u003Cp>這代表什麼？代表資料移動越來越不像 blocking load。它更像一個可排程的搬運任務。程式設計師還是要想資料布局，但不用把每次搬運都當成同步事件。\u003C\u002Fp>\u003Cp>我自己的看法很直接。你如果還在寫那種「load、wait、compute、repeat」的 kernel，通常還有空間可以挖。尤其在 A100 這種卡上，\u003Ccode>cp.async\u003C\u002Fcode> 很值得試。\u003C\u002Fp>\u003Cul>\u003Cli>Ampere：靠 \u003Ccode>cp.async\u003C\u002Fcode> 做 overlap\u003C\u002Fli>\u003Cli>Hopper：再往前，加入 TMA\u003C\u002Fli>\u003Cli>競品面：AMD ROCm 也在推資料搬運優化，但 API 路線不同\u003C\u002Fli>\u003Cli>實務面：GEMM、convolution、stencil 類 kernel 最常吃到好處\u003C\u002Fli>\u003C\u002Ful>\u003Cp>但別亂上。不是每個 kernel 都適合 async copy。資料量太小、算術密度太低，或 occupancy 已經很差的時候，硬上只會更亂。先量，再改。\u003C\u002Fp>\u003Ch2>這件事其實是 CUDA 老問題的新解法\u003C\u002Fh2>\u003Cp>CUDA 很多年來都在講 overlap。只是早期工具沒那麼順。你要自己拆 load、自己控同步、自己顧 pipeline。現在 \u003Ccode>cp.async\u003C\u002Fcode> 只是把這套做得更自然。\u003C\u002Fp>\u003Cp>這也解釋了為什麼很多高效能 library 都很愛它。像 GEMM、attention、卷積這些工作，資料搬運本來就很重。只要能把搬運藏到計算後面，整體效率就會好看很多。\u003C\u002Fp>\u003Cp>台灣做 AI 軟體的人，很多都只盯模型。其實底層 kernel 才是血肉。模型跑得快，不只是 Transformer 參數多。還要看資料怎麼走。這點很現實，也很煩，但就是事實。\u003C\u002Fp>\u003Ch2>下一步怎麼做\u003C\u002Fh2>\u003Cp>如果你手上有 A100 或其他 Ampere GPU，我會建議你先挑一個熱點 kernel。看它是不是被 long scoreboard 卡住。再試一版 
double-buffer 的 \u003Ccode>cp.async\u003C\u002Fcode> 寫法。不要一次改太多。\u003C\u002Fp>\u003Cp>如果 stall 比例下降，FMA 利用率上升，那就代表方向對了。若沒有，問題可能在資料布局、shared memory bank conflict，或 occupancy 本身。這時候別硬拗，回頭看 profiler。\u003C\u002Fp>\u003Cp>說到底，\u003Ccode>cp.async\u003C\u002Fcode> 的價值很務實。它不是讓 HBM 變不慢。它是讓你少等一點。對做 CUDA 的人來說，少等 100 個 cycles，常常就夠有感了。\u003C\u002Fp>\u003Cp>你如果現在就在調 kernel，我的建議很簡單：先量 long scoreboard，再試 async pipeline。別先信直覺。GPU 很少照直覺走。\u003C\u002Fp>","A100 上一次 HBM2e 載入約要 450 到 600 cycles。Ampere 的 cp.async 讓資料直進 shared memory，搭配 pipeline 把等待時間藏進計算裡。","softwarefrontier.substack.com","https:\u002F\u002Fsoftwarefrontier.substack.com\u002Fp\u002Fmastering-cuda-and-high-performance-ea1",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775167621432-n9fo.png",[13,14,15,16,17,18,19,20],"CUDA","cp.async","Ampere","A100","HBM2e","shared memory","Nsight Compute","CUTLASS","zh",0,false,"2026-04-02T22:06:36.022671+00:00","2026-04-02T22:06:35.906+00:00","done","bffdfadf-7883-411e-a9aa-ddfb6108265d","cuda-cp-async-ampere-hbm-latency-zh","research","68bfa04a-94c4-4c8a-921c-61e93ab207aa","published","2026-04-07T09:01:02.246+00:00",[34,36,38,40,42,44,46,48],{"name":14,"slug":35},"cpasync",{"name":20,"slug":37},"cutlass",{"name":13,"slug":39},"cuda",{"name":19,"slug":41},"nsight-compute",{"name":18,"slug":43},"shared-memory",{"name":17,"slug":45},"hbm2e",{"name":15,"slug":47},"ampere",{"name":16,"slug":49},"a100",{"id":30,"slug":51,"title":52,"language":53},"cuda-cp-async-ampere-hbm-latency-en","cp.async on Ampere: Hide HBM Latency on A100","en",[55,61,67,73,79,85],{"id":56,"slug":57,"title":58,"cover_image":59,"image_url":59,"created_at":60,"category":29},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 
變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":62,"slug":63,"title":64,"cover_image":65,"image_url":65,"created_at":66,"category":29},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":68,"slug":69,"title":70,"cover_image":71,"image_url":71,"created_at":72,"category":29},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":74,"slug":75,"title":76,"cover_image":77,"image_url":77,"created_at":78,"category":29},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":80,"slug":81,"title":82,"cover_image":83,"image_url":83,"created_at":84,"category":29},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":86,"slug":87,"title":88,"cover_image":89,"image_url":89,"created_at":90,"category":29},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 
安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[92,97,102,107,112,117,122,127,132,137],{"id":93,"slug":94,"title":95,"created_at":96},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":123,"slug":124,"title":125,"created_at":126},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 
強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":128,"slug":129,"title":130,"created_at":131},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":133,"slug":134,"title":135,"created_at":136},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":138,"slug":139,"title":140,"created_at":141},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]