[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-rvcc-llvm-incubator-riscv-optimizations-zh":3,"article-related-rvcc-llvm-incubator-riscv-optimizations-zh":33,"series-industry-959105f1-6f60-4334-aa2c-875c0da1b095":90},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":23},"959105f1-6f60-4334-aa2c-875c0da1b095","RVCC 想加速 RISC-V 調校，LLVM 先打槍","\u003Cp>RISC-V 的編譯器調校，現在不是玩票。\u003Ca href=\"https:\u002F\u002Fllvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM\u003C\u002Fa> 社群正在討論一個叫 \u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">RVCC\u003C\u002Fa> 的孵化器提案。目標很直接：把 RISC-V 的最佳化 patch 集中處理，讓測試和迭代跑更快。\u003C\u002Fp>\u003Cp>這件事會吵起來，不意外。因為 \u003Ca href=\"https:\u002F\u002Fclang.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">Clang\u003C\u002Fa> 和 LLVM 的流程本來就很嚴。你想快一點，maintainer 就會問：那品質誰扛？\u003C\u002Fp>\u003Cp>更有意思的是，RISC-V 已經不是小眾圈內話題。它開始進伺服器、嵌入式和開發板。講白了，編譯器好不好，會直接影響產品選型和效能表現。\u003C\u002Fp>\u003Ch2>RVCC 到底想解什麼問題\u003C\u002Fh2>\u003Cp>RVCC 的核心想法很簡單。不要讓每個 RISC-V patch 一開始就擠進 LLVM 主流程。先放到一個共享空間裡，先跑 b\u003Ca href=\"\u002Fnews\u002Faime-2026-leaderboard-qwen-leads-math-tests-zh\">en\u003C\u002Fa>chmark，再把表現好的改動往上送。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179497036-82pb.png\" alt=\"RVCC 想加速 RISC-V 調校，LLVM 先打槍\" class=\"rounded-xl w-full\" loading=\"lazy\" 
\u002F>\u003C\u002Ffigure>\n\u003Cp>這種做法的吸引力很實際。RISC-V 的優化工作，常常卡在 review 速度。不同公司、不同板子、不同微架構，都在試自己的調法。結果就是大家都在重複踩坑。\u003C\u002Fp>\u003Cp>如果有一個共同場域，至少可以少一點各自為政。對做工具鏈的人來說，這比單打獨鬥省事很多。對上游來說，也可能少收一些品質不穩的 patch。\u003C\u002Fp>\u003Cul>\u003Cli>聚焦 RISC-V 的 LLVM 與 Clang 最佳化\u003C\u002Fli>\u003Cli>先用 benchmark 驗證，再決定是否上游\u003C\u002Fli>\u003Cli>降低各家私有 toolchain 分支的分裂風險\u003C\u002Fli>\u003Cli>讓 patch 先在共享空間成熟再送審\u003C\u002Fli>\u003C\u002Ful>\u003Cp>這個概念很像 Linux kernel 的 staging 區。差別在於，這裡處理的是編譯器，不是 driver。編譯器的副作用更難抓，因為你改一行，可能整個 codegen 都變。\u003C\u002Fp>\u003Cp>所以 RVCC 不是單純要「讓大家快點送 patch」。它其實是在問：能不能先把實驗和正式審查拆開，讓 RISC-V 的調校更有效率。\u003C\u002Fp>\u003Ch2>LLVM 為什麼先反對\u003C\u002Fh2>\u003Cp>問題來了。LLVM maintainer \u003Ca href=\"https:\u002F\u002Fllvm.org\u002Fdocs\u002FDeveloperPolicy.html\" target=\"_blank\" rel=\"noopener\">Nikita Popov\u003C\u002Fa> 在 \u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM Discourse\u003C\u002Fa> 上直接回了強烈反對。他的意思很直白：這看起來像一個 LLVM fork，再加上一些不符合 LLVM 標準的 patch。\u003C\u002Fp>\u003Cblockquote>“This proposal gets a strong no from me. We should not have an incubator for what is basically an LLVM fork plus patches that fail to meet LLVM’s usual quality standards.” — Nikita Popov\u003C\u002Fblockquote>
\u003Cp>這句話很重，但也很 LLVM。這個專案一直靠嚴格審查維持一致性。你一旦開一個比較鬆的入口，maintainer 很自然會擔心，最後是不是要幫別人的技術債收尾。\u003C\u002Fp>\u003Cp>還有一個更現實的問題。若 RVCC 變成 RISC-V patch 的預設入口，大家會不會開始把它當成「夠用就好」的第二管道？那樣一來，主線 LLVM 的門檻就會被間接稀釋。\u003C\u002Fp>\u003Cp>我覺得這才是爭點核心。不是要不要加速，而是加速的方法，會不會把 LLVM 的治理模型弄鬆。\u003C\u002Fp>\u003Ch2>這跟其他編譯器流程差在哪\u003C\u002Fh2>\u003Cp>編譯器專案本來就有不同的工作流。只是每種方法，代價都不一樣。LLVM 的主線 review 很嚴，優點是穩，缺點是慢。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179494262-9rfl.png\" alt=\"RVCC 想加速 RISC-V 調校，LLVM 先打槍\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>RISC-V 的情況又更複雜。\u003Ca href=\"https:\u002F\u002Friscv.org\u002F\" target=\"_blank\" rel=\"noopener\">RISC-V International\u003C\u002Fa> 的生態很雜，從新創晶片商到學界原型都有。你在一顆核心上跑得漂亮，不代表別顆也會買單。\u003C\u002Fp>\u003Cp>所以 benchmark 不是裝飾品，是必要條件。尤其是 RISC-V 這種實作差異大的架構，光看單一機器的結果，常常會誤判。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fllvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM\u003C\u002Fa>：集中審查，品質一致，但速度較慢\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">RVCC 提案\u003C\u002Fa>：先孵化再上游，迭代快，但容易分岔\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.kernel.org\u002F\" target=\"_blank\" rel=\"noopener\">Linux kernel staging\u003C\u002Fa>：先成熟再進主線，是可參考的模式\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Friscv.org\u002F\" target=\"_blank\" rel=\"noopener\">RISC-V International\u003C\u002Fa>：硬體差異大，測試條件更難統一\u003C\u002Fli>\u003C\u002Ful>\u003Cp>再看競品或替代路線，事情就更清楚。GCC 也有自己的 RISC-V backend 調校節奏。各家晶片商若覺得上游太慢，通常就會先在私有分支修。問題是，私有分支越多，最後越難對齊。\u003C\u002Fp>\u003Cp>RVCC 若真要成立，至少要回答三個問題：誰決定收 patch、怎麼量化效能、怎麼保證最後會回到主線。少一個，這東西就很像半個實驗室，半個 
fork。\u003C\u002Fp>\u003Ch2>RISC-V 的工具鏈壓力從哪來\u003C\u002Fh2>\u003Cp>RISC-V 這幾年受關注，不是因為口號，而是因為實際部署開始變多。從低功耗裝置到伺服器，大家都在看它能不能省成本、保彈性、保供應鏈。\u003C\u002Fp>\u003Cp>但硬體一旦多樣化，編譯器壓力就上來。你不能只看 ISA 規格。你還要看 pipeline、快取、分支預測、記憶體延遲，甚至是廠商自己加的擴充。\u003C\u002Fp>\u003Cp>這也是為什麼 RISC-V 的效能優化特別吃人力。不是寫一個演算法就結束。你得在不同板子上反覆驗證，還要知道哪些變動是局部有效，哪些是全域有效。\u003C\u002Fp>\u003Cp>產業脈絡也很直接。過去幾年，很多團隊先做原型，再慢慢拉上游。現在情況變了。當晶片真的要出貨，toolchain 的成熟度就不能只靠「差不多能跑」。\u003C\u002Fp>\u003Cp>我會說，RVCC 的出現本身就說明一件事：RISC-V 的競爭，已經從硬體規格表，轉到編譯器和工具鏈的細節戰了。\u003C\u002Fp>\u003Ch2>接下來會怎麼走\u003C\u002Fh2>\u003Cp>目前 RVCC 還只是提案，不是正式項目。接下來要看的，不是誰聲音大，而是誰能拿出可驗證的流程。\u003C\u002Fp>\u003Cp>我猜 LLVM 不太可能接受一個看起來像 soft fork 的設計。比較可能的版本，是一個更窄的合作空間。規則要更硬，benchmark 方法要更透明，回主線的路也要更短。\u003C\u002Fp>\u003Cp>如果這案子真的要活下來，它必須證明自己是在幫 LLVM 篩 patch，不是在幫人繞過 LLVM。這條線很細，但很重要。\u003C\u002Fp>\u003Cp>對台灣開發者來說，這件事也不算遠。只要你碰到 RISC-V 開發板、嵌入式軟體，或自己編譯 toolchain，這場爭論最後都會回到你手上。因為編譯器怎麼長，會直接影響你拿到的效能和除錯成本。\u003C\u002Fp>\u003Cp>如果你想跟進這條線，建議順手看 \u003Ca href=\"https:\u002F\u002Fllvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM\u003C\u002Fa> 的 release 節奏，以及 \u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">Discourse\u003C\u002Fa> 上的 RISC-V 討論。接下來 6 到 12 個月，這類工具鏈治理問題只會更常見，不會更少。\u003C\u002Fp>","RVCC 想在 LLVM 內做 RISC-V 優化孵化器，讓編譯器調校更快進入 Clang 與 LLVM，但 maintainer Nikita Popov 已公開反對，爭點在流程、品質與是否會變成半個 fork。","www.phoronix.com","https:\u002F\u002Fwww.phoronix.com\u002Fnews\u002FLLVM-RVCC-Incubator-Proposed",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179497036-82pb.png",[13,14,15,16,17,18,19,20],"RISC-V","LLVM","RVCC","Clang","編譯器最佳化","工具鏈","Nikita Popov","RISC-V 
International","zh",1,false,"2026-04-03T01:24:25.716607+00:00","2026-04-03T01:24:25.695+00:00","done","5728e8b8-01e5-4329-8187-2610feeb0e9d","rvcc-llvm-incubator-riscv-optimizations-zh","industry","ba4d8580-aa49-4ade-8016-578a12e7794f","published","2026-04-07T07:41:13.211+00:00",{"tags":34,"relatedLang":49,"relatedPosts":53},[35,37,39,41,42,44,46,48],{"name":16,"slug":36},"clang",{"name":19,"slug":38},"nikita-popov",{"name":15,"slug":40},"rvcc",{"name":18,"slug":18},{"name":13,"slug":43},"risc-v",{"name":14,"slug":45},"llvm",{"name":20,"slug":47},"risc-v-international",{"name":17,"slug":17},{"id":30,"slug":50,"title":51,"language":52},"rvcc-llvm-incubator-riscv-optimizations-en","RVCC Wants Faster RISC-V Tuning in LLVM","en",[54,60,66,72,78,84],{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":29},"491c49cd-6b0b-4c4a-8120-402254ec0f4a","how-to-follow-gemini-and-apple-watch-12-rumors-zh","怎麼追 Gemini 與 Apple Watch 12 傳聞","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778933028697-qnhw.png","2026-05-16T12:03:23.685907+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":29},"92424d3d-23ac-4ae5-bedf-08db6a01eb9a","jensen-huang-trump-china-trip-zh","黃仁勳搭上川普專機赴中","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778930030195-daad.png","2026-05-16T11:13:26.928711+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":29},"cde2a775-0898-485e-9b0e-38c4288501b8","chatgpt-vs-gemini-9-tests-1-clear-winner-2026-zh","ChatGPT vs Gemini：9 
項測試，誰更值得選","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778925827606-i3zy.png","2026-05-16T10:03:29.803046+00:00",{"id":73,"slug":74,"title":75,"cover_image":76,"image_url":76,"created_at":77,"category":29},"a4380666-3f3c-4465-be35-903068c7045e","how-to-reduce-ai-model-serving-friction-zh","怎麼降低 AI 模型部署摩擦","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778922836413-ff99.png","2026-05-16T09:13:31.665292+00:00",{"id":79,"slug":80,"title":81,"cover_image":82,"image_url":82,"created_at":83,"category":29},"bfbcb15a-47ab-478e-822a-38d89dc8cb84","lora-vs-qlora-vs-full-fine-tuning-zh","LoRA vs QLoRA vs 全量微調","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778915627798-evv7.png","2026-05-16T07:13:32.474543+00:00",{"id":85,"slug":86,"title":87,"cover_image":88,"image_url":88,"created_at":89,"category":29},"3c8fd898-40aa-4f98-b0d1-178e7b4d1c69","why-global-ai-regulation-2026-rewards-modular-compliance-zh","為什麼 2026 全球 AI 監管獎勵模組化合規","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778913216545-oxy8.png","2026-05-16T06:33:19.724845+00:00",[91,96,101,106,111,116,121,126,131,136],{"id":92,"slug":93,"title":94,"created_at":95},"ee073da7-28b3-4752-a319-5a501459fb87","ai-in-2026-what-actually-matters-now-zh","2026 AI 真正重要的事","2026-03-26T07:09:12.008134+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"83bd1795-8548-44c9-9a7e-de50a0923f71","trump-ai-framework-power-speech-state-preemption-zh","川普 AI 框架瞄準電力、言論與州權","2026-03-26T07:12:18.695466+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"ea6be18b-c903-4e54-97b7-5f7447a612e0","nvidia-gtc-2026-big-ai-announcements-zh","NVIDIA GTC 2026 
重點拆解","2026-03-26T07:14:26.62638+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"4bcec76f-4c36-4daa-909f-54cd702f7c93","claude-users-spreading-out-and-getting-better-zh","Claude 用戶更分散，也更會用","2026-03-26T07:22:52.325888+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"bd903b15-2473-4178-9789-b7557816e535","openclaw-raises-hard-question-for-ai-models-zh","OpenClaw 逼問 AI 模型價值","2026-03-26T07:24:54.707486+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"eeac6b9e-ad9d-4831-8eec-8bba3f9bca6a","gap-google-gemini-checkout-fashion-search-zh","Gap 把結帳搬進 Gemini","2026-03-26T07:28:23.937768+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"0740e53f-605d-4d57-8601-c10beb126f3c","google-pushes-gemini-transition-to-march-2026-zh","Google 把 Gemini 轉換延到 2026 年 3…","2026-03-26T07:30:12.825269+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"e660d801-2421-4529-8fa9-86b82b066990","metas-llama-4-benchmark-scandal-gets-worse-zh","Meta Llama 4 分數風波又擴大","2026-03-26T07:34:21.156421+00:00",{"id":132,"slug":133,"title":134,"created_at":135},"183f9e7c-e143-40bb-a6d5-67ba84a3a8bc","accenture-mistral-ai-sovereign-enterprise-deal-zh","Accenture 攜手 Mistral AI 賣主權 AI","2026-03-26T07:38:14.818906+00:00",{"id":137,"slug":138,"title":139,"created_at":140},"191d9b1b-768a-478c-978c-dd7431a38149","mistral-ai-faces-its-hardest-year-yet-zh","Mistral AI 迎來最硬的一年","2026-03-26T07:40:23.716374+00:00"]