[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-rvcc-llvm-incubator-riscv-optimizations-en":3,"tags-rvcc-llvm-incubator-riscv-optimizations-en":30,"related-lang-rvcc-llvm-incubator-riscv-optimizations-en":41,"related-posts-rvcc-llvm-incubator-riscv-optimizations-en":45,"series-industry-ba4d8580-aa49-4ade-8016-578a12e7794f":82},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"ba4d8580-aa49-4ade-8016-578a12e7794f","RVCC Wants Faster RISC-V Tuning in LLVM","\u003Cp>A new proposal wants to bring RISC-V performance work into an \u003Ca href=\"https:\u002F\u002Fllvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM\u003C\u002Fa> incubator called \u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">RVCC\u003C\u002Fa>. The pitch is simple: collect optimization patches in one place, test them faster, and move better code into \u003Ca href=\"https:\u002F\u002Fclang.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">Clang\u003C\u002Fa> and LLVM with less friction.\u003C\u002Fp>\u003Cp>The timing matters because RISC-V is no longer a hobbyist curiosity. It is showing up in servers, embedded parts, and developer boards, which means compiler quality now affects real product decisions, not just benchmark bragging rights.\u003C\u002Fp>\u003Cp>RVCC is meant to act like a staging area for RISC-V compiler work. 
Instead of sending every patch straight into LLVM proper, contributors could iterate in a shared space, run benchmarks across hardware, and then submit the strongest changes upstream.\u003C\u002Fp>\u003Ch2>What RVCC is trying to fix\u003C\u002Fh2>\u003Cp>The proposal is aimed at a very specific pain point: RISC-V optimization work can move slowly when every change has to pass through LLVM’s main review pipeline from day one. That is a problem when vendors, board makers, and compiler engineers are all trying different tricks for the same instruction set.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179487144-likv.png\" alt=\"RVCC Wants Faster RISC-V Tuning in LLVM\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>According to the proposal, RVCC would collect RISC-V performance patches for LLVM and Clang, validate them against benchmarks, and reduce the chance that each company builds its own private toolchain path. That last part matters more than it sounds. Toolchain fragmentation can make performance tuning harder to compare and harder to upstream.\u003C\u002Fp>\u003Cp>The idea is loosely similar to the Linux kernel staging area, where code can mature before it gets the full treatment from maintainers. Here, though, the target is compiler work for one architecture rather than drivers or subsystems.\u003C\u002Fp>\u003Cul>\u003Cli>Focus: RISC-V compiler optimization patches for LLVM and Clang\u003C\u002Fli>\u003Cli>Method: benchmark-driven testing across multiple RISC-V hardware platforms\u003C\u002Fli>\u003Cli>Goal: faster iteration before upstream LLVM review\u003C\u002Fli>\u003Cli>Risk being addressed: vendor-specific toolchain fragmentation\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Why LLVM people are already pushing back\u003C\u002Fh2>\u003Cp>That plan did not land in a vacuum. 
LLVM maintainer \u003Ca href=\"https:\u002F\u002Fllvm.org\u002Fdocs\u002FDeveloperPolicy.html\" target=\"_blank\" rel=\"noopener\">Nikita Popov\u003C\u002Fa> replied on the \u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM Discourse\u003C\u002Fa> thread with a hard rejection. His concern is that an RVCC incubator would amount to an LLVM fork carrying patches that do not meet LLVM’s normal quality bar.\u003C\u002Fp>\u003Cblockquote>“This proposal gets a strong no from me. We should not have an incubator for what is basically an LLVM fork plus patches that fail to meet LLVM’s usual quality standards.” — Nikita Popov\u003C\u002Fblockquote>\u003Cp>That quote gets to the heart of the debate. LLVM has spent years building a reputation for disciplined review and predictable code quality. Anything that looks like a side channel for lower-bar contributions is going to trigger alarm bells among maintainers who have to clean up the mess later.\u003C\u002Fp>\u003Cp>There is also a governance question hiding underneath the technical one. If RVCC becomes the normal place to land RISC-V work first, does it help LLVM move faster, or does it create a second-class pipeline that people start treating as good enough on its own?\u003C\u002Fp>\u003Ch2>How this compares with other compiler workflows\u003C\u002Fh2>\u003Cp>Compiler projects already use a range of release and incubation models, but the trade-offs differ from project to project. 
LLVM’s mainline review process is strict by design, while other ecosystems sometimes tolerate more experimentation in parallel branches or vendor trees.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179488482-kcbb.png\" alt=\"RVCC Wants Faster RISC-V Tuning in LLVM\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>For RISC-V specifically, the pressure is higher because the architecture is still expanding in the market. The \u003Ca href=\"https:\u002F\u002Friscv.org\u002F\" target=\"_blank\" rel=\"noopener\">RISC-V International\u003C\u002Fa> ecosystem includes chips from startups, established silicon vendors, and academic groups, which means one optimization can behave differently across implementations.\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fllvm.org\u002F\" target=\"_blank\" rel=\"noopener\">LLVM\u003C\u002Fa>: centralized review, high consistency, slower experimental turnaround\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fdiscourse.llvm.org\u002F\" target=\"_blank\" rel=\"noopener\">RVCC proposal\u003C\u002Fa>: faster iteration for RISC-V-specific patches, higher risk of drift\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.kernel.org\u002F\" target=\"_blank\" rel=\"noopener\">Linux kernel staging\u003C\u002Fa>: a known model for maturing code before mainline review\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Friscv.org\u002F\" target=\"_blank\" rel=\"noopener\">RISC-V International\u003C\u002Fa>: broad hardware diversity makes benchmarking more complicated\u003C\u002Fli>\u003C\u002Ful>\u003Cp>There is a practical reason benchmark data matters here. A compiler tweak that helps one in-order core can hurt an out-of-order design, and a patch that looks great on synthetic tests may do little for real workloads. 
That is why the proposal emphasizes testing on different RISC-V platforms rather than trusting one lab setup.\u003C\u002Fp>\u003Cp>LLVM has already made room for specialized workflows in other areas, but the bar is high. If RVCC wants to survive the review debate, it will need to prove that it improves upstream quality instead of just creating a holding pen for risky code.\u003C\u002Fp>\u003Ch2>What happens next for RISC-V compiler work\u003C\u002Fh2>\u003Cp>For now, RVCC is still a proposal, not an approved project. That means the immediate question is whether LLVM leadership sees it as a useful pressure valve or as an unnecessary detour from the main codebase.\u003C\u002Fp>\u003Cp>The most likely outcome is a compromise: an external collaboration space with tighter rules, or a narrower scope that limits what can be staged there. If that happens, the real test will be whether the project produces patches that are easier to review, easier to benchmark, and easier to upstream.\u003C\u002Fp>\u003Cp>My bet is that LLVM will not sign off on anything that looks like a soft fork. If RVCC survives, it will probably do so only by acting as a short-lived proving ground with strict gates, clear benchmark methodology, and a direct path back into mainline LLVM.\u003C\u002Fp>\u003Cp>For developers working on RISC-V today, the useful takeaway is simple: compiler performance is becoming a coordination problem, not just an optimization problem. 
If you are shipping RISC-V software, watch this discussion closely, because the outcome could shape how quickly new backend work reaches your toolchain.\u003C\u002Fp>\u003Cp>And if you want the broader compiler context, keep an eye on our coverage of \u003Ca href=\"\u002Fnews\u002Fllvm-clang-22-release\" target=\"_blank\" rel=\"noopener\">LLVM\u002FClang 22\u003C\u002Fa> and \u003Ca href=\"\u002Fnews\u002Fllvm-human-in-the-loop-ai-contributions\" target=\"_blank\" rel=\"noopener\">LLVM’s policy on AI-assisted contributions\u003C\u002Fa>, since both show how careful the project has become about process as much as code.\u003C\u002Fp>","RVCC is being proposed as an LLVM incubator to speed up RISC-V compiler tuning, but LLVM maintainer Nikita Popov already objects.","www.phoronix.com","https:\u002F\u002Fwww.phoronix.com\u002Fnews\u002FLLVM-RVCC-Incubator-Proposed",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179487144-likv.png",[13,14,15,16,17],"LLVM","RISC-V","compiler optimization","Clang","RVCC","en",0,false,"2026-04-03T01:24:25.94061+00:00","2026-04-03T01:24:25.918+00:00","done","5728e8b8-01e5-4329-8187-2610feeb0e9d","rvcc-llvm-incubator-riscv-optimizations-en","industry","959105f1-6f60-4334-aa2c-875c0da1b095","published","2026-04-07T07:41:13.168+00:00",[31,33,35,37,39],{"name":16,"slug":32},"clang",{"name":17,"slug":34},"rvcc",{"name":15,"slug":36},"compiler-optimization",{"name":14,"slug":38},"risc-v",{"name":13,"slug":40},"llvm",{"id":27,"slug":42,"title":43,"language":44},"rvcc-llvm-incubator-riscv-optimizations-zh","RVCC 想加速 RISC-V 調校，LLVM 先打槍","zh",[46,52,58,64,70,76],{"id":47,"slug":48,"title":49,"cover_image":50,"image_url":50,"created_at":51,"category":26},"6ff3920d-c8ea-4cf3-8543-9cf9efc3fe36","circles-agent-stack-targets-machine-speed-payments-en","Circle’s Agent Stack targets machine-speed 
payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871659638-hur1.png","2026-05-15T19:00:44.756112+00:00",{"id":53,"slug":54,"title":55,"cover_image":56,"image_url":56,"created_at":57,"category":26},"1270e2f4-6f3b-4772-9075-87c54b07a8d1","iren-signs-nvidia-ai-infrastructure-pact-en","IREN signs Nvidia AI infrastructure pact","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871059665-3vhi.png","2026-05-15T18:50:38.162691+00:00",{"id":59,"slug":60,"title":61,"cover_image":62,"image_url":62,"created_at":63,"category":26},"b308c85e-ee9c-4de6-b702-dfad6d8da36f","circle-agent-stack-ai-payments-en","Circle launches Agent Stack for AI payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778870450891-zv1j.png","2026-05-15T18:40:31.462625+00:00",{"id":65,"slug":66,"title":67,"cover_image":68,"image_url":68,"created_at":69,"category":26},"f7028083-46ba-493b-a3db-dd6616a8c21f","why-nebius-ai-pivot-is-more-real-than-hype-en","Why Nebius’s AI Pivot Is More Real Than Hype","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778823055711-tbfv.png","2026-05-15T05:30:26.829489+00:00",{"id":71,"slug":72,"title":73,"cover_image":74,"image_url":74,"created_at":75,"category":26},"b63692ed-db6a-4dbd-b771-e1babdc94af7","nvidia-backs-corning-factories-with-billions-en","Nvidia backs Corning factories with billions","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778822444685-tvx6.png","2026-05-15T05:20:28.914908+00:00",{"id":77,"slug":78,"title":79,"cover_image":80,"image_url":80,"created_at":81,"category":26},"26ab4480-2476-4ec7-b43a-5d46def6487e","why-anthropic-gates-foundation-ai-public-goods-en","Why Anthropic 
and the Gates Foundation should fund AI public goods","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778796645685-wbw0.png","2026-05-14T22:10:22.60302+00:00",[83,88,93,98,103,108,113,118,123,128],{"id":84,"slug":85,"title":86,"created_at":87},"d35a1bd9-e709-412e-a2df-392df1dc572a","ai-impact-2026-developments-market-en","AI's Impact in 2026: Key Developments and Market Shifts","2026-03-25T16:20:33.205823+00:00",{"id":89,"slug":90,"title":91,"created_at":92},"5ed27921-5fd6-492e-8c59-78393bf37710","trumps-ai-legislative-framework-en","Trump's AI Legislative Framework: What's Inside?","2026-03-25T16:22:20.005325+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"e454a642-f03c-4794-b185-5f651aebbaca","nvidia-gtc-2026-key-highlights-innovations-en","NVIDIA GTC 2026: Key Highlights and Innovations","2026-03-25T16:22:47.882615+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"0ebb5b16-774a-4922-945d-5f2ce1df5a6d","claude-usage-diversifies-learning-curves-en","Claude Usage Diversifies, Learning Curves Emerge","2026-03-25T16:25:50.770376+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"69934e86-2fc5-4280-8223-7b917a48ace8","openclaw-ai-commoditization-concerns-en","OpenClaw's Rise Raises Concerns of AI Model Commoditization","2026-03-25T16:26:30.582047+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"b4b2575b-2ac8-46b2-b90e-ab1d7c060797","google-gemini-ai-rollout-2026-en","Google's Gemini AI Rollout Extended to 2026","2026-03-25T16:28:14.808842+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"6e18bc65-42ae-4ad0-b564-67d7f66b979e","meta-llama4-fabricated-results-scandal-en","Meta's Llama 4 Scandal: Fabricated AI Test Results Unveiled","2026-03-25T16:29:15.482836+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"bf888e9d-08be-4f47-996c-7b24b5ab3500","accenture-mistral-ai-deployment-en","Accenture and Mistral AI Team Up for AI 
Deployment","2026-03-25T16:31:01.894655+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"5382b536-fad2-49c6-ac85-9eb2bae49f35","mistral-ai-high-stakes-2026-en","Mistral AI: Facing High Stakes in 2026","2026-03-25T16:31:39.941974+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"9da3d2d6-b669-4971-ba1d-17fdb3548ed5","cursors-meteoric-rise-pressures-en","Cursor's Meteoric Rise Faces Industry Pressures","2026-03-25T16:32:21.899217+00:00"]