OraCore Editors

RVCC Wants Faster RISC-V Tuning in LLVM

RVCC is being proposed as an LLVM incubator to speed up RISC-V compiler tuning, but LLVM maintainer Nikita Popov already objects.


A new proposal wants to bring RISC-V performance work into an LLVM incubator called RVCC. The pitch is simple: collect optimization patches in one place, test them faster, and move better code into Clang and LLVM with less friction.

The timing matters because RISC-V is no longer a hobbyist curiosity. It is showing up in servers, embedded parts, and developer boards, which means compiler quality now affects real product decisions, not just benchmark bragging rights.

RVCC is meant to act like a staging area for RISC-V compiler work. Instead of sending every patch straight into LLVM proper, contributors could iterate in a shared space, run benchmarks across hardware, and then submit the strongest changes upstream.

What RVCC is trying to fix


The proposal is aimed at a very specific pain point: RISC-V optimization work can move slowly when every change has to pass through LLVM’s main review pipeline from day one. That is a problem when vendors, board makers, and compiler engineers are all trying different tricks for the same instruction set.


According to the proposal, RVCC would collect RISC-V performance patches for LLVM and Clang, validate them against benchmarks, and reduce the chance that each company builds its own private toolchain path. That last part matters more than it sounds. Toolchain fragmentation can make performance tuning harder to compare and harder to upstream.

The idea is loosely similar to the Linux kernel staging area, where code can mature before it gets the full treatment from maintainers. Here, though, the target is compiler work for one architecture rather than drivers or subsystems.

  • Focus: RISC-V compiler optimization patches for LLVM and Clang
  • Method: benchmark-driven testing across multiple RISC-V hardware platforms
  • Goal: faster iteration before upstream LLVM review
  • Risk being addressed: vendor-specific toolchain fragmentation

Why LLVM people are already pushing back

That plan did not land in a vacuum. LLVM maintainer Nikita Popov replied on the LLVM Discourse thread with a hard rejection. His concern is that an incubator for RVCC would amount to an LLVM fork with patches that do not meet LLVM’s normal quality bar.

“This proposal gets a strong no from me. We should not have an incubator for what is basically an LLVM fork plus patches that fail to meet LLVM’s usual quality standards.” — Nikita Popov

That quote gets to the heart of the debate. LLVM has spent years building a reputation for disciplined review and predictable code quality. Anything that looks like a side channel for lower-bar contributions is going to trigger alarm bells among maintainers who have to clean up the mess later.

There is also a governance question hiding underneath the technical one. If RVCC becomes a normal place to land RISC-V work first, does it help LLVM move faster, or does it create a second-class pipeline that people start treating as good enough on its own?

How this compares with other compiler workflows

Compiler projects already use a range of release and incubation models, and the trade-offs differ. LLVM’s mainline review process is strict by design, while other ecosystems tolerate more experimentation in parallel branches or vendor trees.


For RISC-V specifically, the pressure is higher because the architecture is still expanding in the market. The RISC-V International ecosystem includes chips from startups, established silicon vendors, and academic groups, which means one optimization can behave differently across implementations.

  • LLVM: centralized review, high consistency, slower experimental turnaround
  • RVCC proposal: faster iteration for RISC-V-specific patches, higher risk of drift
  • Linux kernel staging: a known model for maturing code before mainline review
  • RISC-V International: broad hardware diversity makes benchmarking more complicated

There is a practical reason benchmark data matters here. A compiler tweak that helps one in-order core can hurt an out-of-order design, and a patch that looks great on synthetic tests may do little for real workloads. That is why the proposal emphasizes testing on different RISC-V platforms rather than trusting one lab setup.
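One quick way to see that divergence is to ask Clang itself how it compiles the same hot loop for two different RISC-V cores. The commands below are a sketch, not a benchmark: they assume a Clang build with the RISC-V backend enabled, and `kernel.c` stands in for a hypothetical workload. `sifive-u74` is an in-order core and `sifive-p670` an out-of-order one, both supported by recent Clang releases.

```shell
# Cross-compile the same source for an in-order and an out-of-order RISC-V core.
clang --target=riscv64-unknown-linux-gnu -O3 -mcpu=sifive-u74  -S kernel.c -o u74.s
clang --target=riscv64-unknown-linux-gnu -O3 -mcpu=sifive-p670 -S kernel.c -o p670.s

# Scheduling, unrolling, and instruction-selection choices often differ per core,
# which is why a patch tuned on one board can regress on another.
diff u74.s p670.s
```

A nonempty diff here is normal and is the whole point: per-core tuning decisions are real, so benchmark claims need to be validated on more than one implementation.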

LLVM has already made room for specialized workflows in other areas, but the bar is high. If RVCC wants to survive the review debate, it will need to prove that it improves upstream quality instead of just creating a holding pen for risky code.

What happens next for RISC-V compiler work

For now, RVCC is still a proposal, not an approved project. That means the immediate question is whether LLVM leadership sees it as a useful pressure valve or as an unnecessary detour from the main codebase.

The most likely outcome is some compromise shape: an external collaboration space with tighter rules, or a narrower scope that limits what can be staged there. If that happens, the real test will be whether the project produces patches that are easier to review, easier to benchmark, and easier to upstream.

My bet is that LLVM will not sign off on anything that looks like a soft fork. If RVCC survives, it will probably do so only by acting as a short-lived proving ground with strict gates, clear benchmark methodology, and a direct path back into mainline LLVM.

For developers working on RISC-V today, the useful takeaway is simple: compiler performance is becoming a coordination problem, not just an optimization problem. If you are shipping RISC-V software, watch this discussion closely, because the outcome could shape how quickly new backend work reaches your toolchain.

And if you want the broader compiler context, keep an eye on our coverage of LLVM/Clang 22 and LLVM’s policy on AI-assisted contributions, since both show how careful the project has become about process as much as code.