# rtk cuts token waste in AI coding tools
rtk adds token-saving config templates for Claude Code, Cursor, Gemini CLI, Codex, and more.

rtk is a setup helper that installs token-saving configs for popular AI coding tools.
If you use an AI coding assistant every day, the hidden cost is often context bloat, not model fees alone. A short post on Zhihu points to a small utility called rtk that generates starter configs for several agents with one command.
| Tool | Install command | Agent target |
|---|---|---|
| Default / Claude Code / Copilot | `rtk init -g` | General |
| Gemini CLI | `rtk init -g --gemini` | Gemini |
| Codex | `rtk init -g --codex` | OpenAI Codex |
| Cursor | `rtk init -g --agent cursor` | Cursor |
| Windsurf | `rtk init --agent windsurf` | Windsurf |
| Cline / Roo Code | `rtk init --agent cline` | Cline / Roo Code |
| Kilo Code | `rtk init --agent kilocode` | Kilo Code |
| Google Antigravity | `rtk init --agent antigravity` | Antigravity |
## What rtk is trying to fix
AI coding tools are fast, but they can waste tokens when they repeatedly read the same project instructions, style rules, and tool preferences. That becomes expensive in long sessions, especially when you jump between multiple assistants.

rtk focuses on the boring part that matters: creating the right starter files so each tool begins with cleaner context. If the setup is good, the assistant spends less time re-reading your preferences and more time on code.
- One command creates tool-specific config
- Supports several agents from one workflow
- Targets token waste caused by repeated context loading
- Fits teams that switch between editors and CLI agents
## Why this matters for daily coding
The practical benefit is simple. Smaller, better-scoped prompts usually mean fewer wasted tokens and fewer weird answers caused by noisy context. For solo developers, that can mean lower usage costs. For teams, it can make agent behavior more consistent across machines.
That matters because AI coding is no longer one tool. A developer might use Claude Code for terminal work, Cursor for IDE editing, and Gemini CLI for quick shell tasks. Each one has its own config style, and rtk tries to normalize that setup step.
> "The best code is the code you never have to write." — Martin Fowler
Fowler’s line is about software design, but it fits this kind of tooling too. If a helper can remove repetitive setup across agents, it saves time before the first prompt even lands.
## How the commands differ across tools
The command list in the post shows a split between a generic install path and agent-specific flags. The generic form is `rtk init -g`, while some tools use `--agent` and others use named flags like `--gemini` or `--codex`.

That may look minor, but it reveals the product’s real goal: one interface, many targets. Instead of asking developers to memorize eight different setup guides, rtk packages them into a single entry point.
- `-g` appears in the default, Gemini, Codex, and Cursor commands
- `--agent` is used for Cursor, Windsurf, Cline, Kilo Code, and Antigravity
- The post names at least 8 supported tool targets
- Claude Code and Copilot share the default install path
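The flag split above can be sketched as a small shell helper. `rtk_cmd` is a hypothetical wrapper written for illustration, not part of rtk; the command strings simply mirror the table from the post.

```shell
#!/bin/sh
# rtk_cmd: print the rtk init command for a given agent name.
# Hypothetical helper -- the mapping below just restates the post's table.
rtk_cmd() {
  case "$1" in
    default|claude|copilot) echo "rtk init -g" ;;
    gemini)                 echo "rtk init -g --gemini" ;;
    codex)                  echo "rtk init -g --codex" ;;
    cursor)                 echo "rtk init -g --agent cursor" ;;
    windsurf|cline|kilocode|antigravity)
                            echo "rtk init --agent $1" ;;
    *) echo "unknown agent: $1" >&2; return 1 ;;
  esac
}

rtk_cmd gemini    # → rtk init -g --gemini
rtk_cmd windsurf  # → rtk init --agent windsurf
```

Nothing here runs rtk itself; it only shows that the named flags (`--gemini`, `--codex`) and the generic `--agent` path could be folded behind one dispatch point, which is essentially what rtk's single `init` entry point does.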
## What to watch next
rtk will matter most if it keeps its templates aligned with how each assistant actually reads project instructions. AI tools change fast, and config files that work today can get stale when vendors tweak agent behavior.
For now, the appeal is obvious: fewer setup decisions, less duplicated context, and a cleaner starting point for AI-assisted coding. If you already switch between editors and terminal agents, this is the kind of utility worth testing on one repo before rolling it out everywhere.
My bet is that tools like rtk become standard team plumbing, the same way formatter configs and lint rules did. The real question is which assistants keep their config surfaces stable enough for a shared helper to stay useful.