[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-rtk-cuts-claude-code-token-spend-en":3,"tags-rtk-cuts-claude-code-token-spend-en":31,"related-lang-rtk-cuts-claude-code-token-spend-en":44,"related-posts-rtk-cuts-claude-code-token-spend-en":48,"series-blockchain-0794f597-b908-402a-b660-729034ffdbf6":85},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":19,"translated_content":10,"views":20,"is_premium":21,"created_at":22,"updated_at":22,"cover_image":11,"published_at":23,"rewrite_status":24,"rewrite_error":10,"rewritten_from_id":25,"slug":26,"category":27,"related_article_id":28,"status":29,"google_indexed_at":30,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":21},"0794f597-b908-402a-b660-729034ffdbf6","RTK cuts Claude Code token spend fast","\u003Cp>If your \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> bill has been climbing, \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Frtk-ai\u002Frtk\" target=\"_blank\" rel=\"noopener\">RTK\u003C\u002Fa> is the kind of tool that makes you stop and look at your usage tab twice. The pitch is simple: wire it into your AI tool once, then let it handle work in the background so the model burns far fewer tokens.\u003C\u002Fp>\u003Cp>In the Chinese post that kicked this off, the author says the setup can cut token consumption by around 80%. That is a bold number, but the workflow behind it is easy to understand: instead of asking the model to narrate every step, you let a local command layer do the repetitive work.\u003C\u002Fp>\u003Ch2>What RTK is doing under the hood\u003C\u002Fh2>\u003Cp>RTK is an open source command wrapper that connects to agent tools through a single init command. 
The idea is to make your AI coding assistant behave more like a shell-native worker and less like a chat window that explains every move.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058016504-gxp5.png\" alt=\"RTK cuts Claude Code token spend fast\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>That matters because token waste usually comes from two places: back-and-forth prompting and repeated context dumps. If a tool can execute commands locally, read files directly, and keep the model focused on decisions, you spend less on narration and more on the parts that actually need language understanding.\u003C\u002Fp>\u003Cp>The setup in the post is short enough to fit in a terminal note:\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ccode>rtk init -g --codex\u003C\u002Fcode> for \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fcodex\" target=\"_blank\" rel=\"noopener\">Codex\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init -g --opencode\u003C\u002Fcode> for \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsst\u002Fopencode\" target=\"_blank\" rel=\"noopener\">OpenCode\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init -g --agent cursor\u003C\u002Fcode> for \u003Ca href=\"https:\u002F\u002Fcursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init --agent windsurf\u003C\u002Fcode> for \u003Ca href=\"https:\u002F\u002Fwindsurf.com\" target=\"_blank\" rel=\"noopener\">Windsurf\u003C\u002Fa>\u003C\u002Fli>\u003C\u002Ful>\u003Cp>After that, the author says you just restart the AI tool and keep working. The point is that RTK stays in the background while your agent does the visible part of the job.\u003C\u002Fp>\u003Ch2>Why token bills explode so fast\u003C\u002Fh2>\u003Cp>Anyone who has used a coding agent for a week knows the pattern. 
A small task turns into a long conversation: the assistant re-reads half the repo, then explains each command before running it. That is convenient, but it is expensive.\u003C\u002Fp>\u003Cp>Anthropic has been pushing \u003Ca href=\"\u002Fnews\u002Fclaude-code-march-2026-update-fixes-bugs-en\">Claude Code\u003C\u002Fa> as a terminal-first assistant, and that design already helps reduce some of the chatty overhead. RTK pushes further by shifting routine actions out of the model’s text loop and into local execution.\u003C\u002Fp>\u003Cblockquote>“Claude Code is my favorite coding assistant right now.” — \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Y8x2fU5Q1g8\" target=\"_blank\" rel=\"noopener\">Simon Willison\u003C\u002Fa>\u003C\u002Fblockquote>\u003Cp>That endorsement matters because it captures the tradeoff well. A strong coding assistant is useful, but once you start using it for real work, the bill becomes part of the product experience.\u003C\u002Fp>\u003Cp>RTK tries to attack that problem in a practical way. It does not promise smarter code generation. It promises less waste around the edges, which is often where the money disappears.\u003C\u002Fp>\u003Ch2>How this compares with plain agent usage\u003C\u002Fh2>\u003Cp>The most useful way to think about RTK is as a control layer. It does not replace your model, and it does not compete with the editor. 
It changes how often the model needs to speak when a machine can do the job faster and cheaper.\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058044525-6bt0.png\" alt=\"RTK cuts Claude Code token spend fast\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>That difference becomes obvious when you compare a normal agent loop with an RTK-assisted one:\u003C\u002Fp>\u003Cul>\u003Cli>Without RTK, the model may describe a command, wait, parse output, then continue in another turn.\u003C\u002Fli>\u003Cli>With RTK, the command can run locally with less conversational overhead.\u003C\u002Fli>\u003Cli>Without RTK, repeated file inspection can cost extra context tokens.\u003C\u002Fli>\u003Cli>With RTK, the tool can reduce the amount of text the model needs to carry forward.\u003C\u002Fli>\u003C\u002Ful>\u003Cp>For teams that live inside \u003Ca href=\"https:\u002F\u002Fcursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa>, \u003Ca href=\"https:\u002F\u002Fwindsurf.com\" target=\"_blank\" rel=\"noopener\">Windsurf\u003C\u002Fa>, or terminal-based agents like \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fcodex\" target=\"_blank\" rel=\"noopener\">Codex\u003C\u002Fa>, the value is not abstract. If a workflow really trims token use by even half, that changes whether you leave an agent running for a quick fix or turn it off and do it yourself.\u003C\u002Fp>\u003Cp>The 80% figure from the post should be treated as a claim from one user, not a universal benchmark. Still, the direction is believable. Once you remove conversational padding from repetitive coding tasks, the savings can get large very quickly.\u003C\u002Fp>\u003Ch2>Who should try it first\u003C\u002Fh2>\u003Cp>RTK makes the most sense for developers who already use AI tools every day and can feel the pain of usage-based pricing. 
If you are prototyping, refactoring, or running lots of small shell tasks, the savings may show up fast.\u003C\u002Fp>\u003Cp>It is also a good fit if you like terminal workflows and want your assistant to feel less like a chat app and more like a background operator. The setup is light, the commands are short, and the integration list already covers several popular tools.\u003C\u002Fp>\u003Cp>There is a catch, of course. Any wrapper that changes how your agent runs can also change how predictable it feels. If you care more about full transparency than lower token spend, you may prefer a plain setup.\u003C\u002Fp>\u003Cp>For readers who want more context on agent pricing and workflow design, we covered a related angle in \u003Ca href=\"\u002Fnews\u002Fclaude-code-cost-control-guide\" target=\"_blank\" rel=\"noopener\">our Claude Code cost-control guide\u003C\u002Fa>. The bigger pattern is clear: the winning tools are the ones that make the model do fewer unnecessary turns.\u003C\u002Fp>\u003Cp>My read is simple. If RTK really keeps token use down by anything close to the claimed 80% on your own projects, it will be less of a niche hack and more of a standard helper for people living in AI coding tools. 
The next question is whether your workflow is chat-heavy enough to benefit, or whether your current setup is already lean enough to ignore it.\u003C\u002Fp>","RTK claims it can cut Claude Code token use by up to 80% by routing work through local shell commands and agents.","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2020443990093222058",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058016504-gxp5.png",[13,14,15,16,17,18],"Claude Code","RTK","token costs","AI coding agents","Cursor","Windsurf","en",1,false,"2026-04-01T10:24:29.50277+00:00","2026-04-01T10:24:29.484+00:00","done","a7f50f1d-edf0-441b-bb5a-c80ff87a8568","rtk-cuts-claude-code-token-spend-en","blockchain","b8e39b58-6b9d-4714-92d3-26df18a3e0f4","published","2026-04-09T09:00:53.723+00:00",[32,34,36,38,40,42],{"name":18,"slug":33},"windsurf",{"name":35,"slug":35},"rtk",{"name":17,"slug":37},"cursor",{"name":13,"slug":39},"claude-code",{"name":16,"slug":41},"ai-coding-agents",{"name":15,"slug":43},"token-costs",{"id":28,"slug":45,"title":46,"language":47},"rtk-cuts-claude-code-token-spend-zh","RTK 讓 Claude Code 少燒 Token","zh",[49,55,61,67,73,79],{"id":50,"slug":51,"title":52,"cover_image":53,"image_url":53,"created_at":54,"category":27},"4fff2f0d-27be-4693-8ef1-6b9e94dd53d1","web3-communication-trust-infrastructure-2026-en","Web3 Communication Is Becoming Trust Infrastructure","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797253042-oimi.png","2026-05-14T22:20:33.794426+00:00",{"id":56,"slug":57,"title":58,"cover_image":59,"image_url":59,"created_at":60,"category":27},"261f5f0f-f863-404d-be2c-1064e6c05eb9","why-bases-x402-protocol-matters-more-than-100m-en","Why Base’s x402 Protocol Matters More Than the $100M 
Milestone","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778719246895-99at.png","2026-05-14T00:40:21.084384+00:00",{"id":62,"slug":63,"title":64,"cover_image":65,"image_url":65,"created_at":66,"category":27},"debaea26-43fa-48ad-aefc-cb515fa88566","gala-games-web3-gaming-2026-en","Gala Games Finds New Life in Web3 Gaming","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689263380-q9x0.png","2026-05-13T16:20:43.068732+00:00",{"id":68,"slug":69,"title":70,"cover_image":71,"image_url":71,"created_at":72,"category":27},"6b939445-f4a4-474a-a85f-54a05f4e2f9a","why-lace-20-matters-more-than-cardanos-next-hard-fork-en","Why Lace 2.0 Matters More Than Cardano’s Next Hard Fork","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778681473377-lu3q.png","2026-05-13T14:10:26.725967+00:00",{"id":74,"slug":75,"title":76,"cover_image":77,"image_url":77,"created_at":78,"category":27},"4b1b1e76-b825-4011-b108-eb3da0bd5e2e","why-ethereum-treasury-buying-is-a-bad-bet-en","Why Ethereum Treasury Buying Is Becoming a Bad Long-Term Bet","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778386242176-tk04.png","2026-05-10T04:10:22.329255+00:00",{"id":80,"slug":81,"title":82,"cover_image":83,"image_url":83,"created_at":84,"category":27},"9bbe48b2-19ad-4bbf-bb20-af02e7d15a03","yakovenko-warns-ai-could-crack-pqc-wallets-en","Yakovenko Warns AI Could Crack PQC 
Wallets","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778170258841-108q.png","2026-05-07T16:10:42.813868+00:00",[86,91,96,97,102,107,112,117,122,127],{"id":87,"slug":88,"title":89,"created_at":90},"cdf2780b-1da6-4aca-a87b-f0974b815b03","moonpay-open-wallet-standard-ai-payments-en","MoonPay's Open Wallet Standard Targets AI Payments","2026-03-28T03:08:33.547032+00:00",{"id":92,"slug":93,"title":94,"created_at":95},"f06da3a4-3b15-4c7b-a250-6077505f5119","next-gen-crypto-simulators-ai-web3-training-en","Next-Gen Crypto Simulators Are Getting Smarter","2026-04-01T09:36:34.200192+00:00",{"id":4,"slug":26,"title":5,"created_at":22},{"id":98,"slug":99,"title":100,"created_at":101},"5101ffbf-7ea9-4baa-b5e2-64729ff55b20","openclaw-flaw-exposes-ai-admin-hijack-risk-en","Openclaw Flaw Exposes AI Admin Hijack Risk","2026-04-01T13:12:33.481569+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"fadea65e-f7c8-41b0-a186-809d21787b4c","how-web3-marketing-changed-in-2026-en","How Web3 Marketing Changed in 2026","2026-04-02T01:36:36.504086+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"88f88741-ff27-41d1-8151-776d0afb9508","ai-agentic-defi-web3-grants-march-2026-en","AI, Agentic DeFi, and Web3 Grants to Watch","2026-04-02T05:51:37.696422+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"43fafe43-772e-48c8-bb95-da8d64cf60e3","why-crypto-is-fixated-on-ai-agents-en","Why Crypto Is Fixated on AI Agents","2026-04-02T05:54:29.121481+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"320ef5e4-fe56-47ab-9a92-290d6fbd3f60","web3-explained-what-it-is-why-it-matters-en","Web3 Explained: What It Is and Why It Matters","2026-04-02T06:15:33.001112+00:00",{"id":123,"slug":124,"title":125,"created_at":126},"f49cffaf-2c57-4f48-9486-7062cca91ba0","trust-wallet-ai-trading-agents-220m-users-en","Trust Wallet Adds AI Trading Agents for 220M 
Users","2026-04-02T06:24:28.043029+00:00",{"id":128,"slug":129,"title":130,"created_at":131},"2b8501e2-39af-4de3-ade1-29616a58e9fb","trust-wallet-agent-kit-ai-trade-25-chains-en","Trust Wallet's Agent Kit Lets AI Trade on 25+ Chains","2026-04-02T06:27:33.425312+00:00"]