[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-rtk-cuts-claude-code-token-spend-zh":3,"tags-rtk-cuts-claude-code-token-spend-zh":35,"related-lang-rtk-cuts-claude-code-token-spend-zh":52,"related-posts-rtk-cuts-claude-code-token-spend-zh":56,"series-blockchain-b8e39b58-6b9d-4714-92d3-26df18a3e0f4":93},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":23,"translated_content":10,"views":24,"is_premium":25,"created_at":26,"updated_at":26,"cover_image":11,"published_at":27,"rewrite_status":28,"rewrite_error":10,"rewritten_from_id":29,"slug":30,"category":31,"related_article_id":32,"status":33,"google_indexed_at":34,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":25},"b8e39b58-6b9d-4714-92d3-26df18a3e0f4","RTK 讓 Claude Code 少燒 Token","\u003Cp>如果你的 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> 帳單一直往上衝，\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Frtk-ai\u002Frtk\" target=\"_blank\" rel=\"noopener\">RTK\u003C\u002Fa> 這工具真的會讓人多看一次用量頁面。作者在中文分享裡直接講，Token 消耗有機會砍到 20% 左右。這數字很兇，但背後邏輯其實很直白。\u003C\u002Fp>\u003Cp>講白了，就是把重複工作搬回本機。不要每一步都叫 LLM 用文字講一遍。能跑 shell 的就跑 shell，能讀檔的就直接讀。模型只負責判斷，不負責碎念。\u003C\u002Fp>\u003Cp>這種做法很像把 AI 助手從聊天框，改造成終端機工人。你可能會想問，真的差這麼多嗎？如果你的流程本來就很 chat-heavy，答案通常是會。\u003C\u002Fp>\u003Ch2>RTK 到底在做什麼\u003C\u002Fh2>\u003Cp>RTK 是一個開源的 command wrapper。它用單一 init 指令，接到多種 a\u003Ca href=\"\u002Fnews\u002Fagent-memory-framework-analysis-zh\">gent\u003C\u002Fa> 工具。這種設計很像在 AI 工具外面包一層控制層。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058021081-9yi4.png\" alt=\"RTK 讓 Claude Code 少燒 Token\" class=\"rounded-xl w-full\" loading=\"lazy\" 
\u002F>\u003C\u002Ffigure>\n\u003Cp>它的目的不是讓模型變聰明。它是讓模型少說廢話。很多 Token 都浪費在重複提示、反覆讀上下文、還有每個指令都要解釋一次。\u003C\u002Fp>\u003Cp>RTK 把這些動作交給本機。模型只要看結果，再決定下一步。這對終端機工作流很合拍，也很符合開發者的直覺。\u003C\u002Fp>\u003Cp>作者列的接法很短，幾個命令就能看出方向：\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ccode>rtk init -g --codex\u003C\u002Fcode> 給 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fcodex\" target=\"_blank\" rel=\"noopener\">Codex\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init -g --opencode\u003C\u002Fcode> 給 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsst\u002Fopencode\" target=\"_blank\" rel=\"noopener\">OpenCode\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init -g --agent cursor\u003C\u002Fcode> 給 \u003Ca href=\"https:\u002F\u002Fcursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa>\u003C\u002Fli>\u003Cli>\u003Ccode>rtk init --agent windsurf\u003C\u002Fcode> 給 \u003Ca href=\"https:\u002F\u002Fwindsurf.com\" target=\"_blank\" rel=\"noopener\">Windsurf\u003C\u002Fa>\u003C\u002Fli>\u003C\u002Ful>\u003Cp>設好後，重開 AI 工具就能繼續用。RTK 會留在背景。你看到的是 \u003Ca href=\"\u002Fnews\u002Fcontext-is-the-new-os-zettlab-agent-computer-zh\">agent\u003C\u002Fa> 在工作。真正省錢的是，那些不必進入對話的步驟。\u003C\u002Fp>\u003Ch2>為什麼 Token 帳單會爆得這麼快\u003C\u002Fh2>\u003Cp>用過 coding \u003Ca href=\"\u002Fnews\u002Fagent-harness-ai-engineering-2026-zh\">agent\u003C\u002Fa> 一週的人，大概都懂那種感覺。一個小任務，最後變成長對話。模型先讀 repo，再解釋命令，再等你回覆，接著又重講一次。\u003C\u002Fp>\u003Cp>這種流程很方便，但很吃 Token。尤其是大型專案。上下文一長，成本就跟著上去。你以為只是改一個檔案，結果模型把整個資料夾都重看一遍。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> 本來就是偏 terminal-first 的設計。這已經比純聊天介面省一點。RTK 再往前推一步，把例行操作交給本機執行，讓 LLM 少碰那些低價值文字。\u003C\u002Fp>\u003Cblockquote>“Claude Code is my favorite coding assistant right now.” — \u003Ca href=\"https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=Y8x2fU5Q1g8\" target=\"_blank\" rel=\"noopener\">Simon Willison\u003C\u002Fa>\u003C\u002Fblockquote>\u003Cp>這句話很有意思。Simon Willison 
不是在吹功能。他是在點出一個現實：好用的 coding assistant，最後都會碰到成本問題。\u003C\u002Fp>\u003Cp>所以 RTK 的切入點很務實。它沒說自己會生出更強的程式碼。它只說，很多沒必要的文字往返可以少掉。老實說，這往往才是錢燒最快的地方。\u003C\u002Fp>\u003Ch2>跟一般 agent 用法比，差在哪\u003C\u002Fh2>\u003Cp>把 RTK 想成控制層就對了。它不是要取代模型。它是要減少模型開口的次數。只要機器能做，就別讓 LLM 用文字做。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058038791-8vq0.png\" alt=\"RTK 讓 Claude Code 少燒 Token\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>差別看起來不大，實際上很明顯。一般 agent 會先講命令，再等輸出，再解釋下一步。RTK 則把這些流程盡量壓到本機。少一次對話，就少一段 Token。\u003C\u002Fp>\u003Cp>如果你常用 \u003Ca href=\"https:\u002F\u002Fcursor.com\" target=\"_blank\" rel=\"noopener\">Cursor\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwindsurf.com\" target=\"_blank\" rel=\"noopener\">Windsurf\u003C\u002Fa>，或像 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fcodex\" target=\"_blank\" rel=\"noopener\">Codex\u003C\u002Fa> 這種終端機 agent，差異會更有感。因為這些工具本來就靠大量互動吃飯。\u003C\u002Fp>\u003Cp>可以直接拿來做個對照：\u003C\u002Fp>\u003Cul>\u003Cli>沒有 RTK 時，模型常要先描述命令。\u003C\u002Fli>\u003Cli>有 RTK 時，命令能直接在本機跑。\u003C\u002Fli>\u003Cli>沒有 RTK 時，重複讀檔會吃更多上下文。\u003C\u002Fli>\u003Cli>有 RTK 時，模型帶著更少文字往下走。\u003C\u002Fli>\u003Cli>沒有 RTK 時，聊天輪次容易變多。\u003C\u002Fli>\u003Cli>有 RTK 時，很多例行步驟可被壓縮。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>中文文章提到的 80% 省量，應該先當成單一使用者的實測，不要直接拿來當通則。但方向很合理。只要你把聊天雜訊拿掉，節省速度會比很多人想像快。\u003C\u002Fp>\u003Ch2>誰最適合先試\u003C\u002Fh2>\u003Cp>RTK 最適合已經天天用 AI 工具的人。尤其是會一直跑小任務的人。像是 refactor、修 bug、跑 shell 指令、檢查檔案結構，這些都很容易累積 Token。\u003C\u002Fp>\u003Cp>如果你本來就喜歡終端機，這工具會特別順手。它的介面不花俏。它的價值也不在畫面漂亮。它就是要讓 agent 像背景工人，而不是像一直插話的同事。\u003C\u002Fp>\u003Cp>但也有代價。任何 wrapper 都會多一層抽象。你如果很在意流程透明度，可能會覺得這種做法太黑箱。這點我覺得要老實講。\u003C\u002Fp>\u003Cp>如果你想先看類似的成本控制思路，可以對照我們整理過的 \u003Ca href=\"\u002Fnews\u002Fclaude-code-cost-control-guide\" target=\"_blank\" rel=\"noopener\">Claude Code 
成本控制指南\u003C\u002Fa>。重點其實很一致：少做無效輪次，帳單就會比較正常。\u003C\u002Fp>\u003Ch2>這波背後的產業脈絡\u003C\u002Fh2>\u003Cp>現在的 AI coding 工具，幾乎都在比誰更像「會做事的助理」。但真正的商業問題不是誰會講，而是誰講得少、做得多。Token 計價一上來，這件事就變得很現實。\u003C\u002Fp>\u003Cp>對開發團隊來說，成本控制不是小事。假設一個團隊 20 人，每人每天多燒 50 萬 Token，一個月下來就很有感。你不用等到帳單炸裂，財務就會先來問。\u003C\u002Fp>\u003Cp>這也是為什麼本機執行、shell-native、agent wrapper 這幾個詞最近一直冒出來。大家都在找方法，把模型從「全程聊天」改成「只做判斷」。這種路線不新，但現在特別值錢。\u003C\u002Fp>\u003Cp>我覺得 RTK 的意義不在於它多炫。它的價值在於，它很清楚地打到痛點：模型太愛講話。能少講一句，就少燒一點錢。\u003C\u002Fp>\u003Ch2>接下來怎麼看\u003C\u002Fh2>\u003Cp>如果你現在就用 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fclaude-code\" target=\"_blank\" rel=\"noopener\">Claude Code\u003C\u002Fa> 或其他 agent 工具，我會建議你先試一輪 RTK。先看自己是不是那種高互動、高重複的工作型態。只要是，省量通常會很明顯。\u003C\u002Fp>\u003Cp>我的預測很直接。接下來半年，這類控制層工具會越來越多。不是因為大家突然愛寫 wrapper，而是因為 Token 成本真的會逼人想辦法。誰能把無效對話壓下來，誰就更容易留在日常工作流裡。\u003C\u002Fp>\u003Cp>問題只剩一個：你的 AI 工具現在是在幫你做事，還是在幫你燒錢？如果你已經開始懷疑帳單，那 RTK 值得排進測試清單。\u003C\u002Fp>","RTK 主打把 Claude Code 的重複工作搬到本機執行，作者實測說法是可把 Token 消耗降到 20% 左右。這篇拆解它怎麼運作、跟 Cursor 和 Windsurf 比起來差在哪。","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2020443990093222058",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775058021081-9yi4.png",[13,14,15,16,17,18,19,20,21,22],"RTK","Claude Code","Token","AI coding assistant","Claude","Cursor","Windsurf","Codex","OpenCode","終端機 AI 
工具","zh",0,false,"2026-04-01T10:24:29.259497+00:00","2026-04-01T10:24:29.13+00:00","done","a7f50f1d-edf0-441b-bb5a-c80ff87a8568","rtk-cuts-claude-code-token-spend-zh","blockchain","0794f597-b908-402a-b660-729034ffdbf6","published","2026-04-09T09:00:53.759+00:00",[36,38,40,42,44,46,48,50],{"name":19,"slug":37},"windsurf",{"name":39,"slug":39},"rtk",{"name":18,"slug":41},"cursor",{"name":16,"slug":43},"ai-coding-assistant",{"name":14,"slug":45},"claude-code",{"name":17,"slug":47},"claude",{"name":22,"slug":49},"終端機-ai-工具",{"name":15,"slug":51},"token",{"id":32,"slug":53,"title":54,"language":55},"rtk-cuts-claude-code-token-spend-en","RTK cuts Claude Code token spend fast","en",[57,63,69,75,81,87],{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":31},"8c37fa14-a081-4810-b5b8-2a2a184a7d1d","web3-communication-trust-infrastructure-2026-zh","Web3 溝通正在變成信任基礎設施","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797251989-it0w.png","2026-05-14T22:20:32.600359+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":31},"9059e494-8f72-4c34-a888-2424c682da10","why-bases-x402-protocol-matters-more-than-100m-zh","為什麼 Base 的 x402 協議比 1 億美元里程碑更重要","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778719260627-a0va.png","2026-05-14T00:40:19.962138+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":31},"74969a5b-7ec5-4686-80ee-fa39a5cc43d4","gala-games-web3-gaming-2026-zh","Gala Games 在 Web3 
遊戲找回存在感","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689265110-p0y5.png","2026-05-13T16:20:41.782583+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":31},"d330d44a-4eff-4ba6-aa72-5ef246e31c64","why-lace-20-matters-more-than-cardanos-next-hard-fork-zh","為什麼 Lace 2.0 比 Cardano 下一次硬分叉更重要","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778681462051-f600.png","2026-05-13T14:10:25.488549+00:00",{"id":82,"slug":83,"title":84,"cover_image":85,"image_url":85,"created_at":86,"category":31},"0af0a4b2-b0a1-4a52-8fe9-1328bde87c8e","why-ethereum-treasury-buying-is-a-bad-bet-zh","為什麼 Ethereum Treasury Buying 正在變成一筆差勁的長…","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778386236909-ytls.png","2026-05-10T04:10:21.784208+00:00",{"id":88,"slug":89,"title":90,"cover_image":91,"image_url":91,"created_at":92,"category":31},"ab3ef302-99ee-40b3-b2d0-4b67a9049ec4","yakovenko-warns-ai-could-crack-pqc-wallets-zh","Yakovenko 警告：AI 可能破解 PQC 錢包","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778170266863-wnnh.png","2026-05-07T16:10:41.097774+00:00",[94,99,104,105,110,115,120,125,130,135],{"id":95,"slug":96,"title":97,"created_at":98},"e1b4b518-f86b-410c-8c82-8cfb787ff2ef","moonpay-open-wallet-standard-ai-payments-zh","MoonPay 推 OWS，瞄準 AI 付款","2026-03-28T03:08:33.379969+00:00",{"id":100,"slug":101,"title":102,"created_at":103},"e72bae29-ddbd-437b-aaa4-cd662605394b","next-gen-crypto-simulators-ai-web3-training-zh","新一代加密模擬器更聰明了","2026-04-01T09:36:33.917023+00:00",{"id":4,"slug":30,"title":5,"created_at":26},{"id":106,"slug":107,"title":108,"created_at":109},"7ff10146-4ca0-4670-a02c-384dde04f610","trm-labs-ai-agents-crypto-investigations-zh","TRM Labs 
將 AI agent 帶進加密調查","2026-04-01T10:33:30.166266+00:00",{"id":111,"slug":112,"title":113,"created_at":114},"00668dea-9f0e-4019-b861-03817d5a8877","how-web3-marketing-changed-in-2026-zh","2026 Web3 行銷怎麼變了","2026-04-02T01:36:34.973322+00:00",{"id":116,"slug":117,"title":118,"created_at":119},"e7992274-42ee-40bc-bb05-97250098c56c","ai-agentic-defi-web3-grants-march-2026-zh","AI、Agentic DeFi 與 Web3 補助案","2026-04-02T05:51:36.857954+00:00",{"id":121,"slug":122,"title":123,"created_at":124},"5cef810b-af3d-467a-8b41-627769eca895","why-crypto-is-fixated-on-ai-agents-zh","為何加密圈盯上 AI Agent","2026-04-02T05:54:28.919864+00:00",{"id":126,"slug":127,"title":128,"created_at":129},"d30e6203-d522-41a1-b529-fcf4499cd985","web3-explained-what-it-is-why-it-matters-zh","Web3 是什麼，為何重要","2026-04-02T06:15:32.580114+00:00",{"id":131,"slug":132,"title":133,"created_at":134},"f29e65ae-64df-463b-ba22-afd9dcbd0f8f","trust-wallet-agent-kit-ai-trade-25-chains-zh","Trust Wallet 讓 AI 幫你交易","2026-04-02T06:27:33.183404+00:00",{"id":136,"slug":137,"title":138,"created_at":139},"91022b4c-b53e-4c18-abfe-914a8eca6e28","blockchain-in-ai-real-use-cases-zh","區塊鏈加 AI，真實落地在哪裡","2026-04-02T06:30:44.026286+00:00"]