[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-llms-procedural-execution-diagnostic-study-zh":3,"tags-llms-procedural-execution-diagnostic-study-zh":30,"related-lang-llms-procedural-execution-diagnostic-study-zh":41,"related-posts-llms-procedural-execution-diagnostic-study-zh":45,"series-research-140a1bc8-8432-4950-9ed7-f28ea3060068":82},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"140a1bc8-8432-4950-9ed7-f28ea3060068","LLM 會算，但不一定照步驟做","\u003Cp data-speakable=\"summary\">這篇研究在測 \u003Ca href=\"\u002Ftag\u002Fllm\">LLM\u003C\u002Fa> 能不能照步驟執行指令，而不是只看最後答案對不對。\u003C\u002Fp>\u003Cp>很多 LLM 評測都盯著 final answer。這很方便，但也可能遮住一個更基礎的問題：模型看起來會解題，卻沒有真的照著流程做。\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.00817\">When LLMs Stop Following Steps: A Diagnostic Study of Procedural Execution in Language Models\u003C\u002Fa> 就是直接抓這個落差，檢查模型能不能把簡單的算術程序按原樣跑完。\u003C\u002Fp>\u003Cp>這篇論文真正關心的，不是「模型會不會算」，而是「模型有沒有照做」。這個差別很重要。只要工作流程依賴固定步驟、狀態更新、或中間值傳遞，模型一旦跳步、提早收尾、或自己多加操作，最後答案就可能錯得很安靜。\u003C\u002Fp>\u003Ch2>這篇在補哪個洞\u003C\u002Fh2>\u003Cp>作者鎖定的是常見 benchmark 的盲點。最後答案正確，只能證明結果對；不能證明過程有被忠實執行。對開發者來說，這個差異很現實，因為很多 LLM 應用本來就是程序型任務：先解析輸入，再更新變數，接著依序套規則，最後輸出結果。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777875651857-35bu.png\" alt=\"LLM 會算，但不一定照步驟做\" 
class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>在這種情境下，模型就算偶爾靠捷徑答對，也不代表可靠。它可能在短流程表現正常，但一旦步驟變長、需要保留中間值、或輸出必須反映完整操作順序，就開始失真。這篇研究就是要把這個風險量化出來。\u003C\u002Fp>\u003Cp>論文使用的是一個診斷型 benchmark。任務本身刻意保持簡單：模型拿到一個分步的算術演算法，再加上兩個數字輸入，最後要回傳算出的結果。難點不在數學，而在程序長度變長，以及步驟之間有前後依賴。\u003C\u002Fp>\u003Ch2>方法怎麼做，白話版\u003C\u002Fh2>\u003Cp>這個 benchmark 的設計重點，是把「忠實執行指令」和「猜對答案」拆開。它不是要測廣泛推理能力，而是要看模型能不能按指定演算法逐步跑。這樣一來，研究者比較容易看出模型是在追蹤流程，還是在偷懶猜結果。\u003C\u002Fp>\u003Cp>有兩個設計很關鍵。第一，算術本身很簡單，所以不是在考高難度計算。第二，程序會越來越長，而且某些步驟要回頭依賴前面算出的中間值。這就形成一個控制良好的壓力測試：流程一拉長，模型還能不能維持一致的執行軌跡。\u003C\u002Fp>\u003Cp>這篇研究總共評估 14 個模型、55 個 datasets。原始摘要沒有提供更多 benchmark 細節，所以沒有其他數字可以再延伸。不過，這樣的設定已經足夠看出一個趨勢：程序越長，模型越容易失去忠實度。\u003C\u002Fp>\u003Cul>\u003Cli>輸入：分步算術演算法與兩個數值\u003C\u002Fli>\u003Cli>任務：回傳最後計算結果\u003C\u002Fli>\u003Cli>壓力來源：更長的流程、前後依賴的中間值\u003C\u002Fli>\u003Cli>規模：14 個模型、55 個 datasets\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>結果到底說了什麼\u003C\u002Fh2>\u003Cp>最直接的結果，是 first-answer accuracy 隨著程序變長而大幅下滑。跨 14 個模型與 55 個 datasets，平均 first-answer accuracy 從 5-step procedures 的 61%，掉到 95-step procedures 的 20%。對一個算術本身不難的任務來說，這個落差很大。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777875644936-ooat.png\" alt=\"LLM 會算，但不一定照步驟做\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這代表問題不只是「題目太難」。模型更像是在維持執行軌跡時失手了。也就是說，短流程時看起來還行，步驟一多、依賴一深，可靠度就明顯下降。\u003C\u002Fp>\u003Cp>作者也分析了 generation-level 的失敗模式，讓結果比單一正確率更有畫面。文中提到幾種反覆出現的模式：missing answers、premature answers、self-correction after an initial error、under-executed traces，以及 hallucinated extra steps。這些都不是小瑕疵，而是模型明顯偏離原始程序的訊號。\u003C\u002Fp>\u003Cp>摘要沒有提供更細的 benchmark 分項，也沒有更完整的表格數字。換句話說，這是一篇診斷研究，不是那種把各種系統性能一口氣攤開的全面評測。\u003C\u002Fp>\u003Ch2>對開發者有什麼影響\u003C\u002Fh2>\u003Cp>如果你把 LLM 放進需要精準步驟順序的流程，這篇研究是個警訊。模型可能在推理型 
benchmark 看起來很強，但一旦要求它忠實執行程序，表現就不一定穩。這包含結構化資料轉換、規則式工作流、多步驟計算，或任何需要保留中間狀態的 prompt。\u003C\u002Fp>\u003Cp>對工程團隊來說，重點不是不用 LLM，而是不要把「答案看起來對」和「真的照程序做」混為一談。只檢查最後輸出，很容易漏掉提早結束、跳過步驟、或自己補出不存在操作的情況。這些錯誤一旦進到自動化流程，成本可能不低。\u003C\u002Fp>\u003Cp>這篇研究也有它的限制。它測的是算術程序，所以是受控的診斷情境，不是完整的真實世界工作流。摘要沒有主張更大範圍的產品部署結果，也沒有提供超出上述 aggregate accuracy 與失敗類型以外的 benchmark 細節。所以它最適合被讀成一個具體弱點的證據，而不是對 LLM 推理能力的總結判決。\u003C\u002Fp>\u003Cp>但核心訊息很清楚：最後答案正確，不代表過程有被忠實執行。只要你的應用在乎流程一致性，就不能只看單次生成結果。這篇研究提供了一個很直接的理由，去做更多 guardrails。\u003C\u002Fp>\u003Cp>實務上，最值得做的事，是直接測 step fidelity。只要 prompt 或 workflow 裡有順序，就不要假設模型有照著走，除非你真的驗過。這篇研究顯示，流程一拉長，可靠度會掉得很快，即使底層任務本身簡單到讓人以為很安全。\u003C\u002Fp>\u003Cp>換句話說，LLM 不只是會不會答對的問題，還有會不會老實照做的問題。對想把它接進產品的人來說，這篇論文提醒得很實際：如果流程不能錯，光靠一個生成結果通常不夠。\u003C\u002Fp>","這篇診斷研究直接測 LLM 能不能照程序一步一步執行。結果顯示，步驟一拉長，模型的程序忠實度就明顯下滑，算術本身卻不難。","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.00817",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777875651857-35bu.png",[13,14,15,16,17],"LLM","procedural execution","instruction following","diagnostic benchmark","step fidelity","zh",1,false,"2026-05-04T06:20:26.283075+00:00","2026-05-04T06:20:26.192+00:00","done","6f9d28f7-e8f0-4354-874f-bcd3cbf63610","llms-procedural-execution-diagnostic-study-zh","research","f414aa1a-27e8-45d9-b407-d542121915d2","published","2026-05-04T09:00:13.596+00:00",[31,33,35,37,39],{"name":17,"slug":32},"step-fidelity",{"name":13,"slug":34},"llm",{"name":16,"slug":36},"diagnostic-benchmark",{"name":14,"slug":38},"procedural-execution",{"name":15,"slug":40},"instruction-following",{"id":27,"slug":42,"title":43,"language":44},"llms-procedural-execution-diagnostic-study-en","When LLMs Stop Following Procedural 
Steps","en",[46,52,58,64,70,76],{"id":47,"slug":48,"title":49,"cover_image":50,"image_url":50,"created_at":51,"category":26},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":53,"slug":54,"title":55,"cover_image":56,"image_url":56,"created_at":57,"category":26},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":59,"slug":60,"title":61,"cover_image":62,"image_url":62,"created_at":63,"category":26},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":65,"slug":66,"title":67,"cover_image":68,"image_url":68,"created_at":69,"category":26},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":71,"slug":72,"title":73,"cover_image":74,"image_url":74,"created_at":75,"category":26},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 
基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":77,"slug":78,"title":79,"cover_image":80,"image_url":80,"created_at":81,"category":26},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[83,88,93,98,103,108,113,118,123,128],{"id":84,"slug":85,"title":86,"created_at":87},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":89,"slug":90,"title":91,"created_at":92},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 
強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]