# Parallel-SFT aims to make code RL transfer across languages

*Parallel-SFT runs SFT on functionally equivalent programs in multiple languages, aiming to make the zero-shot cross-language transfer of subsequent code RL steadier, especially for low-resource programming languages.*

Many code models look strong in Python and C++, then fall off sharply on low-resource programming languages. [Parallel-SFT: Improving Zero-Shot Cross-Programming-Language Transfer for Code RL](https://arxiv.org/abs/2604.20835) targets exactly that gap. The authors' diagnosis is not that coding ability belongs to any one language, but that current training pipelines fail to push that ability into transferable representations.

The core idea is direct: if the model sees equivalent programs written in multiple languages before RL, it may first learn more language-agnostic internal representations. Reinforcement learning then has a better chance of spreading its gains to other languages instead of locking them into the source language.

## The pain point this paper targets

The paper focuses on zero-shot cross-programming-language transfer for code RL. In plain terms: run reinforcement learning for code generation on some source language, then check whether the RL gains carry over directly to other target languages, with no additional RL on those targets.

![Parallel-SFT aims to make code RL transfer across languages](https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776924588963-c6d5.png)

This matters because real-world programming-language data is highly skewed. Common languages such as Python and C++ have abundant data and models know them well; many low-resource languages have little data and usually noticeably worse results. The paper frames this as a combined data-and-training-setup problem: the model is not incapable of programming, its training signal is simply biased toward a handful of languages.

The authors also point out a key phenomenon: on Llama-3.1, doing RL on a source programming language does not automatically improve other target languages, and can even degrade them. RL gains do not transfer across languages for free, and that is exactly the gap Parallel-SFT tries to close.

## What Parallel-SFT actually does

The method does not change RL itself; it changes the supervised fine-tuning that comes before it. The authors' hypothesis: if the SFT stage builds an initialization that generalizes across languages, the subsequent RL stage can more easily carry its gains to other languages.

Parallel-SFT mixes "parallel programs" into the SFT data. These programs are functionally equivalent but implemented in multiple programming languages. Instead of only ever seeing a single-language version, the model sees the same task under different syntactic surfaces, along with the correspondences between them.

The design reads like semantic alignment: rather than treating each language as an independent skill, the model is nudged to notice that the underlying behavior is the same and only the expression differs. The paper does not claim the model thereby acquires some universal compiler-like representation; the claim is that this SFT initialization makes subsequent RL transfer better.

So Parallel-SFT is not a new RL algorithm. It is a pre-RL training strategy whose goal is to move the model to a starting point better suited to cross-language transfer, and then let RL amplify the effect.

## What the paper actually shows

The clearest result in the abstract is directional: after running RL on a Parallel-SFT model, the authors observe better generalization to unseen programming languages than the baseline setup. The abstract does not publish full benchmark details, so there are no numbers to compare here.

![Parallel-SFT aims to make code RL transfer across languages](https://xxdpdyhzhpamafnrdkyq.supabase.co/storage/v1/object/public/covers/inline-1776924604704-bx3r.png)

The paper also analyzes internal representations. The authors report that Parallel-SFT makes the latent space more function-oriented: programs with the same functionality in different languages sit closer together in representation space. They argue this tighter clustering may be one reason the method improves post-RL cross-language transfer.

This is worth noting. If the improvement came from some surface-level fine-tuning trick, it would likely be fragile; but if the model really starts organizing representations around what a program does rather than what its syntax looks like, cross-language tasks stand to benefit. For code RL, this is a sensible direction.

That said, this is as far as the abstract's evidence goes. It says the method works, but does not spell out the full evaluation tasks, the size of the gains, which languages are covered, or whether the benefit holds consistently across model sizes and RL objectives.

## What this means for developers

If you build code models, coding assistants, or agents that need multi-language support, the paper's message is direct: don't focus only on the RL recipe; the SFT initialization may matter just as much. Many people put their attention on rewards, rollouts, and policy updates, but this work is a reminder that the representations a model learns first shape whether later RL gains extend across languages.

For teams with limited data resources, it also offers a practical angle: if target-language data is scarce, you may be able to teach the shared semantics first with multi-language equivalent programs, then run RL. That does not make the problem disappear, but it at least converts source-language supervision into a more transferable form.

- Use aligned multi-language implementations to teach shared semantics, not just memorized syntax.
- Don't assume RL gains on one language will replicate on others for free.
- Treat SFT as representation shaping, not just instruction following.
- For long-tail programming languages, upstream data design may matter more than downstream optimization.

From an engineering standpoint, the paper also makes a broader point: if your system must generalize across formats, dialects, or languages, showing the model parallel samples first may be steadier than simply turning up the optimization pressure. It is not the flashiest approach, but it is often the more practical one.

## Limitations and open questions

The abstract's limits are equally clear. It does not say how many programming languages were used, nor which were the source and target languages. Whether all language families benefit equally cannot be read from the abstract either.

Another practical constraint: parallel programs are themselves hard to obtain. For low-resource languages, functionally equivalent, mutually aligned program data may be even scarcer than ordinary training data. In other words, the method is conceptually clean, but building the data may itself be the barrier.

Also, the representation analysis is persuasive but not conclusive. Functionally similar programs sitting closer in latent space is consistent with better transfer, but it does not establish causation. The authors' claim is better read as a plausible mechanism that still needs further validation.

Even so, the work leaves a very concrete reminder: success in code RL is not only about how the reward is designed or how clean the rollouts are. If you want a model that truly crosses languages, you may need to push its semantic representations toward "function" rather than "syntax" before RL begins.
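As an appendix, the "mix parallel programs into SFT data" step can be sketched in a few lines. This is my own illustration under assumed conventions, not the paper's actual data format: `make_parallel_sft_samples`, the prompt template, and the dict layout are all hypothetical. The point is only that one task with equivalent implementations in several languages yields one SFT sample per language, so the model sees the same task under different syntactic surfaces.

```python
# Sketch: fold parallel programs into an SFT mixture.
# All names here (make_parallel_sft_samples, the prompt template,
# the sample dict layout) are illustrative assumptions, not the
# paper's actual data pipeline.

def make_parallel_sft_samples(task_prompt, implementations):
    """Turn one task and its per-language equivalent solutions
    into one (prompt, completion) SFT sample per language."""
    samples = []
    for lang, code in implementations.items():
        prompt = f"Solve the following task in {lang}:\n{task_prompt}"
        samples.append({"prompt": prompt, "completion": code})
    return samples

task = "Return the sum of a list of integers."
parallel = {
    "Python": "def total(xs):\n    return sum(xs)",
    "Lua": ("function total(xs)\n  local s = 0\n"
            "  for _, x in ipairs(xs) do s = s + x end\n"
            "  return s\nend"),
}
# One sample per language, same underlying task.
mixture = make_parallel_sft_samples(task, parallel)
```

The deliberate choice is that the completions differ only in language while the task text stays fixed, which is the alignment signal the paper attributes the transfer gains to.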
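The latent-space clustering claim implies a measurable probe: average similarity between programs implementing the same function (across languages) versus programs implementing different functions. Below is a minimal sketch of such a probe with synthetic placeholder vectors; the paper would use hidden states from the model under study, and `intra_vs_inter` and the `groups` layout are my own assumptions.

```python
# Sketch of a clustering probe in the spirit of the paper's
# representation analysis: compare mean cosine similarity within
# function groups vs. across them. Vectors are synthetic stand-ins
# for model hidden states.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def intra_vs_inter(groups):
    """groups: {function_id: [embedding, ...]} where each list holds
    that function's implementations in different languages.
    Returns (mean intra-group sim, mean inter-group sim)."""
    intra, inter = [], []
    items = [(g, v) for g, vs in groups.items() for v in vs]
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            (gi, vi), (gj, vj) = items[i], items[j]
            (intra if gi == gj else inter).append(cosine(vi, vj))
    return sum(intra) / len(intra), sum(inter) / len(inter)

# Synthetic example: two "functions", each with two language variants.
groups = {
    "sum": [[1.0, 0.1], [0.9, 0.2]],
    "max": [[0.1, 1.0], [0.2, 0.9]],
}
intra, inter = intra_vs_inter(groups)
```

A more function-organized latent space, in the paper's terms, is one where `intra` rises relative to `inter`; the gap is the quantity a replication of the analysis would track before and after Parallel-SFT.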