[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-rubric-based-dpo-visual-preference-tuning-zh":3,"tags-rubric-based-dpo-visual-preference-tuning-zh":30,"related-lang-rubric-based-dpo-visual-preference-tuning-zh":39,"related-posts-rubric-based-dpo-visual-preference-tuning-zh":43,"series-research-d3ac3e85-c296-4015-94f0-559222351ea3":80},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"d3ac3e85-c296-4015-94f0-559222351ea3","用 rubric 讓視覺偏好訓練更精準","\u003Cp>很多偏好最佳化流程，都默認一件事：只要 A 比 B 好，模型就能從這組 pair 學到東西。但這篇 arXiv 論文提醒我們，這個假設放到多模態任務時，常常太粗。論文提出 \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.13029\">Visual Preference Optimization with Rubric Rewards\u003C\u002Fa>，簡稱 rDPO，想把原本只看整體輸贏的訊號，改成更像 checklist 的、針對單一樣本設計的 rubric。\u003C\u002Fp>\u003Cp>白話一點說，視覺任務的好壞，常常不是「整體感覺比較好」而已，而是有沒有看對物件、有沒有跟著指令做、以及有沒有處理好圖像裡那些很細的限制。若偏好資料沒有把這些差異寫清楚，最佳化器很可能學到錯的重點。rDPO 的出發點，就是讓偏好資料更貼近多模態系統真正需要的判準。\u003C\u002Fp>\u003Ch2>這篇論文想修正什麼問題\u003C\u002Fh2>\u003Cp>論文先從 DPO 的老問題切入：方法本身沒有魔法，關鍵還是背後的偏好資料。作者指出，在多模態場景裡，既有流程常依賴 off-policy 擾動，或是只看結果好壞的粗訊號。這種訊號拿來做大方向排序可以，但不太適合細粒度的視覺推理。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776233216658-4juh.png\" alt=\"用 rubric 讓視覺偏好訓練更精準\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>原因很直接。圖像任務很少只是產生一個「整體上更好」的回答。回覆可以很順，但還是漏掉關鍵視覺細節。若資料蒐集只記錄整體結果，模型就不一定知道，對某一張圖、某一條指令來說，到底是哪個標準在區分好壞。\u003C\u002Fp>\u003Cp>所以這篇論文的核心主張是：如果任務本身是 instance-specific，那 reward 訊號也應該是 instance-specific。rDPO 不再只問「哪個答案比較好」，而是要求以 rubric-based evaluation 來拆解成 essential 與 additional c\u003Ca href=\"\u002Fnews\u002Fscenecritic-symbolic-evaluator-3d-scenes-zh\">rit\u003C\u002Fa>\u003Ca href=\"\u002Fnews\u002Fcertik-opens-ai-auditor-to-global-developers-zh\">er\u003C\u002Fa>ia。\u003C\u002Fp>\u003Ch2>rDPO 到底怎麼運作\u003C\u002Fh2>\u003Cp>rDPO 的做法，是為每個 image-instruction pair 建一份 checklist 式 rubric。這份 rubric 會列出評分所需的 crit\u003Ca href=\"\u002Fnews\u002Flayer-2-blockchain-scalability-explained-zh\">er\u003C\u002Fa>ia，而且是可以拿來評估任何 policy 的輸出，不會綁死在某個模型的語氣或輸出風格上。換句話說，rubric 描述的是這個例子真正重要的事，而不是某個模型看起來「比較像樣」的表達方式。\u003C\u002Fp>\u003Cp>論文還提到，instruction-rubric pool 是在離線階段先建立，再拿去重用於 on-policy data 的建構。這點很重要，因為它不是每次訓練步都重新做一次標註，而是先把可重用的 criteria 做好，之後再拿來配合目前 policy 產生的資料。\u003C\u002Fp>\u003Cp>從工程角度看，這等於把兩個常被拆開的流程接起來：一個是資料生成，一個是 reward 評估。on-policy 的好處，是資料更接近模型當下的行為；rubric 的好處，是把 feedback 拉回到更精準的 criterion-level，而不是只給一個 win\u002Floss 標籤。\u003C\u002Fp>\u003Cp>論文把這種設計視為更適合 visual preference optimization 的方式，因為 feedback 同時具備兩個特性：一是 local，因為它綁定特定的圖文樣本；二是 structured，因為它不是單一總分，而是由 essential 與 additional criteria 組成。\u003C\u002Fp>\u003Ch2>論文實際證明了什麼\u003C\u002Fh2>\u003Cp>這篇摘要給了三類結果：reward modeling benchmarks、downstream benchmarks，以及一個綜合性的 scalability benchmark。和很多只有概念沒有數字的摘要相比，這篇至少有一些可比較的數值。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776233209738-3gdb.png\" 
alt=\"用 rubric 讓視覺偏好訓練更精準\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>在 public reward modeling benchmarks 上，rubric-based prompting 對一個 30B-A3B judge 有「大幅」提升，並且把表現拉到接近 GPT-5.4。摘要沒有提供這個比較的完整分數，所以只能依照作者的文字判斷提升幅度很明顯，但無法從摘要精確算出差距。\u003C\u002Fp>\u003Cp>在 public downstream benchmarks 上，rubric-based filtering 讓 macro average 提升到 82.69。相對地，outcome-based filtering 會從 81.14 掉到 75.82。這一組數字很關鍵，因為它直接顯示：在作者的設定裡，單純看結果的過濾方式，不只不夠細，還可能把表現拉低；改成 rubric 之後，效果明顯更好。\u003C\u002Fp>\u003Cp>第三個結果是 scalability benchmark。rDPO 在一個 comprehensive benchmark 上拿到 61.01，優於 style-constrained baseline 的 52.36，也超過 59.48 的 base model。摘要沒有在這裡寫出 benchmark 名稱，所以最安全的解讀是：在該評測上，rDPO 的擴展性與整體表現都比 baseline 更好，甚至超過起始模型。\u003C\u002Fp>\u003Cp>但也要注意，摘要沒有提供訓練成本、延遲、標註吞吐量，或是 rubric 建置的實際難度。也就是說，這篇論文證明了方法「可能有效」，但還沒在摘要裡回答「要付出多少代價」。\u003C\u002Fp>\u003Ch2>對開發者有什麼影響\u003C\u002Fh2>\u003Cp>如果你在做 multimodal assistant、vision-language evaluator，或任何依賴 preference tuning 的系統，這篇論文其實在提醒一個常見失誤：太通用的偏好標籤，對圖像推理來說可能太粗。模型可能會學著產出看起來更好的回答，卻還是漏掉真正重要的視覺條件。\u003C\u002Fp>\u003Cp>rDPO 提供的方向很工程化：把 reward signal 做得更明確，而且要跟任務本身綁緊。這不只對訓練 policy 有用，對做 judge model 也有參考價值。摘要提到 rubric-based prompting 讓 30B-A3B judge 的表現大幅改善，這暗示它可能不只是 policy optimization 的技巧，也可能是評估品質的補強手段。\u003C\u002Fp>\u003Cp>另外，這篇論文也透露一個資料工程上的想法：先離線建立 instruction-rubric pool，再重複使用。這代表 rubric 不是每筆資料都重做一次，而是有機會攤提成本。對想把標註流程做規模化的團隊來說，這可能是最實用的部分。\u003C\u002Fp>\u003Ch2>限制與還沒說清楚的地方\u003C\u002Fh2>\u003Cp>摘要對結果講得不少，但對實作細節講得很少，所以還有幾個問題沒被回答。rubric 的建立成本，會不會比一般 preference labeling 高很多？不同標註者或自動 judge 對 rubric 的一致性如何？提升到底有多少來自更好的資料選擇，又有多少是 rubric 結構本身的功勞？\u003C\u002Fp>\u003Cp>還有一個更大的問題是可移植性。論文強調 rubrics 是 instance-specific，這正是它的優點，但也意味著它可能比較難直接套到差異很大的領域。若指令與圖像內容變化很大，團隊就需要一套夠穩的 rubric 生成流程，否則規模化會變得麻煩。\u003C\u002Fp>\u003Cp>另外，摘要沒有列出所有 benchmark 的名稱，也沒有附上完整 ablation。換句話說，雖然方向很清楚：on-policy data 加上 criterion-level feedback，通常比粗粒度 outcome filtering 更好；但真正的適用邊界，還得看完整論文才能判斷。\u003C\u002Fp>\u003Cp>總結來說，這篇論文的重點很直接：如果你的多模態偏好資料太模糊，最佳化器學到的東西也會太模糊。rDPO 想做的事，就是把視覺偏好學習變成 rubric-driven process，而摘要裡的數字顯示，這種額外結構確實有機會帶來實際收益。\u003C\u002Fp>\u003Cul>\u003Cli>核心想法：用 instance-specific rubric 取代粗粒度偏好標籤。\u003C\u002Fli>\u003Cli>訓練設計：先離線建立 instruction-rubric pool，再重用到 on-policy data 建構。\u003C\u002Fli>\u003Cli>已公開結果：reward modeling、downstream macro average、scalability benchmark 都有提升。\u003C\u002Fli>\u003Cli>主要限制：摘要沒有交代 rubric 成本、完整 benchmark 名稱與完整 ablation。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>對實作端來說，這篇的價值不只在分數。它提供的是一個設計模式：如果你的任務依賴細微的視覺判準，就先把判準寫清楚，再讓模型去優化。\u003C\u002Fp>","rDPO 用每個圖文任務的專屬 rubric 取代粗粒度偏好訊號，讓視覺偏好最佳化更細緻，並在過濾與 benchmark 上帶來提升。","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2604.13029",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776233216658-4juh.png",[13,14,15,16,17],"DPO","vision-language","preference optimization","rubric","reward modeling","zh",0,false,"2026-04-15T06:06:32.083225+00:00","2026-04-15T06:06:31.94+00:00","done","8e5a0aa1-e1e0-47b8-8fd2-53e7d8390af3","rubric-based-dpo-visual-preference-tuning-zh","research","b6739170-e7c9-4e98-b99b-a54670dafe59","published","2026-04-15T09:00:08.286+00:00",[31,32,34,36,38],{"name":16,"slug":16},{"name":17,"slug":33},"reward-modeling",{"name":15,"slug":35},"preference-optimization",{"name":13,"slug":37},"dpo",{"name":14,"slug":14},{"id":27,"slug":40,"title":41,"language":42},"rubric-based-dpo-visual-preference-tuning-en","Rubric-Based DPO for Visual Preference 
Tuning","en",[44,50,56,62,68,74],{"id":45,"slug":46,"title":47,"cover_image":48,"image_url":48,"created_at":49,"category":26},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":51,"slug":52,"title":53,"cover_image":54,"image_url":54,"created_at":55,"category":26},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":57,"slug":58,"title":59,"cover_image":60,"image_url":60,"created_at":61,"category":26},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":63,"slug":64,"title":65,"cover_image":66,"image_url":66,"created_at":67,"category":26},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":69,"slug":70,"title":71,"cover_image":72,"image_url":72,"created_at":73,"category":26},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":75,"slug":76,"title":77,"cover_image":78,"image_url":78,"created_at":79,"category":26},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[81,86,91,96,101,106,111,116,121,126],{"id":82,"slug":83,"title":84,"created_at":85},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":87,"slug":88,"title":89,"created_at":90},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":92,"slug":93,"title":94,"created_at":95},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My 
Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]