[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-unipool-shared-expert-pool-moe-zh":3,"tags-unipool-shared-expert-pool-moe-zh":35,"related-lang-unipool-shared-expert-pool-moe-zh":45,"related-posts-unipool-shared-expert-pool-moe-zh":49,"series-research-072a2114-1f7f-4d61-99f7-be82c686c286":86},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":19,"translated_content":10,"views":20,"is_premium":21,"created_at":22,"updated_at":22,"cover_image":11,"published_at":23,"rewrite_status":24,"rewrite_error":10,"rewritten_from_id":25,"slug":26,"category":27,"related_article_id":28,"status":29,"google_indexed_at":30,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":31,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":21},"072a2114-1f7f-4d61-99f7-be82c686c286","UniPool：共享 MoE 專家池","\u003Cp data-speakable=\"summary\">UniPool 把 \u003Ca href=\"\u002Ftag\u002Fmoe\">MoE\u003C\u002Fa> 的分層專家改成全域共享池，讓不同層共用同一批 experts，降低重複參數。\u003C\u002Fp>\u003Cp>傳統 MoE Transformer 的做法很直覺：每一層都配自己的專家組。這種設計好理解，也好實作，但代價是專家容量會跟著層數一路線性增加。\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.06665\">UniPool: A Globally Shared Expert Pool for Mixture-of-Experts\u003C\u002Fa> 認為，這個預設其實可能太死板了，尤其當某些層的專家容量並沒有真的被充分用滿時。\u003C\u002Fp>\u003Cp>這篇摘要想處理的痛點很明確。若大量專家參數只是依照層數重複複製，那 MoE 雖然名義上更有容量，實際上卻可能在付出不必要的參數成本。作者提到的 routing probe 顯示，把較深層的 learned top-k router 換成 uniform random routing，在多個 produ\u003Ca href=\"\u002Fnews\u002Factcam-joint-camera-motion-control-zh\">ct\u003C\u002Fa>ion MoE 模型上只讓下游準確率下降 1.0 到 1.6 個百分點。這不代表 routing 不重要，但至少說明：有些層的專家配置，可能沒有標準架構想像中那麼不可替代。\u003C\u002Fp>\u003Ch2>這篇論文想解什麼問題\u003C\u002Fh2>\u003Cp>在常見的 MoE 架構裡，每一層都擁有自己的 experts。這樣的好處是邏輯清楚，層與層之間的容量切分也容易管理；但壞處是，專家參數會隨深度線性膨脹。對要做更大模型的團隊來說，這是一個很硬的成本結構：每多一層，通常就得再多養一批專家權重。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778221269156-lam7.png\" alt=\"UniPool：共享 MoE 專家池\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>UniPool 的出發點，就是打破這種「每層自帶一組專家」的綁定關係。它把專家容量視為一個全域共享資源，而不是每層私有的固定配額。模型仍然保留每層自己的 router，但 router 不再只指向本層專屬的 experts，而是去同一個共享池裡挑選。\u003C\u002Fp>\u003Cp>這個想法的價值，在於它重新定義了 MoE 的容量分配方式。MoE 本來就是為了用稀疏路由換取更高容量，同時避免\u003Ca href=\"\u002Ftag\u002F推論成本\">推論成本\u003C\u002Fa>暴增；如果架構本身因為層層重複而浪費了專家參數，那整個效率論述就會被削弱。UniPool 想做的，就是在不放棄 sparse expert routing 的前提下，讓容量配置更彈性。\u003C\u002Fp>\u003Ch2>UniPool 的方法怎麼運作\u003C\u002Fh2>\u003Cp>UniPool 的核心改動其實很簡單：把原本屬於各層的 experts，改成一個全域共享的 expert pool。每一層還是有自己的 router，負責決定哪些 \u003Ca href=\"\u002Ftag\u002Ftoken\">token\u003C\u002Fa> 要送去哪些 experts；差別在於，這些 router 指向的是同一批 experts，而不是各自獨立的一套。\u003C\u002Fp>\u003Cp>這樣做的結果，是不同層可以重複使用同一份專家容量。對模型來說，expert parameter 不再需要隨層數重複堆疊。對工程上來說，這等於把 expert capacity 從「每層固定配額」改成「全模型共用預算」。\u003C\u002Fp>\u003Cp>不過，共享也會帶來新問題。當所有層都能打到同一個 pool 時，某些 experts 可能會被過度使用，另一些則幾乎閒置。為了避免訓練失衡，論文加入了 pool-level auxiliary loss，目的是在整個共享池內平衡 expert utilization。作者也使用 NormRouter，並把它描述為能在共享 expert pool 中提供 sparse 且 scale-stable 的 routing。\u003C\u002Fp>\u003Cp>所以這不是單純把 experts 合併就結束了，而是「架構共享」加上「訓練控制」一起上。對想實作這個方法的人來說，重點不只是共享本身，還要注意共享之後\u003Ca href=\"\u002Fnews\u002Fhow-to-build-advanced-rag-in-n8n-zh\">怎麼\u003C\u002Fa>避免少數 experts 吃掉大部分流量。\u003C\u002Fp>\u003Ch2>論文實際證明了什麼\u003C\u002Fh2>\u003Cp>摘要中的實驗範圍是五個 LLaMA 架構規模：182M、469M、650M、830M、978M 參數。這些模型都在 \u003Ca href=\"\u002Fnews\u002Fibm-think-2026-control-over-ai-zh\">Th\u003C\u002Fa>e Pile 的 30B tokens 上訓練。作者表示，UniPool 在這些規模上都能穩定優於對應的 vanilla MoE baseline，指標包含 validation loss 與 perplexity。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778221272233-c2zw.png\" alt=\"UniPool：共享 MoE 專家池\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>摘要裡最具體的數字，是 UniPool 相較於 vanilla MoE，validation loss 最多可下降 0.0386。另一個重點是 reduced-pool 版本：它們只用了 vanilla expert-parameter budget 的 41.6% 到 66.7%，卻仍能在測試規模上達到與 layer-wise MoE 相當或更好的表現。這是這篇最值得注意的地方，因為它直接回應了 MoE 最常被質疑的問題之一：是不是一定要堆那麼多分層專家，效果才會好。\u003C\u002Fp>\u003Cp>作者也把這個結果延伸成一個 scaling 觀點。對 UniPool 來說，pool size 變成一個可以直接調整的 depth-scaling hyperparameter。換句話說，專家容量不必再預設跟深度線性成長；在這些實驗裡，sublinear 的成長方式也能維持，甚至超過原本的 layer-wise 設計。\u003C\u002Fp>\u003Cp>摘要還提到，UniPool 的好處可以和更細粒度的 expert decomposition 一起使用。雖然摘要沒有進一步展開細節，但至少可以看出這個共享池不是只能在單一設定下才有效的特殊技巧。\u003C\u002Fp>\u003Ch2>對開發者有什麼影響\u003C\u002Fh2>\u003Cp>如果你在做或調 MoE 系統，這篇的訊息很直接：expert placement 可能比傳統的分層專屬設計更有彈性。UniPool 提供了一種把 expert capacity 視為共享資源的方式，理論上可以減少參數重複，也讓深度擴展的成本不必那麼快上升。\u003C\u002Fp>\u003Cp>對實務上的工程團隊來說，這個方向至少有幾個可想像的好處：\u003C\u002Fp>\u003Cul>\u003Cli>在相同模型規模下，可能需要更少的 expert 參數。\u003C\u002Fli>\u003Cli>pool size 變成明確可調的容量旋鈕，不必被層數綁死。\u003C\u002Fli>\u003Cli>某些 MoE 層可能不需要完全獨立的 expert set，也能維持效果。\u003C\u002Fli>\u003Cli>訓練時可以用 pool-level 的機制去平衡共享 experts 的使用率。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>不過，這篇摘要沒有提供推論 latency、throughput、記憶體拆分或部署成本的數字，所以不能直接推論它在系統層面一定更快或更省。它目前最強的證據，還是來自訓練後的 validation loss 與 perplexity 改善，以及更低 expert budget 下仍能維持表現。\u003C\u002Fp>\u003Ch2>還有哪些限制要注意\u003C\u002Fh2>\u003Cp>從摘要能看到的資訊來說，這篇確實給了很清楚的方向，但還不是 production-ready 的完整答案。首先，摘要沒有公開完整 benchmark 細節，所以我們看不到更細的訓練設定、路由失衡情況，或不同 pool size 下的敏感度分析。\u003C\u002Fp>\u003Cp>其次，實驗範圍雖然涵蓋五個 LLaMA 規模，但仍然是特定架構、特定資料集、特定訓練條件下的結果。是否能直接推到其他模型家族、其他資料分布，或更大規模的線上服務情境，摘要沒有給出答案。\u003C\u002Fp>\u003Cp>最後，這篇的核心主張是「分層專屬 experts 可能太保守」。UniPool 用全域共享池證明，MoE 的 capacity allocation 不一定要照傳統方式切得那麼死。在它展示的實驗裡，這種做法不只沒有傷害表現，還能在較少 expert 參數下維持甚至改善驗證指標。對開發者來說，這是一個值得認真看的架構替代方案。\u003C\u002Fp>","UniPool 把 MoE 的分層專家改成全域共享池，減少重複參數，並在五個 LLaMA 規模模型上改善驗證損失。","arxiv.org","https:\u002F\u002Farxiv.org\u002Fabs\u002F2605.06665",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778221269156-lam7.png",[13,14,15,16,17,18],"Mixture-of-Experts","MoE","expert pool","routing","LLaMA","validation loss","zh",1,false,"2026-05-08T06:20:40.070989+00:00","2026-05-08T06:20:39.992+00:00","done","edf414e0-b118-4e0d-b91a-dea7ddbd6aa4","unipool-shared-expert-pool-moe-zh","research","1e4ba03d-b371-427a-8d9e-d694f09827b1","published","2026-05-08T09:00:14.125+00:00",[32,33,34],"UniPool 把分層專屬 experts 改成全域共享池，減少重複參數。","在五個 LLaMA 規模與 The Pile 30B tokens 的訓練中，UniPool 改善 validation loss 與 perplexity。","摘要沒有提供推論延遲、吞吐量或部署成本數字，系統層面的收益仍待驗證。",[36,38,40,41,43],{"name":17,"slug":37},"llama",{"name":39,"slug":39},"mixture-of-experts",{"name":16,"slug":16},{"name":15,"slug":42},"expert-pool",{"name":14,"slug":44},"moe",{"id":28,"slug":46,"title":47,"language":48},"unipool-shared-expert-pool-moe-en","UniPool shares MoE experts across layers","en",[50,56,62,68,74,80],{"id":51,"slug":52,"title":53,"cover_image":54,"image_url":54,"created_at":55,"category":27},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":57,"slug":58,"title":59,"cover_image":60,"image_url":60,"created_at":61,"category":27},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":63,"slug":64,"title":65,"cover_image":66,"image_url":66,"created_at":67,"category":27},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":69,"slug":70,"title":71,"cover_image":72,"image_url":72,"created_at":73,"category":27},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":75,"slug":76,"title":77,"cover_image":78,"image_url":78,"created_at":79,"category":27},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":81,"slug":82,"title":83,"cover_image":84,"image_url":84,"created_at":85,"category":27},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[87,92,97,102,107,112,117,122,127,132],{"id":88,"slug":89,"title":90,"created_at":91},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":123,"slug":124,"title":125,"created_at":126},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":128,"slug":129,"title":130,"created_at":131},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":133,"slug":134,"title":135,"created_at":136},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]