[
  {"data": 1, "prerenderedAt": -1},
  ["ShallowReactive", 2],
  {"tag-large-language-models": 3},
  {"tag": 4, "articles": 11},
  {"id": 5, "name": 6, "slug": 7, "article_count": 8, "description_zh": 9, "description_en": 10},
  "64c461d0-6a5a-4e83-a1a3-cfde93450a4d",
  "large language models",
  "large-language-models",
  4,
  "大型語言模型（LLM）正從聊天工具走向基礎AI層，牽動模型訓練、推理成本、能力評測、提示工程與可解釋性等議題。這個主題也涵蓋模型安全、企業合作與部署策略，影響產品設計與算力布局。",
  "Large language models are becoming a core layer of AI systems, shaping how teams train, evaluate, prompt, and deploy models. This topic covers model safety, explainability, inference cost, and the business deals that determine who gets access to compute and capability.",
  [12, 21],
  {"id": 13, "slug": 14, "title": 15, "summary": 16, "category": 17, "image_url": 18, "cover_image": 18, "language": 19, "created_at": 20},
  "c8144dbd-f25d-40d8-82e9-0b9125de95b3",
  "selective-llm-regularization-recommenders-zh",
  "選擇性 LLM 正則化推薦器",
  "這篇論文在談怎麼把 LLM 當成訓練時的輔助訊號，選擇性地做正則化，提升推薦模型，但不必重寫整套推薦系統。",
  "research",
  "https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778053264719-obld.png",
  "zh",
  "2026-05-06T07:40:36.630123+00:00",
  {"id": 22, "slug": 23, "title": 24, "summary": 25, "category": 17, "image_url": 26, "cover_image": 26, "language": 19, "created_at": 27},
  "37045a8c-9166-4ba7-8f62-fcd8e0593665",
  "ae-llm-adaptive-efficiency-optimization-zh",
  "AE-LLM 要讓大模型更省算力",
  "AE-LLM 主打大型語言模型的自適應效率最佳化，想在不固定耗算力的前提下，讓模型依工作負載調整效率；但摘要沒有公開完整 benchmark 細節。",
  "https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778051455312-7tw1.png",
  "2026-05-06T07:10:32.541013+00:00"
]