[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-long-context":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"9bacdf64-f75d-4c59-a836-d469cbe34dfc","long context","long-context",8,"長上下文指的是模型在一次推理中維持大量前後文的能力，牽涉記憶壓縮、檢索、快權重更新與推理穩定性。從 1M\u002F2M token 視窗到 state-space、TTT 與 agent 工作流，都是它的實作重點。","Long context refers to an LLM’s ability to keep and use very large histories in one pass, shaping memory design, retrieval, fast-weight updates, and stable reasoning. It shows up in 1M-2M token windows, state-space memory, TTT, and agent workflows.",[12,21,29,36],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"d63e9d93-e613-4bbf-8135-9599fde11d08","why-xiaomi-mimo-v25-pro-changes-coding-agents-zh","為什麼 Xiaomi 的 MiMo-V2.5-Pro 改變的是 Coding …","MiMo-V2.5-Pro 的重點不在聊天能力，而在長時間、重工具呼叫的 coding agent 工作；它代表 AI 競爭焦點正從會說話，轉向能把任務做完。","model-release","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689858139-v38e.png","zh","2026-05-13T16:30:27.893951+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"4700e9ba-80dd-47e9-9665-7461287fbbcb","sessa-attention-inside-state-space-memory-zh","Sessa 把注意力放進狀態空間記憶","Sessa 把 attention 放進 state-space 的回饋路徑，想同時保留長上下文檢索與穩定記憶。摘要主打 power-law 記憶尾巴，並宣稱長上下文 benchmark 表現領先。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751615013-ugp9.png","2026-04-21T06:06:37.215599+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":26,"image_url":34,"cover_image":34,"language":19,"created_at":35},"75d63765-ec7c-4833-8c77-5caabb7b5c46","in-place-ttt-llms-adapt-at-inference-zh","In-Place TTT 讓 LLM 推理時自適應","這篇論文把 test-time training 做成可直接嵌入 LLM 的推理更新機制，讓模型在長上下文下用 fast weights 即時適應，不必整個重訓。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775628411507-jici.png","2026-04-08T06:06:33.015125+00:00",{"id":37,"slug":38,"title":39,"summary":40,"category":41,"image_url":42,"cover_image":42,"language":19,"created_at":43},"f8c44ca5-e1b5-4b51-a7e5-61cdf8fa5ab9","prompt-engineering-agents-structured-outputs-zh","Agent 與結構化輸出提示詞實戰","LLM 進到生產環境後，提示詞不再是寫得漂亮就好。這篇拆解推理、長上下文、JSON 合約與 agent 迴圈，講清楚怎麼把 GPT、Claude 和本地模型用得更穩。","ai-agent","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775164928194-j63i.png","2026-04-02T21:21:45.59991+00:00"]