[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-attention":3},{"tag":4,"articles":10},{"id":5,"name":6,"slug":6,"article_count":7,"description_zh":8,"description_en":9},"daa534d2-e208-4693-968f-b9e6a4ef4e86","attention",3,"注意力機制是大型語言模型的核心，決定模型如何在長上下文中檢索資訊、維持狀態與控制計算成本。這個主題涵蓋傳統 Transformer、KV cache、長上下文優化，以及把 attention 與 state-space、記憶模組結合的新設計。","Attention is the core mechanism that lets LLMs route information across tokens, shaping long-context recall, state tracking, and compute cost. This topic covers classic Transformers, KV cache tradeoffs, and newer hybrids that blend attention with state-space or memory modules.",[11,20],{"id":12,"slug":13,"title":14,"summary":15,"category":16,"image_url":17,"cover_image":17,"language":18,"created_at":19},"8171cdaa-97e2-43fc-88f1-45be756c0a8e","persistent-visual-memory-lvml-visual-drift-en","Persistent Visual Memory fixes LVLM visual drift","PVM is a lightweight LVLM module that keeps visual information available during long generations, reducing visual signal decay.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777876266627-m0bt.png","en","2026-05-04T06:30:32.009827+00:00",{"id":21,"slug":22,"title":23,"summary":24,"category":16,"image_url":25,"cover_image":25,"language":18,"created_at":26},"c1aac50e-0c41-471c-946e-329652f04565","sessa-attention-inside-state-space-memory-en","Sessa: Attention and State-Space Memory for Long Context","Sessa mixes attention with recurrent state-space feedback to improve long-context recall, with power-law memory tails and strong benchmark results.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776751621598-1d0l.png","2026-04-21T06:06:37.564074+00:00"]