[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-kv-cache":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"422aade2-8ccd-4b7c-b4a5-7836c6353ec7","KV cache","kv-cache",13,"KV cache 是大型語言模型推論時最吃記憶體的部分之一，長上下文、低延遲服務與雲端部署都會直接受它影響。這個主題涵蓋量化、壓縮、HBM 容量與頻寬取捨，以及像 TurboQuant 這類降低 KV cache 成本的方法。","KV cache is the working memory that lets LLMs reuse past tokens during inference, and it often becomes the main limit on context length, latency, and serving cost. This tag covers quantization, compression, HBM capacity and bandwidth trade-offs, and papers like TurboQuant.",[12,21,28,35,42],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"a259bf3b-e800-46fa-8550-605b5b8f4115","why-turboquant-changes-kv-cache-debate-en","Why TurboQuant changes the KV cache debate","TurboQuant makes KV cache compression a theoretical win, not just an engineering trick.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778016643980-zx6u.png","en","2026-05-05T21:30:24.349733+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":17,"image_url":26,"cover_image":26,"language":19,"created_at":27},"fdb997e1-6691-46c5-bb2d-e1ca3f730c25","turboquant-google-paper-explained-en","TurboQuant Explained: Why Google’s New Paper Matters","Google’s TurboQuant paper targets KV cache bottlenecks with lower-bit quantization, aiming to cut LLM memory use and inference costs.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775160958409-7jj5.png","2026-04-02T20:15:40.601225+00:00",{"id":29,"slug":30,"title":31,"summary":32,"category":17,"image_url":33,"cover_image":33,"language":19,"created_at":34},"d4867ede-353b-4812-aac7-aebe28ef3613","turboquant-wont-fix-memory-crunch-en","TurboQuant Won’t Fix the Memory Crunch","Google’s TurboQuant can cut KV-cache memory use 6x, but longer contexts may keep DRAM and NAND demand climbing.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775132152400-1kew.png","2026-04-02T12:15:32.095995+00:00",{"id":36,"slug":37,"title":38,"summary":39,"category":17,"image_url":40,"cover_image":40,"language":19,"created_at":41},"cdcfe76f-c9bf-44ac-98d9-e9041d414d6c","sebastian-raschka-llm-architecture-gallery-en","Sebastian Raschka’s LLM Architecture Gallery","Raschka’s gallery compares GPT-2, Llama 3, OLMo 2, DeepSeek, and Qwen stacks with exact layer, cache, and attention data.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775121663908-8tcs.png","2026-04-02T07:27:33.848813+00:00",{"id":43,"slug":44,"title":45,"summary":46,"category":17,"image_url":47,"cover_image":47,"language":19,"created_at":48},"27f0d044-b9f9-4a58-99e8-1a181ea32f19","universal-yoco-efficient-depth-scaling-en","Universal YOCO aims to scale depth without cache bloat","YOCO-U mixes recursive computation with efficient attention to scale LLM depth while keeping inference overhead and KV cache growth in check.","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775115621645-wqql.png","2026-04-02T06:06:26.960639+00:00"]