[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-turboquant":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"d8bd452a-7bae-471e-99b0-b081e34f288d","TurboQuant","turboquant",13,"TurboQuant 聚焦 LLM 推論時最吃記憶體的 KV cache，透過低位元量化與向量量化降低佔用，進而壓低伺服器成本並提升吞吐量；同時也牽涉到 QJL、PolarQuant、benchmark 公平性與引用爭議。","TurboQuant targets the KV-cache bottleneck in LLM inference, using low-bit and vector quantization to reduce memory pressure and server cost. The topic also connects to QJL, PolarQuant, benchmark fairness, and citation disputes.",[12,21,28,35,42,49,56,63,71,78],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","TurboQuant 傳聞指向 Google 搜尋評分範圍擴大，小型網站可能因此更容易進入排名名單。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","zh","2026-05-15T10:20:27.319472+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":17,"image_url":26,"cover_image":26,"language":19,"created_at":27},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","vLLM 首次大規模比較 TurboQuant 與 FP8 KV-cache。結果很直白：FP8 在速度上更穩，TurboQuant 的高壓縮版本則常掉準確率。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":29,"slug":30,"title":31,"summary":32,"category":17,"image_url":33,"cover_image":33,"language":19,"created_at":34},"b26bb416-9349-48f2-8218-2487e74e97f7","why-turboquant-changes-kv-cache-debate-zh","為什麼 TurboQuant 重新定義 KV cache 辯論","TurboQuant 不是單純把 KV cache 壓小，而是把壓縮從工程技巧提升成可證明的效率方案。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778016645951-x6mu.png","2026-05-05T21:30:23.533526+00:00",{"id":36,"slug":37,"title":38,"summary":39,"category":17,"image_url":40,"cover_image":40,"language":19,"created_at":41},"4242e1bf-4f38-488d-9f92-ccb4f5b70319","turboquant-eden-citation-fight-zh","TurboQuant、EDEN 與引用爭議","TurboQuant 主打 KV-cache 6x 壓縮，卻被指和 DRIVE、EDEN 同源，還有 scale 選擇與 benchmark 公平性爭議。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1777467063814-l8dk.png","2026-04-29T12:50:45.096442+00:00",{"id":43,"slug":44,"title":45,"summary":46,"category":17,"image_url":47,"cover_image":47,"language":19,"created_at":48},"82766fdc-4368-445d-bb4a-03377726df02","turboquant-cuts-memory-use-without-accuracy-loss-zh","TurboQuant 省 6 倍記憶體，還不掉準確率","Google Research 發表 TurboQuant，主打記憶體用量降到 1\u002F6、推論快 8 倍，且在報告測試中沒有準確率損失。這篇看它怎麼改 AI 伺服器成本。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775161134112-ftrj.png","2026-04-02T20:18:39.266389+00:00",{"id":50,"slug":51,"title":52,"summary":53,"category":17,"image_url":54,"cover_image":54,"language":19,"created_at":55},"fdb08bdf-a3bd-4c4d-acaf-ce8035f24449","turboquant-google-paper-explained-zh","TurboQuant 是什麼？Google 新論文重點","Google 的 TurboQuant 盯上 LLM 的 KV cache 瓶頸，用低位元量化降低記憶體用量與推論成本。這篇帶你看它在解什麼問題、和其他優化法差在哪。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775160957331-6iua.png","2026-04-02T20:15:40.07166+00:00",{"id":57,"slug":58,"title":59,"summary":60,"category":17,"image_url":61,"cover_image":61,"language":19,"created_at":62},"6ea121bb-a78e-4bc2-bda3-9be1e048ab95","googles-turboquant-cuts-llm-memory-costs-zh","Google TurboQuant 壓低 LLM 記憶體成本","Google 推出 TurboQuant，結合 QJL 與 PolarQuant，主打壓低 vector quantization 的記憶體開銷，並宣稱 LLM inference 最高可快 8 倍。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775160769707-5e2g.png","2026-04-02T20:12:31.803679+00:00",{"id":64,"slug":65,"title":66,"summary":67,"category":68,"image_url":69,"cover_image":69,"language":19,"created_at":70},"d233c90c-e7d8-418d-a8dc-f76080f1b968","turboquant-fast-cold-starts-rust-gpu-zh","TurboQuant、冷啟動與 GPU Rust","TurboQuant 把 KV cache 壓到 4.6 倍，GPU state restore 盯上 32B 模型冷啟動，Rust 也更深入 CUDA 開發。","tools","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775146380823-5d5u.png","2026-04-02T16:12:38.23896+00:00",{"id":72,"slug":73,"title":74,"summary":75,"category":17,"image_url":76,"cover_image":76,"language":19,"created_at":77},"9d1ed0f2-aace-46ce-9b0a-0c0d8655e8e8","turboquant-wont-fix-memory-crunch-zh","TurboQuant 解不了記憶體荒","Google 的 TurboQuant 可把 KV-cache 記憶體用量降到 1\u002F6，但更長上下文、更多 agent 與更高吞吐，可能把 DRAM 和 NAND 需求繼續往上推。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775132150405-6fvw.png","2026-04-02T12:15:31.810812+00:00",{"id":79,"slug":80,"title":81,"summary":82,"category":83,"image_url":84,"cover_image":84,"language":19,"created_at":85},"0dcc2c61-c2a6-480d-adb8-dd225fc68914","march-2026-ai-model-news-what-mattered-zh","2026 年 3 月 AI 模型新聞重點","2026 年 3 月的 AI 圈看起來很安靜，其實重點早就不在新模型。真正有料的是推論速度、KV cache 壓縮、Agent 權限控制，還有 OpenAI 內部重組。對開發者來說，這些變化比排行榜多 1 分更實際。","model-release","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1774516475683-ar4c.png","2026-03-26T07:32:08.386348+00:00"]