[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-inference":3},{"tag":4,"articles":10},{"id":5,"name":6,"slug":6,"article_count":7,"description_zh":8,"description_en":9},"0750e826-30ea-499e-858d-2c46a7bfe1fb","inference",6,"Inference 指的是模型在部署後進行推理與生成的階段，牽涉延遲、吞吐量、GPU 排程、記憶體壓縮與成本控制。從 Kubernetes AI 控制平面到量化與 TensorRT-LLM，這是 AI 走向生產環境的核心層。","Inference is the production stage where models serve predictions or generate outputs, so latency, throughput, GPU scheduling, memory footprint, and cost all matter. Recent work spans Kubernetes as an AI control plane, quantization, and TensorRT-LLM optimizations.",[11,20,28,35],{"id":12,"slug":13,"title":14,"summary":15,"category":16,"image_url":17,"cover_image":17,"language":18,"created_at":19},"37045a8c-9166-4ba7-8f62-fcd8e0593665","ae-llm-adaptive-efficiency-optimization-zh","AE-LLM 要讓大模型更省算力","AE-LLM 主打大型語言模型的自適應效率最佳化，想在不固定耗算力的前提下，讓模型依工作負載調整效率；但摘要沒有公開完整 benchmark 細節。","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778051455312-7tw1.png","zh","2026-05-06T07:10:32.541013+00:00",{"id":21,"slug":22,"title":23,"summary":24,"category":25,"image_url":26,"cover_image":26,"language":18,"created_at":27},"b2f9469b-f74a-44b1-9e08-8b1539632542","kubernetes-becoming-ais-control-plane-zh","Kubernetes 正在變成 AI 控制平面","KubeCon Europe 2026 釋出明確訊號：Kubernetes 正從容器編排，轉向 AI 基礎設施控制平面，重點落在 inference、GPU 與開放標準。","industry","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775178595353-m3ll.png","2026-04-03T01:09:30.415473+00:00",{"id":29,"slug":30,"title":31,"summary":32,"category":25,"image_url":33,"cover_image":33,"language":18,"created_at":34},"779f5798-9c39-4ce2-95d7-f0abfd24a695","five-ai-infra-frontiers-bessemer-2026-zh","Bessemer 看準的 5 個 AI 基礎設施前線","Bessemer 2026 AI infra 藍圖指向 memory、continual learning、RL、inference 與 world models。重點不是更大模型，而是讓 AI 真正進到生產環境。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775164388114-uo7t.png","2026-04-02T21:12:39.852377+00:00",{"id":36,"slug":37,"title":38,"summary":39,"category":16,"image_url":40,"cover_image":40,"language":18,"created_at":41},"6ea121bb-a78e-4bc2-bda3-9be1e048ab95","googles-turboquant-cuts-llm-memory-costs-zh","Google TurboQuant 壓低 LLM 記憶體成本","Google 推出 TurboQuant，結合 QJL 與 PolarQuant，主打壓低 vector quantization 的記憶體開銷，並宣稱 LLM inference 最高可快 8 倍。","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775160769707-5e2g.png","2026-04-02T20:12:31.803679+00:00"]