[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tag-gemma-4":3},{"tag":4,"articles":11},{"id":5,"name":6,"slug":7,"article_count":8,"description_zh":9,"description_en":10},"12444745-e99b-4dea-aabc-084f61af541b","Gemma 4","gemma-4",4,"Gemma 4 是 Google 的開放權重模型系列，重點在長上下文、多模態與雲端部署彈性。它支援 256K context、vision、audio 與 Apache 2.0 授權，適合關注 Vertex AI、Cloud Run、GKE 和 TPU 的開發者。","Gemma 4 is Google’s open-weight model family focused on long context, multimodal input, and flexible cloud deployment. With 256K context, vision, audio, and Apache 2.0 licensing, it matters for teams using Vertex AI, Cloud Run, GKE, or TPUs.",[12,21,29],{"id":13,"slug":14,"title":15,"summary":16,"category":17,"image_url":18,"cover_image":18,"language":19,"created_at":20},"6dcd6852-b95a-4f62-853a-cc7eb32fff1a","gemma-4-assistant-models-faster-draft-tokens-en","Gemma 4 assistant models get faster draft tokens","Gemma 4 E2B and E4B assistant models use centroid masking to cut lm_head work about 45x with little quality loss.","tools","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778278254841-r19z.png","en","2026-05-08T22:10:34.02358+00:00",{"id":22,"slug":23,"title":24,"summary":25,"category":26,"image_url":27,"cover_image":27,"language":19,"created_at":28},"94f75563-cdbc-47f2-83c1-0589da2710e1","gemma-4-lands-on-google-cloud-en","Gemma 4 lands on Google Cloud","Google Cloud brings Gemma 4 to Vertex AI, Cloud Run, GKE, and TPUs, with 256K context, vision, audio, and Apache 2.0 licensing.","model-release","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775239441417-bla2.png","2026-04-03T18:03:41.196901+00:00",{"id":30,"slug":31,"title":32,"summary":33,"category":34,"image_url":35,"cover_image":35,"language":19,"created_at":36},"1433056d-0745-485f-9501-b6ce042e5516","aime-2026-leaderboard-qwen-leads-math-tests-en","AIME 2026 leaderboard: Qwen leads math tests","Qwen3.6 Plus tops the AIME 2026 math benchmark with 0.953, while 8 models show a wide gap in olympiad-style reasoning.","research","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775179307904-87vj.png","2026-04-03T01:21:30.991592+00:00"]