[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-open-source-llm-comparison-2026-zh":3,"tags-open-source-llm-comparison-2026-zh":35,"related-lang-open-source-llm-comparison-2026-zh":52,"related-posts-open-source-llm-comparison-2026-zh":56,"series-model-release-710ff4cc-d333-4bd8-b50a-e5522d430161":93},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":23,"translated_content":10,"views":24,"is_premium":25,"created_at":26,"updated_at":26,"cover_image":11,"published_at":27,"rewrite_status":28,"rewrite_error":10,"rewritten_from_id":29,"slug":30,"category":31,"related_article_id":32,"status":33,"google_indexed_at":34,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":25},"710ff4cc-d333-4bd8-b50a-e5522d430161","2026 開源 LLM 誰領先","\u003Cp>2026 年的開源 LLM，不再只是玩具。\u003Ca href=\"https:\u002F\u002Fcomputingforgeeks.com\u002Fopen-source-llm-comparison\u002F\" target=\"_blank\" rel=\"noopener\">ComputingForGeeks\u003C\u002Fa> 整理的比較表很直接：\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen 3.5\u003C\u002Fa> 有 256K context，\u003Ca href=\"https:\u002F\u002Fwww.deepseek.com\u002F\" target=\"_blank\" rel=\"noopener\">DeepSeek R1\u003C\u002Fa> 在 MATH-500 拿到 97.3%，\u003Ca href=\"https:\u002F\u002Fwww.zhipuai.cn\u002Fen\u002F\" target=\"_blank\" rel=\"noopener\">GLM-5\u003C\u002Fa> 則在 SWE-bench Verified 拿到 77.8%。講白了，這些數字已經不是「還行」而已，是能進產品討論桌的程度。\u003C\u002Fp>\u003Cp>更現實的是，現在選模型不只看分數。你還得看授權、硬體成本、推理速度，還有能不能合法上線。說真的，這才是開發者每天會撞到的牆。\u003C\u002Fp>\u003Ch2>2026 的開源模型戰場很擠\u003C\u002Fh2>\u003Cp>這份表把主流開源模型幾乎都放進來了。像是 Qwen 3、Qwen 3.5、GLM-5、DeepSeek V3.2、DeepSeek R1、\u003Ca href=\"https:\u002F\u002Fai.meta.com\u002Fllama\u002F\" target=\"_blank\" rel=\"noopener\">Llama 4\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fai.google.dev\u002Fgemma\" target=\"_blank\" 
rel=\"noopener\">Gemma 3\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fmistral.ai\u002F\" target=\"_blank\" rel=\"noopener\">Mistral Large 3\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwww.cohere.com\u002Fcommand\" target=\"_blank\" rel=\"noopener\">Command A\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwww.tii.ae\u002F\" target=\"_blank\" rel=\"noopener\">Falcon 3\u003C\u002Fa>，還有 \u003Ca href=\"https:\u002F\u002Fdatabricks.com\u002Fblog\u002Fintroducing-dbrx-new-state-art-open-llm\" target=\"_blank\" rel=\"noopener\">DBRX\u003C\u002Fa>。名單很長，但差異其實很明顯。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775131800331-8pqc.png\" alt=\"2026 開源 LLM 誰領先\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen\u003C\u002Fa> 系列最像全能型選手。Qwen 3.5 397B-A17B 這種架構，雖然總參數很大，但每個 token 只啟動 17B active parameters。這代表推理成本比較好控。對要自己架伺服器的人來說，這點很重要。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1\" target=\"_blank\" rel=\"noopener\">DeepSeek R1\u003C\u002Fa> 走的是 MoE 路線，總參數 671B，active parameters 37B。它比較偏向推理。\u003Ca href=\"https:\u002F\u002Fai.meta.com\u002Fllama\u002F\" target=\"_blank\" rel=\"noopener\">Llama 4\u003C\u002Fa> 則是把 context length 拉很長，Scout 到 10M tokens，Maverick 到 1M tokens。這種設計很適合長文件、長對話、長程任務。\u003C\u002Fp>\u003Cul>\u003Cli>Qwen 3.5：256K context，支援文字與圖片，Apache 2.0\u003C\u002Fli>\u003Cli>GLM-5：205K context，支援文字與圖片，MIT\u003C\u002Fli>\u003Cli>DeepSeek V3.2：128K context，MIT\u003C\u002Fli>\u003Cli>Llama 4 Maverick：1M context，Llama 4 Community license\u003C\u002Fli>\u003Cli>Mistral Small 4：256K context，Apache 2.0\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>Benchmarks 比行銷文案更誠實\u003C\u002Fh2>\u003Cp>看 benchmark，通常比看官網文案更有用。這份比較表用了 MMLU、MMLU-Pro、GPQA Diamond、AIME 
’24、MATH-500、SWE-bench Verified。這幾個測試涵蓋常識、進階推理、數學、程式碼修 bug，算是很實在。\u003C\u002Fp>\u003Cp>最亮眼的數字有三個。\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen 3 235B\u003C\u002Fa> 在 GPQA Diamond 拿到 77.2%，AIME ’24 拿到 85.7%。\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1\" target=\"_blank\" rel=\"noopener\">DeepSeek R1\u003C\u002Fa> 在 MATH-500 拿到 97.3%，幾乎是把這個測試打到快滿分。\u003Ca href=\"https:\u002F\u002Fwww.zhipuai.cn\u002Fen\u002F\" target=\"_blank\" rel=\"noopener\">GLM-5\u003C\u002Fa> 則在 SWE-bench Verified 拿到 77.8%，是表內最強的 coding 成績。\u003C\u002Fp>\u003Cp>這裡可以借用一句真實的話。\u003Cblockquote>“We are seeing open models catch up fast in both quality and efficiency.” — Satya Nadella, Microsoft Build 2024 keynote\u003C\u002Fblockquote>這句話放到 2026 來看，還是很貼切。開源模型現在的問題，不是能不能做事，而是要做哪件事。\u003C\u002Fp>\u003Cp>另一個值得看的點，是 \u003Ca href=\"https:\u002F\u002Fai.meta.com\u002Fllama\u002F\" target=\"_blank\" rel=\"noopener\">Llama 4 Maverick\u003C\u002Fa> 在 MMLU 拿到 85.5%，看起來很漂亮。但 MMLU 只是通用能力的一部分。它不等於深推理，也不等於真的會寫程式。只看單一分數，很容易選錯。\u003C\u002Fp>\u003Cul>\u003Cli>Qwen 3 235B：MMLU-Pro 83.6%，GPQA Diamond 77.2%，AIME ’24 85.7%\u003C\u002Fli>\u003Cli>DeepSeek R1：MMLU-Pro 84.0%，GPQA Diamond 71.5%，MATH-500 97.3%\u003C\u002Fli>\u003Cli>GLM-5：SWE-bench Verified 77.8%\u003C\u002Fli>\u003Cli>Llama 4 Maverick：MMLU 85.5%\u003C\u002Fli>\u003Cli>Gemma 3 27B：MMLU 78.6%，MATH-500 50.0%\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>授權條款，才是能不能上線的分水嶺\u003C\u002Fh2>\u003Cp>很多人先看分數，後看授權。這順序常常會害死人。你模型選得再好，只要法務不給過，產品還是不能出貨。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg 
src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775131797492-6k09.png\" alt=\"2026 開源 LLM 誰領先\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>目前最省事的，還是 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FQwenLM\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1\" target=\"_blank\" rel=\"noopener\">DeepSeek\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fwww.zhipuai.cn\u002Fen\u002F\" target=\"_blank\" rel=\"noopener\">GLM-5\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fmistral.ai\u002F\" target=\"_blank\" rel=\"noopener\">Mistral\u003C\u002Fa> 這幾條線。Apache 2.0 和 MIT 對新創最友善。你要 fine-tune、self-host、賣產品，流程都比較乾淨。\u003C\u002Fp>\u003Cp>但 \u003Ca href=\"https:\u002F\u002Fai.meta.com\u002Fllama\u002F\" target=\"_blank\" rel=\"noopener\">Meta 的 Llama\u003C\u002Fa> 就沒那麼單純。Llama 4 和 Llama 3.3 雖然可免費使用，但有 7 億月活用戶門檻。超過之後，就得看 Meta 的條款。\u003Ca href=\"https:\u002F\u002Fai.google.dev\u002Fgemma\" target=\"_blank\" rel=\"noopener\">Gemma\u003C\u002Fa> 則是要接受 Google 條款後才能商用。\u003Ca href=\"https:\u002F\u002Fwww.cohere.com\u002Fcommand\" target=\"_blank\" rel=\"noopener\">Command\u003C\u002Fa> 系列是 CC-BY-NC，商業用途卡得很死。\u003Ca href=\"https:\u002F\u002Fwww.tii.ae\u002F\" target=\"_blank\" rel=\"noopener\">Falcon 3\u003C\u002Fa> 還有營收超過 100 萬美元後的 royalty 條款。\u003C\u002Fp>\u003Cp>所以很多團隊最後選的，不是最強模型，而是最好簽的模型。這很現實，也很台灣。大家都想快上線，但合約常常先把人卡住。\u003C\u002Fp>\u003Cul>\u003Cli>Apache 2.0：Qwen 3\u002F3.5、Mistral Large 3、Mistral Small 4、Mixtral 8x7B、Grok-1\u003C\u002Fli>\u003Cli>MIT：DeepSeek V3\u002FR1\u002FV3.2、Phi-4 變體、GLM-5\u003C\u002Fli>\u003Cli>Llama 4 Community：700M MAU 以下免費，超過後看 Meta 條款\u003C\u002Fli>\u003Cli>CC-BY-NC：Command R+、Command A，不能直接商用\u003C\u002Fli>\u003Cli>DBRX：不能拿去訓練其他 LLM\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>自架測試後，排名會變樣\u003C\u002Fh2>\u003Cp>benchmark 很重要，但跑在自己機器上又是另一回事。這份文章的 Ollama 
測試環境很務實。Ubuntu 24.04 LTS，4 vCPUs，16 GB RAM，CPU-only inference。這不是高級 GPU 農場，就是一般開發者比較可能碰到的條件。\u003C\u002Fp>\u003Cp>\u003Ca href=\"https:\u002F\u002Follama.com\u002F\" target=\"_blank\" rel=\"noopener\">Ollama\u003C\u002Fa> 跑 \u003Ca href=\"https:\u002F\u002Fai.google.dev\u002Fgemma\" target=\"_blank\" rel=\"noopener\">Gemma 3 4B\u003C\u002Fa> 時，只用了 4.2 GB RAM。這是表內最省記憶體的模型。\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fmeta-llama\u002FLlama-3.2-3B\" target=\"_blank\" rel=\"noopener\">Llama 3.2 3B\u003C\u002Fa> 雖然最快，88 秒就回應，但吃了 11.4 GB RAM。\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1\" target=\"_blank\" rel=\"noopener\">DeepSeek R1 8B\u003C\u002Fa> 和 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen 3 8B\u003C\u002Fa> 都跑到 433 秒，因為推理型模型會先產生更多中間 token。\u003C\u002Fp>\u003Cp>這裡的結論很直接。小模型不一定快，聰明模型常常比較慢。你如果要做本機助理、內網工具、或低成本 API，RAM 和 latency 可能比榜單分數更重要。\u003C\u002Fp>\u003Cul>\u003Cli>Gemma 3 4B：4.2 GB RAM，94 秒\u003C\u002Fli>\u003Cli>Llama 3.2 3B：11.4 GB RAM，88 秒\u003C\u002Fli>\u003Cli>Phi-4 Mini 3.8B：8.9 GB RAM，97 秒\u003C\u002Fli>\u003Cli>Mistral 7B：7.4 GB RAM，125 秒\u003C\u002Fli>\u003Cli>Qwen 3 8B、DeepSeek R1 8B：都要 433 秒\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>我會怎麼選\u003C\u002Fh2>\u003Cp>如果是我今年要上產品，我會先看 \u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002FQwen\" target=\"_blank\" rel=\"noopener\">Qwen 3.5\u003C\u002Fa>。它的泛用性高，context 也夠長。要做推理任務，我會看 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdeepseek-ai\u002FDeepSeek-R1\" target=\"_blank\" rel=\"noopener\">DeepSeek R1\u003C\u002Fa>。要做 coding，我會先試 \u003Ca href=\"https:\u002F\u002Fwww.zhipuai.cn\u002Fen\u002F\" target=\"_blank\" rel=\"noopener\">GLM-5\u003C\u002Fa>，因為 SWE-bench Verified 的數字很漂亮。\u003C\u002Fp>\u003Cp>但真正的選型邏輯，不是「誰最強」。而是「誰最適合你的工作」。如果你是法規很重的企業，Apache 2.0 或 MIT 幾乎是首選。如果你要處理超長文件，Llama 4、Qwen 3.5、Mistral Large 3 都值得測。如果你在意程式碼修補，GLM-5 要先進你的測試清單。\u003C\u002Fp>\u003Cp>我覺得 2026 
的重點很簡單。開源模型已經能打進實戰，但真正拉開差距的，是你的資料、你的提示詞、你的部署方式，還有你能不能快速換模型。這件事很多團隊還沒準備好。\u003C\u002Fp>\u003Ch2>接下來該看什麼\u003C\u002Fh2>\u003Cp>如果你現在要做選型，別只看一張排行榜。先拿自己的資料跑 20 到 50 個真實任務。再比 latency、RAM、成本和授權。這樣比看新聞稿準多了。\u003C\u002Fp>\u003Cp>我會猜，接下來 6 到 12 個月，開源 LLM 的競爭焦點會更偏向「同級效能下的成本」和「授權條款」。誰能把推理成本壓低，誰就更容易進企業環境。你如果是開發者，現在就該把模型切換流程做成可插拔，不然之後會很痛。\u003C\u002Fp>\u003Cp>說白了，2026 的問題不是開源模型能不能用。問題是，你的產品能不能跟著換。這才是現在最值得先處理的事。\u003C\u002Fp>","Qwen 3.5、GLM-5、DeepSeek R1、Llama 4 讓開源 LLM 進入實戰。這篇整理 2026 年主流模型的 benchmark、上下文長度、授權條款與自架表現。","computingforgeeks.com","https:\u002F\u002Fcomputingforgeeks.com\u002Fopen-source-llm-comparison\u002F",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775131800331-8pqc.png",[13,14,15,16,17,18,19,20,21,22],"開源 LLM","Qwen 3.5","DeepSeek R1","GLM-5","Llama 4","模型比較","授權條款","SWE-bench Verified","MATH-500","自架部署","zh",1,false,"2026-04-02T12:09:39.445524+00:00","2026-04-02T12:09:39.227+00:00","done","31b7d763-4c00-4105-9280-4352d203861b","open-source-llm-comparison-2026-zh","model-release","424af64f-8d0b-4cd5-b58b-f37ee073bfa1","published","2026-04-08T09:00:52.405+00:00",[36,38,40,43,45,47,48,49],{"name":17,"slug":37},"llama-4",{"name":21,"slug":39},"math-500",{"name":41,"slug":42},"DeepSeek-R1","deepseek-r1",{"name":14,"slug":44},"qwen-35",{"name":13,"slug":46},"開源-llm",{"name":18,"slug":18},{"name":22,"slug":22},{"name":50,"slug":51},"SWE-Bench Verified","swe-bench-verified",{"id":32,"slug":53,"title":54,"language":55},"open-source-llm-comparison-2026-en","Open Source LLMs in 2026: Who Leads?","en",[57,63,69,75,81,87],{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":31},"bd8cfc0e-66db-4546-9b9e-fa328f7538d6","weishenme-google-yincang-de-gemini-live-moxing-bi-yanshi-gen-zh","為什麼 Google 隱藏的 Gemini Live 
模型，比演示更重要","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778869245574-c25w.png","2026-05-15T18:20:23.111559+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":31},"5b5fa24f-5259-4e9e-8270-b08b6805f281","minimax-m1-open-hybrid-attention-reasoning-model-zh","MiniMax-M1：開源 1M Token 推理模型","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797859209-ea1g.png","2026-05-14T22:30:38.636592+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":31},"b1da56ac-8019-4c6b-a8dc-22e6e22b1cb5","gemini-omni-video-review-text-rendering-zh","Gemini Omni 影片模型怎麼了","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778779280109-lrrk.png","2026-05-14T17:20:42.608312+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":31},"d63e9d93-e613-4bbf-8135-9599fde11d08","why-xiaomi-mimo-v25-pro-changes-coding-agents-zh","為什麼 Xiaomi 的 MiMo-V2.5-Pro 改變的是 Coding …","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689858139-v38e.png","2026-05-13T16:30:27.893951+00:00",{"id":82,"slug":83,"title":84,"cover_image":85,"image_url":85,"created_at":86,"category":31},"8f0c9185-52f9-46f2-82c6-5baec126ba2e","openai-realtime-audio-models-live-voice-zh","OpenAI 即時音訊模型瞄準語音互動","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778451657895-2iu7.png","2026-05-10T22:20:32.443798+00:00",{"id":88,"slug":89,"title":90,"cover_image":91,"image_url":91,"created_at":92,"category":31},"52106dc2-4eba-4ca0-8318-fa646064de97","anthropic-10-finance-ai-agents-zh","Anthropic推10款金融AI 
Agent","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778389843399-vclb.png","2026-05-10T05:10:22.778762+00:00",[94,99,104,109,114,119,124,129,134,139],{"id":95,"slug":96,"title":97,"created_at":98},"58b64033-7eb6-49b9-9aab-01cf8ae1b2f2","nvidia-rubin-six-chips-one-ai-supercomputer-zh","NVIDIA Rubin 把六顆晶片塞進 AI 機櫃","2026-03-26T07:18:45.861277+00:00",{"id":100,"slug":101,"title":102,"created_at":103},"0dcc2c61-c2a6-480d-adb8-dd225fc68914","march-2026-ai-model-news-what-mattered-zh","2026 年 3 月 AI 模型新聞重點","2026-03-26T07:32:08.386348+00:00",{"id":105,"slug":106,"title":107,"created_at":108},"214ab08b-5ce5-4b5c-8b72-47619d8675dd","why-small-models-are-winning-on-device-ai-zh","小模型為何吃下裝置端 AI","2026-03-26T07:36:30.488966+00:00",{"id":110,"slug":111,"title":112,"created_at":113},"785624b2-0355-4b82-adc3-de5e45eecd88","midjourney-v8-faster-images-higher-costs-zh","Midjourney V8 變快了，也變貴了","2026-03-26T07:52:03.562971+00:00",{"id":115,"slug":116,"title":117,"created_at":118},"cda76b92-d209-4134-86c1-a60f5bc7b128","xiaomi-mimo-trio-agents-robots-voice-zh","小米 MiMo 三模型瞄準代理、機器人與語音","2026-03-28T03:05:08.779489+00:00",{"id":120,"slug":121,"title":122,"created_at":123},"9e1044b4-946d-47fe-9e2a-c2ee032e1164","xiaomi-mimo-v2-pro-1t-moe-agents-zh","小米 MiMo-V2-Pro 登場：1T MoE 模型","2026-03-28T03:06:19.002353+00:00",{"id":125,"slug":126,"title":127,"created_at":128},"d68e59a2-55eb-4a8f-95d6-edc8fcbff581","cursor-composer-2-started-from-kimi-zh","Cursor Composer 2 其實從 Kimi 起步","2026-03-28T03:11:58.893796+00:00",{"id":130,"slug":131,"title":132,"created_at":133},"c4b6186f-bd84-4598-997e-c6e31d543c0d","cursor-composer-2-agentic-coding-model-zh","Cursor Composer 2 走向代理式寫碼","2026-03-28T03:13:06.422716+00:00",{"id":135,"slug":136,"title":137,"created_at":138},"45812c46-99fc-4b1f-aae1-56f64f5c9024","openai-shuts-down-sora-video-app-api-zh","OpenAI 關閉 Sora App 與 
API","2026-03-29T04:47:48.974108+00:00",{"id":140,"slug":141,"title":142,"created_at":143},"e112e76f-ec3b-408f-810e-e93ae21a888a","apple-siri-gemini-distilled-models-zh","Apple Siri 牽手 Gemini 的真相","2026-03-29T04:52:57.886544+00:00"]