[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-minimax-m1-open-hybrid-attention-reasoning-model-zh":3,"tags-minimax-m1-open-hybrid-attention-reasoning-model-zh":39,"related-lang-minimax-m1-open-hybrid-attention-reasoning-model-zh":48,"related-posts-minimax-m1-open-hybrid-attention-reasoning-model-zh":52,"series-model-release-5b5fa24f-5259-4e9e-8270-b08b6805f281":89},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":33,"topic_cluster_id":37,"embedding":38,"is_canonical_seed":23},"5b5fa24f-5259-4e9e-8270-b08b6805f281","MiniMax-M1：開源 1M Token 推理模型","\u003Cp data-speakable=\"summary\">MiniMax-M1 是一款開源推理模型，主打 100 萬 \u003Ca href=\"\u002Ftag\u002Ftoken\">Token\u003C\u002Fa> 上下文、8 萬 Token 輸出，還把 API 價格壓得很低。\u003C\u002Fp>\u003Cp>MiniMax 在 2025 年 6 月 16 日推出 \u003Ca href=\"https:\u002F\u002Fwww.minimax.io\u002Fnews\u002Fminimaxm1\" target=\"_blank\" rel=\"noopener\">MiniMax-M1\u003C\u002Fa>。這次最吸睛的不是模型名字，而是數字。100 萬 Token 上下文、8 萬 Token 推理輸出，還有 512 張 H800 跑了 3 週的訓練設定。\u003C\u002Fp>\u003Cp>更狠的是成本。MiniMax 說，整段強化學習只花了 534,700 美元。講白了，這是在告訴大家：大模型不一定非得燒到像失火一樣。它也可以很大，還可以算得精。\u003C\u002Fp>\u003Cp>對開發者來說，這種模型值不值得玩，重點不在宣傳詞。重點在它能不能真的讀長文件、看大程式碼庫、跑長 \u003Ca href=\"\u002Ftag\u002Fagent\">agent\u003C\u002Fa> 流程。MiniMax 把它放進 \u003Ca href=\"https:\u002F\u002Fwww.minimax.io\u002F\" target=\"_blank\" rel=\"noopener\">MiniMax\u003C\u002Fa> App、網頁版和 API，也接上 \u003Ca href=\"https:\u002F\u002Fdocs.vllm.ai\u002F\" target=\"_blank\" rel=\"noopener\">vLLM\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsgl-project\u002Fsglang\" target=\"_blank\" 
rel=\"noopener\">SGLang\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fhuggingface.co\u002Fminimax\" target=\"_blank\" rel=\"noopener\">Hugging Face\u003C\u002Fa> 與 \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FMiniMax-AI\" target=\"_blank\" rel=\"noopener\">GitHub\u003C\u002Fa>，就是想讓人直接上手。\u003C\u002Fp>\u003Ctable>\u003Cthead>\u003Ctr>\u003Cth>指標\u003C\u002Fth>\u003Cth>MiniMax-M1\u003C\u002Fth>\u003Cth>代表意義\u003C\u002Fth>\u003C\u002Ftr>\u003C\u002Fthead>\u003Ctbody>\u003Ctr>\u003Ctd>上下文長度\u003C\u002Ftd>\u003Ctd>1,000,000 tokens\u003C\u002Ftd>\u003Ctd>可處理超長文件與程式碼庫\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>推理輸出\u003C\u002Ftd>\u003Ctd>80,000 tokens\u003C\u002Ftd>\u003Ctd>適合長鏈思考與多步驟任務\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>RL 訓練算力\u003C\u002Ftd>\u003Ctd>512 H800 × 3 週\u003C\u002Ftd>\u003Ctd>顯示訓練規模很大\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>RL 成本\u003C\u002Ftd>\u003Ctd>534,700 美元\u003C\u002Ftd>\u003Ctd>MiniMax 主打效率\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>SWE-bench Verified\u003C\u002Ftd>\u003Ctd>55.6% 到 56.0%\u003C\u002Ftd>\u003Ctd>軟體工程能力有競爭力\u003C\u002Ftd>\u003C\u002Ftr>\u003Ctr>\u003Ctd>API 價格\u003C\u002Ftd>\u003Ctd>每百萬 tokens $0.4 \u002F $2.2\u003C\u002Ftd>\u003Ctd>輸入與輸出都偏低價\u003C\u002Ftd>\u003C\u002Ftr>\u003C\u002Ftbody>\u003C\u002Ftable>\u003Ch2>MiniMax 為什麼要做這種模型\u003C\u002Fh2>\u003Cp>我覺得這題很直白。現在大家都在比誰的模型更會講，MiniMax 直接改比誰能記更多。100 萬 Token 的上下文，不是拿來唬人而已。它能把長文件、長對話、長程式碼一起塞進去。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797859209-ea1g.png\" alt=\"MiniMax-M1：開源 1M Token 推理模型\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>這對實務工作很有感。你在做 code review、法務文件摘要、客服知識庫查詢，或者 agent 反覆呼叫工具時，模型常常不是不會想。它是前面內容太長，後面就忘了。\u003Ca href=\"\u002Ftag\u002F長上下文\">長上下文\u003C\u002Fa>就是在補這個洞。\u003C\u002Fp>\u003Cp>MiniMax 說 M1 用的是混合注意力設計，裡面有 \u003Ca 
href=\"https:\u002F\u002Fwww.minimax.io\u002Fnews\u002Fminimaxm1\" target=\"_blank\" rel=\"noopener\">Lightning Attention\u003C\u002Fa>。意思很簡單。它想把長序列的計算壓下來，讓模型在吃大資料時，不要把伺服器搞得像在烤肉。\u003C\u002Fp>\u003Cul>\u003Cli>100 萬 Token 上下文\u003C\u002Fli>\u003Cli>8 萬 Token 推理輸出\u003C\u002Fli>\u003Cli>512 張 H800 參與訓練\u003C\u002Fli>\u003Cli>RL 成本 534,700 美元\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>這些 benchmark 到底怎麼看\u003C\u002Fh2>\u003Cp>先講結論。MiniMax-M1 不是只會秀參數。它把重點放在軟體工程、長上下文理解，還有工具使用。這三個方向都很實際，因為現在很多 AI 工作流，最後都會碰到程式、文件和工具鏈。\u003C\u002Fp>\u003Cp>在 \u003Ca href=\"https:\u002F\u002Fwww.swebench.com\u002F\" target=\"_blank\" rel=\"noopener\">SWE-bench\u003C\u002Fa> Verified 上，MiniMax 公布 M1-40k 是 55.6%，M1-80k 是 56.0%。這個數字沒有把 \u003Ca href=\"https:\u002F\u002Fdeepseek.com\u002F\" target=\"_blank\" rel=\"noopener\">DeepSeek\u003C\u002Fa> 的 R1-0528 拉下來，後者是 57.6%。但它還是站在\u003Ca href=\"\u002Ftag\u002F開源模型\">開源模型\u003C\u002Fa>第一梯隊。\u003C\u002Fp>\u003Cp>長上下文部分就更有看頭。MiniMax 說，M1 在開源模型裡表現很強，甚至把 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fo3\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI o3\u003C\u002Fa> 和 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-4\" target=\"_blank\" rel=\"noopener\">Claude 4 Opus\u003C\u002Fa> 也拉進比較。它自家說法是，M1 在這項測試排到第二，只輸給 \u003Ca href=\"https:\u002F\u002Fdeepmind.google\u002Ftechnologies\u002Fgemini\u002Fpro\u002F\" target=\"_blank\" rel=\"noopener\">Gemini 2.5 Pro\u003C\u002Fa>。\u003C\u002Fp>\u003Cblockquote>“This feature gives us a substantial computational efficiency advantage in both training and inference.” — MiniMax\u003C\u002Fblockquote>\u003Cp>這句話很重要。因為大家都知道長上下文很香，但也很貴。模型如果只會吃記憶體，最後還是沒人敢用。MiniMax 想說的是，它不只大，還想讓訓練和推理都算得過去。\u003C\u002Fp>\u003Cul>\u003Cli>SWE-bench：55.6% 到 
56.0%\u003C\u002Fli>\u003Cli>DeepSeek-R1-0528：57.6%\u003C\u002Fli>\u003Cli>長上下文排名：第二，僅次於 Gemini 2.5 Pro\u003C\u002Fli>\u003Cli>工具使用測試：MiniMax 宣稱領先多數開源模型\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>價格才是這次的殺手鐧\u003C\u002Fh2>\u003Cp>很多模型一出來就愛講能力，價格卻藏到最後。MiniMax 這次反過來，把價格放得很前面。它的 API 在 0 到 20 萬 Token 區間，輸入每百萬 Token 只要 0.4 美元，輸出是 2.2 美元。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797859128-3199.png\" alt=\"MiniMax-M1：開源 1M Token 推理模型\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>如果輸入超過 20 萬、到 100 萬 Token，輸入價格變成每百萬 Token 1.3 美元，輸出還是 2.2 美元。這個設計很有意思。它等於在說，長上下文可以用，但不是叫你亂塞資料。\u003C\u002Fp>\u003Cp>這種定價對開發者很實際。你要跑長文件摘要、程式碼 refactor、知識庫問答，或者 agent 反覆讀寫資料，成本會直接影響你要不要上線。MiniMax 這次就是在搶這塊。\u003C\u002Fp>\u003Cul>\u003Cli>0 到 20 萬輸入：每百萬 Token 0.4 美元\u003C\u002Fli>\u003Cli>0 到 20 萬輸出：每百萬 Token 2.2 美元\u003C\u002Fli>\u003Cli>20 萬到 100 萬輸入：每百萬 Token 1.3 美元\u003C\u002Fli>\u003Cli>20 萬到 100 萬輸出：每百萬 Token 2.2 美元\u003C\u002Fli>\u003C\u002Ful>\u003Cp>更重要的是，這還不是只有 API 的事。MiniMax 說 M1 已經能在 App、網頁和開發工具鏈裡用。對團隊來說，工具先接得上，才有機會真的進 production。只會發 paper 的模型，通常都活不久。\u003C\u002Fp>\u003Ch2>跟其他模型比，差在哪裡\u003C\u002Fh2>\u003Cp>如果只看 100 萬 Token，上下文這件事其實已經不是 MiniMax 獨有。\u003Ca href=\"https:\u002F\u002Fdeepmind.google\u002Ftechnologies\u002Fgemini\u002Fpro\u002F\" target=\"_blank\" rel=\"noopener\">Gemini 2.5 Pro\u003C\u002Fa> 也有 100 萬 Token 級別的上下文。差別在於，MiniMax 把價格壓得更明顯，還把開源和部署支援一起端出來。\u003C\u002Fp>\u003Cp>再看開源陣營。\u003Ca href=\"https:\u002F\u002Fdeepseek.com\u002F\" target=\"_blank\" rel=\"noopener\">DeepSeek\u003C\u002Fa> 的 R1 系列在推理圈很有名，\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fmeta-llama\u002Fllama3\" target=\"_blank\" rel=\"noopener\">Meta Llama\u003C\u002Fa> 系列則有更大的生態。MiniMax 
的打法比較像是：我不一定要在所有榜單都第一，但我要在長上下文、成本、工具使用這幾個點很能打。\u003C\u002Fp>\u003Cp>這種策略其實很合理。現在企業採購 AI，不會只看單一 \u003Ca href=\"\u002Ftag\u002Fbenchmark\">benchmark\u003C\u002Fa>。你還得看部署難度、推理速度、價格、以及能不能接到既有的伺服器和軟體堆疊。M1 的優勢，就是它把這幾件事綁在一起賣。\u003C\u002Fp>\u003Cul>\u003Cli>Gemini 2.5 Pro：同樣主打超長上下文\u003C\u002Fli>\u003Cli>DeepSeek R1：推理能力強，SWE-bench 更高\u003C\u002Fli>\u003Cli>Meta Llama：開源生態成熟，但長上下文不是唯一賣點\u003C\u002Fli>\u003Cli>MiniMax-M1：價格和長上下文一起打\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>這代表整個 AI 產業什麼事\u003C\u002Fh2>\u003Cp>我覺得這波很像一個轉向。前兩年大家在拼參數和聊天能力，現在開始拼「你到底能不能真的做事」。做事就會碰到長文件、長流程、長記憶，還有工具呼叫。這些都很吃上下文。\u003C\u002Fp>\u003Cp>所以 100 萬 Token 不只是規格表上的數字。它其實在暗示一件事：模型會越來越像工作引擎，不只是聊天機器人。你丟進去的不是一句 prompt，而是一整包資料、規則、歷史紀錄和程式碼。\u003C\u002Fp>\u003Cp>MiniMax 這次還把訓練成本講得很細。512 張 H800、3 週、53.47 萬美元。這些數字的用意很明顯。它想讓市場相信，超長上下文不是只能靠超大預算堆出來。\u003C\u002Fp>\u003Cp>但我也不會把話說滿。模型規格漂亮，不等於每個工作都適合。真正的考驗，是你的資料格式、你的 prompt 工程、還有你的延遲需求。別忘了，Token 再大，回應慢到像在等公車，也沒人想用。\u003C\u002Fp>\u003Ch2>接下來可以怎麼看 M1\u003C\u002Fh2>\u003Cp>如果你是開發者，我會建議你先拿它測三件事。第一是長文件摘要。第二是大程式碼庫問答。第三是多步驟 agent 任務。這三個場景最容易看出模型有沒有真材實料。\u003C\u002Fp>\u003Cp>如果你是產品或 AI 工程師，就該盯成本。API 價格低，代表你有機會把更長的上下文塞進產品，但前提是你要算得出 ROI。不是每個功能都值得開到 100 萬 Token。很多時候，20 萬就夠了。\u003C\u002Fp>\u003Cp>MiniMax-M1 這次給市場的訊號很清楚。開源模型的競爭，已經不是單純比誰更會講。接下來更重要的是，誰能把長上下文、推理品質和成本一起做平衡。你如果正在選模型，這會是很值得實測的一個選項。\u003C\u002Fp>","MiniMax 推出 M1 開源推理模型，主打 100 萬 Token 上下文、8 萬 Token 輸出與低價 API。","www.minimax.io","https:\u002F\u002Fwww.minimax.io\u002Fnews\u002Fminimaxm1",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778797859209-ea1g.png",[13,14,15,16,17,18,19,20],"MiniMax-M1","開源推理模型","100萬Token上下文","長上下文模型","AI 
API價格","SWE-bench","vLLM","SGLang","zh",2,false,"2026-05-14T22:30:38.636592+00:00","2026-05-14T22:30:38.611+00:00","done","a5b8a1a0-4c69-49cb-a7e5-e1ecb6693416","minimax-m1-open-hybrid-attention-reasoning-model-zh","model-release","6c57f6bf-1023-4a22-a6c0-013bd88ac3d1","published","2026-05-15T09:00:16.865+00:00",[34,35,36],"MiniMax-M1 主打 100 萬 Token 上下文與 8 萬 Token 輸出，定位很明確。","它把 API 價格壓低，目標是讓長上下文不再只屬於高預算團隊。","對開發者來說，真正該測的是長文件、程式碼庫和 agent 工作流。","0ccb5d2e-69f1-4354-a3e0-cb370221cd95","[-0.011513578,0.01791528,0.02659432,-0.08432833,-0.03158021,0.0017986324,-0.004005158,-0.011192373,0.014500593,-0.00043255594,0.00834475,-0.022249421,0.012422426,-0.03112716,0.12588003,0.009855804,-0.01597711,0.006909384,0.0023057065,0.0021532907,0.008889124,0.0015052768,0.0034869376,0.017457299,-0.026606267,0.008146693,0.02154821,0.008971935,0.05252695,0.0074181575,0.0038021824,-0.005240121,9.783804e-06,-0.00015433134,-0.018514132,0.015327628,0.043736823,0.0043632546,-0.012887723,0.030502524,-0.008374034,-0.023124313,-0.0008082599,-0.012813213,-0.015058488,-0.010631935,-0.0030455594,-0.050294027,-0.003946394,0.01770223,-0.021075191,0.048568465,-0.003233075,-0.13472125,-0.029925033,0.026913632,-0.0051803906,0.023540748,0.006786033,-0.0046786936,-0.016502483,0.007878169,-0.009126986,0.017368326,0.00029159882,-0.016213644,0.023085244,0.032296956,-0.0009147542,-0.022803519,-0.014956723,0.00787435,0.011452139,-0.027876094,0.013640536,-0.017667249,0.009906996,-0.019482931,0.01919448,0.012369953,0.020441284,-0.017863298,-0.0013333469,0.020748427,0.0027379368,0.012569495,0.006208105,-0.0008615445,0.008283718,-0.010547319,0.014718503,0.015605903,0.012295639,0.012053156,-0.00022580892,0.019233182,-0.014807347,-0.0035829828,-0.002720229,-0.010466676,-0.019777987,-0.013557467,0.013231385,0.029084712,0.015792102,0.0014073987,0.0004742835,0.006878383,0.0055996925,0.021631079,0.010811809,0.011610171,0.014728704,0.0077459426,0.0070295637,-0.13205084,0.008938768,-0.00048375787,-0.01610973,-0.005543029,-
0.011774934,0.017233992,-0.0062912595,0.040720202,0.014792055,-0.011736637,0.0016782598,-0.005920165,-0.011269221,-0.0011729852,-0.019325461,0.018943818,0.0022568288,0.0026760637,-0.024315529,0.043520488,0.0047404286,0.007969115,-0.04821675,-0.0153438365,-0.008098033,0.017914794,0.017957784,-0.016756112,-0.028233493,-0.007963028,-0.008111178,0.017346738,0.021398496,0.007917381,0.022334345,-0.020521501,-0.004116228,-0.007478887,0.0023454472,-0.05332051,0.026209367,-0.021962697,0.011433843,0.016601775,0.0041658636,0.0015763325,-0.015660906,0.008713342,-0.03659071,0.02001561,-0.017792672,0.0055647497,0.0007698409,0.0065105613,-0.011134864,-0.020328775,0.009608934,0.012930568,-0.00019158948,0.010736225,-0.01859941,-0.0144941285,0.0042470666,-0.025727365,0.00024664198,0.021967644,-0.0022546563,0.0064399494,-0.0004110795,0.009663394,0.0020232159,-0.003200594,0.028434137,0.017526349,0.0030728532,0.026258621,0.018047405,-0.015119566,0.0048729894,-0.039198976,-0.01042187,0.016904222,-0.0048447647,0.009262678,0.0074226917,-0.00332602,0.022432124,-0.03926291,0.014939677,-0.009796687,0.013954576,-0.020782411,0.025969004,-0.015725635,0.008873032,-0.015882969,0.01884453,-0.01322197,-0.009271894,-0.01655849,-0.017633064,0.0036648791,0.021102434,0.0096031055,-0.008142348,-0.022785002,0.002381643,0.0138252005,-0.0014074798,-0.021862298,-0.012416001,-0.014198568,0.024280872,0.011709766,-0.014737227,0.009547928,0.02363907,0.018105555,-0.00028971746,-0.04819125,0.003991298,-0.018266782,0.018612588,0.017791811,-0.0019828454,0.0077818516,0.0010956888,0.021393469,0.025998026,-0.03159664,0.020609202,-0.00606658,-0.028384337,0.011441226,-0.017361106,0.002828464,0.01983542,-0.018930268,0.02076329,-0.019928103,-0.025449088,0.0035545013,-0.025069369,0.012113832,0.02061497,-0.03261701,0.00939013,0.009856558,0.041734062,0.02384201,0.021441331,0.0005873011,-0.025086185,0.0070583387,-0.0032275396,0.017404856,0.018487286,-0.0044682412,0.02148476,0.000118948,-0.027838657,0.011553608,-0.0051867436,0.
015799595,0.006413344,0.01843065,-0.010541126,0.007084512,0.025277574,-0.0074292566,-0.0028463372,0.02517943,0.0011270489,-0.016036272,0.02301825,0.019817991,0.008203876,-0.014190824,-0.00078376854,-0.034483414,-0.01983131,-0.01323569,0.0030042792,0.012022761,-0.007637397,-0.01435528,-0.0020495206,0.048604824,-0.027649177,0.005130641,0.037405275,0.018231748,-0.020093784,0.024811786,0.0035770563,-0.01826174,0.01471681,-0.0032838206,0.003280445,-0.007230086,0.0037233734,-0.026507879,-0.01159688,-0.022301119,-0.011036472,-0.021089742,-0.009272283,0.015084412,-0.0031068844,0.003145127,0.0024405634,0.0067610936,0.029328022,-0.0028418547,-0.0072755683,-0.005213044,0.021531377,-0.0016538046,0.002479674,-0.0067363284,0.008521252,-0.013172245,-0.012192892,-0.00023220696,0.0068094716,0.021163063,-0.03151283,0.00083194335,-0.019406086,0.00025592808,0.018509636,0.016040167,-0.0042637093,-0.029775437,0.015588507,-0.020906165,0.010125843,-0.025447654,-0.027405148,0.0120017715,0.002059716,-0.00480587,0.025275763,-0.01815659,0.03132961,-0.025855925,-0.028335955,0.006302154,0.023101667,-0.012951734,0.022002006,0.037468765,-0.033040524,-0.007202635,-0.013559539,-0.008696599,0.021202076,0.012318712,-0.001329061,-0.0071735103,-0.00060683565,0.009145525,0.015458096,-0.011509015,-0.003487282,0.031809125,-0.017924393,-0.019105691,-0.009210429,0.004138606,0.03811036,-0.00047401877,0.0134842135,-0.0076862457,0.014963878,0.017054759,0.015850859,-0.0084961895,-0.013750929,-0.02104987,0.008653055,0.0056380644,-0.012118427,-0.0267546,0.023620335,0.0063915593,-0.0074319975,0.01355467,-0.010623985,-0.0072586862,0.0013587826,0.0068357186,-0.008858004,-0.012980906,-0.009801679,0.010817018,0.0017192181,0.0018350941,0.025262646,-0.012746912,-0.008637865,-0.008008751,0.01582694,0.006861365,-0.022690801,0.023283774,0.0025877547,0.0009587884,-0.012138577,0.0010323366,-0.020438164,0.008448963,0.014215431,-0.015586391,-0.00020951832,-0.006402394,-0.043216366,-0.0016718507,-0.010970848,-0.0078378515,-0.002
7570585,-0.018508159,-0.021374702,-0.026443036,-0.01755493,-0.0018707732,0.018814063,-0.005552693,0.013859329,0.005356187,-0.012178396,-0.0106697865,-0.03077742,0.026269834,0.019209487,0.004997114,0.027594196,-0.0039403145,-0.00078374037,0.01064928,-0.00546446,0.0063437033,-0.016798995,-0.012356873,0.0053844964,-0.023420157,-0.01137234,0.0040287874,-0.0042003132,0.0027586774,-0.02516035,0.005374239,-0.0033506844,-0.005299659,-0.021711528,0.013129128,0.035492178,0.008599558,-0.0024647692,-0.0066686426,-0.014802786,0.017943393,0.00047182277,-0.0072413245,-0.014771829,-0.021464653,0.010093432,-0.013927825,0.0118772285,0.0018901164,-0.002435105,-0.00067958876,0.012142743,0.026316695,0.0039206236,0.027008353,0.0022284912,0.012986671,0.0021460722,-0.029303195,-0.029773055,0.009225529,0.04381237,0.007977622,-0.006428758,-0.014000488,0.0073411893,-0.0386522,-0.0034138353,-0.0130769145,0.0024372463,-0.015268433,0.0012311667,-0.027835663,-0.029377414,0.005847381,0.0014480874,0.0114529105,0.0022052818,0.01679459,-0.005806812,-0.01279651,-0.017932463,-0.00849217,-0.017407417,-0.011267989,-0.004293753,0.01767758,-0.005100046,-0.013434108,0.04541201,-0.022909198,0.02818575,0.03329461,-0.018645763,0.035618007,0.015770612,-0.01598666,-0.0015191274,-0.016248237,0.030115655,0.024415597,0.015530882,-0.004790996,-0.027142113,-0.018129231,-0.001239444,-0.013986795,0.025172323,-0.08541985,0.0131308185,-0.009053904,-0.025822194,-0.020410249,-0.03001808,0.026629202,-0.020780955,-0.007020713,0.012073165,0.040780485,0.0081410725,0.025161795,0.008433234,0.004459078,-0.028251013,-0.024286732,-0.0079304315,0.015923528,-0.010615721,0.018987486,-0.00038320673,0.004556232,0.008536617,0.0023125815,-0.015506485,-0.0014706354,0.008782584,0.03749321,0.009502143,-0.02055397,-0.029953713,-0.007023148,0.0101533765,0.019882277,-0.0052194535,0.016282544,-0.01498946,0.002247301,0.0016316805,0.012221793,-0.0023650192,-0.02941736,-0.02937548,0.00029257045,0.008584195,-0.022125006,-0.006710887,0.013469832,0.02
0190995,-0.054668773,-0.0374609,-0.022180239,-0.006763892,-0.0019549623,-0.007394427,-0.018251136,0.015296756,0.019500082,0.026143847,-0.015933126,0.0007677841,-0.009281271,0.02612028,-0.031450536,0.009647908,0.006166448,0.019372739,-0.0061962646,0.010540577,-0.010044191,-0.010002221,0.007206641,0.0037138676,-0.011862026,0.019723937,-0.015889224,0.027190266,0.016072424,-0.010527314,-0.041911617,-0.0110329995,-0.08714313,0.0023187285,-0.021163138,-0.02533123,0.0023952005,0.0100990655,-0.013876144,-0.03553139,-0.0014531994,-0.002644334,0.031264067,0.025784718,-0.017293027,-0.024101874,0.008001138,0.0005566329,-0.0050105313,0.0083910655,0.005453007,-0.016585842,-0.015245827,0.012414958,0.024805903,-0.023640003,-0.011472345,0.017779578,-0.014945682,-0.0009972478,0.00024951066,-0.009704876,-0.013609291,-0.16103896,-0.022594066,0.0029855627,-0.011338427,0.0059717656,0.011803527,0.023457138,-0.0066212188,-0.025319604,0.0027281714,-0.00014471091,-0.035841577,0.015014206,0.021386191,0.0021687846,0.12397494,-0.017516036,-0.011552505,-0.005867514,-0.0070449584,0.019671215,-0.017644962,0.0037635418,-0.022760453,0.00558951,-0.010779742,0.004033261,-0.02013231,0.0031936313,0.029860001,-0.0020165232,0.015597225,-0.013208097,0.0011496285,0.010209542,0.023445005,-0.013917829,-0.0405887,0.0070670317,-0.011945975,0.018082783,0.01887978,0.0043708105,-0.0133654615,0.0048738364,0.01097817,0.004391772,0.0063372906,-0.011419323,0.007562061,0.0026893427,-0.071857974,-0.02562485,-0.008140534,0.016481739,-0.013277288,-0.003115954,0.011720529,0.028660344,-0.013858024,-0.00094222603,-0.006298657,0.0026425782,-0.0040563745,0.010808577,0.019484028,0.01763901,0.03311565,0.01783953,0.014768669,-0.0204275,0.0062742145,-0.001858726,-0.014697368,-0.0020114838,-0.028971307,0.027108105,0.033497836,0.04317511,-0.013566759,0.0006992457,-0.0069979597,0.005756204,-0.027290106,-0.022997383,-0.008709227,0.023890005,0.020633832,0.0030941095,-0.006411216,-0.017062286,0.016317926,-0.004656329,0.01785959,0.010740
961,0.042320643,0.015288542,0.008972693,0.0049919607,-0.017615436,-0.0052792653,-0.01191604,0.02533974,-0.014190416,-0.008156072,0.014368881,0.0013384509,0.031707693,0.027269969,0.011846351]",[40,41,43,45,46],{"name":14,"slug":14},{"name":15,"slug":42},"100萬token上下文",{"name":13,"slug":44},"minimax-m1",{"name":16,"slug":16},{"name":17,"slug":47},"ai-api價格",{"id":30,"slug":49,"title":50,"language":51},"minimax-m1-open-hybrid-attention-reasoning-model-en","MiniMax-M1 brings 1M-token open reasoning model","en",[53,59,65,71,77,83],{"id":54,"slug":55,"title":56,"cover_image":57,"image_url":57,"created_at":58,"category":29},"b1da56ac-8019-4c6b-a8dc-22e6e22b1cb5","gemini-omni-video-review-text-rendering-zh","Gemini Omni 影片模型怎麼了","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778779280109-lrrk.png","2026-05-14T17:20:42.608312+00:00",{"id":60,"slug":61,"title":62,"cover_image":63,"image_url":63,"created_at":64,"category":29},"d63e9d93-e613-4bbf-8135-9599fde11d08","why-xiaomi-mimo-v25-pro-changes-coding-agents-zh","為什麼 Xiaomi 的 MiMo-V2.5-Pro 改變的是 Coding …","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778689858139-v38e.png","2026-05-13T16:30:27.893951+00:00",{"id":66,"slug":67,"title":68,"cover_image":69,"image_url":69,"created_at":70,"category":29},"8f0c9185-52f9-46f2-82c6-5baec126ba2e","openai-realtime-audio-models-live-voice-zh","OpenAI 即時音訊模型瞄準語音互動","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778451657895-2iu7.png","2026-05-10T22:20:32.443798+00:00",{"id":72,"slug":73,"title":74,"cover_image":75,"image_url":75,"created_at":76,"category":29},"52106dc2-4eba-4ca0-8318-fa646064de97","anthropic-10-finance-ai-agents-zh","Anthropic推10款金融AI 
Agent","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778389843399-vclb.png","2026-05-10T05:10:22.778762+00:00",{"id":78,"slug":79,"title":80,"cover_image":81,"image_url":81,"created_at":82,"category":29},"6ee6ed2a-35c6-4be3-ba2c-43847e592179","why-claudes-infinite-context-window-wont-autonomous-zh","為什麼 Claude 的「無限」上下文窗口，仍然不會讓 AI 自主運作","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778350250836-d5d5.png","2026-05-09T18:10:27.004984+00:00",{"id":84,"slug":85,"title":86,"cover_image":87,"image_url":87,"created_at":88,"category":29},"955a4ce5-fe90-4e43-acb5-2a8574433390","why-midjourney-81-raw-mode-better-default-style-zh","為什麼 Midjourney 8.1 Raw Mode 比預設風格更值得用","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778231459522-xhkv.png","2026-05-08T09:10:35.498905+00:00",[90,95,100,105,110,115,120,125,130,135],{"id":91,"slug":92,"title":93,"created_at":94},"58b64033-7eb6-49b9-9aab-01cf8ae1b2f2","nvidia-rubin-six-chips-one-ai-supercomputer-zh","NVIDIA Rubin 把六顆晶片塞進 AI 機櫃","2026-03-26T07:18:45.861277+00:00",{"id":96,"slug":97,"title":98,"created_at":99},"0dcc2c61-c2a6-480d-adb8-dd225fc68914","march-2026-ai-model-news-what-mattered-zh","2026 年 3 月 AI 模型新聞重點","2026-03-26T07:32:08.386348+00:00",{"id":101,"slug":102,"title":103,"created_at":104},"214ab08b-5ce5-4b5c-8b72-47619d8675dd","why-small-models-are-winning-on-device-ai-zh","小模型為何吃下裝置端 AI","2026-03-26T07:36:30.488966+00:00",{"id":106,"slug":107,"title":108,"created_at":109},"785624b2-0355-4b82-adc3-de5e45eecd88","midjourney-v8-faster-images-higher-costs-zh","Midjourney V8 變快了，也變貴了","2026-03-26T07:52:03.562971+00:00",{"id":111,"slug":112,"title":113,"created_at":114},"cda76b92-d209-4134-86c1-a60f5bc7b128","xiaomi-mimo-trio-agents-robots-voice-zh","小米 MiMo 
三模型瞄準代理、機器人與語音","2026-03-28T03:05:08.779489+00:00",{"id":116,"slug":117,"title":118,"created_at":119},"9e1044b4-946d-47fe-9e2a-c2ee032e1164","xiaomi-mimo-v2-pro-1t-moe-agents-zh","小米 MiMo-V2-Pro 登場：1T MoE 模型","2026-03-28T03:06:19.002353+00:00",{"id":121,"slug":122,"title":123,"created_at":124},"d68e59a2-55eb-4a8f-95d6-edc8fcbff581","cursor-composer-2-started-from-kimi-zh","Cursor Composer 2 其實從 Kimi 起步","2026-03-28T03:11:58.893796+00:00",{"id":126,"slug":127,"title":128,"created_at":129},"c4b6186f-bd84-4598-997e-c6e31d543c0d","cursor-composer-2-agentic-coding-model-zh","Cursor Composer 2 走向代理式寫碼","2026-03-28T03:13:06.422716+00:00",{"id":131,"slug":132,"title":133,"created_at":134},"45812c46-99fc-4b1f-aae1-56f64f5c9024","openai-shuts-down-sora-video-app-api-zh","OpenAI 關閉 Sora App 與 API","2026-03-29T04:47:48.974108+00:00",{"id":136,"slug":137,"title":138,"created_at":139},"e112e76f-ec3b-408f-810e-e93ae21a888a","apple-siri-gemini-distilled-models-zh","Apple Siri 牽手 Gemini 的真相","2026-03-29T04:52:57.886544+00:00"]