[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-claude-opus-45-gpt-parameters-estimate-zh":3,"tags-claude-opus-45-gpt-parameters-estimate-zh":35,"related-lang-claude-opus-45-gpt-parameters-estimate-zh":49,"related-posts-claude-opus-45-gpt-parameters-estimate-zh":53,"series-research-838cb5fd-5651-49fb-9b4c-c2dbde25ca02":90},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":23,"translated_content":10,"views":24,"is_premium":25,"created_at":26,"updated_at":26,"cover_image":11,"published_at":27,"rewrite_status":28,"rewrite_error":10,"rewritten_from_id":29,"slug":30,"category":31,"related_article_id":32,"status":33,"google_indexed_at":34,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":25},"838cb5fd-5651-49fb-9b4c-c2dbde25ca02","Claude Opus 4.5 和 GPT 到底多大","\u003Cp>大家常把前沿 AI 想成「越大越猛」。但這件事，現在沒那麼單純了。\u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fgpt-4\u002F\" target=\"_blank\" rel=\"noopener\">GPT-4\u003C\u002Fa> 曾被外界估到 1.6 兆參數。\u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fgpt-4o-and-more-tools-to-chatgpt-free\u002F\" target=\"_blank\" rel=\"noopener\">GPT-4o\u003C\u002Fa> 的估計卻掉到 200B 到 300B。差很多，真的差很多。\u003C\u002Fp>\u003Cp>這也讓 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002F\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa> 的 \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-4\" target=\"_blank\" rel=\"noopener\">Claude Opus\u003C\u002Fa> 變得更有意思。官方沒講參數數字，但市場早就不只看大小。大家更在意成本、延遲、吞吐量，還有模型到底能不能把事情做好。\u003C\u002Fp>\u003Cp>講白了，參數數量不是全部。但它還是很有參考價值。因為它會直接影響訓練成本、伺服器成本，還有產品能不能大規模上線。對開發者來說，這比「聽起來很大」重要太多。\u003C\u002Fp>\u003Ch2>參數大小，為什麼還是很重要\u003C\u002Fh2>\u003Cp>參數數量是一個粗糙指標。可是它很常反映現實世界的帳單。模型越大，訓練通常越貴。推論時也更吃 GPU，尤其是 dense 
model。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775207388141-adee.png\" alt=\"Claude Opus 4.5 和 GPT 到底多大\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>如果一個模型接近 1.6T 參數，那它的運行壓力就很大。反過來，200B 到 300B 的模型，通常更容易壓低服務成本。這也是為什麼很多公司開始追求更精簡的架構。\u003C\u002Fp>\u003Cp>但你也不能只看數字。資料品質、訓練配方、MoE 路由、後訓練、工具調用，這些都會改變結果。說真的，有時候一個比較小的模型，實際用起來反而更順。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>GPT-4：\u003C\u002Fstrong>外界常估約 1.6T 參數\u003C\u002Fli>\u003Cli>\u003Cstrong>GPT-4o：\u003C\u002Fstrong>常見估計落在 200B 到 300B\u003C\u002Fli>\u003Cli>\u003Cstrong>Claude Opus：\u003C\u002Fstrong>官方沒公開參數數字\u003C\u002Fli>\u003Cli>\u003Cstrong>推論成本：\u003C\u002Fstrong>通常跟模型大小高度相關\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>公開線索到底透露了什麼\u003C\u002Fh2>\u003Cp>\u003Ca href=\"https:\u002F\u002Fopenai.com\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> 沒公開 GPT-4 和 GPT-4o 的參數數字，所以外界只能從間接線索拼圖。研究者會看模型表現、基礎設施痕跡、以及和合作夥伴相關的公開資訊。這不是精準答案，但方向很清楚。\u003C\u002Fp>\u003Cp>GPT-4 長期被當成超大模型。1.6T 這個數字在技術圈流傳很久。到了 GPT-4o，敘事就變了。它更快，也更像是「夠大，但沒大到離譜」的那一類。\u003C\u002Fp>\u003Cp>這裡可以直接引用一個老實又經典的觀點。Richard Sutton 在 \u003Ca href=\"http:\u002F\u002Fwww.incompleteideas.net\u002FIncIdeas\u002FBitterLesson.html\" target=\"_blank\" rel=\"noopener\">The Bitter Lesson\u003C\u002Fa> 裡寫過一句話，現在還是很有殺傷力。\u003C\u002Fp>\u003Cblockquote>“The bitter lesson is that general methods that leverage computation are ultimately the most effective, and by a large margin.” — Richard Sutton\u003C\u002Fblockquote>\u003Cp>這句話的意思很直接。別太迷信手工巧思。真正能打的系統，常常是把算力和訓練方法吃滿。只是現在多了一層：不一定要把參數做得超大，才叫強。\u003C\u002Fp>\u003Ch2>Claude Opus 4.5 可能走哪條路\u003C\u002Fh2>\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002Fnews\u002Fclaude-4\" target=\"_blank\" rel=\"noopener\">Claude 4\u003C\u002Fa> 系列沒有公開參數數字。這很正常。前沿模型廠商現在幾乎都不愛講這件事。因為市場已經不太買單單純比大小了。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg 
src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775207385570-sw8q.png\" alt=\"Claude Opus 4.5 和 GPT 到底多大\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>Anthropic 近年的重點很明顯。它很在意 coding、長上下文、工具使用，還有 agent 工作流。這些能力對產品比較實際。你拿來寫程式、整理文件、跑流程，才真的有感。\u003C\u002Fp>\u003Cp>所以如果 Claude Opus 4.5 或 4.6 的參數規模，落在與 GPT-4o 類似的區間，我一點也不意外。現在的競爭重點，早就不是誰喊出最大數字，而是誰用更少成本做出更好的體驗。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\u002F\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa> 沒公開 Claude Opus 的參數數字\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fopenai.com\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> 也沒公開 GPT-4 系列的完整大小\u003C\u002Fli>\u003Cli>GPT-4o 的 200B 到 300B，明顯低於 1T 級別的想像\u003C\u002Fli>\u003Cli>部署足跡較小的模型，通常更利於大規模上線\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>對開發者和買家，差在哪裡\u003C\u002Fh2>\u003Cp>如果你真的要選模型，別只盯著排行榜。很多產品根本不需要 1.6T 那種級別。200B 到 300B 的模型，只要速度夠快、價格合理、回答品質穩，就已經很夠用。\u003C\u002Fp>\u003Cp>這對 SaaS 團隊很重要。因為 AI 成本現在常常不是訓練，而是每天的推論帳單。只要使用量一上來，GPU 成本就會咬人。模型越能省，產品越容易活下來。\u003C\u002Fp>\u003Cp>還有一個現實問題。你要做的是客服、摘要、程式輔助，還是多步驟 agent？不同任務對模型的要求差很多。對很多場景來說，延遲低 30%，比參數多 3 倍更有感。\u003C\u002Fp>\u003Cul>\u003Cli>\u003Cstrong>訓練成本：\u003C\u002Fstrong>通常會隨模型規模快速上升\u003C\u002Fli>\u003Cli>\u003Cstrong>服務成本：\u003C\u002Fstrong>常是 AI 產品真正的瓶頸\u003C\u002Fli>\u003Cli>\u003Cstrong>延遲：\u003C\u002Fstrong>模型越精簡，通常回應越快\u003C\u002Fli>\u003Cli>\u003Cstrong>產品適配：\u003C\u002Fstrong>coding、摘要、agent 常看效率，不看面子\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>為什麼大家開始少談參數\u003C\u002Fh2>\u003Cp>這幾年，模型廠商越來越少公開參數數字。原因很簡單。第一，大家學會了參數不是全部。第二，公開太多，競爭對手也能更快推測架構方向。\u003C\u002Fp>\u003Cp>另外，市場也變成熟了。以前大家會拿 7B、13B、70B 互相比。現在更常問的是：上下文長度多少？工具調用穩不穩？coding 表現如何？價格每百萬 Token 
幾美元？\u003C\u002Fp>\u003Cp>我覺得這是好事。因為這代表討論終於回到實用面。開發者要的不是一個好看的數字，而是一個能穩定跑在產品裡的模型。\u003C\u002Fp>\u003Ch2>所以 Claude Opus 4.5 到底多大\u003C\u002Fh2>\u003Cp>老實說，外部沒辦法精準知道。沒有官方數字，就只能看公開線索和產品表現。可是方向已經很清楚了：前沿模型正在往更高效率走。\u003C\u002Fp>\u003Cp>如果 Claude Opus 4.5 真的跟 GPT-4o 站在同一個量級，那它的重點就不是「多大」，而是「每個參數能做多少事」。我會押注這條路。接下來你該看的，也不是誰喊出更誇張的數字，而是誰能把成本壓低，還維持品質。\u003C\u002Fp>\u003Cp>對開發者來說，最實際的問題只有一個：你的產品，真的需要超大模型嗎？很多時候，答案其實是否定的。\u003C\u002Fp>","GPT-4 常被估到 1.6 兆參數，但 GPT-4o 可能只有 200B 到 300B。Claude Opus 4.5 的真實大小沒公開，重點其實是成本、延遲和效能比。","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2020711987777159880",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775207388141-adee.png",[13,14,15,16,17,18,19,20,21,22],"Claude Opus 4.5","GPT-4o","GPT-4","參數估計","大型語言模型","Anthropic","OpenAI","AI 成本","推論成本","台灣開發者","zh",1,false,"2026-04-03T09:09:28.833454+00:00","2026-04-03T09:09:28.792+00:00","done","4f7f7ec4-1974-4373-bfab-d12b50dd136a","claude-opus-45-gpt-parameters-estimate-zh","research","280d30d6-b080-4de0-b89b-fd239d8775fc","published","2026-04-07T07:41:09.557+00:00",[36,38,40,41,43,45,46,48],{"name":13,"slug":37},"claude-opus-45",{"name":19,"slug":39},"openai",{"name":17,"slug":17},{"name":15,"slug":42},"gpt-4",{"name":14,"slug":44},"gpt-4o",{"name":21,"slug":21},{"name":18,"slug":47},"anthropic",{"name":22,"slug":22},{"id":32,"slug":50,"title":51,"language":52},"claude-opus-45-gpt-parameters-estimate-en","How Big Are Claude Opus 4.5 and GPT Models?","en",[54,60,66,72,78,84],{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":31},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 
變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":31},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":31},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":73,"slug":74,"title":75,"cover_image":76,"image_url":76,"created_at":77,"category":31},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":79,"slug":80,"title":81,"cover_image":82,"image_url":82,"created_at":83,"category":31},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":85,"slug":86,"title":87,"cover_image":88,"image_url":88,"created_at":89,"category":31},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 
安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[91,96,101,106,111,116,121,126,131,136],{"id":92,"slug":93,"title":94,"created_at":95},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 AI","2026-03-26T08:16:02.367355+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 
強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":132,"slug":133,"title":134,"created_at":135},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":137,"slug":138,"title":139,"created_at":140},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]