[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-openai-gpt-54-cyber-security-access-zh":3,"tags-openai-gpt-54-cyber-security-access-zh":33,"related-lang-openai-gpt-54-cyber-security-access-zh":49,"related-posts-openai-gpt-54-cyber-security-access-zh":53,"series-research-a8c7399c-ea3b-4c74-b0b4-1b3527a76dcc":90},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":21,"translated_content":10,"views":22,"is_premium":23,"created_at":24,"updated_at":24,"cover_image":11,"published_at":25,"rewrite_status":26,"rewrite_error":10,"rewritten_from_id":27,"slug":28,"category":29,"related_article_id":30,"status":31,"google_indexed_at":32,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":23},"a8c7399c-ea3b-4c74-b0b4-1b3527a76dcc","OpenAI 推 GPT-5.4-Cyber，安全工作進場","\u003Cp>OpenAI 這次不是只丟一個聊天模型。它還擴大了網路安全信任存取計畫，並推出 \u003Ca href=\"https:\u002F\u002Fopenai.com\" target=\"_blank\" rel=\"noopener\">GPT-5.4-Cyber\u003C\u002Fa>。講白了，這代表 AI 開始真的碰安全工作，不是只拿來寫文案。\u003C\u002Fp>\u003Cp>同一波消息裡，\u003Ca href=\"https:\u002F\u002Fdeepmind.google\" target=\"_blank\" rel=\"noopener\">Google DeepMind\u003C\u002Fa> 也推出 \u003Ca href=\"https:\u002F\u002Fdeepmind.google\u002Ftechnologies\u002Frobotics\u002F\" target=\"_blank\" rel=\"noopener\">Gemini Robotics-ER 1.6\u003C\u002Fa>，\u003Ca href=\"https:\u002F\u002Fwww.baidu.com\" target=\"_blank\" rel=\"noopener\">Baidu\u003C\u002Fa> 也有新動作。這幾條線放一起看，很清楚：模型廠商正在把 AI 切成不同用途，而不是硬塞成一個萬用聊天機器人。\u003C\u002Fp>\u003Ch2>OpenAI 把 cyber 當成正式產品線\u003C\u002Fh2>\u003Cp>網路安全信任存取計畫，才是這次最值得看的地方。資安團隊要的不是「會講話」的助手。它們要的是可控、可查、可限制權限的系統。尤其碰到敏感資料，亂回一句就可能出事。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600233905-5zzo.png\" alt=\"OpenAI 推 GPT-5.4-Cyber，安全工作進場\" 
class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>所以 \u003Ca href=\"https:\u002F\u002Fopenai.com\" target=\"_blank\" rel=\"noopener\">GPT-5.4-Cyber\u003C\u002Fa> 的意義，不在於名字夠不夠炫。重點是 OpenAI 把模型和存取控管綁在一起。這種做法很像在告訴企業客戶：我們不是只賣 Token，我們也賣使用邊界。\u003C\u002Fp>\u003Cp>資安工作本來就很適合專用模型。事件分流、log 摘要、告警整理、事件報告草稿，這些事情都很耗人力。只要模型夠準，還能被稽核，就能省掉一堆重複工。\u003C\u002Fp>\u003Cul>\u003Cli>OpenAI 先做存取控管，再推模型，順序是對的。\u003C\u002Fli>\u003Cli>資安客戶在意誤判率，不在意模型多會聊天。\u003C\u002Fli>\u003Cli>專用模型比較好測，也比較好比對。\u003C\u002Fli>\u003Cli>信任存取計畫通常代表企業導向，不是玩票。\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>DeepMind 的機器人路線，重點是動作\u003C\u002Fh2>\u003Cp>\u003Ca href=\"https:\u002F\u002Fdeepmind.google\u002Ftechnologies\u002Frobotics\u002F\" target=\"_blank\" rel=\"noopener\">Gemini Robotics-ER 1.6\u003C\u002Fa> 代表另一條路。這不是文字世界的優化，而是讓模型去理解空間、動作和環境。機器人不會因為你講得漂亮就完成任務，它要真的抓得到、避得開、走得穩。\u003C\u002Fp>\u003Cp>這也解釋了為什麼 DeepMind 的更新和 OpenAI 的 cyber 更新很不同。資安模型處理的是文字、事件、規則。機器人模型處理的是感測器、控制、物理限制。兩邊都叫 AI，但產品問題完全不一樣。\u003C\u002Fp>\u003Cp>DeepMind 做機器人研究不是一天兩天了。它的官方資料早就把這條線講得很清楚。這次版本號往前推，通常表示團隊在調整能力、穩定性，或是部署方式。說真的，這比喊口號實際多了。\u003C\u002Fp>\u003Cblockquote>“The future of robotics lies in making robots more useful in the real world.” — Demis Hassabis, co-founder and CEO of Google DeepMind\u003C\u002Fblockquote>\u003Cp>這句話放到今天還是很準。機器人如果只能秀 \u003Ca href=\"\u002Fnews\u002Fgemini-app-release-notes-latest-updates-zh\">demo\u003C\u002Fa>，沒什麼用。能在真實場景裡少出錯，才有價值。\u003C\u002Fp>\u003Ch2>數據和產品選擇，透露市場方向\u003C\u002Fh2>\u003Cp>這次消息沒有丟一堆硬數字，但產品選擇本身就很有訊號。OpenAI 選 cyber。DeepMind 選 \u003Ca href=\"\u002Fnews\u002Fwhite-house-anthropic-mythos-risks-meeting-zh\">robotics\u003C\u002Fa>。Baidu 還在往搜尋和平台能力補強。這代表 AI 廠商開始更像軟體公司，而不是只會發大字報。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600239567-6q2q.png\" alt=\"OpenAI 推 GPT-5.4-Cyber，安全工作進場\" class=\"rounded-xl w-full\" loading=\"lazy\" 
\u002F>\u003C\u002Ffigure>\n\u003Cp>對買方來說，這種切法更好懂。資安主管可以直接看 cyber 模型。機器人團隊可以測 motion 模型。搜尋團隊可以看 retrieval 和 ranking 的整合。每個人都知道自己在買什麼，採購也比較不會亂掉。\u003C\u002Fp>\u003Cp>如果拿幾家大廠來比，差異很明顯：\u003C\u002Fp>\u003Cul>\u003Cli>\u003Ca href=\"https:\u002F\u002Fopenai.com\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> 盯的是敏感工作流，重點是權限和稽核。\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fdeepmind.google\" target=\"_blank\" rel=\"noopener\">Google DeepMind\u003C\u002Fa> 盯的是具身智慧，重點是空間和動作。\u003C\u002Fli>\u003Cli>\u003Ca href=\"https:\u002F\u002Fwww.baidu.com\" target=\"_blank\" rel=\"noopener\">Baidu\u003C\u002Fa> 盯的是搜尋與平台，重點是規模和相關性。\u003C\u002Fli>\u003Cli>三家都在往專用系統走，不再只推萬用聊天機器人。\u003C\u002Fli>\u003C\u002Ful>\u003Cp>這跟 \u003Ca href=\"\u002Fnews\u002Fatlassian-ai-training-customer-data-2026-zh\">2024\u003C\u002Fa>、2025 那波大模型熱潮很不一樣。那時候大家比誰更會聊天。現在大家比誰更能進流程、進系統、進權限控管。老實說，這才比較像軟體落地。\u003C\u002Fp>\u003Cp>而且專用系統也比較容易管。資安模型可以看誤報和漏報。機器人模型可以看任務完成率和安全性。搜尋模型可以看命中率和延遲。規格越清楚，越能做治理。\u003C\u002Fp>\u003Ch2>這波更新對開發者代表什麼\u003C\u002Fh2>\u003Cp>對台灣開發者來說，這波訊號很直接。你如果還在把 AI 當成「什麼都能問」的工具，可能會慢半拍。真正有價值的，是把模型塞進工作流，配上評測、權限、回饋迴路。\u003C\u002Fp>\u003Cp>換句話說，未來更吃香的不是只會呼叫 API 的人，而是懂得把模型當軟體元件的人。你要知道什麼資料能進去，什麼輸出要擋掉，什麼情境要人工覆核。這些都很工程，不浪漫，但很重要。\u003C\u002Fp>\u003Cp>我自己的判斷是，接下來半年會看到更多帶功能標籤的模型名。像 Cyber、Robotics、Search、Code 這種字眼會越來越多。這不是行銷話術，而是廠商開始承認：一個模型吃天下，沒那麼簡單。\u003C\u002Fp>\u003Ch2>產業脈絡：從通用聊天到專用系統\u003C\u002Fh2>\u003Cp>這個轉向不是突然發生的。過去兩年，LLM 先把大家的注意力拉到聊天能力。接著企業開始問：能不能接內網？能不能控權限？能不能追蹤每一步？一問下去，通用助手就開始露出限制。\u003C\u002Fp>\u003Cp>所以現在的趨勢很清楚。模型廠商要往更窄的場景切。因為窄，才好測。因為好測，才好賣。因為好賣，才進得了企業預算。這條路很務實，也很像軟體業本來該走的樣子。\u003C\u002Fp>\u003Cp>如果你是工程團隊，這代表你要開始重視評測資料、權限設計、審計紀錄，還有失敗案例庫。別再只看 demo。demo 很會騙人，真的上線才知道模型有多誠實。\u003C\u002Fp>\u003Ch2>接下來怎麼看\u003C\u002Fh2>\u003Cp>我覺得接下來最值得觀察的，是 OpenAI 會不會把 cyber 模型做成更完整的企業方案。若它真的把權限、稽核、資料隔離一起包進去，資安團隊會很有感。因為那才是能上線的東西。\u003C\u002Fp>\u003Cp>如果你在做 AI 產品，現在可以先問自己一個問題：你的系統是通用聊天，還是任務型工具？前者容易展示，後者才真的能進公司。這差很多。\u003C\u002Fp>\u003Cp>說到底，這波新聞不是在比誰最會講未來。它在告訴你，AI 
正在變得更像軟體工程。下一步，輪到開發者決定要不要跟上。\u003C\u002Fp>","OpenAI 擴大網路安全信任存取，並推出 GPT-5.4-Cyber。DeepMind 與 Baidu 也同步推進機器人與搜尋更新，AI 正往專用系統走。","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2028040384207500334",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1776600233905-5zzo.png",[13,14,15,16,17,18,19,20],"OpenAI","GPT-5.4-Cyber","資安","Google DeepMind","Gemini Robotics-ER 1.6","Baidu","LLM","AI 產品","zh",0,false,"2026-04-19T12:03:28.424057+00:00","2026-04-19T12:03:27.556+00:00","done","1ae0a600-cead-40a0-9bcb-75f7289b343e","openai-gpt-54-cyber-security-access-zh","research","2b4823a0-05dd-4ef7-a31a-feab1cc0df67","published","2026-04-20T09:00:12.995+00:00",[34,36,38,40,42,43,45,47],{"name":13,"slug":35},"openai",{"name":17,"slug":37},"gemini-robotics-er-16",{"name":14,"slug":39},"gpt-54-cyber",{"name":19,"slug":41},"llm",{"name":15,"slug":15},{"name":16,"slug":44},"google-deepmind",{"name":18,"slug":46},"baidu",{"name":20,"slug":48},"ai-產品",{"id":30,"slug":50,"title":51,"language":52},"openai-gpt-54-cyber-security-access-en","OpenAI pushes GPT-5.4-Cyber into security work","en",[54,60,66,72,78,84],{"id":55,"slug":56,"title":57,"cover_image":58,"image_url":58,"created_at":59,"category":29},"667b72b6-e821-4d68-80a1-e03340bc85f1","turboquant-seo-shift-small-sites-zh","TurboQuant 與小站 SEO 變化","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778840440690-kcw9.png","2026-05-15T10:20:27.319472+00:00",{"id":61,"slug":62,"title":63,"cover_image":64,"image_url":64,"created_at":65,"category":29},"381fb6c6-6da7-4444-831f-8c5eed8d685c","turboquant-vllm-comparison-fp8-kv-cache-zh","TurboQuant 與 FP8 
實測結果","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778839867551-4v9g.png","2026-05-15T10:10:36.034569+00:00",{"id":67,"slug":68,"title":69,"cover_image":70,"image_url":70,"created_at":71,"category":29},"c15f45ee-a548-4dbf-8152-91de159c1a11","llmbda-calculus-agent-safety-rules-zh","LLMbda 演算替 AI 代理人立安全規則","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778825503412-mlbf.png","2026-05-15T06:10:34.832664+00:00",{"id":73,"slug":74,"title":75,"cover_image":76,"image_url":76,"created_at":77,"category":29},"0c02225c-d6ff-44f8-bc92-884c8921c4a3","low-complexity-beamspace-denoiser-mmwave-mimo-zh","更簡單的毫米波波束域去噪器","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778814650361-xtc2.png","2026-05-15T03:10:30.06639+00:00",{"id":79,"slug":80,"title":81,"cover_image":82,"image_url":82,"created_at":83,"category":29},"9d27f967-62cc-433f-8cdb-9300937ade13","ai-benchmark-wins-cyber-scare-defenders-zh","為什麼 AI 基準賽在資安領域的勝利，應該讓防守方警醒","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778807450006-nofx.png","2026-05-15T01:10:29.379041+00:00",{"id":85,"slug":86,"title":87,"cover_image":88,"image_url":88,"created_at":89,"category":29},"bc402dc6-5da6-46fc-9d66-d09cb215f72b","why-linux-security-needs-patch-wave-mindset-zh","為什麼 Linux 安全需要「補丁浪潮」思維","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778741449813-s2wn.png","2026-05-14T06:50:24.052583+00:00",[91,96,101,106,111,116,121,126,131,136],{"id":92,"slug":93,"title":94,"created_at":95},"f18dbadb-8c59-4723-84a4-6ad22746c77a","deepmind-bets-on-continuous-learning-ai-2026-zh","DeepMind 押注 2026 連續學習 
AI","2026-03-26T08:16:02.367355+00:00",{"id":97,"slug":98,"title":99,"created_at":100},"f4a106cb-02a6-4508-8f39-9720a0a93cee","ml-papers-of-the-week-github-research-desk-zh","每週 ML 論文清單，為何紅到 GitHub","2026-03-27T01:11:39.284175+00:00",{"id":102,"slug":103,"title":104,"created_at":105},"c4f807ca-4e5f-47f1-a48c-961cf3fc44dc","ai-ml-conferences-to-watch-in-2026-zh","2026 AI 研討會投稿時程整理","2026-03-27T01:51:53.874432+00:00",{"id":107,"slug":108,"title":109,"created_at":110},"9f50561b-aebd-46ba-94a8-363198aa7091","openclaw-agents-manipulated-self-sabotage-zh","OpenClaw Agent 會自己搞砸自己","2026-03-28T03:03:18.786425+00:00",{"id":112,"slug":113,"title":114,"created_at":115},"11f22e92-7066-4978-a544-31f5f2156ec6","vega-learning-to-drive-with-natural-language-instructions-zh","Vega：使用自然語言指示進行自駕車控制","2026-03-28T14:54:04.847912+00:00",{"id":117,"slug":118,"title":119,"created_at":120},"a4c7cfec-8d0e-4fec-93cf-1b9699a530b8","drive-my-way-en-zh","Drive My Way：個性化自駕車風格的實現","2026-03-28T14:54:26.207495+00:00",{"id":122,"slug":123,"title":124,"created_at":125},"dec02f89-fd39-41ba-8e4d-11ede93a536d","training-knowledge-bases-with-writeback-rag-zh","用 WriteBack-RAG 強化知識庫提升檢索效能","2026-03-28T14:54:45.775606+00:00",{"id":127,"slug":128,"title":129,"created_at":130},"3886be5c-a137-40cc-b9e2-0bf18430c002","packforcing-efficient-long-video-generation-method-zh","PackForcing：短影片訓練也能生成長影片","2026-03-28T14:55:02.688141+00:00",{"id":132,"slug":133,"title":134,"created_at":135},"72b90667-d930-4cc9-8ced-aaa0f8968d44","pixelsmile-toward-fine-grained-facial-expression-editing-zh","PixelSmile：提升精細臉部表情編輯的新方法","2026-03-28T14:55:20.678181+00:00",{"id":137,"slug":138,"title":139,"created_at":140},"cf046742-efb2-4753-aef9-caed5da5e32e","adaptive-block-scaled-data-types-zh","IF4：神經網路量化的聰明選擇","2026-03-31T06:00:36.990273+00:00"]