[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-openai-altman-trust-and-power-en":3,"tags-openai-altman-trust-and-power-en":30,"related-lang-openai-altman-trust-and-power-en":41,"related-posts-openai-altman-trust-and-power-en":45,"series-industry-b629ec27-7a62-495d-afa0-96e8993e510f":82},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"b629ec27-7a62-495d-afa0-96e8993e510f","OpenAI、奥特曼与信任危机","\u003Cp>OpenAI最初不是一家普通创业公司。它在2015年以非营利形式起步，创始人包括\u003Ca href=\"https:\u002F\u002Fopenai.com\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa>的山姆·奥特曼、伊利亚·苏茨克弗、格雷格·布罗克曼和埃隆·马斯克，他们把人工智能描述为一种可能改变人类命运的技术。今天，OpenAI已经变成一家估值数千亿美元、产品覆盖数亿用户的公司，这个反差本身就足够刺眼。\u003C\u002Fp>\u003Cp>问题也随之变得简单而直接：如果一家公司的目标曾经是“让通用人工智能造福全人类”，那它现在究竟更像一家使命驱动机构，还是一家必须持续增长的商业机器？这篇文章讨论的核心，不是奥特曼是否聪明，而是他是否值得被赋予如此大的信任。\u003C\u002Fp>\u003Ch2>OpenAI的起点，和它后来走到的地方\u003C\u002Fh2>\u003Cp>OpenAI成立时的叙事很清晰：AI的能力可能远超过去任何一项技术，风险也可能远超过去任何一次软件浪潮。于是，公司选择了非营利母体加有限盈利子公司的结构，试图把“安全优先”写进治理框架里。这个设计在当时听起来很克制，也很理想主义。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775629696492-ohe3.png\" alt=\"OpenAI、奥特曼与信任危机\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>但现实很快把理想主义推到了边上。随着\u003Ca href=\"https:\u002F\u002Fwww.microsoft.com\" target=\"_blank\" 
rel=\"noopener\">Microsoft\u003C\u002Fa>投入数十亿美元，OpenAI的产品开始进入主流市场，ChatGPT迅速成为全球最知名的AI产品之一。到2024年，OpenAI的年化收入被多家媒体估算已达到数十亿美元级别，用户规模也从早期研究圈扩展到普通办公室、学校和开发者社区。\u003C\u002Fp>\u003Cp>这类扩张会带来一个很现实的变化：当产品被数亿人使用时，任何治理失误都不再是内部争议，而是公共事件。OpenAI的组织结构本来是为了压住这种压力，结果它自己先被压力改写了。\u003C\u002Fp>\u003Cul>\u003Cli>2015年：OpenAI以非营利形式成立\u003C\u002Fli>\u003Cli>2022年：ChatGPT发布后，用户增长进入爆发期\u003C\u002Fli>\u003Cli>2024年：OpenAI成为全球最受关注的AI公司之一\u003C\u002Fli>\u003Cli>微软向OpenAI投入了数十亿美元级别资金\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>奥特曼为什么总让人又想信任又想警惕\u003C\u002Fh2>\u003Cp>山姆·奥特曼的问题不在于“会不会讲故事”，而在于他太擅长同时讲两种故事：一种是关于使命、长期主义和人类利益，另一种是关于速度、产品和市场份额。前者让人安心，后者让投资人兴奋。真正麻烦的是，这两种叙事经常同时成立。\u003C\u002Fp>\u003Cp>《纽约客》长期关注OpenAI内部权力变化的报道里，最刺眼的不是某一次冲突，而是一个更大的矛盾：如果一个组织把“安全”写进章程，却又必须在竞争中持续加速，它到底会优先听谁的？董事会、研究人员、投资方，还是那个最会对外发声的CEO？\u003C\u002Fp>\u003Cp>这里没有阴谋论可讲，只有治理问题。奥特曼过去多次被批评过度自信，也多次被赞赏为极强的执行者。两种评价都可能是真的。对一家AI公司来说，这种人格特征既是优势，也是风险。\u003C\u002Fp>\u003Cblockquote>“Any organization that is building this kind of technology should be prepared to be transparent about the risks.” — Ilya Sutskever\u003C\u002Fblockquote>\u003Cp>这句话之所以重要，是因为它点出了OpenAI最初的自我要求：透明、克制、对风险保持敬畏。可当公司进入产品化和商业化阶段，透明往往会和竞争、保密、速度发生冲突。OpenAI今天面对的，不是“该不该做AI”，而是“谁来决定做多快、做多大、做到什么程度”。\u003C\u002Fp>\u003Ch2>治理结构比口号更重要\u003C\u002Fh2>\u003Cp>如果只看口号，OpenAI几乎无可挑剔。它谈安全、谈对齐、谈负责任部署，还会公开发布模型卡、系统说明和安全评估。可AI公司真正的分水岭从来不是口号，而是董事会能不能在关键时刻说“不”。2023年11月，OpenAI曾经历一次震动全行业的董事会风波，奥特曼被短暂解职，随后又迅速回归。那场事件让外界第一次如此直观地看到：这家公司内部的权力平衡并不稳定。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775629686436-wmfc.png\" alt=\"OpenAI、奥特曼与信任危机\" class=\"rounded-xl w-full\" loading=\"lazy\" 
\u002F>\u003C\u002Ffigure>\n\u003Cp>More importantly, events like this are not one-off gossip; they are stress tests of the governance structure. If a company training frontier models cannot even articulate the boundary between its board and its CEO, outsiders will find it hard to believe it can reliably handle harder problems: models going off the rails, data misuse, automated fraud, or the deployment cadence of far more capable future systems.\u003C\u002Fp>\u003Cul>\u003Cli>The nonprofit parent is, in theory, supposed to put human safety first\u003C\u002Fli>\u003Cli>The for-profit arm needs continuous fundraising and expansion\u003C\u002Fli>\u003Cli>Product competition demands faster releases\u003C\u002Fli>\u003Cli>Safety governance demands slower validation\u003C\u002Fli>\u003C\u002Ful>\u003Cp>These four goals are hard to maximize at the same time. In practice, usually only one truly wins out, and it tends to be the one that delivers the most cash flow and market share.\u003C\u002Fp>\u003Ch2>What Actually Makes OpenAI Different From Other AI Companies\u003C\u002Fh2>\u003Cp>Putting OpenAI alongside \u003Ca href=\"https:\u002F\u002Fwww.anthropic.com\" target=\"_blank\" rel=\"noopener\">Anthropic\u003C\u002Fa>, \u003Ca href=\"https:\u002F\u002Fdeepmind.google\" target=\"_blank\" rel=\"noopener\">Google DeepMind\u003C\u002Fa>, and \u003Ca href=\"https:\u002F\u002Fwww.meta.com\" target=\"_blank\" rel=\"noopener\">Meta\u003C\u002Fa> makes its peculiarity easier to see. Anthropic also emphasizes safety, but its brand narrative leans toward “constitutional AI” and controllability; DeepMind, backed by Google, has formidable research strength but clearer commercial objectives; Meta is more direct, making open models and platform distribution its core strategy.\u003C\u002Fp>\u003Cp>What makes OpenAI most unusual is that it is at once one of the biggest traffic gateways for consumer AI products and one of the loudest amplifiers of the “AI safety” narrative. It sells you efficiency with one hand and urges caution with the other. That dual identity is not rare, but in scale and influence OpenAI has stretched the tension to its limit.\u003C\u002Fp>\u003Cp>The numbers make it plainer still. ChatGPT became one of the fastest-growing consumer applications in history in a remarkably short time; developers wire the models into customer service, search, writing, and coding tools through the \u003Ca href=\"https:\u002F\u002Fplatform.openai.com\" target=\"_blank\" rel=\"noopener\">OpenAI API\u003C\u002Fa>; enterprise customers treat it as productivity infrastructure. By contrast, many AI labs remain mostly at the research, pilot, or enterprise-contract stage.\u003C\u002Fp>\u003Cul>\u003Cli>OpenAI: consumer products, API, and enterprise offerings advancing in parallel\u003C\u002Fli>\u003Cli>Anthropic: heavier emphasis on safety and the enterprise market\u003C\u002Fli>\u003Cli>Google 
DeepMind: strong research, with distribution relying on the Google ecosystem\u003C\u002Fli>\u003Cli>Meta: leans toward open models and platform distribution\u003C\u002Fli>\u003C\u002Ful>\u003Cp>This means OpenAI faces not a single competitor but a simultaneous squeeze from three different playbooks. It has to prove that it can make money, hold its boundaries, and keep the public’s trust, which is about the hardest combination there is.\u003C\u002Fp>\u003Ch2>The Real Question Is Not Altman Himself, but How Power Is Constrained\u003C\u002Fh2>\u003Cp>Putting all the focus on Altman makes the problem too narrow. The bigger question is: as AI systems grow more capable, who is qualified to decide when they go live, how they are trained, and how anomalies are handled? This is not a debate about personal character; it is an exam in institutional design.\u003C\u002Fp>\u003Cp>OpenAI is no longer just a research organization. It is a platform, an infrastructure company, and a template case in the global debate over AI regulation. Its every move is copied, questioned, and cited by legislators. In other words, its approach to governance spills over into the entire industry.\u003C\u002Fp>\u003Cp>So trust in Altman should not rest on the feeling that “he seems visionary.” It should rest on harder things: whether the board can genuinely check the CEO, whether safety evaluations are disclosed to a sufficient degree, whether model releases follow auditable standards, and who is accountable when something goes wrong. Without these, any grand mission is just packaging.\u003C\u002Fp>\u003Cp>What deserves attention next is not which new model OpenAI releases, but whether it keeps turning governance into enforceable rules. If the answer is no, the market will keep chasing it, but public trust will only wear thinner. By the next major model launch, the real question may be blunter: are we using a tool, or accepting one company’s judgment?\u003C\u002Fp>","From nonprofit origins to a valuation in the hundreds of billions of dollars, OpenAI’s governance and Altman’s power are being re-examined.","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2024866695706081032",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775629696492-ohe3.png",[13,14,15,16,17],"OpenAI","Sam Altman","AI governance","ChatGPT","AI safety","en",1,false,"2026-04-08T06:27:48.364776+00:00","2026-04-08T06:27:48.216+00:00","done","3db5f42e-1287-4ef8-aa8e-96f4f4eb850f","openai-altman-trust-and-power-en","industry","0e152bb6-f1eb-4834-a1ab-e07f4105cc8f","published","2026-04-08T09:00:46.5+00:00",[31,33,35,37,39],{"name":13,"slug":32},"openai",{"name":15,"slug":34},"ai-governance",{"name":16,"slug":36},"chatgpt",{"name":14,"slug":38},"sam-altman",{"name":17,"slug":40},"ai-safety",{"id":27,"slug":42,"title":43,"language":44},"openai-altman-trust-and-power-zh","OpenAI、奧特曼與信任危機","zh",[46,52,58,64,70,76],{"id":47,"slug":48,"title":49,"cover_image":50,"image_url":50,"created_at":51,"category":26},"6ff3920d-c8ea-4cf3-8543-9cf9efc3fe36","circles-agent-stack-targets-machine-speed-payments-en","Circle’s Agent Stack targets machine-speed 
payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871659638-hur1.png","2026-05-15T19:00:44.756112+00:00",{"id":53,"slug":54,"title":55,"cover_image":56,"image_url":56,"created_at":57,"category":26},"1270e2f4-6f3b-4772-9075-87c54b07a8d1","iren-signs-nvidia-ai-infrastructure-pact-en","IREN signs Nvidia AI infrastructure pact","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871059665-3vhi.png","2026-05-15T18:50:38.162691+00:00",{"id":59,"slug":60,"title":61,"cover_image":62,"image_url":62,"created_at":63,"category":26},"b308c85e-ee9c-4de6-b702-dfad6d8da36f","circle-agent-stack-ai-payments-en","Circle launches Agent Stack for AI payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778870450891-zv1j.png","2026-05-15T18:40:31.462625+00:00",{"id":65,"slug":66,"title":67,"cover_image":68,"image_url":68,"created_at":69,"category":26},"f7028083-46ba-493b-a3db-dd6616a8c21f","why-nebius-ai-pivot-is-more-real-than-hype-en","Why Nebius’s AI Pivot Is More Real Than Hype","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778823055711-tbfv.png","2026-05-15T05:30:26.829489+00:00",{"id":71,"slug":72,"title":73,"cover_image":74,"image_url":74,"created_at":75,"category":26},"b63692ed-db6a-4dbd-b771-e1babdc94af7","nvidia-backs-corning-factories-with-billions-en","Nvidia backs Corning factories with billions","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778822444685-tvx6.png","2026-05-15T05:20:28.914908+00:00",{"id":77,"slug":78,"title":79,"cover_image":80,"image_url":80,"created_at":81,"category":26},"26ab4480-2476-4ec7-b43a-5d46def6487e","why-anthropic-gates-foundation-ai-public-goods-en","Why Anthropic 
and the Gates Foundation should fund AI public goods","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778796645685-wbw0.png","2026-05-14T22:10:22.60302+00:00",[83,88,93,98,103,108,113,118,123,128],{"id":84,"slug":85,"title":86,"created_at":87},"d35a1bd9-e709-412e-a2df-392df1dc572a","ai-impact-2026-developments-market-en","AI's Impact in 2026: Key Developments and Market Shifts","2026-03-25T16:20:33.205823+00:00",{"id":89,"slug":90,"title":91,"created_at":92},"5ed27921-5fd6-492e-8c59-78393bf37710","trumps-ai-legislative-framework-en","Trump's AI Legislative Framework: What's Inside?","2026-03-25T16:22:20.005325+00:00",{"id":94,"slug":95,"title":96,"created_at":97},"e454a642-f03c-4794-b185-5f651aebbaca","nvidia-gtc-2026-key-highlights-innovations-en","NVIDIA GTC 2026: Key Highlights and Innovations","2026-03-25T16:22:47.882615+00:00",{"id":99,"slug":100,"title":101,"created_at":102},"0ebb5b16-774a-4922-945d-5f2ce1df5a6d","claude-usage-diversifies-learning-curves-en","Claude Usage Diversifies, Learning Curves Emerge","2026-03-25T16:25:50.770376+00:00",{"id":104,"slug":105,"title":106,"created_at":107},"69934e86-2fc5-4280-8223-7b917a48ace8","openclaw-ai-commoditization-concerns-en","OpenClaw's Rise Raises Concerns of AI Model Commoditization","2026-03-25T16:26:30.582047+00:00",{"id":109,"slug":110,"title":111,"created_at":112},"b4b2575b-2ac8-46b2-b90e-ab1d7c060797","google-gemini-ai-rollout-2026-en","Google's Gemini AI Rollout Extended to 2026","2026-03-25T16:28:14.808842+00:00",{"id":114,"slug":115,"title":116,"created_at":117},"6e18bc65-42ae-4ad0-b564-67d7f66b979e","meta-llama4-fabricated-results-scandal-en","Meta's Llama 4 Scandal: Fabricated AI Test Results Unveiled","2026-03-25T16:29:15.482836+00:00",{"id":119,"slug":120,"title":121,"created_at":122},"bf888e9d-08be-4f47-996c-7b24b5ab3500","accenture-mistral-ai-deployment-en","Accenture and Mistral AI Team Up for AI 
Deployment","2026-03-25T16:31:01.894655+00:00",{"id":124,"slug":125,"title":126,"created_at":127},"5382b536-fad2-49c6-ac85-9eb2bae49f35","mistral-ai-high-stakes-2026-en","Mistral AI: Facing High Stakes in 2026","2026-03-25T16:31:39.941974+00:00",{"id":129,"slug":130,"title":131,"created_at":132},"9da3d2d6-b669-4971-ba1d-17fdb3548ed5","cursors-meteoric-rise-pressures-en","Cursor's Meteoric Rise Faces Industry Pressures","2026-03-25T16:32:21.899217+00:00"]