[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"article-midjourney-public-beta-visual-generation-history-en":3,"tags-midjourney-public-beta-visual-generation-history-en":30,"related-lang-midjourney-public-beta-visual-generation-history-en":40,"related-posts-midjourney-public-beta-visual-generation-history-en":44,"series-industry-7e97034b-97b7-4bd6-86dd-c0267fefd4ed":81},{"id":4,"title":5,"content":6,"summary":7,"source":8,"source_url":9,"author":10,"image_url":11,"keywords":12,"language":18,"translated_content":10,"views":19,"is_premium":20,"created_at":21,"updated_at":21,"cover_image":11,"published_at":22,"rewrite_status":23,"rewrite_error":10,"rewritten_from_id":24,"slug":25,"category":26,"related_article_id":27,"status":28,"google_indexed_at":29,"x_posted_at":10,"tweet_text":10,"title_rewritten_at":10,"title_original":10,"key_takeaways":10,"topic_cluster_id":10,"embedding":10,"is_canonical_seed":20},"7e97034b-97b7-4bd6-86dd-c0267fefd4ed","Midjourney公测背后的视觉生成史","\u003Cp>7月，\u003Ca href=\"https:\u002F\u002Fwww.midjourney.com\" target=\"_blank\" rel=\"noopener\">Midjourney\u003C\u002Fa>进入公测，创始人 \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fdavidholz\u002F\" target=\"_blank\" rel=\"noopener\">David Holz\u003C\u002Fa> 没有把产品做成传统 App，而是把入口放进了 \u003Ca href=\"https:\u002F\u002Fdiscord.com\" target=\"_blank\" rel=\"noopener\">Discord\u003C\u002Fa>。这个选择很聪明：用户不是一个人对着空白画布，而是在一个公开频道里看着别人不断生成、修改、再生成，像在围观一场实时创作秀。\u003C\u002Fp>\u003Cp>这种“广场式”体验迅速放大了传播效率，也让 Midjourney 的审美标签变得非常鲜明。它的图像不追求机械式还原，更像是把“好看”写进了默认参数里，尤其是 V-series 之后，那种偏 CG、偏海报、偏概念设计的质感，几乎成了它的招牌。\u003C\u002Fp>\u003Cp>如果把这件事放回技术史里看，Midjourney 只是最新一轮爆发。视觉生成已经走了七十多年，从早期的规则绘图，到神经网络，再到今天的大模型扩散生成，今天我们看到的“点几下就出图”，其实是几代研究和产品路线叠加后的结果。\u003C\u002Fp>\u003Ch2>Midjourney为什么先赢在Discord\u003C\u002Fh2>\u003Cp>Midjourney 早期没有把精力放在独立客户端上，而是直接押注 Discord。这个决定降低了使用门槛，也把生成过程变成了社交内容本身。用户发一句提示词，几秒后就能得到四张图，再继续放大、重绘、变体，整个过程天然适合围观和转发。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775171754306-tnkj.png\" alt=\"Midjourney公测背后的视觉生成史\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>对生成式产品来说，分发方式往往和模型能力一样重要。Midjourney 的做法把“使用”变成了“展示”，把“结果”变成了“话题”。这也是它比很多同类工具更快出圈的原因之一。\u003C\u002Fp>\u003Cp>它的审美策略也很明确。Midjourney 不太执着于照片级真实感，而是持续强化一种更容易被普通用户接受的视觉风格：高对比、强光影、细节饱满、构图完整。对设计师来说，这意味着它更像一个灵感机器；对普通用户来说，它更像一个“自动出片”的工具。\u003C\u002Fp>\u003Cul>\u003Cli>入口在 Discord，降低了安装和学习成本\u003C\u002Fli>\u003Cli>默认生成结果更偏艺术化，而非纯写实\u003C\u002Fli>\u003Cli>公开频道让每次生成都带有社交传播属性\u003C\u002Fli>\u003Cli>V-series 强化了统一审美，形成明显品牌辨识度\u003C\u002Fli>\u003C\u002Ful>\u003Ch2>从规则绘图到扩散模型\u003C\u002Fh2>\u003Cp>视觉生成不是最近几年才出现的想法。早在 20 世纪中期，研究者就已经在尝试用程序生成图形，只是那时的方法更接近“手工写规则”。计算机能画线、画几何图案、做简单变形，但离今天这种“理解提示词并生成完整图像”还很远。\u003C\u002Fp>\u003Cp>真正把这条路线推向实用的是深度学习。2014 年，Ian Goodfellow 提出了 \u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661\" target=\"_blank\" rel=\"noopener\">GAN\u003C\u002Fa>，生成图像第一次有了更强的逼真感。随后，扩散模型开始接管高质量生成任务，\u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fdall-e-2\u002F\" target=\"_blank\" rel=\"noopener\">OpenAI\u003C\u002Fa> 的 \u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fdall-e-2\u002F\" target=\"_blank\" rel=\"noopener\">DALL·E 2\u003C\u002Fa>、\u003Ca href=\"https:\u002F\u002Fstability.ai\" target=\"_blank\" rel=\"noopener\">Stability AI\u003C\u002Fa> 的 \u003Ca href=\"https:\u002F\u002Fstability.ai\u002Fstable-diffusion\" target=\"_blank\" rel=\"noopener\">Stable Diffusion\u003C\u002Fa>，把“文字到图像”的能力真正带到了大众手里。\u003C\u002Fp>\u003Cp>Midjourney 的差异不在于它发明了生成图像这件事，而在于它把模型输出包装成了一种稳定的审美体验。很多模型能生成“正确”的图，但 Midjourney 更擅长生成“愿意发出去”的图。\u003C\u002Fp>\u003Cblockquote>“The future of AI is not about replacing humans, it’s about amplifying human creativity.” — David Holz\u003C\u002Fblockquote>\u003Cp>这句话常被拿来解释 Midjourney 的产品哲学。它并没有把自己定义成一个替代设计师的工具，而是把重点放在创意放大上。这个方向也解释了为什么它会优先优化风格、构图和整体观感，而不是一味追求像素级还原。\u003C\u002Fp>\u003Ch2>四个关键节点看视觉生成的演进\u003C\u002Fh2>\u003Cp>如果把视觉生成史压缩成几个节点，会更容易看清 Midjourney 为什么会在这个时间点爆发。每一代技术都在解决前一代的短板，而用户能感知到的，往往是结果而不是算法细节。\u003C\u002Fp>\n\u003Cfigure class=\"my-6\">\u003Cimg src=\"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775171753485-f7ht.png\" alt=\"Midjourney公测背后的视觉生成史\" class=\"rounded-xl w-full\" loading=\"lazy\" \u002F>\u003C\u002Ffigure>\n\u003Cp>先看几个具体数字。\u003Ca href=\"https:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661\" target=\"_blank\" rel=\"noopener\">GAN 论文\u003C\u002Fa>发表于 2014 年；\u003Ca href=\"https:\u002F\u002Fopenai.com\u002Findex\u002Fdall-e-2\u002F\" target=\"_blank\" rel=\"noopener\">DALL·E 2\u003C\u002Fa> 在 2022 年把文字生成图像带到更高分辨率；\u003Ca href=\"https:\u002F\u002Fstability.ai\u002Fstable-diffusion\" target=\"_blank\" rel=\"noopener\">Stable Diffusion\u003C\u002Fa> 同年开源后迅速扩散到本地部署和第三方应用；\u003Ca href=\"https:\u002F\u002Fwww.midjourney.com\" target=\"_blank\" rel=\"noopener\">Midjourney\u003C\u002Fa> 通过 Discord 先做社区，再做产品。\u003C\u002Fp>\u003Cul>\u003Cli>2014：GAN 让生成图像第一次具备较强真实感\u003C\u002Fli>\u003Cli>2022：DALL·E 2 把文本到图像的质量推到新高度\u003C\u002Fli>\u003Cli>2022：Stable Diffusion 开源后迅速进入开发者和创作者工作流\u003C\u002Fli>\u003Cli>Midjourney：用 Discord 社区把生成过程变成传播内容\u003C\u002Fli>\u003C\u002Ful>\u003Cp>这条链条说明一件事：视觉生成的竞争早已不只是“谁的模型更强”，而是“谁能把模型变成用户每天都会打开的产品”。Midjourney 在这一点上做得很早，也做得很准。\u003C\u002Fp>\u003Ch2>OpenAI为何关停Sora的讨论\u003C\u002Fh2>\u003Cp>标题里提到“OpenAI 为何关停 Sora”，但更准确地说，\u003Ca href=\"https:\u002F\u002Fopenai.com\u002Fsora\" target=\"_blank\" rel=\"noopener\">Sora\u003C\u002Fa> 讨论的是视频生成的边界，而不是单纯的产品成败。OpenAI 公开展示 Sora 时，重点放在长时序一致性、复杂场景和镜头运动上。它让外界第一次清楚看到，视频生成已经从“短片段演示”走向“可叙事的镜头语言”。\u003C\u002Fp>\u003Cp>但视频比图片难得多。图片只需要在一个瞬间成立，视频则要在时间轴上保持人物、物体、光线和运动逻辑一致。生成一张漂亮的图像已经不容易，生成十几秒还不崩的画面，更像是在和物理规律、记忆一致性、镜头调度同时较劲。\u003C\u002Fp>\u003Cp>这也是 Midjourney 和 Sora 的分野。Midjourney 把注意力放在静态图像的审美稳定性上，Sora 则把问题推进到动态世界建模。一个解决“好看”，另一个解决“会动且说得通”。\u003C\u002Fp>\u003Cp>从产品角度看，这两条路线都说明生成式 AI 已经过了单纯拼参数的阶段。接下来比的，是谁能把能力做成稳定的工作流，谁能让创作者愿意把日常任务交给它。\u003C\u002Fp>\u003Ch2>接下来谁会更吃香\u003C\u002Fh2>\u003Cp>接下来真正有竞争力的产品，未必是“最像”的那个，而是“最适合某种创作场景”的那个。Midjourney 已经证明，审美一致性和社区传播能让一个模型迅速破圈；Sora 则提醒大家，视频生成的门槛高得多，谁先解决长时序一致性，谁就更接近生产级应用。\u003C\u002Fp>\u003Cp>对开发者和产品经理来说，这里有个很现实的判断标准：模型能力只是起点，入口设计、反馈速度、审美策略、版权边界、工作流整合，都会直接影响最终结果。单纯把 API 暴露出来，已经不够了。\u003C\u002Fp>\u003Cp>如果你想判断下一波视觉生成产品谁会跑出来，可以盯住这些指标：\u003C\u002Fp>\u003Cul>\u003Cli>生成结果的稳定性，而不是单次演示的惊艳程度\u003C\u002Fli>\u003Cli>社区传播效率，尤其是是否天然适合分享\u003C\u002Fli>\u003Cli>是否能嵌进设计、广告、短视频和电商的日常流程\u003C\u002Fli>\u003Cli>对风格控制和版权风险的处理方式\u003C\u002Fli>\u003C\u002Ful>\u003Cp>Midjourney 的故事说明，生成式 AI 的胜负手经常不在模型参数表里，而在用户第一眼看到的那张图里。下一阶段，谁能把“好看、可控、可复用”同时做好，谁就更可能拿到真正的生产力入口。问题已经不是图能不能生成，而是谁会把生成结果变成自己的工作标准。\u003C\u002Fp>","Midjourney在Discord公测后走红。它的审美偏好算法和社交式交互，改写了图像生成的传播方式。","zhuanlan.zhihu.com","https:\u002F\u002Fzhuanlan.zhihu.com\u002Fp\u002F2020798020871042635",null,"https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1775171754306-tnkj.png",[13,14,15,16,17],"Midjourney","Discord","视觉生成","DALL·E 2","Stable Diffusion","en",0,false,"2026-04-02T23:15:37.853381+00:00","2026-04-02T23:15:37.737+00:00","done","463d267f-1d36-468d-8d43-c70eef157579","midjourney-public-beta-visual-generation-history-en","industry","93bd4b46-8be3-4118-bb94-6e230bf4bc7d","published","2026-04-07T07:41:14.493+00:00",[31,33,35,36,38],{"name":13,"slug":32},"midjourney",{"name":14,"slug":34},"discord",{"name":15,"slug":15},{"name":16,"slug":37},"dalle-2",{"name":17,"slug":39},"stable-diffusion",{"id":27,"slug":41,"title":42,"language":43},"midjourney-public-beta-visual-generation-history-zh","Midjourney公測背後的視覺生成史","zh",[45,51,57,63,69,75],{"id":46,"slug":47,"title":48,"cover_image":49,"image_url":49,"created_at":50,"category":26},"6ff3920d-c8ea-4cf3-8543-9cf9efc3fe36","circles-agent-stack-targets-machine-speed-payments-en","Circle’s Agent Stack targets machine-speed payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871659638-hur1.png","2026-05-15T19:00:44.756112+00:00",{"id":52,"slug":53,"title":54,"cover_image":55,"image_url":55,"created_at":56,"category":26},"1270e2f4-6f3b-4772-9075-87c54b07a8d1","iren-signs-nvidia-ai-infrastructure-pact-en","IREN signs Nvidia AI infrastructure pact","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778871059665-3vhi.png","2026-05-15T18:50:38.162691+00:00",{"id":58,"slug":59,"title":60,"cover_image":61,"image_url":61,"created_at":62,"category":26},"b308c85e-ee9c-4de6-b702-dfad6d8da36f","circle-agent-stack-ai-payments-en","Circle launches Agent Stack for AI payments","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778870450891-zv1j.png","2026-05-15T18:40:31.462625+00:00",{"id":64,"slug":65,"title":66,"cover_image":67,"image_url":67,"created_at":68,"category":26},"f7028083-46ba-493b-a3db-dd6616a8c21f","why-nebius-ai-pivot-is-more-real-than-hype-en","Why Nebius’s AI Pivot Is More Real Than Hype","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778823055711-tbfv.png","2026-05-15T05:30:26.829489+00:00",{"id":70,"slug":71,"title":72,"cover_image":73,"image_url":73,"created_at":74,"category":26},"b63692ed-db6a-4dbd-b771-e1babdc94af7","nvidia-backs-corning-factories-with-billions-en","Nvidia backs Corning factories with billions","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778822444685-tvx6.png","2026-05-15T05:20:28.914908+00:00",{"id":76,"slug":77,"title":78,"cover_image":79,"image_url":79,"created_at":80,"category":26},"26ab4480-2476-4ec7-b43a-5d46def6487e","why-anthropic-gates-foundation-ai-public-goods-en","Why Anthropic and the Gates Foundation should fund AI public goods","https:\u002F\u002Fxxdpdyhzhpamafnrdkyq.supabase.co\u002Fstorage\u002Fv1\u002Fobject\u002Fpublic\u002Fcovers\u002Finline-1778796645685-wbw0.png","2026-05-14T22:10:22.60302+00:00",[82,87,92,97,102,107,112,117,122,127],{"id":83,"slug":84,"title":85,"created_at":86},"d35a1bd9-e709-412e-a2df-392df1dc572a","ai-impact-2026-developments-market-en","AI's Impact in 2026: Key Developments and Market Shifts","2026-03-25T16:20:33.205823+00:00",{"id":88,"slug":89,"title":90,"created_at":91},"5ed27921-5fd6-492e-8c59-78393bf37710","trumps-ai-legislative-framework-en","Trump's AI Legislative Framework: What's Inside?","2026-03-25T16:22:20.005325+00:00",{"id":93,"slug":94,"title":95,"created_at":96},"e454a642-f03c-4794-b185-5f651aebbaca","nvidia-gtc-2026-key-highlights-innovations-en","NVIDIA GTC 2026: Key Highlights and Innovations","2026-03-25T16:22:47.882615+00:00",{"id":98,"slug":99,"title":100,"created_at":101},"0ebb5b16-774a-4922-945d-5f2ce1df5a6d","claude-usage-diversifies-learning-curves-en","Claude Usage Diversifies, Learning Curves Emerge","2026-03-25T16:25:50.770376+00:00",{"id":103,"slug":104,"title":105,"created_at":106},"69934e86-2fc5-4280-8223-7b917a48ace8","openclaw-ai-commoditization-concerns-en","OpenClaw's Rise Raises Concerns of AI Model Commoditization","2026-03-25T16:26:30.582047+00:00",{"id":108,"slug":109,"title":110,"created_at":111},"b4b2575b-2ac8-46b2-b90e-ab1d7c060797","google-gemini-ai-rollout-2026-en","Google's Gemini AI Rollout Extended to 2026","2026-03-25T16:28:14.808842+00:00",{"id":113,"slug":114,"title":115,"created_at":116},"6e18bc65-42ae-4ad0-b564-67d7f66b979e","meta-llama4-fabricated-results-scandal-en","Meta's Llama 4 Scandal: Fabricated AI Test Results Unveiled","2026-03-25T16:29:15.482836+00:00",{"id":118,"slug":119,"title":120,"created_at":121},"bf888e9d-08be-4f47-996c-7b24b5ab3500","accenture-mistral-ai-deployment-en","Accenture and Mistral AI Team Up for AI Deployment","2026-03-25T16:31:01.894655+00:00",{"id":123,"slug":124,"title":125,"created_at":126},"5382b536-fad2-49c6-ac85-9eb2bae49f35","mistral-ai-high-stakes-2026-en","Mistral AI: Facing High Stakes in 2026","2026-03-25T16:31:39.941974+00:00",{"id":128,"slug":129,"title":130,"created_at":131},"9da3d2d6-b669-4971-ba1d-17fdb3548ed5","cursors-meteoric-rise-pressures-en","Cursor's Meteoric Rise Faces Industry Pressures","2026-03-25T16:32:21.899217+00:00"]