Soderbergh uses AI in Lennon doc with Meta
Steven Soderbergh is using generative AI and Meta tools to fill gaps in a John Lennon documentary for Cannes.

Steven Soderbergh says about 10% of his John Lennon documentary will use AI-generated footage, according to Deadline, and the finished film is set to premiere at Cannes this month. The project, John Lennon: The Last Interview, is being made with help from Meta after the team ran into a very practical problem: an audio-only interview needed visuals.
That detail matters because this is not a flashy AI demo built for social media. It is a documentary trying to turn spoken words into an image track without pretending the visuals are real archival footage. Soderbergh says the AI material is meant to fill abstract sections where Lennon and Yoko Ono are talking about ideas, music, and memories that do not have obvious footage attached.
| Fact | Number | Why it matters |
|---|---|---|
| AI share of the film | 10% | Soderbergh says only a small slice uses generated video |
| Festival debut | This month | The film is headed to Cannes |
| Source material | Audio-only interview | The team needed new imagery to support the narration |
| Primary partner | Meta | Meta provided tools after seeing the film |
Why this documentary needed AI in the first place
The core challenge is simple: the film is built around Lennon’s last interview, and interviews do not always come with matching visuals. Soderbergh said most of the movie uses archival material, stills, motion graphics, and video clips, but the team still had gaps where the conversation moves into more abstract territory.

That is where generative AI enters. Instead of inventing fake documentary evidence, the production is using generated imagery to support moments that are more poetic than factual. In Soderbergh’s telling, the goal is closer to visual interpretation than reconstruction.
He explained that the team had laid the film out in chapters and then began filling in the sections where Lennon and Ono discuss specific experiences, songs, and people. The remaining holes were the philosophical passages, the parts where a plain archival approach would leave the screen empty.
- Most of the film uses archival material and conventional editing.
- The AI-generated footage covers surreal or abstract moments.
- The project is built around Lennon's final interview, recorded around the release of Double Fantasy, in which he also reflects on The Beatles' legacy.
- The documentary is being prepared for a Cannes Film Festival premiere.
Meta is using the film as a stress test
The Meta angle is the part that makes this story bigger than one documentary. Soderbergh said producer Michael Sugar suggested talking to Meta because the company was building video generation tools. Meta wanted a filmmaker to stress test those tools, and the film became that test case.
This is a useful reminder that big tech companies often need real creative work to see where their tools break. Synthetic demos can look polished, but a documentary has to handle pacing, tone, historical texture, and the awkward spaces where reality does not give you enough footage. That is a much harder test than a promo clip.
“They were open and wanted to see the film, so we showed them the film and they said, ‘Well, this is good timing because we really would like and need a filmmaker to stress test some of these tools that we’re working on.’” — Steven Soderbergh, quoted by Deadline
Soderbergh also framed the project as a technical collaboration rather than an attempt to hide machine-made imagery. He said the AI use is meant to be obvious, the same way viewers can recognize VFX or CGI when they see it. That distinction matters because the loudest criticism around generative AI usually centers on deception, replacement, and stolen style.
Here, the director is arguing for disclosure and restraint. The film is not pretending the generated shots are lost Lennon footage. It is using them as a visual bridge where the archive runs out.
How this compares with other AI film uses
This documentary is arriving during a messy stretch for AI in film and TV. Some productions are using the technology to recreate voices or faces, while others are using it to speed up previsualization, concept work, or background elements. The difference is not just technical; it is ethical and legal.

In this case, the estate is involved, the use is limited, and the purpose is tied to a specific production problem. That makes it easier to defend than a project that tries to fabricate a dead performer’s presence for marketing value.
- Black Bag, Erin Brockovich, and the Ocean’s Eleven films are among Soderbergh’s best-known credits.
- The documentary’s AI use is about 10% of runtime, not the whole film.
- The project is tied to a real archival gap, not a fictionalized scene.
- The estate, including Sean Ono Lennon, has reportedly supported the approach.
Soderbergh said he asked Sean Ono Lennon what his father would have thought, and Sean answered that Lennon would have wanted to engage with new technology. That is a believable answer for an artist who spent a lot of his career testing the edge of what pop culture could absorb.
It also explains why this story is likely to matter beyond Beatles fandom. If the film lands well, it gives other directors a template for using AI in a constrained, disclosed way. If it feels fake or distracting, it becomes another warning label for the industry.
What Cannes will tell us next
The real test is not whether Soderbergh can get AI video to look interesting. It is whether the audience accepts it inside a documentary about one of the most scrutinized figures in music history. Cannes will give the first serious public verdict, and that reaction will matter more than any demo reel or press quote.
If the generated sequences feel like honest visual interpretation, this could become a reference point for documentary teams facing similar archival gaps. If they feel like overreach, the backlash will be loud and immediate. Either way, the film will add a concrete example to the AI-in-media argument, which has been mostly abstract until now.
For developers and media folks, the lesson is pretty clear: the interesting question is not whether AI can make moving images. It already can. The question is where the line sits between enhancement and fabrication, and who gets to draw it when the subject is a cultural icon like John Lennon.
My bet is that this documentary will be discussed less for the AI itself and more for the policy it quietly proposes: use the machine only when the archive ends, disclose it clearly, and keep the human edit in charge. Cannes will show whether that rule is good enough for audiences, or whether documentary viewers want a harder boundary.