OraCore Editors · 8 min read

AI Documentary Puts CEOs on the Spot

A new AI film opens March 27 with Altman, Hassabis, and Amodei on camera, but it still lets the biggest names off the hook.


March 27 is the date to watch if you want a fresh take on AI anxiety on the big screen. The AI Doc: Or How I Became an Apocaloptimist brings Sam Altman, Dario Amodei, and Demis Hassabis into the same frame, then asks whether we should trust the people building the systems that may reshape work, school, and politics.

The film has access most documentaries would kill for, but access is not the same thing as pressure. That tension gives the movie its charge, and also its weakness: it wants to explain AI in plain English while avoiding the hardest follow-up question, which is who gets to define the rules when the incentives are this massive.

A documentary built around fatherhood and fear


Director Daniel Roher frames the movie around a personal problem: he is about to become a father. That gives the film a human center, and it helps explain why the movie keeps returning to the same uneasy question: what kind of world will a child inherit if AI keeps accelerating at the current pace?


Roher is not approaching this as a detached observer. He is clearly trying to sort out whether AI is an overhyped business cycle, an infrastructure layer for the next decade, or a force that could make ordinary life feel stranger and less stable. That makes the film easier to watch than a policy panel and more honest than a lot of corporate AI demos.

Still, the documentary’s emotional frame does some of the work the interviews do not. When a filmmaker is worried about his newborn son, every answer from a CEO sounds like a test of character, and every evasion lands a little louder.

  • Release date: March 27
  • Key interview subjects: Altman, Amodei, Hassabis
  • Missing from the chair: Mark Zuckerberg and Elon Musk, despite reported requests
  • Previous Roher film: Navalny, which won the Academy Award for Best Documentary Feature

When the CEOs finally talk, the answers are familiar

The movie gets the access, then runs into the usual problem: high-profile AI leaders are very good at sounding thoughtful without giving much away. One of the film’s sharpest moments comes when Roher asks Altman why anyone should trust him to guide AI’s rapid growth, given the stakes. Altman’s answer is blunt: “You shouldn’t.” It is a memorable line, but it also feels like a dodge, because the conversation ends before the documentary can push on what that admission actually means.

That pattern repeats across the film. The executives talk about safety, responsibility, and the need for caution, while still describing AI as a technology with enormous upside. That balance is politically convenient and rhetorically slippery. It lets them sound measured while keeping the spotlight on the abstract promise of future benefits rather than the concrete harms already showing up in labor markets, education, and online trust.

“You shouldn’t.” — Sam Altman, in response to why anyone should trust him to guide AI’s rapid acceleration

The documentary also includes Tristan Harris, cofounder of the Center for Humane Technology, who brings the film’s bleakest line. He says he knows people who work on AI risk who do not expect their children to make it to high school. That is a brutal sentence, and the movie treats it as the kind of warning that can’t be waved away with another startup slogan.

What the film gets right is the emotional temperature of the moment. What it does less well is force the people with the most power to explain why their products deserve public trust before they are deployed at scale.

What the film explains well, and what it lets slide

The strongest stretch of the documentary is its AI primer. Roher and co-director Charlie Tyrell keep the language plain, and that matters. AI coverage is often buried under jargon, but this film tries to define the terms without turning every sentence into a pitch deck.


Visually, the movie also tries to soften the dread. Roher’s drawings and paintings give it a handmade feel, while stop-motion sequences add a little surreal humor. That matters because it stops the film from becoming a wall of talking heads and charts. The creative choices help the audience sit with the subject instead of bouncing off it.

But once the movie moves from explanation to accountability, it gets mushier. It touches on the way AI hype feeds a global race for dominance, concentrates wealth, and rewards companies for claiming their models are both dangerous and indispensable. Then it backs away from the obvious next step: asking whether the people selling this future should be treated as neutral witnesses.

  • The film argues that AI’s risks and rewards are both enormous, but it spends more time on the promise than on enforcement
  • It raises AGI as a major goal, yet offers little evidence that today’s large language models could get there on their own
  • It notes that AI power is concentrated in a very small group of companies and executives
  • It treats public pressure as a solution, even though the companies involved already control the infrastructure, talent, and capital

How this compares with the real AI business

This is where the documentary feels most disconnected from the market it is trying to examine. The real AI race is not a philosophical seminar. It is a capital-intensive competition for chips, data centers, enterprise contracts, and consumer attention. OpenAI, Anthropic, and Google DeepMind are not just debating the future in public; they are building products that already influence coding, search, customer support, and content production.

That gap between rhetoric and reality is why the film’s “both sides” ending feels too soft. The CEOs are not random participants in a public square. They are the people making the bets, setting the tempo, and deciding how much uncertainty the rest of us have to absorb.

Here is the comparison the movie hints at but never fully lands:

  • AI labs spend billions on compute and talent, while most viewers are asked to judge the technology from a theater seat
  • Executives can frame their systems as life-saving tools or existential threats, depending on which audience they are talking to
  • Public oversight moves slowly, while model releases, product updates, and deployment timelines move fast
  • When companies say they do not fully understand what their models will do, they are still shipping them

That last point is the one the film keeps circling. If the builders admit they do not fully understand the systems, then the burden should not fall only on the public to “pressure” them into better behavior. The burden should also fall on the firms that chose to deploy first and explain later.

Roher’s closing instinct is understandable. He wants a path that does not end in panic. He wants his child to grow up in a world where technology is shaped by human judgment. But the documentary’s ending feels too polite for a subject where the power imbalance is this obvious.

The real question is who gets to set the rules

The AI debate keeps getting framed as a moral puzzle about whether to fear the machines. That is the easier story. The harder one is about governance, incentives, and who gets to decide what counts as acceptable risk. This documentary gets close to that truth, then steps back before it names the people most responsible.

That is why the film is worth watching and arguing with. It is smart enough to show the anxiety, and honest enough to show how little certainty exists inside the industry. But it also reveals how easily celebrity founders and billion-dollar labs can turn self-criticism into branding.

If the next wave of AI movies wants to do more than stage a conversation, it should ask a sharper question: what would accountability look like if executives could no longer hide behind the idea that everyone is equally powerless?

My bet is that the next serious public fight over AI will not be about whether the technology is scary. It will be about whether governments force the companies building it to prove, in public and in detail, that their systems are safe enough to keep shipping.