OraCore Editors · 8 min read

Palantir says militaries own AI targeting calls

Palantir’s UK boss says militaries, not vendors, decide AI targeting. The debate now centers on Maven, Claude, and civilian risk.


Palantir’s defense software is now inside one of the most sensitive jobs in modern war: helping decide what gets hit, and how fast. The company says its Maven Smart System is a decision-support tool, while critics point to the real-world pressure that comes with AI-assisted targeting in a live conflict.

That tension got sharper after the Pentagon said in February it would phase out Anthropic Claude from the Maven effort, following Anthropic’s refusal to allow its models to be used in autonomous weapons and surveillance. The UK boss of Palantir, Louis Mosley, says the responsibility for how the system is used belongs to militaries, not the supplier.

What Maven actually does


Maven was launched by the Pentagon in 2017 to speed up targeting decisions by pulling together huge volumes of battlefield data. That includes satellite imagery, drone footage, intelligence feeds, and other signals that commanders would otherwise have to sort through manually.


The pitch is simple: shrink the time between data collection and action. The system can analyze inputs, surface candidate targets, and suggest the level of force based on available aircraft, personnel, and other resources. In a war where hours can matter, that kind of speed is exactly why defense officials keep buying more AI.

Palantir says Maven is a support layer, not an autonomous shooter. Mosley told the BBC that a human still makes the final call, and that the platform helps personnel synthesize information that once took much longer to process. That distinction matters, because the line between assistance and automation gets blurry fast when commanders are under pressure.

  • Maven was launched in 2017 by the Pentagon.
  • The system ingests satellite, drone, and intelligence data.
  • Palantir says a human makes the final targeting decision.
  • The Pentagon said in February it would phase out Claude from Maven.

The human-in-the-loop argument

Mosley’s core defense is familiar to anyone who has followed military AI debates: the machine recommends, the human decides. That sounds clean in theory. In practice, the quality of that human review depends on time, training, and whether the operator feels pressure to trust the software’s answer.

Prof Elke Schwarz of Queen Mary University of London pushed back hard on that logic. She warned that prioritizing speed and scale leaves little room for meaningful verification of targets, especially when civilians may be in the blast radius. Her concern is not that software makes one dramatic mistake, but that it nudges decision-makers into a habit of over-trusting outputs they do not fully inspect.

“If there’s a risk of killing and you co-opt a lot of your critical thinking to software that will take care of these things for you, then you just become reliant on the software,” Schwarz said. “It’s a race to the bottom.”

That quote gets to the heart of the issue. AI does not need to be fully autonomous to reshape how war is fought. If it compresses the review window enough, the human in the loop can become a rubber stamp instead of a genuine check.

Palantir rejects that framing. Mosley said the policy framework for who gets to make which decision is a question for military customers, not the company. That answer may be legally tidy, but it also leaves a huge moral gap: if a model speeds up the kill chain, can the vendor really wash its hands of the outcome?

The numbers behind the concern

The BBC report says the US has launched more than 11,000 strikes against Iran since 28 February, with many reportedly identified by Maven. That scale matters because AI systems get judged differently when they are used for a few test cases versus thousands of wartime decisions.


There is also the human cost of errors. Iranian officials said a strike on a school in Minab killed 168 people, including around 110 children, on the opening day of the war. The BBC says Pentagon officials have faced questions about whether AI tools like Maven were used to identify targets in that attack.

Rep. Sara Jacobs, a Democrat on the House Armed Services Committee, has called for strict guardrails and clear rules on AI use in lethal decisions. She told NBC News that AI tools are not fully reliable and can fail in subtle ways, while operators may continue to over-trust them. That is a direct warning to any military that thinks a human signature alone is enough to make the process safe.

  • More than 11,000 strikes against Iran have been launched since 28 February, according to the BBC report.
  • Iran said the Minab school strike killed 168 people.
  • Iran said around 110 of the dead were children.
  • The Pentagon reportedly designated Maven as an official program of record last week, according to Reuters.

Why Claude’s exit matters

The Pentagon’s decision to phase out Claude from Maven is more than a vendor swap. It shows that model providers are starting to draw hard lines around military use, especially when autonomous weapons and surveillance are involved. Anthropic’s refusal to allow those uses forced the issue into the open.

Palantir says alternatives can replace Claude, which is believable because the defense market rarely depends on one model for long. But the broader signal is more important than the technical substitution. If one major AI supplier walks away from a military use case, other vendors may face the same pressure to explain where they will and will not draw the line.

That matters for OpenAI, Microsoft, and any company building models that could be adapted for defense work. Even when a product is sold as analysis software, it can drift toward operational use once commanders see it saving time.

Here is the comparison that should worry anyone watching military AI:

  • Claude was tied to Maven, then reportedly phased out after Anthropic objected to autonomous weapons and surveillance use.
  • Maven Smart System is still being positioned as a decision aid that helps commanders process data faster.
  • The Pentagon’s own language, including “detect, deter, and dominate,” suggests this is now a long-term program, not a pilot.

The real question is not whether AI can help staff officers sort imagery faster. It can. The question is whether the same tools make it easier for militaries to act before they have truly checked what the software is telling them.

What happens next

For now, Palantir is betting that the human-in-the-loop defense will hold up under scrutiny, and that militaries will keep wanting software that shortens the path from sensor data to action. That may be true. It also means the next major test will probably come from a mistake, a leaked policy, or a court challenge after a strike goes wrong.

My read: the next phase of this story is less about whether AI belongs in defense and more about who writes the rules for its use. If the Pentagon keeps expanding Maven while vendors keep trimming what their models are allowed to do, the pressure shifts to procurement contracts and rules of engagement. The most important question now is simple: when an AI system flags a target, who can prove that a human really had time to say no?