OraCore Editors · 8 min read

AI warfare firms are defense contractors, not startups

AI targeting systems helped generate tens of thousands of targets. The real story is who sells them, who profits, and who dies.


One system produced more than 37,000 targets in the first weeks of war. Another could generate 100 potential bombing sites per day. Those numbers are not a footnote; they are the point. When targeting moves that fast, the company selling the software is not building a chat app; it is building part of the kill chain.

That is why the debate around military AI keeps missing the mark. The conversation often treats firms like Palantir, Anthropic, OpenAI, Google, and Amazon as if they are ordinary AI vendors. They are not. They are defense contractors with better branding, and the branding matters because it hides how much of modern warfare now depends on software that cannot explain its own output.

The Guardian’s March 2026 reporting on AI-assisted warfare makes that plain. In Gaza, systems processed huge pools of data to rank people by the probability that they were militants. In Iran, the same basic logic helped compress targeting cycles into minutes or seconds. The result is a war machine that can move faster than human review, while still claiming a human was “in the loop.”

The numbers tell the story better than the slogans


The most revealing detail in this debate is not that AI is involved. It is how much scale AI adds. The reporting describes a system that generated more than 37,000 targets in the opening weeks of the war, another that could spit out 100 bombing sites a day, and verification windows that lasted about 20 seconds per name. That is not deliberation. That is throughput.


Militaries and AI companies love the language of precision, but the numbers point in another direction. If a reviewer has 20 seconds to approve a target, there is no meaningful way to investigate whether the underlying intelligence is stale, whether a person has changed roles, or whether the data is misclassifying civilians as combatants. The machine is doing the sorting, and the human is just clicking through the queue. The back-of-envelope math after the list below makes the problem concrete.

  • More than 37,000 targets generated in the first weeks of war
  • Up to 100 potential bombing sites produced per day by one system
  • Verification windows of roughly 20 seconds per target
  • A reported 1,000 targets identified in the first 24 hours of the Iran campaign
  • Over 53,000 deaths recorded in Gaza in the cited database, with roughly 17% named fighters
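
Those review numbers invite a quick sanity check. Here is a back-of-envelope sketch in Python: the 37,000-target count and the 20-second window come from the reporting, while the shift length and team size are hypothetical assumptions chosen only to show the scale.

```python
# Back-of-envelope throughput math using the figures reported above.
# The 37,000-target count and the 20-second review window come from
# the reporting; the shift length and reviewer count are hypothetical
# assumptions, chosen only to illustrate the scale.

SECONDS_PER_REVIEW = 20     # reported verification window per target
TARGETS_GENERATED = 37_000  # reported output in the opening weeks
SHIFT_HOURS = 8             # assumption: one reviewer shift
REVIEWERS = 20              # assumption: size of the review team

reviews_per_shift = (SHIFT_HOURS * 3600) // SECONDS_PER_REVIEW
team_capacity_per_day = reviews_per_shift * REVIEWERS
days_to_clear_queue = TARGETS_GENERATED / team_capacity_per_day

print(f"One reviewer can click through {reviews_per_shift:,} targets per shift")
print(f"A {REVIEWERS}-person team clears {team_capacity_per_day:,} per day")
print(f"37,000 targets take about {days_to_clear_queue:.1f} days at that pace")
```

Even with generous staffing assumptions, the queue clears in about a day, but only because each review is a glance. The arithmetic leaves no room to investigate a single name; the only way to keep pace is to approve.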

Those figures matter because they show where the real power sits. The model is not a helper on the side. It is the factory floor. Once that is true, the company selling the model becomes part of the military decision process, whether it wants the label or not.

The people behind the systems are already named

This is where the “AI company” framing starts to fall apart. Palantir has long sold data infrastructure to governments and security agencies, and the reporting says its systems were used in the Iran campaign. Anthropic pushed back against Pentagon pressure over the ethical limits it places on Claude, and the military looked elsewhere. OpenAI later removed its ban on military use. Google and Amazon remain tied to Project Nimbus, the cloud and AI contract with the Israeli government worth more than $1 billion.

That mix of vendors is important because it shows how normal the defense business has become inside the AI industry. These companies are not peripheral suppliers. They are embedded in procurement, cloud hosting, model access, and operational planning. Once a model is used to rank targets, the company behind it is no longer just selling software. It is selling judgment at industrial scale.

“The world’s biggest technology companies are not neutral platforms. They are powerful actors with responsibilities.” — Tim Cook, Apple CEO, in a 2017 interview with Bloomberg

Cook was talking about platforms, not warfare, but the line fits here because the same logic applies with even more force. If a company’s tools shape who gets targeted, who gets watched, and who gets killed, then neutrality is a marketing claim, not a serious description of its role.

There is also a political reason these firms prefer the “AI” label. It sounds abstract, technical, and a little magical. “Defense contractor” sounds old-fashioned, regulated, and accountable. One word invites admiration from investors. The other invites scrutiny from lawmakers, journalists, and courts.

AI targeting is different from ordinary dual-use tech

Yes, plenty of technologies can be used in war. Radios, satellites, maps, cloud servers, and computers all have civilian and military uses. That is the classic dual-use problem, and it is real. But AI targeting is different because it does not merely support the strike. It helps decide who gets struck and why.


That difference matters under international humanitarian law. A commander has to verify that a target is a legitimate military objective and take feasible steps to protect civilians. Those duties cannot be outsourced to an opaque system that produces a probability score and then disappears behind proprietary code. If the model says a person is likely a combatant, but the system cannot show how it reached that result, the chain of responsibility gets thinner exactly where it should get stronger.

  • Traditional dual-use tools assist operations
  • AI targeting systems rank human beings for killing
  • Cloud infrastructure stores and moves the data
  • Large language models help summarize, classify, and prioritize
  • Defense procurement turns all of it into operational doctrine

The Guardian piece also makes a sharp point about accountability: when the underlying intelligence is years out of date, the failure is not a bug in the model. It is a failure in the system that trusts the model anyway. That is why “human in the loop” has become such a slippery phrase. If the human cannot realistically challenge the output, the phrase is just moral decoration.

For developers, this should feel uncomfortably familiar. A model can be impressive in a demo and dangerous in production if the surrounding process is broken. In consumer apps, that means hallucinations and bad recommendations. In war, it means dead children. The same engineering instincts that reward speed, scale, and automation become something far darker when the output is a target list.
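
A minimal sketch of that failure mode in Python may make it concrete for engineers. Every field name, threshold, and record here is invented for illustration; only the 20-second review window comes from the reporting.

```python
# A toy sketch of the "human in the loop" pattern described above.
# Everything here is hypothetical: the field names, the threshold, and
# the record are invented to illustrate a process failure, not to
# model any real system.

from datetime import date, timedelta

REVIEW_BUDGET_SECONDS = 20             # reported review window per target
STALENESS_LIMIT = timedelta(days=365)  # assumption: intel older than a year is stale
SCORE_THRESHOLD = 0.7                  # assumption: approval cutoff

def model_score(record: dict) -> float:
    """Stand-in for an opaque classifier: a probability with no explanation attached."""
    return record["score"]

def time_boxed_review(record: dict, budget_seconds: int) -> bool:
    """What a 20-second review actually allows: a glance at the score.

    Checking whether the underlying intelligence is stale would blow the
    budget, so in practice the staleness check below never runs."""
    if budget_seconds < 60:  # no time to open the source material
        return model_score(record) > SCORE_THRESHOLD  # rubber-stamp on the score alone
    # The diligence a real review would require:
    fresh = date.today() - record["last_verified"] < STALENESS_LIMIT
    return model_score(record) > SCORE_THRESHOLD and fresh

# A record whose underlying intelligence is years out of date.
record = {"score": 0.91, "last_verified": date(2021, 3, 1)}
print(time_boxed_review(record, REVIEW_BUDGET_SECONDS))  # True: approved anyway
```

The flaw in the sketch is not the classifier. It is that the process budgets no time for the one check that matters, so the stale record sails through.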

The real question is who gets to call this progress

The next phase of this story is easy to predict. Governments will keep buying faster targeting systems, vendors will keep describing them as decision support, and executives will keep insisting the final call belongs to humans. But if the review window stays measured in seconds and the target list keeps growing by the thousands, the human role is ceremonial.

If that is where military AI is headed, then the most useful question is not whether the models get better. It is whether lawmakers will force companies like Palantir, OpenAI, and Anthropic to disclose where their systems are used, what data they ingest, and who signs off on lethal decisions. If they do not, the label on the product will keep changing while the business model stays the same.

That is the part worth remembering the next time an AI executive talks about “responsible deployment.” In this domain, responsibility is not a press release. It is a paper trail, a procurement record, and a refusal to let software hide the person who ordered the strike.

For more on the policy side of this debate, see OraCore’s coverage of AI regulation and defense procurement. The next test is simple: will governments treat targeting models like software products, or like weapons systems with a vendor attached?