Pentagon strikes AI deals for classified work
The Pentagon is signing AI deals to expand classified work as White House officials push Anthropic back into government use.

The Pentagon is widening its ties with artificial intelligence companies after White House officials grew both impressed by and uneasy about Anthropic’s newest model, Mythos. The move is about more than procurement. It is about who gets access to advanced models, who gets paid to build for government use, and how far classified work can go before policy catches up.
What changed
The immediate spark is a dispute between Anthropic and the Pentagon, with White House officials pushing for a compromise that could reopen doors for the company across parts of government. That matters because government contracts are often where model makers prove their systems can handle sensitive workflows, security reviews, and long procurement cycles.

In practice, these deals can decide which companies become embedded in defense operations and which ones stay on the outside. For AI firms, that means access to stable revenue and high-value use cases. For the Pentagon, it means faster access to models that can summarize, classify, translate, and support analysis at scale.
| Item | Detail |
|---|---|
| Company | Anthropic |
| Model | Mythos |
| Agency | Pentagon |
| Political pressure point | White House officials seeking a compromise |
Why classified work matters
Classified work is where AI policy gets real. Public demos and chatbots are one thing; handling sensitive defense material is another. Once a model is considered for classified or near-classified use, questions about model behavior, prompt injection, data retention, audit logs, and access controls become much harder to ignore.
That is also why the government’s interest in Anthropic is so notable. The company has built a reputation around safety work and controlled deployment, while still shipping powerful models that can compete with the biggest systems from OpenAI and Google. If Mythos is strong enough to worry officials, it is probably strong enough to attract serious defense interest.
“AI is the new electricity,” Andrew Ng said in a 2017 talk at Stanford’s Graduate School of Business.
That quote gets repeated a lot, but the point still holds here: once a general-purpose technology becomes useful, institutions rush to plug it into their most important systems. Defense is usually one of the last places to do that, because the stakes are high and the tolerance for mistakes is low.
How this compares with earlier Pentagon AI work
The Pentagon has been working with AI vendors for years, but the current wave looks different because the models are much more capable and the pressure to deploy is higher. Earlier efforts often focused on narrow tasks like image analysis or logistics prediction. Now the conversation is about systems that can reason over text, code, and operational data.

- Earlier defense AI programs focused on narrow tasks such as detection and classification.
- Current model deals target broader workflows like analysis, drafting, and decision support.
- Government buyers now care as much about model governance as raw capability.
- Vendor selection can shape which AI stack becomes standard inside federal agencies.
That shift changes the business math for every major model company. A defense contract can bring credibility, but it also brings scrutiny. A company that wants federal work has to answer questions about red-teaming, model updates, incident response, and whether a system can be trusted in high-stakes settings.
What Anthropic gets, and what it risks
For Anthropic, a compromise with the Pentagon could unlock more than a single contract. It could restore access to a broader slice of government work and help the company compete for long-term federal adoption. That kind of relationship can matter just as much as consumer growth, especially when the customer is a massive buyer with strict compliance requirements.
But the risk is obvious. The closer a model company gets to defense use, the more it gets pulled into debates about military applications, oversight, and political control. Anthropic has spent a lot of time talking about safety, which makes this moment especially sensitive. If the company wants to keep its public posture while also selling into the Pentagon, it will need to show that safety claims survive contact with real government deployments.
- Anthropic’s public updates will be watched for signs of a policy shift.
- The Pentagon will likely want stricter controls than commercial customers do.
- The White House appears to be acting as a mediator in the dispute.
- Any agreement could influence how other AI vendors structure government bids.
The bigger story is that AI procurement is becoming a power struggle, not a simple buying decision. The companies that can satisfy security teams, policy officials, and technical evaluators at the same time will get the first serious government deployments.
What happens next
If this compromise lands, expect other agencies to test similar deals with model makers that can pass both security review and political scrutiny. If it fails, the Pentagon will still keep shopping for AI systems, but the list of acceptable vendors may narrow. Either way, the next phase of government AI adoption will be shaped less by chatbot demos and more by classified access, compliance paperwork, and who is trusted to sit inside the room where sensitive work happens.
The real question is whether the government wants AI vendors as contractors, partners, or something closer to infrastructure providers. The answer will decide which companies get the deepest access to federal systems over the next few years.