Powell and Bessent Flag Anthropic’s Mythos Risks
Powell, Bessent, and bank CEOs met on Anthropic’s Mythos as the company limited release and launched Project Glasswing for defense.

Federal Reserve Chair Jerome Powell and Treasury Secretary Scott Bessent met with top bank executives this week over a new AI model from Anthropic called Mythos. The reason was blunt: Anthropic says the model can uncover vulnerabilities in major operating systems and web browsers, which makes it useful for defense and risky in the wrong hands.
That is a serious conversation for a model that Anthropic says it will not release widely. The company also announced a new effort, Project Glasswing, with Amazon, Apple, and Nvidia to use Mythos for cybersecurity work. That mix of caution and ambition is why the meeting mattered.
Why Washington cared enough to call bankers in
The closed-door meeting took place at the Treasury Department in Washington, D.C., and included leaders from major banks. JPMorgan Chase chief executive Jamie Dimon was invited but could not attend, according to CBS News. The message from regulators was simple: financial institutions need to prepare for AI-driven security threats before those threats become routine.

Anthropic’s own framing explains the urgency. The company said Mythos has already found weaknesses in systems that most people use every day. If a model can identify flaws in widely deployed software, it could help defenders patch faster, but it could also help attackers move faster too.
For banks, that is not an abstract policy debate. Financial firms sit on high-value data, run giant authentication systems, and depend on complex software stacks that are already targeted by phishing, malware, and credential theft. A model that can reason through code and find hidden weaknesses changes the economics of both defense and offense.
- The meeting brought together the Fed, Treasury, and senior bank leadership.
- Anthropic says Mythos can uncover vulnerabilities in major operating systems and browsers.
- The company says it will not broadly release the model.
- Project Glasswing includes Amazon, Apple, and Nvidia.
Anthropic’s bet: keep the model close, use it for defense
Anthropic is trying to thread a needle. It wants Mythos to help cybersecurity teams, but it does not want the model to spread widely before safeguards catch up. That is why the company launched Project Glasswing as a controlled effort instead of a public product rollout.
The company’s statement was unusually direct about the stakes. Anthropic warned that AI progress is moving fast enough that capabilities like Mythos may soon spread beyond users who care about safety. That is a worrying sentence if you work in security, because it suggests the gap between advanced defensive tooling and offensive misuse may shrink quickly.
“Given the rate of AI progress, it will not be long before such capabilities proliferate, potentially beyond actors who are committed to deploying them safely,” Anthropic said in a post about the new project.
That quote matters because it shows Anthropic is not treating Mythos like a normal product launch. It is treating the model like a dual-use system, closer to a sensitive security tool than a chatbot update. That is also why the company chose a project structure with large partners rather than a broad release to every developer.
There is also a political layer here. Treasury said the administration is continuing to coordinate on AI security through an interagency task force, and that regulators will keep meeting with institutions about these developments. In other words, the U.S. government is already trying to build a playbook for the next class of AI risk.
How this compares with earlier AI risk warnings
This is not the first time U.S. officials have treated AI as a financial stability concern, but the timing is notable. In 2023, the Biden administration identified AI as a potential risk to financial stability for the first time, according to CBS News reporting at the time.

What changed is the level of specificity. Earlier warnings were broad: AI could distort markets, automate fraud, or create new systemic risks. The Mythos discussion is narrower and more technical. It is about whether a model can find software flaws before criminals do, and whether banks can use the same class of tools without opening new holes in their defenses.
- In 2023, the U.S. first labeled AI a potential financial stability risk.
- This week’s meeting focused on a specific model, not AI in general.
- Anthropic is limiting Mythos distribution instead of pushing a wide launch.
- The company is pairing the model with named corporate partners for defensive use.
That narrower focus matters because it changes what regulators need to ask. They are no longer debating a hypothetical future where AI might affect finance. They are discussing a system that can already inspect software for weaknesses and could be used to automate parts of intrusion research.
Compare that with the usual banking tech conversation, which often centers on fraud detection, customer service bots, or document processing. Mythos is a different category. It is closer to a security researcher that never tires and can scan enormous amounts of code faster than a human team.
For a bank CEO, that is both attractive and alarming. A tool like this could shorten patch cycles, improve threat hunting, and help internal teams test their own systems. It could also force banks to assume that attackers may soon have similar tools, which means security budgets, vendor reviews, and incident response plans all need a rethink.
What to watch next
The key question is whether Project Glasswing becomes a model for how advanced AI gets handled in finance and critical infrastructure. If Anthropic can keep Mythos in a tightly controlled defensive setting while regulators build clearer rules, other AI labs may copy the approach.
But if the model’s capabilities leak into the wider market too quickly, banks will have to defend against a much faster class of attacker. That would push the industry toward more automated security testing, stricter software supply-chain checks, and more pressure on vendors to prove their products are hardened before deployment.
My read: the next major AI security story will not be about chatbots writing emails. It will be about who gets access to models that can inspect code, spot flaws, and help break systems at scale. If you run security for a bank, an insurer, or a cloud provider, this is the moment to ask one question: where would a model like Mythos help your team, and where would it create new exposure?
That answer will shape the next round of AI policy far more than another generic debate about “AI risk.”