Anthropic’s Mythos shows AI cyber risk was already here
Anthropic’s Mythos alarmed banks and regulators, but researchers say older AI models can already find the same software flaws.
When a model is said to uncover thousands of previously unknown vulnerabilities, people tend to treat it like a warning from the future. But the reaction to Anthropic’s Mythos has exposed something more uncomfortable: the cyber risk it highlights is already in circulation, and some security teams have been testing it for months.
The story matters because the gap between finding a flaw and fixing it is still measured in days or weeks, while AI can now scan code at machine speed. That mismatch is what has banks, governments, and security vendors on edge.
| Signal | Number | What it means |
|---|---|---|
| Vulnerabilities found by Claude Opus 4.6 | 500+ | Anthropic said its earlier widely available model found more than 500 high-severity issues in open-source software. |
| Organizations in the limited Mythos rollout | Few | Anthropic restricted access to a small group, including Apple, Amazon, JPMorgan Chase, and Palo Alto Networks. |
| Patch timing for many firms | Days to weeks | That is the window attackers can exploit before defenders close the hole. |
Mythos triggered panic, but the capability is older than the headlines
CNBC’s reporting makes the central point clear: cybersecurity researchers say the vulnerability-finding ability Anthropic highlighted with Mythos can already be reproduced with existing models from Anthropic and OpenAI. The difference is scale and orchestration, not a brand-new category of attack.

Ben Harris, CEO of watchTowr, told CNBC that teams are already reproducing Mythos-style results by coordinating public models. That matters because it suggests the threat is less about one secret model and more about a workflow that lowers the skill bar for attackers.
In other words, the alarm is real, but the alarm bell is late. Security teams have been living with AI-assisted vulnerability discovery for a while; Mythos just made the scale visible to executives who were not paying close attention.
- AI speeds up discovery of bugs.
- Organizations still patch slowly.
- That gap creates exposure.
- Offense gets the first move.
What researchers are actually seeing in the wild
Cybersecurity firms say they have already reproduced many of Mythos’s headline results by using older models in parallel. Vidoc CEO Klaudia Kloc told CNBC that the models available now are already powerful enough to detect zero-days at large scale, and that this has been true for months, maybe longer.
Vidoc tested a technique called orchestration, which breaks a codebase into smaller chunks and has multiple tools cross-check one another. That approach matters because it changes the economics of vulnerability hunting. One model guessing alone may miss a flaw; a coordinated workflow can inspect much more code, much faster.
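The workflow described above can be sketched in a few lines. This is a minimal illustration of the orchestration idea, not Vidoc's actual pipeline: the "scanners" stand in for calls to different AI models, and all function names here are hypothetical.

```python
# Sketch of orchestration: split code into overlapping chunks, run several
# independent scanners over each chunk, and keep only findings that at
# least `quorum` scanners agree on. Scanner functions are placeholders
# for real model calls.

def chunk_lines(source: str, size: int = 40, overlap: int = 10) -> list:
    """Split a source file into overlapping line-based chunks so a flaw
    straddling a chunk boundary is still seen whole by some chunk."""
    lines = source.splitlines()
    step = max(size - overlap, 1)
    return [
        "\n".join(lines[start:start + size])
        for start in range(0, max(len(lines), 1), step)
    ]

def cross_check(chunks: list, scanners: dict, quorum: int = 2) -> set:
    """Run every scanner over every chunk; report a finding only when
    `quorum` or more distinct scanners flagged it."""
    votes = {}  # finding -> set of scanner names that reported it
    for chunk in chunks:
        for name, scan in scanners.items():
            for finding in scan(chunk):
                votes.setdefault(finding, set()).add(name)
    return {finding for finding, who in votes.items() if len(who) >= quorum}
```

The cross-check step is what changes the economics: any single scanner can hallucinate a finding, but requiring independent agreement filters noise while many cheap passes still cover far more code than one careful pass.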
“The models that we have right now are powerful enough to detect zero days in a large scale, and this is scary enough.”
Klaudia Kloc, CEO of Vidoc
Another firm, Aisle, found that many of Mythos’s results could be reproduced with cheaper models running together. Founder Stanislav Fort summed up the point neatly in a blog post: “A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look.”
That line is useful because it captures the real shift. The breakthrough is not a single genius model. It is the ability to distribute work across many agents, many prompts, and many passes over the same code.
Why banks and regulators are freaking out
For banks, insurers, and public agencies, the problem is not abstract. Ben Harris said his recent conversations with financial firms and regulators have been marked by “hysteria,” and that tracks with the basic math of vulnerability management.

Attackers can move in hours. Defenders often need days or weeks to patch, test, and roll out fixes. Some patches require systems to go offline, which makes the delay even longer. That means the window for exploitation remains open even if the vulnerability is known.
Anthropic says the limited release of Mythos was part of Project Glasswing, a safety measure meant to give companies time to prepare. Dario Amodei said at an Anthropic event that the danger is a big jump in vulnerabilities, breaches, and ransomware damage.
That warning is hard to dismiss, but it also cuts both ways. If the model is dangerous enough to restrict, the security community needs access to study it. By keeping Mythos tightly controlled, Anthropic may have reduced short-term misuse while also slowing independent validation and defense work.
- CNBC’s report says the model was shared with a small set of U.S. companies.
- JPMorgan Chase CEO Jamie Dimon has already warned that AI raises vulnerability risk before it helps defense.
- OpenAI responded with GPT-5.5-Cyber, a model aimed at cybersecurity teams.
- Anthropic said its earlier models had already found hundreds of serious bugs.
The real fight is offense versus defense
This story is bigger than one model launch. It is about which side of cybersecurity gets the first usable advantage from AI. Right now, the answer looks like offense.
Justin Herring, a partner at Mayer Brown and a former cybersecurity regulator in New York, told CNBC that AI systems can produce a surge in discovered vulnerabilities without a matching tool for fixing them. That is the part many executives still seem to miss.
Claude Opus 4.6 finding more than 500 high-severity issues is a reminder that the older models were already good enough to matter. Mythos may have increased the scale, but it did not invent the problem. It made the problem easier to see.
There is also a second-order effect here: access. Anthropic’s controlled rollout created a class of companies that could start patching early, while everyone else waited for the broader debate to catch up. Pavel Gurvich, CEO of Tenzai, said this creates “tiers of haves and have-nots,” which may slow down security progress for the rest of the market.
My read: the next 12 months will be less about whether AI can find vulnerabilities and more about which vendors can turn that discovery into automated triage, patch suggestions, and safer deployment pipelines. If you run security for a bank, hospital, or software company, the practical question is simple: how many of your patch workflows still assume a human will notice the problem first?