Anthropic’s leaked Mythos model signals a bigger leap
A leaked draft shows Anthropic testing Mythos, a model the company calls its most capable yet, with major coding and cyber gains.

Anthropic says its new model is already being tested with early access customers, and the company’s own draft materials call it “the most capable we’ve built to date.” The leak matters because the same documents say the model scores dramatically higher on coding, reasoning, and cybersecurity than Claude Opus 4.6.
What got out was not a vague rumor or a stray product name. Fortune reviewed draft blog content and related files that described Anthropic’s unreleased model, apparently called Claude Mythos, along with a new higher-priced tier named Capybara. The documents were stored in a publicly accessible data cache and later removed after Fortune alerted the company.
If the leak is accurate, this is more than a routine model refresh. Anthropic appears to be preparing a release that pushes harder into enterprise use, especially security-sensitive work, while also acknowledging that the model is expensive to run and not ready for broad availability.
What the leaked files actually show
The leaked material looked like a draft launch page, complete with headings, publication date fields, and product copy. That matters because it suggests the model was far enough along for marketing and rollout planning, not just lab-only experimentation.

Anthropic attributed the exposure to "human error": a configuration issue in an external content management system left draft content publicly accessible. In plain English, unpublished files were left where search engines and other users could find them.
The cache also appears to have included much more than a single model announcement. According to the reporting, nearly 3,000 unpublished assets linked to Anthropic's blog were publicly reachable.
- Draft references to a model called Claude Mythos
- A new top-tier model family name, Capybara
- Claims of higher scores in software coding, academic reasoning, and cybersecurity
- Notes that the model is expensive and limited to early access customers
- Other internal or semi-internal assets, including event material and unused blog resources
The naming is interesting too. Anthropic currently sells Claude in Opus, Sonnet, and Haiku tiers. The leaked draft says Capybara would sit above Opus, which would make it the company’s new premium lane for its biggest customers.
That’s a useful signal for buyers. If Anthropic is creating a tier above Opus, it is probably trying to separate general-purpose assistants from models that are costly, slower, and aimed at high-value workloads where performance matters more than price.
Why Anthropic is being so careful
Anthropic’s public response was unusually direct. A spokesperson said the model is a “step change” in AI performance and called it the most capable model the company has built so far. The company also said it is working with a small group of early access customers before any wider release.
That caution lines up with the company’s own internal language in the draft blog post. The leaked copy says Anthropic wants to understand the model’s near-term cybersecurity risks before release, and it frames the rollout around helping defenders prepare for AI-assisted attacks.
“We consider this model a step change and the most capable we’ve built to date.” — Anthropic spokesperson, quoted in Fortune’s reporting
That quote is doing a lot of work. It signals confidence, but it also hints that Anthropic thinks the model crosses a threshold where release timing matters as much as raw capability.
There is a practical reason for the caution. Anthropic says the model is ahead of its other systems in cyber capability and could help attackers exploit vulnerabilities faster than defenders can patch them. That is the kind of statement that changes product planning, customer access, and security review all at once.
For developers and security teams, the message is simple: this is the sort of model that can shorten the time between finding a bug and turning it into an exploit. If Anthropic is right, the same model that helps teams audit code could also help threat actors scale up attacks.
How it compares with recent frontier models
Anthropic is not making these claims in a vacuum. OpenAI and Anthropic have both been pushing models deeper into coding and security tasks, and both have started to classify some systems as higher risk under internal safety frameworks.

OpenAI said in February that GPT-5.3-Codex was the first model it had classified as “high capability” for cybersecurity-related tasks under its Preparedness Framework. It also said that model was directly trained to identify software vulnerabilities.
Anthropic’s own recent release, Claude Opus 4.6, already showed it could surface previously unknown vulnerabilities in production codebases. The company described that ability as dual-use, which is the polite way of saying it can help defenders and attackers at the same time.
- Anthropic Claude Opus 4.6: current top public model before Mythos, with strong vulnerability-finding ability
- OpenAI GPT-5.3-Codex: first OpenAI model classified as high capability for cyber tasks
- Leaked Mythos/Capybara: draft claims of much higher coding, reasoning, and cyber scores than Opus 4.6
- Release strategy: Anthropic says it is keeping the new model in early access because of cost and risk
One detail makes the comparison sharper: Anthropic’s draft says Capybara would be larger and more intelligent than Opus, while also more expensive. That usually means the company sees a premium market for customers who need maximum performance and can justify the compute bill.
There is also a commercial angle here. The leaked documents included material about an invite-only CEO summit in Europe, which suggests Anthropic is pairing frontier-model development with a more aggressive enterprise sales push. That fits the broader direction of the AI market right now: the best models are becoming boardroom products as much as developer tools.
What the leak says about AI product security
The leak is embarrassing, but it is also revealing. Anthropic’s own CMS apparently exposed a large pile of unpublished assets, which means a single configuration mistake was enough to surface product plans, internal files, and event details. That is a reminder that AI labs are now operating like high-value software companies with much more to protect.
For teams building with AI, the lesson is not abstract. If a company like Anthropic can accidentally expose draft launch materials, then smaller teams should assume their own content systems, asset stores, and staging environments are easy to misconfigure.
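The misconfiguration lesson can be made concrete. Below is a minimal, hypothetical sketch of the kind of audit a team could run against its own content system: flag any asset that is publicly readable but not editorially published. The `Asset` fields and the example paths are assumptions for illustration, not Anthropic's actual CMS schema.

```python
# Hypothetical audit sketch: flag assets that are publicly reachable
# but not published -- the misconfiguration pattern described above.
# The Asset schema and example paths are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Asset:
    path: str
    published: bool    # editorial state in the CMS
    public_read: bool  # effective access on the serving layer

def find_exposed_drafts(assets: list[Asset]) -> list[str]:
    """Return paths of assets that are publicly readable but unpublished."""
    return [a.path for a in assets if a.public_read and not a.published]

assets = [
    Asset("/blog/launch-draft", published=False, public_read=True),    # exposed
    Asset("/blog/old-post", published=True, public_read=True),         # fine
    Asset("/internal/event-deck", published=False, public_read=False), # fine
]
print(find_exposed_drafts(assets))  # -> ['/blog/launch-draft']
```

The point of the check is that editorial state and access control live in different systems, so they drift apart silently; a scheduled comparison like this catches the drift before a search engine does.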
There is another lesson for anyone following model releases closely: the most important details often appear before the press release. Pricing tiers, rollout limits, safety concerns, and enterprise targeting usually show up in drafts long before the polished announcement does.
Anthropic has not publicly confirmed the Mythos name, and it may still change before launch. But the direction is clear enough. The company is preparing a more expensive, more capable model with stronger coding and security performance, and it is treating release as a controlled experiment rather than a mass-market launch.
My bet is that Anthropic will keep this model in limited access until it can show clear evidence that defenders can use it more effectively than attackers can abuse it. If that happens, the real question for the rest of the industry is simple: how many more models are already sitting one CMS mistake away from becoming public?
For more on how frontier model makers are changing release strategy, see our coverage of OpenAI’s cyber risk framework and Anthropic’s Opus 4.6 security testing.