Anthropic’s Mythos Preview Raises the Cyber Stakes
Anthropic’s new Mythos Preview is being tested with Apple, Google, Microsoft, and 45+ firms to probe AI’s cyber risks.

Anthropic says its new model, Mythos Preview, can already uncover thousands of critical vulnerabilities. That is a big claim, but the more interesting detail is who gets to test it first: Apple, Google, Microsoft, and more than 45 other organizations.
Instead of shipping the model straight to the public, Anthropic is using a consortium called Project Glasswing to pressure-test what happens when a model that is strong at code also becomes useful for cyber offense. That is a smart move, because the security debate around AI has moved past theory. The question now is whether defenders can adapt before attackers start using similar tools at scale.
What Anthropic is actually testing
Anthropic says Mythos Preview was trained for coding, not for hacking. But in practice, better code generation often means better vulnerability discovery, exploit chaining, binary analysis, and penetration testing. In the company’s own framing, the model can help find misconfigurations, inspect binaries without source code, and generate proofs of concept for attacks.

That matters because the line between defensive and offensive use is thin. A model that spots a buffer overflow in a test environment can also help someone build a more effective intrusion path. Anthropic’s frontier red team lead, Logan Graham, said the company wants the industry to prepare for a world where these capabilities are broadly available in 6, 12, or 24 months.
- Project Glasswing includes 45+ organizations across tech, security, infrastructure, and finance
- Mythos Preview is not generally released yet
- Anthropic says the model has already found thousands of critical vulnerabilities
- Some of those bugs are decades old and were still being missed in heavily reviewed code
The release strategy borrows from coordinated vulnerability disclosure, where researchers give vendors time to patch before public disclosure. That is a familiar playbook in security, but here it is being applied to a model that can generate security findings at machine speed.
That shift changes the economics of defense. If one model can scan more code, more often, and across more systems than a human team can, then the bottleneck moves from discovery to triage. Security teams will need better ways to sort real threats from noise, and they will need them fast.
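The triage problem described above can be sketched as a simple priority function. Everything here is hypothetical for illustration: the `Finding` fields, the weights, and the scoring rule are not anything Anthropic or its partners have published.

```python
from dataclasses import dataclass

# Hypothetical finding record; field names are illustrative,
# not any vendor's actual schema.
@dataclass
class Finding:
    id: str
    severity: float   # 0.0-10.0, CVSS-like base score
    reachable: bool   # is the vulnerable code reachable from an entry point?
    has_poc: bool     # did the tool produce a working proof of concept?

def triage_score(f: Finding) -> float:
    """Rank findings so humans review the riskiest ones first."""
    score = f.severity
    if f.has_poc:
        score += 3.0   # a working exploit path jumps the queue
    if not f.reachable:
        score *= 0.2   # unreachable code is probably noise
    return score

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Sort machine-generated findings for human review, highest risk first."""
    return sorted(findings, key=triage_score, reverse=True)
```

The point of a sketch like this is that the scoring rule, not the discovery step, becomes the part a security team has to own and tune once a model can generate findings faster than people can read them.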
Why Anthropic pulled in its rivals
Project Glasswing is unusual because it includes direct competitors. Anthropic is asking the same companies that build operating systems, cloud platforms, chips, and security tooling to help test a model that could reshape their own products. That makes sense if the goal is to surface weaknesses before the model is widely available.
Google’s vice president of security engineering, Heather Adkins, said the company is pleased to see the cross-industry initiative and that AI creates new challenges and new opportunities in cyber defense. Microsoft’s global CISO, Igor Tsyganskiy, said early access to Mythos Preview will help the company identify and mitigate risk and improve products for customers.
“We need to prepare now for a world where these capabilities are broadly available in 6, 12, 24 months,” Logan Graham said. “Many things would be different about security. Many of the assumptions that we’ve built the modern security paradigms on might break.”
That quote gets to the heart of the issue. Security teams have spent years building processes around human attackers, human analysts, and human response times. If AI can speed up reconnaissance and exploit development, then the old assumption that defenders have time to react starts to look shaky.
Anthropic’s approach also hints at a bigger industry problem: no single company can fix this alone. A model that can spot vulnerabilities in one vendor’s stack may also expose weaknesses in another vendor’s cloud, another company’s endpoint software, and another team’s internal tooling. That is why the consortium model matters more than the model name.
How this compares with today’s security tooling
Traditional application security tools already find bugs, but they usually do it in narrow ways. Static analyzers catch certain code patterns, scanners flag known issues, and penetration testers focus on specific targets. Mythos Preview appears to combine those tasks with broader reasoning, which is why Anthropic says it can produce attack chains and proofs of concept.

Here is the practical comparison:
- Classic scanners are good at known signatures and repeatable checks
- Human red teams are strong at creativity, but they are limited by time and staffing
- Mythos Preview appears to move faster across large codebases and can chain findings into working attack paths
- Anthropic says the model has already exposed thousands of critical bugs, including long-standing issues missed by prior review
That does not mean the model is magic. It still needs guardrails, validation, and human judgment. A machine can propose a vulnerability or exploit path, but a security team still has to decide whether the issue is real, exploitable, and worth prioritizing. The real advantage is scale, not perfection.
The comparison with human researchers is also telling. Graham said Mythos Preview has already accomplished tasks that would otherwise require a senior security researcher. That is a strong signal that AI is moving from helper to peer in some security workflows, especially in code review and exploit discovery.
There is a second-order effect here too. If defenders get access to these systems first, they can harden software before attackers catch up. If attackers get there first, the blast radius could be ugly. That is why Anthropic’s staged rollout is more than a PR move; it is a risk-management test for the whole field.
What this means for developers and security teams
For developers, the immediate takeaway is uncomfortable but useful: assume AI-assisted vulnerability discovery is part of normal security work now. Code that sailed through review last year may not survive a model that can reason across files, dependencies, and binary behavior in one pass.
For security teams, the next step is to treat AI findings like a new class of input, not a replacement for existing controls. That means updating triage pipelines, tightening patch workflows, and deciding where AI should be allowed to run in internal environments. It also means more attention on model access, because a tool that helps defenders can also help intruders if it falls into the wrong hands.
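Treating AI findings as a new class of input mostly means gating them before they hit an existing ticket queue. A minimal sketch of that gate, assuming a hypothetical fingerprinting scheme (real pipelines would integrate with a tracker's API and a far richer dedup key):

```python
# Minimal sketch: deduplicate AI-generated findings against issues the
# team already tracks before queueing them for human review.
# All field names and the fingerprint scheme are hypothetical.

def ingest(ai_findings: list[dict], known_fingerprints: set) -> list[dict]:
    """Drop duplicates of already-tracked issues; queue the rest."""
    queued = []
    for finding in ai_findings:
        fp = (finding["file"], finding["cwe"])  # crude fingerprint for dedup
        if fp in known_fingerprints:
            continue                             # already tracked, skip
        known_fingerprints.add(fp)
        queued.append(finding)
    return queued
```

Even a gate this crude illustrates the design choice: the human-facing queue only grows when the model finds something genuinely new, which is what keeps machine-speed discovery from drowning a human-speed review process.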
If you want a useful mental model, think of Mythos Preview less as a chatbot and more as a very fast junior researcher with access to huge amounts of code. That kind of system can save time, but it can also overwhelm teams that are not ready to process the output.
My prediction is simple: the first companies to adopt AI-assisted security well will be the ones that pair model access with strict review and fast patching. The companies that treat it like a novelty will spend the next year catching up after someone else’s model finds their bugs first.
For readers tracking this shift, the real question is not whether AI will change security. It already is. The question is whether your team is building processes for a world where vulnerability discovery gets faster every quarter.