The AI That Found a 27-Year-Old Vulnerability Nobody Else Could. This Changes Everything.
Anthropic’s Claude Mythos Preview autonomously discovered critical zero-days in every major operating system and every major browser — including flaws that survived decades of human review and five million automated tests. Project Glasswing is the industry’s response.
Somewhere in the codebase of OpenBSD — one of the most security-hardened operating systems ever built, the software trusted to run the world’s firewalls and critical network infrastructure — a single line of code had been quietly waiting since 1999. Twenty-seven years of human review. Millions of automated tests. Every major security audit passed. Then an AI found it in hours, autonomously, without being told where to look. This is the story of Project Glasswing, the most significant AI cybersecurity initiative ever announced — and what it means for every organisation that depends on software to function.
On April 16, 2026, Anthropic announced Project Glasswing — a landmark AI cybersecurity initiative bringing together twelve of the world’s most consequential technology companies, united around a shared and urgent conclusion: AI models have reached a capability threshold in cybersecurity that cannot be ignored, and the only responsible response is to put those capabilities to work for defenders before attackers get there first. Project Glasswing is the industry’s collective answer to a question that can no longer be deferred.
The catalyst is Claude Mythos Preview — an unreleased frontier model that Anthropic describes as a general-purpose AI that can surpass all but the most skilled humans at finding and exploiting software vulnerabilities. In the weeks before the announcement, Mythos Preview autonomously found thousands of critical zero-day vulnerabilities — flaws previously unknown to the software’s own developers — across every major operating system and every major web browser on the market.
Why This Matters Now
A zero-day vulnerability is a security flaw unknown to the software’s developers — meaning there is no patch, no defence, and no warning. Mythos Preview found thousands of them, in the most hardened and widely-used software in the world. The same capability that makes this model invaluable for defenders makes it potentially catastrophic in the wrong hands. That is precisely why Anthropic is not making it generally available — and why twelve of the world’s most powerful technology companies joined the effort within weeks.
Three AI Cybersecurity Cases That Illustrate What Changed
Anthropic has published technical details for a subset of vulnerabilities that have already been patched. Three stand out as illustrations of what Mythos Preview can do that nothing else could:

- The OpenBSD flaw: a single line of code introduced in 1999 that survived twenty-seven years of human review and every major security audit.
- The FFmpeg flaw: a vulnerability that survived five million automated test executions.
- The Linux kernel exploit chain: discovered, and developed into a working exploit, entirely autonomously.
What makes these three cases remarkable is not just the age or severity of the vulnerabilities — it is the mode of discovery. Mythos Preview found and in many cases developed working exploits entirely autonomously, without any human steering. It did not receive a list of files to check, a function to analyse, or a vulnerability class to look for. It read the code, reasoned about it, and found what decades of human expertise and millions of automated tests had missed.
“The window between a vulnerability being discovered and being exploited by an adversary has collapsed — what once took months now happens in minutes with AI. That is not a reason to slow down; it’s a reason to move together, faster.”
Elia Zaitsev, CTO, CrowdStrike

How Far Ahead Mythos Preview Really Is
Anthropic has published benchmark results across cybersecurity and general coding tasks. The gaps between Mythos Preview and Claude Opus 4.6 — itself one of the best coding models available — are substantial:
Mythos Preview vs Claude Opus 4.6 — Key Benchmarks
The SWE-bench Multimodal gap — 59.0% vs 27.1% — is particularly striking. It suggests that Mythos Preview’s ability to reason about code across visual and textual modalities simultaneously far exceeds anything previously deployed. For vulnerability detection, where understanding how code renders visually in a browser matters as much as the raw logic, this is a decisive advantage.
Why 12 Companies Joined in Weeks
Project Glasswing’s partner list reads like a directory of the companies responsible for the infrastructure the world runs on. The speed with which they joined — within weeks of being shown Mythos Preview’s capabilities — is itself a signal about how seriously the industry is taking what this model can do.
The Linux Foundation’s involvement in Project Glasswing deserves particular attention from an AI cybersecurity perspective. As Jim Zemlin, its CEO, noted: open source software constitutes the vast majority of code in modern systems — including the systems AI agents themselves use to write new software. Open source maintainers have historically been left to figure out security alone, without the resources for dedicated security teams. Project Glasswing is the first time they have had access to a frontier AI model specifically for this purpose, backed by $2.5M in direct funding.
Project Glasswing: The AI Cybersecurity Model Anthropic Won’t Release — And Why
Anthropic has stated clearly that it does not plan to make Claude Mythos Preview generally available. This is not a commercial decision — it is a safety one at the heart of the Project Glasswing structure, and it is worth understanding the AI cybersecurity logic precisely.
The same capability that allows Mythos Preview to find a 27-year-old vulnerability in OpenBSD would allow a malicious actor to find exploitable vulnerabilities in banking systems, hospital networks, power grid software, or military infrastructure. The model scores 83.1% on CyberGym — a benchmark measuring the ability to reproduce real-world cybersecurity vulnerabilities. That is not a theoretical risk. It is a measurable capability uplift for anyone who wants to do harm at scale.
“AI capabilities have crossed a threshold that fundamentally changes the urgency required to protect critical infrastructure. There is no going back.”
Anthony Grieco, SVP & Chief Security & Trust Officer, Cisco

Anthropic’s response to this dilemma is the Project Glasswing structure: controlled access for organisations whose defensive work is verifiable, with Anthropic committing $100M in usage credits and requiring partners to share what they learn. The goal is to create a durable asymmetry in which defenders have used Mythos Preview’s capabilities to patch vulnerabilities before attackers can access equivalent tools.
The eventual plan is to launch new cybersecurity safeguards with an upcoming Claude Opus model — not Mythos Preview, which poses too high a risk — so that those safeguards can be tested and refined before Mythos-class capabilities are eventually made more widely available.
The Name
The project is named after the glasswing butterfly, Greta oto. The metaphor runs two ways: its transparent wings let it hide in plain sight — like the vulnerabilities in this post that survived decades of review. They also allow it to evade harm — like the transparency Anthropic is advocating in its approach to these capabilities. It is a rare choice for a corporate initiative: a name that acknowledges both the concealment and the escape.
Five AI Cybersecurity Implications Every Security Leader Needs to Understand
First: Automated testing is no longer a sufficient proxy for security. The FFmpeg case — a vulnerability that survived five million automated test executions — is a direct indictment of the assumption that automated coverage equals security coverage. Reasoning-based AI vulnerability detection operates on a fundamentally different basis than pattern-matching automated tools. Security strategies built around automated testing alone are now structurally inadequate.
Second: The attacker timeline has collapsed. CrowdStrike’s Zaitsev noted that the window between discovery and exploitation has gone from months to minutes with AI. This is not hyperbole — it is a consequence of the same agentic reasoning capability that found the Linux kernel exploit chain autonomously. Patch deployment timelines that were acceptable in 2024 are no longer acceptable in 2026.
Third: Open source is the most exposed attack surface. The Linux Foundation’s involvement highlights a structural vulnerability that has been underappreciated: open source software, which underpins virtually all enterprise infrastructure, has historically had no access to the security resources available to large commercial vendors. That is changing through Project Glasswing — but the backlog of unscanned open source codebases is enormous.
Fourth: AI-augmented red teaming is now the standard, not the premium. Every organisation with a security function should be evaluating AI cybersecurity tools — specifically AI-assisted vulnerability detection — as a primary methodology, not an experimental one. The gap between Mythos Preview and manual review — a 27-year-old flaw surviving every human audit — is not a statistical edge case. It reflects a genuine capability difference in reasoning about complex, large-scale codebases.
Fifth: The national security dimension is real. Anthropic has been in ongoing discussions with US government officials about Mythos Preview’s offensive and defensive capabilities. State-sponsored attackers from China, Iran, North Korea, and Russia already conduct sustained campaigns against Western critical infrastructure — a pattern we covered in depth in our analysis of exposed ransomware operator toolkits and the shift toward sovereign AI compute in our AI infrastructure analysis. The emergence of AI cybersecurity capabilities in the hands of nation-state actors is not a future risk. It is a current one. The same sovereign AI infrastructure questions we explored in our Industry Vision Report 2026 apply directly here.
The Honest Assessment
Project Glasswing is a genuine attempt to solve a problem that has no clean solution. An AI model capable of finding 27-year-old zero-days in the world’s most hardened software is not a tool that can be safely released to everyone. It is also not a tool that can be safely kept from defenders while attackers develop equivalent capabilities independently.
Anthropic’s bet is that a coalition of the world’s most capable defensive security organisations, given early and controlled access, can patch enough of the most critical vulnerabilities to create a durable advantage for defenders before the capability proliferates. That is not a certain outcome. It is a calculated one — and the $100M commitment and 12-partner coalition suggest a serious rather than symbolic effort.
The deeper implication is for how the industry thinks about AI capability thresholds. Mythos Preview is the first widely-acknowledged case of an AI model reaching a capability level that changes the risk calculus enough to require a fundamentally different deployment approach. It will not be the last. The framework being built around Project Glasswing — controlled access, mandatory information sharing, public reporting within 90 days — is as important as the vulnerabilities being patched. It is a template for how the industry handles the next threshold, and the one after that.
The glasswing butterfly’s wings are transparent. You can see through them, but they still function. That may be the most accurate metaphor for what responsible AI deployment at the frontier actually looks like.
Sources & References
- Anthropic — Project Glasswing announcement, April 2026: anthropic.com/glasswing
- Anthropic Frontier Red Team Blog — Mythos Preview vulnerability details: red.anthropic.com
- Claude Mythos Preview System Card: anthropic.com
- Cisco announcement: blogs.cisco.com
- Microsoft MSRC announcement: microsoft.com
- AWS Security Blog: aws.amazon.com
- Google Cloud / Vertex AI announcement: cloud.google.com
- Linux Foundation announcement: linuxfoundation.org
- CrowdStrike announcement: crowdstrike.com
- Palo Alto Networks: paloaltonetworks.com
