Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small coterie of partner organizations for cybersecurity work. In a previously leaked memo, the AI startup called the model one of its “most powerful” yet.
The model’s limited debut is part of a new security initiative, dubbed Project Glasswing, in which more than 40 partner organizations will deploy the model for the purposes of “defensive security work” and to secure critical software, Anthropic said. While it was not specifically trained for cybersecurity work, the preview will be used to scan both first-party and open-source software systems for code vulnerabilities, the company said.
Anthropic claims that, over the past few weeks, Mythos identified “thousands of zero-day vulnerabilities, many of them critical.” Many of the vulnerabilities are one to two decades old, the company added.
Mythos is a general-purpose model in Anthropic’s Claude family of AI systems that the company claims has strong agentic coding and reasoning skills. Anthropic’s frontier models are considered its most sophisticated and highest-performing, designed for more complex tasks such as agent-building and coding.
The partner organizations previewing Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will ultimately share what they’ve learned from using the model so that the rest of the tech industry can benefit. The preview will not be made generally available, Anthropic said.
Anthropic also claims that it has engaged in “ongoing discussions” with federal officials about the use of Mythos. Those discussions are presumably complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle: the Pentagon labeled the AI lab a supply-chain risk after Anthropic refused to allow autonomous targeting or surveillance of U.S. citizens.
News of Mythos originally leaked in a data security incident reported last month by Fortune. A draft blog post about the model (then called “Capybara”) was left in an unsecured cache of documents on a publicly accessible data lake. The leak, which Anthropic subsequently attributed to “human error,” was first spotted by security researchers. “‘Capybara’ is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful,” the leaked document said, adding later that it was “by far the most powerful AI model we’ve ever developed,” according to the report.
In the leaked draft, Anthropic claimed that its new model far exceeded its currently public models in areas like “software coding, academic reasoning, and cybersecurity,” and that it could pose a cybersecurity threat if weaponized by bad actors to find and exploit bugs (rather than fix them, which is how Mythos will be deployed).
Last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code through a mistake in the launch of version 2.1.88 of its Claude Code software package. It then accidentally caused thousands of code repositories on GitHub to be taken down as it attempted to clean up the mess.