Anthropic on Tuesday released a preview of its new frontier model, Mythos, which it says will be used by a small group of partner organizations for cyber operations. In one previously leaked memo, the artificial intelligence startup called the model one of its “most powerful” to date.
The model’s limited debut is part of a new security initiative, called Project Glasswing, in which more than 40 partner organizations will deploy the model for the purposes of “defense security work” and to secure critical software, Anthropic said. While the model was not specifically trained for cyber work, partners will use the preview to scan first-party and open-source software systems for code vulnerabilities, the company said.
Anthropic claims that, in recent weeks, Mythos has identified “thousands of zero-day vulnerabilities, many of them critical.” Many of the vulnerabilities are one to two decades old, the company added.
Mythos is a general-purpose model for Anthropic’s Claude AI systems that the company claims has strong coding and reasoning skills. Anthropic’s frontier models are considered its most sophisticated and highest-performing models, designed for more complex tasks, including creating agents and coding.
Partner organizations previewing Mythos include Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks. As part of the initiative, these partners will eventually share what they learn from using the model so the rest of the tech industry can benefit from it. The preview will not be made generally available, Anthropic said.
Anthropic also claims to have engaged in “ongoing discussions” with federal officials about the use of Mythos, though one would have to imagine those discussions are complicated by the fact that Anthropic and the Trump administration are currently locked in a legal battle after the Pentagon labeled the AI lab a supply chain risk due to Anthropic’s refusal to allow autonomous targeting or surveillance of US citizens.
News of Mythos originally leaked in a data security incident reported last month by Fortune. A draft blog post about the model (then called “Capybara”) was left in an unsecured document cache available in a publicly auditable data lake. The leak, which Anthropic later attributed to “human error,” was initially detected by security researchers. “‘Capybara’ is a new name for a new level of models: bigger and smarter than our Opus models – which have been, until now, our most powerful,” the leaked document said, later adding that it was “by far the most powerful AI model we’ve ever developed,” according to the report.
In the leak, Anthropic claimed that its new model far exceeded its currently public models in areas such as “software coding, academic reasoning, and cybersecurity,” and that it could pose a cybersecurity threat if weaponized by bad actors to find and exploit bugs (rather than fix them, as Mythos is intended to do).
Last month, the company accidentally exposed nearly 2,000 source code files and over half a million lines of code through a mistake it made when releasing version 2.1.88 of the Claude Code software package. The company then accidentally caused thousands of GitHub code repositories to be taken down as it tried to clean up the mess.
