Anthropic’s Claude Mythos exposed a terrifying new cybersecurity reality: AI can now find vulnerabilities faster than humans can fix them
For years, cybersecurity experts worried about the moment artificial intelligence would tilt the balance between attackers and defenders. That moment may have already arrived.
Anthropic’s new AI model, Claude Mythos, triggered alarm across banks, regulators, tech companies, and government agencies after reports emerged that the system could identify software vulnerabilities at a pace beyond what human researchers can match. According to a CNBC report, the concern was serious enough that Anthropic chose not to release the model publicly, limiting access to a small group of major U.S. companies, including Apple, Amazon, JPMorgan Chase, and Palo Alto Networks.
Announced in April 2026, Mythos is a frontier AI system focused on advanced coding and cybersecurity tasks. Anthropic says the model can identify vulnerabilities in browsers and operating systems at a scale that surpasses human experts. The company restricted access under a security initiative called Project Glasswing, out of fears that the technology could be abused by criminal groups or hostile governments.
Yet cybersecurity researchers say the real shock is not Mythos itself. It is the realization that much of this capability already exists.
“What we are seeing across the industry now is that people are able to reproduce the vulnerabilities found with Mythos through clever orchestration of public models to get very, very similar results,” Ben Harris, CEO of cybersecurity firm watchTowr, told CNBC.
That realization is forcing a painful reassessment across the cybersecurity industry. The fear is no longer about a distant future where AI eventually becomes dangerous. The fear is that existing AI systems are already capable of finding software flaws faster than companies can patch them.
OpenAI entered the conversation weeks later with GPT-5.5-Cyber, a model built for cybersecurity tasks. The company opened limited access to vetted security teams on Thursday, signaling that major AI labs are now openly racing to develop offensive and defensive cyber capabilities.
Anthropic’s Mythos exposed a problem the industry can no longer ignore
Anthropic CEO Dario Amodei warned this week that the danger could extend far beyond isolated hacks.
“The danger is just some enormous increase in the amount of vulnerabilities, in the amount of breaches, in the financial damage that’s done from ransomware on schools, hospitals, not to mention banks,” Amodei said during an Anthropic event.
Cybersecurity researchers say one of the most unsettling parts of the story is that AI systems may already be lowering the barrier for large-scale cyberattacks.
“The models that we have right now are powerful enough to detect zero days in a large scale, and this is scary enough,” Klaudia Kloc, CEO of cybersecurity firm Vidoc, told CNBC.
A zero-day vulnerability is a software flaw unknown to developers and unpatched at the time attackers discover it. In the past, finding those vulnerabilities often required highly specialized expertise and months of work. AI is changing that equation.
Researchers at Vidoc tested whether older public models from Anthropic and OpenAI could replicate Mythos-style results using orchestration techniques that split large codebases into smaller sections and cross-check findings across multiple AI systems. According to Kloc, they succeeded.
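Vidoc has not published its pipeline, but the orchestration pattern it describes — splitting a codebase into smaller sections and keeping only findings that multiple models agree on — can be sketched roughly as follows. This is a minimal illustration, not Vidoc's actual system; the reviewer functions are hypothetical stand-ins for real model API calls.

```python
# Hypothetical sketch of multi-model vulnerability triage:
# split a codebase into chunks, ask several "reviewers" (stand-ins
# for AI model calls) to flag each chunk, and keep only findings
# that at least two reviewers agree on.

from collections import Counter

def chunk_source(source: str, max_lines: int = 50) -> list[str]:
    """Split a large source file into smaller sections."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def cross_check(chunk: str, reviewers) -> list[str]:
    """Collect findings from each reviewer; keep those flagged by >= 2."""
    votes = Counter()
    for review in reviewers:
        for finding in review(chunk):
            votes[finding] += 1
    return [finding for finding, n in votes.items() if n >= 2]

def scan(source: str, reviewers) -> list[str]:
    """Run the cross-check over every chunk of the codebase."""
    findings = []
    for chunk in chunk_source(source):
        findings.extend(cross_check(chunk, reviewers))
    return findings
```

The cross-checking step is what makes cheap models viable: any single model produces noisy results, but a flaw reported independently by several of them is far more likely to be real.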
Another cybersecurity startup, Aisle, reached a similar conclusion. Founder Stanislav Fort wrote that many of Mythos’s headline results could be reproduced using cheaper AI models operating in parallel.
“A thousand adequate detectives searching everywhere will find more bugs than one brilliant detective who has to guess where to look,” Fort said.
Anthropic does not dispute that earlier AI models already possess meaningful capabilities for vulnerability discovery. The company pointed to earlier research showing that Claude Opus 4.6 uncovered more than 500 high-severity vulnerabilities in open-source software.
What separates Mythos from previous systems is its reported ability to evade detection and generate working exploits with minimal human involvement. That step worries regulators and financial institutions far more than the discovery of vulnerabilities alone.
Banks and regulators are scrambling to prepare for AI-powered cyberattacks
Banks, insurers, and regulators are now confronting a dangerous imbalance. AI systems can uncover weaknesses at machine speed. Patching software still moves at human speed.
The concerns have already reached Washington.
In April, Semafor reported that the U.S. Treasury Department was seeking access to Anthropic’s Mythos model so officials could study the vulnerabilities the system is reportedly capable of exploiting.
That request marked one of the clearest signs yet that frontier AI systems are being treated less like ordinary software products and more like national-risk infrastructure tied to the stability of financial systems, critical networks, and government operations.
The shift could pull AI labs deeper into ongoing government oversight as regulators push for direct technical evaluations of advanced models with cyber capabilities.
“The industry is panicking about the number of vulnerabilities they face now,” Harris said. “But even before Mythos is widely available, [the industry] couldn’t fix vulnerabilities fast enough.”
That imbalance is reshaping cybersecurity strategy inside corporations and governments. Security teams that once focused on manual patch management are increasingly discussing “machine response” systems capable of identifying, prioritizing, and patching vulnerabilities automatically before attackers can exploit them.
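"Machine response" is not yet a standardized product category, but the core idea — automatically scoring each discovered flaw and patching in priority order — reduces to a simple triage loop. The sketch below is purely illustrative: the field names and scoring weights are invented for this example and do not reflect any vendor's scheme.

```python
# Illustrative sketch of automated vulnerability triage: score each
# finding by severity and exposure, then produce a patch queue with
# the most dangerous, most exposed flaws first.
# Field names and weights are invented for illustration.

from dataclasses import dataclass
import heapq

@dataclass
class Finding:
    name: str
    severity: float        # e.g. a CVSS-style base score, 0-10
    internet_facing: bool  # reachable by external attackers?
    exploit_public: bool   # is a working exploit circulating?

def priority(f: Finding) -> float:
    """Boost the base severity for exposure and known exploits."""
    score = f.severity
    if f.internet_facing:
        score += 3.0
    if f.exploit_public:
        score += 5.0
    return score

def patch_queue(findings: list[Finding]) -> list[str]:
    """Return finding names in descending priority order."""
    # heapq is a min-heap, so push negated scores for descending order.
    heap = [(-priority(f), f.name) for f in findings]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

The point of the exercise is the ordering, not the exact weights: when AI tooling surfaces far more flaws than a team can fix, which flaw gets patched first becomes the decision that matters.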
The concern extends beyond Silicon Valley.
Cybersecurity researchers warn that nation-state hacking groups in countries such as China, Russia, and North Korea likely already possess many of these capabilities through their own AI research or by accessing public models.
“Hackers in North Korea, China, and Russia know how to do this, with or without Anthropic,” Kloc said.
Financial institutions appear especially worried. JPMorgan CEO Jamie Dimon said last month that AI tools may initially make companies more vulnerable before eventually helping them defend themselves.
“You have a significant increase in the volume of vulnerabilities discovered, but they don’t seem to have deployed a tool that helps you fix them,” said Justin Herring, partner at Mayer Brown and former executive deputy superintendent for cybersecurity at New York’s financial regulator.
“Vulnerability management is the great Sisyphean task of cybersecurity,” Herring said.
The restricted rollout of Mythos has sparked another debate inside the cybersecurity community. Some researchers argue that limiting access to a small circle of companies created an uneven playing field where a handful of large corporations gained an early advantage in preparing defenses.
Others argue Anthropic had little choice.
The company appears caught between two competing risks: releasing powerful cyber capabilities too broadly, or slowing the pace of defensive research by keeping the technology behind closed doors.
“They’re trying to figure out the best way to fix the world before this becomes accessible to the world,” said Ben Seri, co-founder of cybersecurity startup Zafran Security. “It’s this kind of chicken-and-egg situation, and you’re going to break some eggs. It’s unavoidable.”
