In a series of unusually direct conversations, US officials have begun warning some of the country’s largest banks about a new kind of threat. Not a coordinated cyberattack. Not a foreign adversary.
An artificial intelligence model.
At the center of those discussions is Anthropic, whose latest system has raised concerns among regulators for its ability to identify and potentially exploit software vulnerabilities at a scale and speed that far exceed traditional methods.
A Closed-Door Conversation With High Stakes
Senior figures, including Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell, recently met with bank executives to outline the risks. The message was not alarmist, but it was clear.
This technology is moving faster than the systems designed to contain it.
The model in question, part of Anthropic’s Claude family, has demonstrated an ability to scan complex software environments and surface weaknesses that might otherwise go undetected for years. In some cases, those vulnerabilities are not theoretical. They are exploitable.
That combination of discovery and execution is what has regulators paying attention.
Why This Is Different
Banks are no strangers to cybersecurity threats. They invest heavily in defending infrastructure that spans decades of legacy systems layered with modern digital services.
What makes this moment different is acceleration.
AI models like Anthropic’s do not simply assist human analysts. They compress timelines. Tasks that once took weeks or months, such as auditing code, mapping systems, and identifying weak points, can now happen in a fraction of that time.
The implication is straightforward. Defensive systems, which rely on patch cycles and human oversight, may struggle to keep pace if similar tools are used offensively.
A Tool That Cuts Both Ways
There is, however, a paradox at the heart of the warnings.
US officials are not only cautioning banks. They are also encouraging them to experiment with the technology in controlled environments.
The reasoning is pragmatic. If AI can expose vulnerabilities quickly, it may also help institutions secure their systems before those weaknesses are exploited elsewhere.
Several large financial institutions have already begun testing such models internally, treating them less like consumer-facing AI and more like advanced security infrastructure.
Limited Access, By Design
Anthropic has not broadly released the model. Access remains restricted to a small group of organizations under tightly managed conditions.
That decision reflects a growing tension within the AI industry. As capabilities improve, the risks associated with open deployment increase. Companies are beginning to act less like software vendors and more like stewards of potentially sensitive infrastructure.
In this case, the concern is not just misuse. It is replication.
Once a capability exists, it rarely remains isolated for long.
A Broader Shift in Cybersecurity
The discussions unfolding in Washington point to a deeper shift.
For years, cybersecurity has been defined by asymmetry. Attackers need to find only a single point of failure, while defenders must secure entire systems.
AI changes that balance.
It lowers the cost of finding those weak points and raises the possibility that highly sophisticated techniques could become more widely accessible.
Regulators in the US, UK, and elsewhere are now assessing how to respond, not just to this model, but to a broader class of systems that could follow.
What Comes Next
For now, the warnings remain largely behind closed doors.
But the underlying question is beginning to surface more publicly.
If AI can identify and exploit vulnerabilities faster than humans can fix them, how do institutions maintain control over systems that were never designed for that level of scrutiny?
There is no immediate answer.
What is clear is that the line between tool and threat is becoming harder to define.
