Bank of England warns about Anthropic's Claude Mythos AI risks

The Bank of England plans to discuss the impact of Anthropic PBC's new AI model with financial institutions, as UK regulators join their peers in the US and elsewhere in raising alarms over the risks posed by the tool.

Objective Facts

The Bank of England is set to address the implications of Anthropic PBC's new AI model, Mythos, for financial institutions in meetings within the Cross-Market Operational Resilience Group (CMORG) and the CMORG AI Working Group, which includes representatives from the UK Treasury, the Financial Conduct Authority, and the National Cyber Security Centre. This follows Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell assembling bank CEOs at Treasury's headquarters on Tuesday to make sure banks are aware of possible future risks raised by Anthropic's Mythos and potential similar models. Claude Mythos is an artificial intelligence model that detects security vulnerabilities in software and can spot flaws that no human has noticed before; the security concern is that Mythos will help bad actors find coding vulnerabilities faster than banks can fix them. UK and US regulatory perspectives align on the severity of the risk, though critics debate whether the threat level justifies the urgency and whether Anthropic's warnings contain marketing elements.

Left-Leaning Perspective

Left-leaning and progressive analysts emphasize the importance of regulatory oversight and coordination on AI security risks. The Washington Post's Gerrit De Vynck noted that the specific concern being called out is that Mythos is very good at finding gaps in software that hackers could exploit, while observing that all software has bugs and that real expertise is needed to turn a flaw into a working exploit. NBC News reporting highlighted that not everyone is convinced Mythos Preview represents the leap Anthropic claims: Heidy Khlaaf, chief AI scientist at the AI Now Institute, said Anthropic's detailed blog post left out many key details needed to verify its claims and warned against "taking these claims at face value" without information such as false positive rates. Progressive coverage emphasizes procedural transparency and equity in access to security capabilities. Security experts and software developers committed to open-source software argue the world would be safer if Mythos were released so that every defender, not just Anthropic's chosen partners, could use it to find and patch vulnerabilities, with Jonathan Iwry of the Wharton Accountable AI Lab warning against reliance on the judgment of a handful of private actors who are not accountable to the public. This framing positions the Bank of England's cautious regulatory approach as potentially insufficient if it defers to Anthropic's access controls rather than mandating broader distribution of defensive capability. Left-leaning coverage does not emphasize skepticism about the underlying threat level; it focuses instead on procedural accountability and whether Bank of England oversight will be rigorous enough.
The Treasury Select Committee warned that the Bank of England, the Financial Conduct Authority (FCA) and the Treasury are exposing the public and the financial system to potentially serious harm due to their current positions on the use of artificial intelligence in financial services, arguing that by adopting a wait-and-see approach, the major public financial institutions are not doing enough to manage the risks.

Right-Leaning Perspective

Right-leaning and Trump administration-aligned commentators acknowledge Mythos risks but emphasize concerns about Anthropic's credibility and question whether the regulatory response is proportionate or marketing-driven. David Sacks, who leads Trump's technology advisory council, stated that "The world has no choice but to take the cyber threat associated with Mythos seriously" but "it's hard to ignore that Anthropic has a history of scare tactics." This framing validates the Bank of England's regulatory caution while questioning Anthropic's motivations for the disclosure itself. Conservative analysis emphasizes the commercial incentives behind the Mythos warnings: Anthropic is among several contenders in a fierce artificial intelligence race, and promoting awe at its own technology boosts business and enhances its allure ahead of a rumored public offering. Alex Stamos of cybersecurity firm Corridor critiqued Anthropic's marketing approach, saying "They have these adorable cutesy cartoons about these products that are so incredibly dangerous that they won't even let people use them. It's like if the Manhattan Project announced the nuclear bomb within a cute little Calvin and Hobbes cartoon." This tone suggests right-leaning skepticism that the regulatory urgency may reflect hype rather than a proportionate risk assessment. Right-leaning outlets do not dispute that financial sector cyber risks exist but question whether Bank of England discussions should defer to Anthropic's threat characterization. The framing treats regulatory meetings as appropriate but implies they should independently verify, rather than uncritically accept, company risk assessments. Some analysts have dismissed these cautious, limited releases as more about marketing and creating hype around new models than about purely safety-driven decisions.

Deep Dive

The Bank of England's April 11 announcement that it will discuss Mythos with UK financial institutions reflects a coordinated global regulatory response to what officials describe as an unprecedented AI-driven cybersecurity risk. The urgency is evidenced by Treasury Secretary Bessent and Federal Reserve Chair Powell assembling bank CEOs at Treasury headquarters on Tuesday to ensure banks are aware of possible future risks raised by Anthropic's Mythos and potential similar models, and are taking precautions to defend their systems. The Federal Reserve and Treasury have already held emergency meetings on the topic, and the Bank of Canada held its own meeting with banks and financial companies on Friday to discuss the cybersecurity risks posed by Mythos. This synchronized response suggests genuine concern at senior policymaking levels that AI-driven vulnerability discovery represents a new category of systemic financial risk. The coverage nonetheless reveals substantial fault lines over whether the threat level justifies the regulatory tempo or reflects sophisticated marketing by Anthropic. Some researchers say much of what Mythos can do may already be possible with smaller, cheaper, openly available models, and some analysts have dismissed Anthropic's cautious, limited releases as more about generating hype around new models than about safety. Left-leaning critics focus on procedural concerns: whether Anthropic's controlled access serves public security or private interests. Security experts and developers committed to open-source software argue the world would be safer if Mythos were released so that every defender, not just Anthropic's chosen partners, could use it to find and patch vulnerabilities, with Jonathan Iwry of the Wharton Accountable AI Lab warning against reliance on a handful of private actors not accountable to the public.
Right-leaning critics focus on credibility: whether Anthropic uses threat escalation as a competitive tactic. David Sacks, who heads Trump's technology advisory council, said the cyber threat must be taken seriously but that "it's hard to ignore that Anthropic has a history of scare tactics." The Bank of England's forthcoming meetings thus arrive at a moment when regulatory caution meets legitimate uncertainty about the underlying technical claims; Heidy Khlaaf, chief AI scientist at the AI Now Institute, has warned against "taking these claims at face value" without information such as false positive rates. Observers on both left and right agree that financial cyber risk has intensified, but they diverge on whether the source is Mythos specifically or whether Mythos merely accelerates an already dangerous trend driven by existing open-source or smaller models. The Bank of England's regulatory discussions will likely focus on stress-testing bank defenses and clarifying vendor accountability rather than resolving this underlying empirical dispute about Mythos's technical novelty.

Regional Perspective

In the UK, the discussion will take place within the Cross-Market Operational Resilience Group (CMORG) and the CMORG AI Working Group, which includes representatives from the UK Treasury, the Financial Conduct Authority, and the National Cyber Security Centre. Meanwhile, Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell summoned Wall Street leaders to an urgent meeting at Treasury's headquarters in Washington on Tuesday, driven by concerns that the latest artificial intelligence model from Anthropic PBC will usher in an era of greater cyber risk, to make sure banks are aware of possible future risks and are taking precautions to defend their systems. Government cyber experts at the UK's AI Security Institute (AISI), a taxpayer-backed lab launched by Rishi Sunak's government, have begun stress-testing Mythos to help develop defences. The UK approach thus leverages government testing infrastructure in parallel with regulatory discussions, whereas the US Treasury-Fed response centered on direct bank engagement. The Bank of England and FCA have been expanding their own focus on AI-related operational resilience but have not yet convened an equivalent high-level response, and as frontier AI models grow more capable at identifying financial system vulnerabilities, the gap between US and UK regulatory urgency may become harder to sustain; UK regulators may be playing catch-up to the tempo set by Bessent and Powell. The Bank of Canada also held a meeting with banks and financial companies on Friday to discuss the cybersecurity risks posed by Mythos.
The synchronized regulatory response across US, UK, and Canada reflects a transatlantic consensus that Mythos poses genuine systemic risk to financial infrastructure, though the UK approach emphasizes independent technical testing via AISI rather than relying solely on Anthropic's risk characterization.

Apr 11, 2026
What's Going On

Left says: Some skeptics note that much of what Mythos can do may already be possible with smaller, cheaper, openly available models. Progressive analysts emphasize transparency concerns about private AI companies controlling access to critical security tools.
Right says: Trump administration advisor David Sacks acknowledged the cyber threat as real but argued "it's hard to ignore that Anthropic has a history of scare tactics." Conservative critics question whether regulatory urgency reflects genuine risk or Anthropic marketing strategy.
Region says: UK regulators are joining their counterparts in the US and other countries in warning of the risks posed by Mythos, with the Federal Reserve and Treasury already holding emergency meetings and the Bank of Canada doing likewise. The Bank of England's April 11 announcement signals UK regulatory response is now aligning with North American urgency.
✓ Common Ground
Both left and right observers agree that these meetings reflect growing concerns among regulators that a new type of cyberattack is becoming one of the biggest risks facing the financial industry.
Both sides appear to agree that AI-driven cyber capabilities have reached a dangerous tipping point.
Observers across the spectrum acknowledge that caution in interpreting results is warranted given limited public information, while agreeing that the model's debut, and Anthropic's own caution around it, represents a significant moment.

◈ Tone Comparison

Left-leaning sources like NBC News quote critics demanding more information to verify claims, using language focused on "rigor," "verification," and "transparency." Right-leaning coverage uses more dismissive language, with critics like Alex Stamos sarcastically describing Anthropic's approach as a "marketing schtick" with "adorable cutesy cartoons" for a product "so incredibly dangerous that they won't even let people use them."