Trump Says AI Presents 'Mostly Good Aspects' Amid Safety Concerns

Trump acknowledged AI risks to banking but emphasized potential benefits, endorsing government safeguards amid Anthropic's Mythos model concerns.

Objective Facts

President Trump acknowledged the risks artificial intelligence poses to the banking system in an April 15 Fox Business interview and said there should be government safeguards, while also saying the technology could make the banking system better and safer. When asked if AI could undermine confidence in the banking system, Trump told Fox Business Network "Yeah, probably," but noted "it could also be the kind of technology that allows greatness in the banking system, makes it better and safer and more secure." Trump said government should have safeguards on AI technology, including a potential "kill switch." His comments came after cybersecurity experts warned that Anthropic's new AI model, Mythos, could supercharge complex cyberattacks and pose significant challenges to the banking industry. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called a surprise meeting with the heads of the biggest U.S. banks to address the potential threat of Mythos, and reportedly urged them to use the model to detect vulnerabilities.

Left-Leaning Perspective

Progressive tech policy advocates have consistently criticized Trump's approach to AI regulation as insufficiently protective. Genevieve Smith argues that the White House's AI framework prioritizes innovation over safeguards: algorithmic bias and discrimination, data privacy beyond children, transparency, and environmental impacts are entirely absent from the framework, making the document read primarily as an industry growth strategy with limited guardrails. Consumer rights and tech regulation groups sounded the alarm on Trump's executive order, with Liana Keesing, Issue One's policy lead for technology reform, stating: "After spending millions of dollars on lobbying, Big Tech has successfully leveraged those around the president to pass a federal moratorium that aims to wipe out bipartisan AI safeguards passed in both blue and red states." Dozens of House Democrats, including Reps. Ted Lieu (D-Calif.) and Don Beyer (D-Va.), introduced a bill that would repeal Trump's executive order overriding state AI laws, with Beyer stating: "Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public." The left's concern extends beyond Trump's overall framework to his specific embrace of banking AI without sufficient preconditions: left-leaning observers worry that approving AI in critical financial infrastructure without robust pre-deployment safety testing invites systemic risk, particularly given the Mythos model's demonstrated capacity for identifying zero-day vulnerabilities.

Right-Leaning Perspective

The Trump administration and its supporters frame the president's AI comments as demonstrating prudent leadership that acknowledges legitimate concerns while refusing to let fear drive counterproductive regulation. Sen. Marsha Blackburn, a Tennessee Republican, stated: "By releasing a national framework on AI, the Trump administration gave us a roadmap for crafting legislation, and now it is Congress' turn to pass a bill that will codify the President's agenda, protect Americans, and unleash AI innovation." The right's position is that Trump's dual acknowledgment of risks and benefits reflects sound judgment, recognizing that AI can enhance banking security and efficiency while maintaining specific guardrails. By proposing a federal "kill switch" rather than broad restrictions, Trump addresses safety concerns without imposing the kind of prescriptive regulations that conservatives argue would handicap American competitiveness. JPMorgan Chase CEO Jamie Dimon, in a letter to shareholders, lauded the advantages that AI will bring but also warned of the cybersecurity threats, from deepfakes to misinformation to vulnerabilities, writing: "These risks are real, but they are manageable if companies, regulators and governments prepare. The worst mistakes we can make are predictable: overreact at the first serious incident and regulate out important innovation or underreact and fail to learn from what went wrong." The right argues this represents the appropriate calibration of policy.

Deep Dive

Trump's April 15 comments on AI and banking reveal the core tension in his administration's approach to artificial intelligence: a genuine desire to position the U.S. as a global AI leader competing against China, balanced against limited appetite for prescriptive regulation. The timing is significant, coming days after Anthropic released its Mythos model to a restricted group and as Treasury and Fed officials convened banks to discuss cybersecurity implications. Trump's willingness to say "there should be" government safeguards on AI suggests some openness to guardrails, a modest shift from his broader deregulatory posture.

The core disagreement between left and right hinges on the timing and scope of intervention. The left argues that AI systems in banking require exhaustive pre-deployment safety testing, transparent model documentation, and clear liability frameworks before deployment. They point to Mythos's reported ability to identify thousands of zero-day vulnerabilities in operating systems as proof that frontier AI systems carry unknowable risks. The right contends that perfect safety is impossible to achieve and that insisting on it amounts to a de facto moratorium. Trump's framework essentially trusts companies to deploy AI while preserving the government's ability to intervene through mechanisms like a kill switch.

What's underreported is the internal contradiction in Trump's approach: his administration labeled Anthropic a national security supply-chain risk and directed the Pentagon to stop using its technology, yet simultaneously told banks to adopt that same technology for cybersecurity purposes. A federal judge in California issued a preliminary injunction blocking the supply-chain designation, noting that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary," while an appeals court in Washington, D.C., denied Anthropic's request to temporarily halt the blacklisting. The net effect: Anthropic is excluded from DoD contracts but can continue working with other government agencies, and it is into that gap, excluded from the Pentagon but not from the Treasury or the Fed, that Bessent and Powell stepped. This schism reflects not policy coherence but institutional competition (the Pentagon vs. Treasury and the Fed) and suggests the administration has not fully resolved which companies and which AI capabilities it trusts.


Apr 16, 2026
What's Going On


Left says: Progressive advocates argue Trump's AI policies prioritize industry growth over public safety and strip away state protections, while Democratic lawmakers continue pushing for stronger guardrails and state regulatory authority.
Right says: Trump's balanced statement on AI in banking reflects the administration's core approach: enabling innovation while maintaining targeted safeguards, with federal rules replacing fragmented state regulations.
✓ Common Ground
Several voices across the spectrum acknowledge that Anthropic's Mythos model poses genuine cybersecurity risks worthy of serious government attention, as evidenced by Treasury and Fed warnings to banks.
There is broad agreement that AI in critical infrastructure like banking requires some form of oversight mechanism, though sharp disagreement remains on how prescriptive that oversight should be.
Both sides recognize that an entirely unregulated AI environment in financial services could create systemic risks, though they differ on whether existing state laws or new federal frameworks provide better protection.
Experts and policymakers across the aisle acknowledge that the banking sector's legacy technology infrastructure creates particular vulnerability to AI-enabled attacks, necessitating deliberate security planning.

◈ Tone Comparison

The left employs alarm-laden language about "demolishing" protections and "vacuums of accountability," while the right uses optimization language like "balance," "pragmatic," and "unleash innovation." The left treats Trump's banking AI endorsement as reckless; the right treats it as a measured acknowledgment of both promise and risk.