Federal agencies report 3,611 AI use cases in 2025

Apr 17, 2026

Federal agencies reported 3,611 AI use cases in 2025, more than doubling from the prior year amid concerns over oversight gaps and workforce implications.

Objective Facts

The Office of Management and Budget released the completed 2025 Federal Agency Artificial Intelligence Use Case Inventory, documenting 3,611 individual use cases across 56 submitting agencies, a 105% increase over the 1,757 use cases reported in 2024. While many of the 2025 use cases are designed to streamline operations and back-office processes, others support mission-critical functions such as benefits delivery, health and medical services, and law enforcement: 52% of Social Security Administration cases support service delivery and government benefits processing, 36% of Department of Homeland Security and 54% of Department of Justice cases support law enforcement, and 20% of Health and Human Services and 45% of Veterans Affairs cases support health and medical services. A critical gap exists: more than 85% of high-impact deployed AI use cases in 2025 lacked required information about risk mitigation, despite OMB requirements intended to ensure safety and oversight.

Since returning to office in January 2025, the Trump administration has renewed efforts to embed AI across the executive branch, launching the Department of Government Efficiency (DOGE) in February 2025 to tackle waste, fraud, and abuse, and releasing America's AI Action Plan in July 2025, which encourages federal agencies to accelerate AI adoption and emphasizes partnering with the private sector.

Left-Leaning Perspective

Jay Stanley, senior policy analyst with the American Civil Liberties Union's Speech, Privacy, and Technology Project, told FedScoop that the DOJ's 2025 inventory provides a 'snapshot' of how the federal government 'is aggressively seeking to test and exploit a wide variety of AI algorithms and sifting through data on ordinary people.' Skye Perryman, president and CEO of Democracy Forward, stated in a lawsuit complaint: 'The public has a right to know the extent to which the administration has used unreliable and unproven AI tools to expand its agenda of undermining regulations that protect people.'

Left-leaning coverage emphasizes two core concerns: first, the lack of transparency and accountability mechanisms surrounding rapid federal AI deployment, particularly in high-impact domains like immigration enforcement and benefits administration; and second, a perverse incentive structure in which DOGE's use of AI to cut the federal workforce and regulations creates a conflict of interest. Organizations like the Federation of American Scientists point to cases such as the Department of Veterans Affairs' REACH VET program, which uses predictive models to identify veterans at elevated suicide risk. Because the program draws on health records and includes explicit race coding, experts worry about opaque modeling choices and the possibility of inequitable or incorrect flags, warning that 'if veterans feel that an algorithm is driving interventions without clear transparency, clinical guardrails, and accountability or if it misses potential intervention needs, trust can erode.'

Progressive outlets and advocates also highlight the use of predictive AI to analyze inmate behavior as among the most controversial cases, with experts expressing concern about the biases and real-life consequences of such technology. Progressive coverage largely omits the Trump administration's argument that AI deployment is necessary for governmental efficiency, as well as the claim that rapid adoption reflects market confidence. Left-leaning outlets focus almost exclusively on governance gaps and risks rather than benefits, and do not engage substantively with the administration's deregulatory rationale or its claims of productivity gains.

Right-Leaning Perspective

On July 23, 2025, the White House released 'Winning the Race: America's AI Action Plan' pursuant to President Trump's executive order, with the administration seeking to accelerate AI innovation by removing regulatory roadblocks and 'ideological biases' and to speed up the building of U.S. AI infrastructure. President Trump emphasized in his January 23, 2025 remarks the need for a 'common sense federal standard that supersedes all states,' warning that 'if you are operating under 50 different sets of state laws, the most restrictive state of all will be the one that rules,' and calling for a federal standard 'so you don't end up in litigation with 43 states at one time.'

Right-leaning commentators frame the rapid expansion of federal AI use cases as evidence of the Trump administration's successful deregulatory agenda and its focus on American technological competitiveness against global rivals. Federal News Network commentary argues that 'AI is the mission-critical tool that can drive new levels of efficiency, enabling agencies to do more with fewer workers yet still deliver for Americans,' and that 'with strategic and thoughtful implementation, AI can help fill the gaps left by severe and sudden workforce reduction and ease the burden on our country's federal workforce.' DOGE stated on X that 'President Trump was given a mandate by the American people to modernize the federal government and reduce waste, fraud and abuse,' and announced that the U.S. DOGE Service is working on a project to use AI to process over 600,000 pieces of federal correspondence each month.

The right-leaning narrative positions the doubling of federal AI use cases as vindicating the administration's emphasis on innovation and efficiency rather than as cause for concern. Right-leaning coverage largely does not address the Brookings finding that more than 85% of high-impact AI use cases lack required risk-mitigation documentation, nor does it substantially engage with concerns about algorithmic bias in law enforcement or immigration contexts.

Left says: While the Trump administration's AI Action Plan promotes AI as a tool for national competitiveness and administrative modernization, it does so with limited emphasis on transparency or public accountability. The Department of Justice's 31% year-over-year surge in AI use cases includes work with predictive models and surveillance technologies that sparked concern from privacy and technology safety advocates.
Right says: The Trump administration seeks to accelerate AI innovation by removing regulatory roadblocks and 'ideological biases' to speed up the building of U.S. AI infrastructure. Its National Policy Framework lays out legislative proposals to encourage innovation in artificial intelligence, promote American AI dominance, and preempt certain state laws.
✓ Common Ground
Multiple voices across the political spectrum acknowledge that three consecutive administrations—spanning both parties—have made adoption of artificial intelligence across the U.S. federal government a priority, and the Trump administration's AI Action Plan highlighted AI's potential to 'help deliver the highly responsive government the American people expect and deserve.'
There is apparent consensus that the federal government needs to strengthen transparency practices around high-impact systems and that agencies require dedicated time and resources for experimentation and training, along with efforts to strengthen workforce pipelines.
Both oversight-focused observers and the Trump administration acknowledge that workforce capacity is a recurring constraint on federal AI adoption, with fewer than 3% of federal technical job postings explicitly mentioning AI capabilities despite a Biden-era hiring push and amid the workforce reductions of early 2025.
Objective Deep Dive

Federal AI adoption has expanded rapidly in two years, from 571 reported cases in 2023 to 3,611 in 2025, driven by three administrations that prioritized the technology while diverging sharply on how to govern it. The Trump administration's explicit linkage of AI to workforce reduction through DOGE creates conflicting incentives: agencies see AI as a tool to do more with fewer staff, yet that same framing makes federal workers hesitant to adopt systems that could replace them. The data also reveals a critical accountability gap: more than 85% of high-impact deployed AI systems lack required risk-mitigation documentation, suggesting that the 3,611 reported use cases represent rapid experimentation without parallel maturation in governance.

Left-leaning critics emphasize that federal AI deployment in high-stakes domains such as predictive policing, benefits determination, and immigration enforcement mirrors private-sector failures documented in lending, hiring, and housing. They note that the Trump administration used AI through DOGE to surveil employee communications for ideological conformity and to consolidate sensitive data across agencies. Right-leaning proponents counter that the 105% increase reflects necessary modernization to compete globally and that administrative streamlining produces measurable savings and efficiency gains. The administration's preemption of state AI regulation via executive order, framed as removing 'burdensome' standards, represents a form of centralized control that critics argue contradicts the deregulatory narrative.

What remains unresolved is whether the federal government can deploy high-risk AI systems (immigration screening, recidivism prediction, benefits eligibility) with the transparency and human oversight required to maintain public trust. Brookings data shows only 44% of Americans trust their government to regulate AI effectively, compared with 89% in India and 74% in Indonesia. Federal agencies report workforce shortages (fewer than 3% of technical job postings mention AI) even as they expand AI use, raising questions about sustainability. The inventory excludes classified systems and DOGE-developed tools, meaning the actual scope of federal AI deployment remains unknown. Future flashpoints are likely to emerge around algorithmic bias in law enforcement (DOJ's 315 AI cases include predictive tools), whether transparency mandates are enforced, and whether state-level privacy protections survive federal preemption efforts.

◈ Tone Comparison

Left-leaning coverage uses language emphasizing risk, opacity, and surveillance—"unreliable," "unproven," "aggressively seeking to exploit," and "surveilling." Right-leaning coverage emphasizes efficiency, competitiveness, and modernization—"tremendous benefits," "trillions of dollars of investments," "responsible," and "delivering for Americans." The left frames AI deployment as a governance problem requiring restraint; the right frames it as an opportunity requiring acceleration.