Vermont Approves AI Election Campaign Material Bill

Vermont's bill on the use of AI in election campaign material passed into law earlier this month.

Objective Facts

S.23, sponsored by Senator Ruth Hardy and others, was approved by both the House and Senate via a conference committee report on February 18-19, 2026, and was signed by Governor Phil Scott on March 5, 2026. The new law requires any campaign media featuring AI-generated images, audio, or video used within 90 days of an election to carry a clear disclosure, regulates the creation of materially false AI-generated information aimed at voters, and mandates visible, easy-to-read disclosures for deepfakes of candidates. The Attorney General's office is authorized to enforce the provisions under Vermont's Consumer Protection Law. A legal challenge appears likely: the law relies on compelled disclaimers and a "reasonable person" standard for deception, the same mechanisms a federal court found unconstitutional in California's nearly identical law.

Left-Leaning Perspective

The Vermont Public Interest Research Group (VPIRG) applauded the enactment of S.23, framing disclosure as compatible with free speech and democratic values, arguing that "an informed electorate is the foundation of free and fair elections" and that "disclosure is not censorship; it is a safeguard for truth." Ilana Beller of Public Citizen testified before lawmakers that deepfakes such as an AI-manipulated likeness of Vice President Kamala Harris demonstrated how easily synthetic media could be weaponized, noting that 21 states had already passed similar legislation. Quinn Houston of VPIRG called the disclosure model a necessary safeguard, comparing it to Vermont's 2024 law criminalizing non-consensual sexually explicit deepfakes. Progressive supporters emphasized the urgency of protecting election integrity amid rapid advances in AI and pointed to real-world examples of misuse, treating disclosure as fundamentally different from censorship. Their coverage largely omitted research suggesting disclaimers may be ineffective or may backfire, and downplayed concerns about the law's First Amendment vulnerability.

Right-Leaning Perspective

The Foundation for Individual Rights and Expression (FIRE) submitted testimony in opposition, arguing that S.23 imposes a content-based restriction on political expression, that First Amendment doctrine does not reset with each technological advance, and that such restrictions must survive strict scrutiny, meaning the law must be narrowly tailored to serve a compelling government interest. FIRE cited the federal court ruling in Kohls v. Bonta (the California case), which found that "counter speech is a less restrictive alternative to prohibiting videos" and that while lawmakers may harbor "a well-founded fear of a digitally manipulated media landscape," this "does not give legislators unbridled license to bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment." FIRE's written testimony contended that "the evidence does not presently demonstrate that 'deepfakes' have created an actual problem that would justify such heavy-handed regulation." Conservative free speech advocates treated the bill as a dangerous precedent, emphasizing the absence of documented harm and the availability of existing legal remedies. Their coverage omitted progressive arguments about the scale and speed of AI development and downplayed how difficult it is for voters to detect sophisticated deepfakes.

Deep Dive

Vermont lawmakers moved to regulate AI in political ads ahead of the 2026 campaign season, with the Senate giving final approval to S.23 in February 2026. The law does not limit how AI-generated video or audio may be used in political ads; instead, it requires candidates and political groups to disclose when such technology is used with intent to deceive voters. According to the National Conference of State Legislatures, 26 states have enacted laws on AI in campaign ads: two (Minnesota and Texas) ban its use before elections, while most follow the disclosure approach Vermont has now adopted.

The bill arrives amid legal uncertainty. A federal court has already struck down California's nearly identical law as unconstitutional, and the mechanisms that failed there, compelled disclaimers and a "reasonable person" standard, create the same constitutional vulnerability for Vermont. Beyond the legal questions, academic research casts doubt on whether AI disclaimers accomplish what lawmakers intend: experiments conducted by the NYU Center on Technology Policy in 2024 and 2025 produced results "not encouraging for the disclosure model." At the federal level, a White House executive order directs federal agencies to challenge state laws that "obstruct innovation" and aims to preempt "onerous" disclosures, with Vermont's specific font-size and duration requirements a potential target.

The central tension: progressive advocates emphasize election integrity and voter protection; conservative free speech advocates contend the regulation exceeds constitutional bounds and lacks evidence of significant harm; and researchers suggest the chosen policy tool, disclaimers, may prove ineffective even if it survives in court.

May 1, 2026

Left says: Progressive groups framed disclosure as protecting democracy, arguing that "an informed electorate is the foundation of free and fair elections" and that disclosure "is not censorship; it is a safeguard for truth."
Right says: Conservative free speech advocates argued that S.23 institutes an unconstitutional content-based restriction on political expression.
✓ Common Ground
Voices across the spectrum acknowledged that most states adopting AI deepfake laws choose disclosure over outright bans, and that the Vermont bill includes an exemption for content considered satire or parody.
Representatives from across the political spectrum recognized S.23 as Vermont's first successfully enacted AI-related legislation, and Rep. Matt Birong noted the bill was written with a narrow focus, including a satire/parody exemption that multiple states adopted after California's parody ban was overturned.
Both supporters and critics acknowledged that the bill gives the attorney general's office authority to enforce provisions under Vermont's Consumer Protection Law.
Most stakeholders, including supporters such as Ilana Beller of Public Citizen and the bill's sponsor, Senator Ruth Hardy, agreed that disclosure requirements are preferable to an outright ban "in an effort to limit the laws' potential impacts on content creators' freedom of speech."

◈ Tone Comparison

Progressive framing characterized disclosure as "a safeguard for truth," using protective language around democracy. Conservative legal advocates invoked cautionary language, warning that regulation risks allowing government to "bulldoze over the longstanding tradition of critique, parody, and satire protected by the First Amendment."