State Lawmakers Pass Deepfake Protection Laws

Minnesota Senate unanimously passed nation's first ban on "nudification" apps that create non-consensual deepfake nude images.

Objective Facts

Minnesota's House Bill 1606, titled "Nudification technology access prohibited," was authored by Rep. Jess Hanson and passed the Minnesota House by a 132-1 vote. The Minnesota Senate then passed it 65-0, making it the first bill in the country to ban websites or apps that promote digital undressing, in which photographs of fully clothed people are uploaded and manipulated with generative AI to appear nude. The bill would allow survivors to sue the owners of nudification apps for damages and would empower the state attorney general to collect fines of $500,000 per violation.

Rep. Drew Roach, a Republican from Farmington, cast the only House dissent, arguing the bill does not address the root cause because someone with technical skill could still create the material without a covered tool: "What we're going to do here is we're going to attack a software, a manufacturer and instead, shifting our focus on that instead of the perpetrators of these crimes. If we want to prevent this from happening in the future, we should go after those perpetrators with the full force of the law."

Meanwhile, the number of states with laws regulating political deepfakes increased from 28 to 31 this year, with Maine, Tennessee, and Vermont enacting new laws; Maryland's SB0141 passed the state Senate with unanimous bipartisan support. Left-leaning outlets emphasize victim protection and sexual violence prevention, while conservative free speech advocates raise concerns about vague definitions and the First Amendment implications of political deepfake regulations.

Left-Leaning Perspective

Rep. Jessica Hanson, the House sponsor of Minnesota's nudification ban, argued, "We need to ban nudification features because they allow users to create non-consensual, unauthorized deep fakes of sexually explicit content, including child sexual abuse material." Sen. Richard J. Durbin, sponsor of the federal DEFIANCE Act, said the legislation would give victims of explicit deepfakes "the tools to fight back against those who would exploit them." The Cyber Civil Rights Initiative, RAINN, SAG-AFTRA, the National Organization for Women, Public Citizen, and the Joyful Heart Foundation endorsed federal protections, with Dr. Mary Anne Franks of the Cyber Civil Rights Initiative stating that the nonconsensual distribution of deepfakes "inflicts severe and often irreparable psychological, financial, and reputational injury on victims."

RAINN, the national nonprofit that runs the National Sexual Assault Hotline, has been a driving force behind Minnesota's bill because tech-facilitated abuse is on the rise; senior legislative policy counsel Sandi Johnson noted increased numbers of children calling about digital violence over the past five years. Maryland's testimony supporting deepfake legislation emphasized that the bill "is narrowly tailored to ensure the protection of First Amendment rights. It does not criminalize falsehoods or misleading statements about a candidate's record, issue positions, or political ideology, as these are protected by the First Amendment. Instead, the bill specifically targets false information about the election process itself." Rep. Alexandria Ocasio-Cortez has consistently brought attention to deepfake abuse and shared her experiences as a target, and observers note that women lawmakers are championing the issue because they are especially likely to be targeted by this kind of digital sexual violence.

Left-leaning coverage emphasizes victim empowerment and the scale of the abuse problem but downplays First Amendment concerns, treating narrow-tailoring language as dispositive. Progressive outlets largely focus on sexual violence prevention rather than addressing free speech objections from civil liberties groups to disclosure requirements or political deepfake regulations.

Right-Leaning Perspective

Conservative commentator Christopher Kohls and state Rep. Mary Franson sued in 2024 to strike down Minnesota's political deepfake law as unconstitutional; Kohls had created a 2024 video of Democratic presidential candidate Kamala Harris that falsely presented her as saying, "I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate." According to the Institute for Free Speech, "Laws aimed at regulating or prohibiting deepfakes in the context of political speech are likely to run afoul of the First Amendment. The Supreme Court has consistently held that government may not punish speech simply because it is false," and "punishing false or misleading political speech will inevitably suppress political speech generally and do more harm than good."

John Coleman of FIRE testified that the bill "raise[s] serious First Amendment concerns because it broadly criminalizes the sharing of altered media, even though false or misleading speech is constitutionally protected. Outside of narrow categories like fraud and defamation, the First Amendment protects false or misleading speech." In California, Kohls likewise sued in federal court and won: the ruling halted AB 2839, largely on the grounds that the law was overly broad and discriminated based on content. Rep. Drew Roach's lone House dissent on Minnesota's nudification ban argued the bill "does not address the root cause": "What we're going to do here is we're going to attack a software, a manufacturer and instead, shifting our focus on that instead of the perpetrators of these crimes. If we want to prevent this from happening in the future, we should go after those perpetrators with the full force of the law."

Right-leaning free speech advocates distinguish between intimate deepfakes, where they show less concern, and political deepfake regulations, where they emphasize First Amendment risks. They argue that vague standards like "reasonable person" invite discriminatory enforcement and that courts have already struck down similar laws. However, conservative outlets provide less coverage of the sexual assault protection rationale, focusing instead on constitutional risks.

Deep Dive

Deepfake protection legislation in 2026 reflects a fundamental tension: states are racing to protect citizens from AI-generated sexual abuse while First Amendment doctrine constrains how broadly they can regulate synthetic media. As of April 2026, state lawmakers have enacted 15 bills addressing deepfakes this year, and since 2019, 47 states have adopted laws dealing with deepfakes, a total unchanged since 2025. The urgency stems partly from incidents like the one in December 2025, when X's Grok chatbot created over 1.8 million sexualized images of women in nine days.

The left's position rests on victim empowerment and the unprecedented scale of AI-enabled abuse. Intimate deepfakes cause serious emotional harm and trauma to victims, the vast majority of whom are women and children, and advocates argue the nonconsensual distribution of digitally manipulated intimate images inflicts severe and often irreparable psychological, financial, and reputational injury. They point to deepfake legislation's bipartisan support in every state where it has passed as evidence their approach is constitutionally sound. However, their framing often treats narrow tailoring and carve-outs as self-evidently protective without fully addressing how courts have evaluated similar language.

The right's position emphasizes that even well-intentioned laws with vague standards risk criminalizing protected speech. They argue laws aimed at regulating political deepfakes are likely to run afoul of the First Amendment because the Supreme Court has consistently held that government may not punish speech simply because it is false. Courts have already found political deepfake laws constitutionally flawed, usually concluding they are overly broad in restricting speech or unnecessary in addressing the problem. Conservatives note that even on the least controversial category, nudification apps, disagreement exists about whether the approach targets the real problem, with some arguing perpetrator accountability matters more than access restrictions. They also warn that the Trump administration's advocacy for federal preemption could void state legislation.

What comes next: The Supreme Court has not yet directly addressed whether AI-generated deepfakes receive First Amendment protection distinct from traditional synthetic media. Federal attempts to create a civil right of action for survivors have stalled; the DEFIANCE Act has passed the Senate twice but has yet to reach the House floor. Minnesota's nudification ban will likely face litigation similar to the challenges already brought against California's law and Minnesota's political deepfake law. The outcome will determine whether states can regulate AI-enabled abuse or must defer to federal preemption under the Trump administration's approach.



Apr 30, 2026 · Updated May 1, 2026

Left says: Democratic supporters, including sponsor Rep. Jessica Hanson, emphasize that nudification technology "has empowered and enabled pedophiles and sexual predators around the globe. It has harmed children who are made victims by their cruel peers, women who are made victims by men they have trusted for decades." Left-leaning advocates prioritize victim protection and call the Minnesota ban an urgent response to digital sexual violence.
Right says: Conservative and free speech advocates, while not defending the technology itself, warn that even narrowly tailored deepfake laws risk criminalizing protected speech through vague "reasonable person" standards and could chill legitimate political parody and satire.
✓ Common Ground
Both parties recognize that deepfake legislation addressing election integrity and non-consensual intimate imagery has received bipartisan support in every state where it has passed.
There is broad agreement that intimate deepfakes cause serious harm to victims, who experience emotional trauma and reputational damage, and that the vast majority of victims are women and children.
Even Rep. Drew Roach, the sole House dissenter on Minnesota's nudification ban, stated that material disseminated using the technology is "disgusting" and "vile" and that victims deserve accountability and justice, showing agreement on the seriousness of the harm despite disagreement on the legislative approach.
Some voices across the spectrum recognize that narrowly tailored regulations targeting non-consensual intimate imagery can be distinguished from broader political deepfake restrictions, with commentators noting that "Deepfakes that violate privacy, defraud individuals/groups, or damage reputations, fall well within these established exceptions."
The federal TAKE IT DOWN Act, introduced by Senator Ted Cruz, passed both houses by near-unanimous votes and was signed into law by President Donald Trump on May 19, 2025, demonstrating rare bipartisan consensus on criminal penalties for nonconsensual intimate deepfakes.

◈ Tone Comparison

Left-leaning outlets emphasize moral urgency, with sponsors describing how nudification technology "has empowered and enabled pedophiles and sexual predators around the globe. It has harmed children who are made victims by their cruel peers". Conservative outlets focus on constitutional risk, emphasizing that "the standards and definitions for what constitutes a 'deepfake' or 'manipulated media' are inherently subjective and vague" and that "a 'reasonable person' standard is impossible to determine in any objective manner".