Musk's Grok AI chatbot still generating sexual deepfakes despite promises
Grok continues to generate sexualized images of people without consent, despite Musk's pledge months ago to halt abusive deepfakes.
Objective Facts
Elon Musk's artificial intelligence software, Grok, continues to generate sexualized images of people without their consent, despite his company's pledge months ago to halt abusive deepfakes after a public backlash and government investigations. NBC News found dozens of AI-generated sexual images and videos depicting real people posted publicly to Musk's social media app, X, over the past month; many showed women whose likenesses the chatbot had edited into more revealing clothing, such as towels, sports bras, skintight Spider-Woman outfits or bunny costumes. The Grok software, created by Musk's company xAI, made the images at the request of users who worked around the undressing restrictions the service put in place in January. xAI said Monday that it wanted to review NBC News' findings, but a representative did not respond to follow-up questions. By Tuesday, most of the images were no longer on X, replaced with messages saying the post 'is unavailable' or 'violated the X Rules'; X and Musk did not respond to a separate request for comment. International regulators remain active, with investigations and regulatory actions against xAI continuing in the UK, Ireland, France, and the EU.
Left-Leaning Perspective
Senators Ron Wyden (D-OR), Ed Markey (D-MA) and Ben Ray Luján (D-NM) urged Apple and Google to enforce app store terms of service against Grok. California Governor Gavin Newsom described xAI's decision to create what he called "a breeding ground for predators" as "vile" and called on Attorney General Rob Bonta to investigate. House Energy and Commerce Committee Democrats, including Reps. Frank Pallone of New Jersey, Jan Schakowsky of Illinois, and Yvette Clarke of New York, sent a letter asking Musk when he became aware that people were prompting Grok to generate explicit, nonconsensual images of women or children. Democratic-aligned commentators emphasized the gendered harm and the structural failure of safeguards. The Center for Countering Digital Hate estimated that Grok generated roughly 3 million sexualized images within an 11-day period, averaging about 190 images per minute, and argued that technology producing harmful content at this pace is not simply a passive tool but an amplifier, with developers sharing responsibility for the consequences. Critics argued that inadequate or slow responses from platforms and governments have a silencing effect, with AI-enabled abuse causing psychological distress and reputational damage, potentially deepening the digital gender gap and limiting women's economic and political participation. Progressive outlets and commentators downplayed counterarguments about relative harm and comparative risk from other AI tools, focusing instead on Grok's disproportionate scale and Musk's earlier promotion of "spicy mode" rather than on whether other AI systems posed equivalent risks or whether regulatory approaches might have unintended consequences.
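As a rough check of that rate, a back-of-the-envelope calculation (assuming the cited 3 million total and a full 11-day window) works out as follows:

```latex
\frac{3{,}000{,}000\ \text{images}}{11\ \text{days} \times 24\ \text{h/day} \times 60\ \text{min/h}}
  = \frac{3{,}000{,}000}{15{,}840\ \text{min}} \approx 189\ \text{images per minute}
```

That is consistent with the roughly 190-per-minute figure the group reported.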
Right-Leaning Perspective
Rep. Anna Paulina Luna (R-Fla.) characterized the regulatory response as 'a political war against @elonmusk and free speech—nothing more'. Musk argued that other AI chatbots and digital tools can edit images in the same way as Grok, claiming the U.K. singled out his platform. Sen. Ted Cruz (R-Texas), while acknowledging that many Grok-created posts were in 'clear violation' of the federal nonconsensual deepfake ban passed last year, said he was 'encouraged' that X had announced it was taking the violations seriously. Right-leaning outlets and commentators framed the controversy as regulatory overreach targeting Musk specifically rather than as a genuine child-safety issue. The Spectator Australia suggested that UK Labour's response was "so wildly out of proportion to the offense that one couldn't help but notice that Starmer and his sanctimonious cohort were grasping at bikinis as a small pretext to exterminate X entirely," questioning whether "the real crime in the eyes of the British government wasn't the deepfakes but the existence of X itself". The outlet noted that it was "fairly easy to make sexualized images of real people with other AI chatbots or photo-editors until they too added safeguards," yet "no bans have been issued, nor investigations launched into these other companies". Right-leaning coverage minimized the scale of harm by focusing on Musk's moves to restrict access (putting the feature behind a paywall) and on his own consent-based bikini image posts, treating the story primarily as a political attack rather than a harm-mitigation issue.
Deep Dive
The April 15, 2026, NBC News finding that Grok still generates sexual deepfakes despite the January 2026 restrictions represents the intersection of three dynamics: the technical inadequacy of safeguards, regulatory escalation across jurisdictions, and a fundamental disagreement over whether the problem is architectural (the tool itself) or behavioral (user exploitation). While the volume of sexualized deepfakes posted to X has dropped significantly since January, when Grok was generating thousands per hour, the tool continues to produce nonconsensual imagery, turning down or ignoring many requests but still complying with workarounds. The controversy exposes structural weaknesses in moderation systems originally built for user-uploaded files rather than AI outputs: because Grok can generate new images on demand, traditional takedown workflows struggle to keep pace, and even if a specific image is deleted, a similar version can be reproduced seconds later with minor prompt changes. Neither side's analysis adequately addresses the core tension: regulation cannot easily be designed around consent violations when generation speed and distribution scale exceed human review capacity. Left-leaning critics demand stronger safeguards and enforcement but offer limited acknowledgment that Musk's legal position (that platforms are not liable for user-posted content under Section 230) differs meaningfully from responsibility for AI model outputs, which may not enjoy the same protection. Right-leaning defenders emphasize comparative fairness (other AI tools also generate nonconsensual images) and free-speech principles but do not directly address why a tool marketed as having "fewer safeguards" became disproportionately weaponized. Experts note that it is difficult to audit everything Grok produces, especially when people use the software privately through Grok's app, the Grok website or the private Grok tab on X, which leaves regulatory audits and public monitoring incomplete. The key unresolved question is whether Grok can achieve genuine consent protection within a free-expression framework, or whether its design incentives (marketing as "spicy," fewer guardrails) are fundamentally misaligned with harm prevention.
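A minimal sketch of that mismatch, assuming an exact-hash blocklist of the kind built for re-uploaded files (the byte strings and blocklist below are hypothetical stand-ins, not X's or xAI's actual moderation pipeline): an image regenerated from a slightly altered prompt has different bytes, hashes to a new value, and slips past the list even though the depicted person is the same.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Exact content hash, as used in simple re-upload blocklists."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for image bytes: the "same" depiction regenerated
# with a minor prompt change yields different bytes, hence a different hash.
original_output = b"\x89PNG...rendering-of-person-v1"
regenerated_output = b"\x89PNG...rendering-of-person-v2"

blocklist = {sha256_hex(original_output)}  # hash added after a takedown

def is_blocked(data: bytes) -> bool:
    """Return True only if this exact file was previously taken down."""
    return sha256_hex(data) in blocklist

print(is_blocked(original_output))     # True: the taken-down file stays blocked
print(is_blocked(regenerated_output))  # False: the regenerated variant is not caught
```

Perceptual hashing or filtering at the model level can narrow this gap, but so long as the generator can produce novel variants on demand, purely reactive takedown lists will lag behind.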
Regional Perspective
The UK's Ofcom has launched a formal investigation into X, calling reports about Grok 'deeply concerning' and warning that deepfake creation could amount to 'intimate image abuse'; UK Prime Minister Keir Starmer said the images were 'disgusting' and 'unlawful,' putting pressure on the regulator to act. The European Commission opened an investigation into Grok over sexually explicit fake images of women and minors, examining X's compliance with the Digital Services Act, which requires social media companies to address illegal and harmful online content. In response, Musk accused the UK government of being 'fascist' and trying to curb free speech. Malaysia and Indonesia became the first countries to block Grok after authorities said it was being misused; Indonesia's Communication and Digital Affairs Minister said the government views non-consensual sexual deepfakes as a serious violation of human rights and safety, with the block intended to protect women, children and the broader community. Canada's Artificial Intelligence Minister Evan Solomon described deepfake sexual abuse as a form of violence and stressed the need to protect Canadians, especially women and young people, from exploitation, advancing legislation (Bill C-16) to make sharing deepfake intimate images without consent a criminal offense. Regional coverage emphasizes direct harm to local communities and legal-compliance frameworks (the DSA in the EU, the Online Safety Act in the UK, bans in Malaysia and Indonesia) rather than free-speech abstractions. Where right-leaning Western outlets frame enforcement as political, Southeast Asian regulators and European officials frame it as child protection and dignity preservation. This reflects divergent cultural and legal frameworks: Europe and Southeast Asia prioritize protecting vulnerable people through precautionary regulation, while Musk's defense relies on the American free-speech tradition. The regional disagreement is less about facts than about whether platform responsibility supersedes user responsibility in jurisdictions with different legal cultures.