Taylor Swift files to trademark voice and likeness amid AI deepfakes
Taylor Swift has filed trademark applications for her voice and image to gain legal protection against AI-generated deepfakes, following widespread unauthorized digital replication of her likeness.
Objective Facts
Pop superstar Taylor Swift filed trademark applications for two audio clips and one image of herself in an effort to protect her voice and likeness from deepfake video and audio created by artificial intelligence. The applications were filed with the U.S. Patent and Trademark Office on Friday and list Swift's TAS Rights Management as the owner of the audio clips and image. Swift's image and voice have appeared in countless AI-generated deepfakes – from false advertising to fake political endorsements to explicit images. While existing right-of-publicity laws offer some protection against unauthorized use of a famous individual's likeness, trademark registrations can provide an additional layer of protection, according to trademark attorney Josh Gerben. The move comes months after actor Matthew McConaughey filed similar trademark applications to protect his likeness from unauthorized replication.
Left-Leaning Perspective
Coverage from outlets aligned with creative-industry interests, such as NBC News' reporting on the Human Artistry Campaign, emphasizes that Swift's trademark filing reveals a deeper structural problem: the inadequacy of existing intellectual property law in the face of AI threats. NBC News reporter Angela Yang contextualized Swift's move within a broader coalition of over 700 creatives, including Scarlett Johansson and Cate Blanchett, who launched the "Stealing Isn't Innovation" campaign. Rolling Stone noted that the NO FAKES Act, designed to protect people's voices and visual likenesses from exploitation by AI, was introduced in Congress a couple of years ago but remains under committee consideration, framing the trademark filing as a workaround for legislative gridlock. This framing suggests that celebrity self-help measures underscore the need for comprehensive federal protections rather than reliance on state-level right-of-publicity laws. Left-leaning coverage emphasizes that the Human Artistry Campaign is pushing for licensing requirements and opt-out protections so creators can control how their work is used, and frames Swift's filing alongside artist-advocacy efforts calling on tech companies to stop training AI on copyrighted works without permission. The broader narrative positions individual trademark filings as a symptom of market failure – a sign that private IP strategies are inadequate when the underlying problem is unregulated corporate AI development. What left-leaning coverage downplays is the extent to which Swift's wealth and legal resources enable her to pursue such protective strategies in ways unavailable to less famous or less wealthy artists; there is minimal focus on the inequality inherent in a trademark strategy that primarily benefits the most recognizable celebrities.
Right-Leaning Perspective
Right-leaning and business-focused coverage, particularly from sources citing trademark attorney Josh Gerben directly, celebrates Swift's legal innovation as an example of how market actors can creatively use existing frameworks to protect their interests. The Wall Street Journal quoted actor Matthew McConaughey saying he wanted to create "a clear perimeter around ownership with consent and attribution the norm in an AI world," framing the issue as one of consent and property rights rather than corporate malfeasance. Gerben's own analysis, widely quoted across outlets, emphasizes the sophistication of using trademark law's "confusingly similar" standard, which does not require exact copies – a technical argument about legal strategy rather than regulation. Business-oriented outlets focus on the strategic efficacy of the trademark approach. Variety and Billboard provide detailed legal analysis of how trademark law, unlike copyright law, doesn't just stop identical uses: it stops anything confusingly similar to the registered mark – a much broader right and a more powerful tool in an AI world. This framing emphasizes Swift's proactive, individual agency rather than any failure of regulation. Coverage notes that Disney sent a cease-and-desist letter to Google in December 2025, alleging the tech giant's Gemini AI platform was being used to illegally generate copies of its trademarked characters, suggesting that trademark enforcement already works in practice. Right-leaning coverage largely omits discussion of the human cost of deepfakes or their role in political disinformation (such as the AI-generated images circulated in 2024 falsely suggesting Swift endorsed Trump), focusing instead on property law and individual contractual rights as the solution.
Deep Dive
Taylor Swift's trademark filing on April 24, 2026, occurs at a critical juncture where existing intellectual property law – designed for a pre-AI landscape – is struggling to address the challenges of synthetic media. Historically, singers relied on copyright law to protect recorded music and on state-level right-of-publicity laws to prevent unauthorized commercial use of their name and likeness. But AI has broken that model. Now, anyone can spin up a version of an artist's voice, have it say anything, attach it to anything, and distribute it at scale – without copying an existing recording (which would violate copyright) and often without the clear commercial use that right-of-publicity claims typically require. By registering sound marks and visual trademarks, Swift and McConaughey are testing whether federal trademark law – which protects against anything "confusingly similar" to a registered mark, not just identical copies – can bridge this gap. The strategy is legally novel: the "trademark yourself" approach has not yet been fully tested in court with respect to AI. Trademark law was designed to prevent consumer confusion and protect brand recognition, not personal identity broadly. A court might narrowly construe Swift's trademarks as protecting only the specific phrases and images registered, making them easy to design around with slightly different phrases or visuals. Alternatively, a court could interpret them more expansively, treating registered sound marks as protecting Swift's voice more generally. The outcome will likely determine whether this strategy becomes a widely adopted workaround or a limited tool. What both left and right agree on – though they frame it differently – is that the current legal framework is inadequate. The left argues this proves the need for federal legislation like the NO FAKES Act; the right argues it shows how entrepreneurs (in this case, celebrities and their lawyers) can innovate within existing frameworks.
Neither side defends the status quo as sufficient; the disagreement is over whether the solution is top-down regulation or bottom-up property-rights protection. What remains unresolved is whether trademark-based enforcement will actually deter anonymous deepfake creators, or whether it will primarily benefit wealthy celebrities who can afford to monitor and litigate infringements across global platforms.