
Taylor Swift's Voice Trademark Bid: Testing New Ground Against AI Deepfakes

Taylor Swift filed trademark applications to protect her voice and image from AI impersonation, following Matthew McConaughey's pioneering effort months earlier. The filings test whether trademark law can stretch to cover AI-generated impersonation.

Martin Holloway · Published 2w ago · 6 min read · Based on 9 sources

Taylor Swift filed three trademark applications with the U.S. Patent & Trademark Office on April 24, 2024, seeking to protect two audio clips of her voice saying "Hey, it's Taylor" and "Hey, it's Taylor Swift," along with a photograph from her Eras Tour showing her on stage with a pink guitar. The filings mark one of the most visible attempts to use trademark law as a shield against AI-generated impersonations — and they break significant new legal ground.

What Are Sound Marks, and Why Do They Matter Now?

Swift's applications fall under a trademark category called sound marks, a less common form of intellectual property protection that traditionally covers jingles, distinctive audio logos, and other recognizable sounds. Think of Intel's three-note chime or the NBC broadcast signal. Now, Swift is testing whether the same legal tool can protect a human voice from being synthesized and impersonated by AI.

The goal is explicit: to use trademark law to guard against deepfakes — AI-generated audio that mimics her voice. As AI voice synthesis tools have become easier to use and more convincing, this legal approach represents one possible answer to a problem that, in its current form, barely existed five years ago.

Matthew McConaughey Went First

Swift is not the pioneer here. Actor Matthew McConaughey filed similar trademark applications in January 2024 covering his image, voice, and video, establishing the first high-profile test case for this strategy. His filings came several months before Swift's, and both explore whether existing trademark law can stretch far enough to address AI-generated content.

The pattern suggests something important: established intellectual property frameworks, particularly right-of-publicity laws (which protect a person's name, image, and likeness), may not be enough against the capabilities of modern AI systems. Trademark law, at least in theory, offers different protections and may apply in situations where traditional rights fail.

The Government Is Paying Attention

These celebrity filings arrived during a broader wave of government interest in the subject. In August 2024, the U.S. Patent & Trademark Office held a listening session on "Name, Image, and Likeness Protection in the Age of AI," led by Kathi Vidal (Under Secretary of Commerce for Intellectual Property and USPTO Director) and USPTO Copyright Attorney Ann Chaitovitz.

That session signals that federal agencies are actively thinking about whether today's intellectual property frameworks are adequate for AI-generated content. The listening session format — gathering input before making policy moves — suggests the USPTO is not yet ready to propose formal solutions but is building a foundation for future guidance or regulation.

The Legal Path Forward Is Unclear

Approving Swift's voice trademark would require the USPTO to overcome some real obstacles. Trademark law traditionally demands that a protected sound function as a "source identifier in commerce" — meaning it reliably tells consumers who made or sold something. A voice, unlike a corporate jingle, appears in countless contexts: interviews, live performances, social media, music videos. Proving that a specific vocal pattern uniquely identifies Swift as a commercial source is more complicated than it sounds.

There is also a threshold question: does AI-generated content even count as trademark infringement in the first place? Traditional trademark law focuses on consumer confusion in commercial settings. Many AI voice synthesis applications occur in entertainment, parody, or personal contexts that may fall outside what trademark law was designed to address. A deepfake circulating on social media is not the same as a counterfeit product on a store shelf.

The broader context here mirrors something we saw before. In the early days of the commercial internet, copyright law struggled to handle digital reproduction and distribution. Existing rules assumed physical goods and established channels of commerce. Courts and regulators eventually adapted those rules, but it took time, litigation, and sometimes new legislation. We are at a similar moment with AI, except the questions are even more tangled because the technology is newer and affects identity itself, not just copies of media.

Enforcement Could Be the Harder Problem

Even if the trademark applications win approval, enforcement presents substantial challenges. A traditional trademark violation typically happens through visible commercial channels — counterfeit goods in stores, infringing logos on websites. AI-generated voice and image content can emerge from countless platforms, applications, and individual creators worldwide. There is no centralized place to police it the way a trademark holder might monitor a retail channel.

Swift's existing trademark portfolio already includes "FEARLESS TAYLOR'S VERSION" for plush toys. The voice and image applications would extend her protection into genuinely new territory, but that newness cuts both ways. It opens a door to protection, but it also raises questions about how broad that protection could or should be.

What This Means for AI Companies and Beyond

The celebrity trademark strategy reflects genuine uncertainty across the technology industry about how to govern AI responsibly. Major tech companies have mostly relied on terms of service and voluntary compliance, while lawmakers continue to debate regulatory frameworks that remain largely unfinished.

From the perspective of AI companies, these trademark applications introduce a new compliance consideration: would training data, output capabilities, or user-generated content on AI platforms trigger trademark liability under an expanded celebrity protection scheme? It is not yet clear, but the question is now live in a way it was not two years ago.

The broader ecosystem is testing multiple legal approaches simultaneously. The music industry has pursued copyright strategies through litigation and licensing deals. Actors and performers have negotiated contract protections through union agreements. Swift's team is trying a different route — trademark law — which offers potential advantages. Unlike copyright, which requires proving that AI output is substantially similar to an original work, trademark infringement could theoretically apply to any commercial use of a protected voice, regardless of how the AI generated it.

What Happens Next

The real test arrives when the USPTO examines Swift's applications and, potentially, when federal courts weigh in if the applications are challenged or applied in an actual dispute. Those outcomes will help define whether trademark law can evolve to handle AI-generated content, or whether entirely new legal frameworks will be necessary to protect individuals' rights in an era of synthetic media.

We do not yet have that answer. What we do know is that the legal system is responding faster to this problem than it did to earlier technological shifts, and that multiple strategies are being pursued in parallel. That is worth noting, even if it feels like the most cautious possible outcome.