Whose Voice Is It Anyway? Deepfakes and the Battle Over Copyright
Who owns a voice or likeness when it can be digitally cloned?

In Hollywood today, the hottest special effect isn’t CGI explosions—it’s artificial intelligence. AI tools can now recreate an actor’s face or voice so convincingly that audiences may not know what’s real. While this technology opens creative possibilities, it also raises a thorny legal question: who owns a voice or likeness when it can be digitally cloned?
The Rise of AI-Generated Voices
From TikTok parodies to blockbuster trailers, AI-generated voices are popping up everywhere. Imagine a studio recreating an actor's dialogue for reshoots without bringing them back on set, or a fan posting a fake Morgan Freeman narration online. Without clear boundaries, this technology risks eroding performers' control over their most personal asset: their identity.
Copyright vs. Right of Publicity
The legal landscape is complex. Copyright law protects creative works like scripts, films, and sound recordings, but a person's face or voice is not itself a copyrightable work: a studio can own the recording of a performance without owning the voice behind it. Identity is instead governed by "right of publicity" laws, which vary widely by state and country. That patchwork makes it difficult for performers to enforce their rights when their digital doubles appear without consent.
Why It Matters for Creators and Studios
Actors fear that unauthorized voice clones could replace them or undermine future opportunities. Unions like SAG-AFTRA have fought hard to secure contract provisions requiring consent and compensation when AI is used to replicate performances. Studios, meanwhile, want clarity on what’s allowed so they can innovate without risking expensive lawsuits.
Final Thought
Deepfake and AI voice technologies remind us that creativity and identity are inseparable. Protecting both requires more than innovation: it demands clear rules, consent, and respect for the human talent behind the digital magic.