Who Owns Your Voice in the Age of AI?

Emerging AI services present scenarios that could challenge laws governing rights to a persona

A kerfuffle erupted last week after actor Scarlett Johansson complained that one of OpenAI’s chatbot voices sounded a lot like her. It isn’t hers: the company created it using recordings from someone else. Nevertheless, the firm has suspended the voice out of respect for Johansson’s concerns. But the media flurry has cracked open a broader discussion about people’s rights to their own personas. In the age of generative artificial intelligence (genAI), are existing laws sufficient to protect the use of a person’s appearance and voice?

The answer isn’t always clear, says Carys Craig, an intellectual-property scholar at York University in Toronto, Canada, who will be speaking on this topic next month during a Canadian Bar Association webcast.

Several members of the US Congress have, in the past year, called for a federal law to enshrine such protections at the national level. And some legal scholars say that action is needed to improve privacy rights in the United States. But they also caution that hastily written laws might infringe on freedom of speech or create other problems. “It’s complicated,” says Meredith Rose, a legal analyst at the non-profit consumer-advocacy group Public Knowledge in Washington DC. “There’s a lot that can go wrong.”

“Rushing to regulate this might be a mistake,” Craig says.

FAKE ME

GenAI makes it easy to clone voices or faces to create deepfakes, in which a person’s likeness is imitated digitally. People have made deepfakes for fun and to promote education or research. But deepfakes have also been used to sow disinformation, attempt to sway elections, create non-consensual sexual imagery or scam people out of money.

Many countries have laws that prevent these kinds of harmful and nefarious activities, regardless of whether they involve AI, Craig says. But when it comes to specifically protecting a persona, existing laws might or might not be sufficient.

Copyright does not apply, says Craig, because it was designed to protect specific works. “From an intellectual-property perspective, the answer to whether we have rights over our voice, for example, is no,” she says. Most discussions about copyright and AI focus instead on whether and how copyrighted material can be used to train the technology, and whether new material that it produces can be copyrighted.

Aside from copyright laws, some regions, including some US states, have ‘publicity rights’ that allow an individual to control the commercial use of their image; these rights were designed to protect celebrities against financial loss. For example, in 1988, long before AI entered the scene, singer and actor Bette Midler won a ‘voice appropriation’ case against the Ford Motor Company, which had hired a sound-alike singer to cover one of her songs in a commercial. And in 1992, game-show host Vanna White won a case against the US division of Samsung after it put a robot dressed as her in a commercial.

“We have a case about a person who won against a literal robot already,” says Rose. With AI entering the arena, she says, cases will become “increasingly bananas”.

Much remains to be tested in court. Last month, for example, the rapper Drake released a song featuring AI-generated voice clips of the late rapper Tupac Shakur; he removed the song from streaming services after receiving a cease-and-desist letter from Shakur’s estate. But it’s unclear, says Craig, whether the song’s AI component was unlawful. In Tennessee, a law passed this year, called the Ensuring Likeness Voice and Image Security (ELVIS) Act, seeks to protect voice actors at all levels of fame from “the unfair exploitation of their voices”, including through the use of AI clones.

In the United States, actors have some contractual protection against AI. The agreement that ended the Hollywood strike of the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) in December includes provisions that stop filmmakers from using a digital replica of an actor without explicit consent from the individual in each case.

Meanwhile, individual tech companies have their own policies to help prevent genAI misuse. For example, OpenAI, based in San Francisco, California, has not released to the general public the voice-cloning software that was used to make its chatbot voices, acknowledging that “generating speech that resembles people’s voices has serious risks”. Usage policies for partners testing the technology “prohibit the impersonation of another individual or organization without consent or legal right”.

Others are pursuing technological approaches to stemming misuse: last month, the US Federal Trade Commission announced the winners of its challenge to “protect consumers from the misuse of artificial intelligence-enabled voice cloning for fraud and other harms”. These include ways to watermark real audio at the time of recording and tools for detecting genAI-produced audio.

BROAD SCOPE

More worrying than actors’ loss of income, say Rose and Craig, is the use of AI to clone people’s likenesses for purposes such as non-consensual pornography. “We have very spare, inadequate laws about non-consensual imagery in the first place, let alone with AI,” says Rose. The fact that deepfake porn is now easy to generate, including with minors’ likenesses, should be serious cause for alarm, she adds. Some legal scholars, including Danielle Citron at the University of Virginia in Charlottesville, are advocating for legal reforms that would recognize ‘intimate privacy’ as a US civil right, comparable to the right to vote or the right to a fair trial.

Current publicity-rights laws aren’t well suited to covering non-famous people, Rose says. “Right to publicity is built around recognizable, distinctive people in commercial applications,” she says. “That makes sense for Scarlett Johansson, but not for a 16-year-old girl being used in non-consensual imagery.”

However, proposals to extend publicity rights to private individuals in the United States might have unintended consequences, says Rose. She has written to the US Congress expressing concern that some of the proposed legislation could allow misuse by powerful companies. A smartphone app for creating novelty photos, for example, could insert a provision into its terms of service that “grants the app an unrestricted, irrevocable license to make use of the user’s likeness”.

There’s also a doppelganger problem, says Rose: an AI system that randomly generates an image or voice is bound to produce one that looks or sounds like at least one real person, who might then seek compensation.

Laws designed to protect people risk going too far and threatening free speech. “When you have rights that are too expansive, you limit free expression,” Craig says. “The limits on what we allow copyright owners to control are there for a reason: to allow people to be inspired and create new things and contribute to the cultural conversation,” she says. Parody and other works that build on and transform an original often fall into the sphere of lawful fair use, as they should, she says. “An overly tight version [of these laws] would annihilate parody,” says Rose.
