Scarlett Johansson, OpenAI, and Silencing ‘Sky’

A dispute about the alleged unauthorized mimicry of a star’s voice puts concerns about GenAI in the spotlight again.

Joao-Pierre S. Ruth, Senior Editor

May 22, 2024

Scarlett Johansson at the 2020 Screen Actors Guild Awards. (Sydney Alford via Alamy Stock Photo)

[UPDATED in the second paragraph] Hollywood A-lister Scarlett Johansson recently called out OpenAI for using a voice with its GPT-4o model that she asserted mimicked her. The voice, known as Sky, was switched off -- at least for now -- with OpenAI claiming that the voice was not an imitation of Johansson. The history behind OpenAI’s interactions with Johansson raised questions about the veracity of the company’s assertions, as well as about the complicated road ahead in AI ethics.

After Johansson's contentions went public, the Washington Post reported that it had received documents showing a different voice actor was hired to voice Sky. This story will be updated further should additional details surface.

A letter issued by Johansson states that she previously rejected an offer made last September by OpenAI CEO Sam Altman to lend her voice to GPT-4o. According to her letter, Johansson, her family, and friends were surprised to later hear that OpenAI’s Sky sounded like her. She also pointed out that Altman tweeted a reference to the movie “Her,” in which Johansson voiced an AI operating system.

Johansson’s letter states her attorneys requested in writing that OpenAI describe in detail how it created the voice for Sky.

OpenAI paused Sky and posted an explainer online relating how it chose the voice, asserting that it did not attempt to mimic Johansson.


Sky High Stakes

This issue -- control over likeness and potential similarities -- is not new. Even before generative AI (GenAI) shook up the world, disputes were waged over the alleged use of an individual’s likeness without their assent. For example, actress Lindsay Lohan sued Rockstar Games, claiming she was the inspiration for a character in the video game “Grand Theft Auto V,” but the case was ultimately dismissed.

GenAI’s spread has intensified arguments over rights to one’s likeness as the technology becomes increasingly able to emulate, if not copy, the performances and works of real people. Further, abuse of a person’s likeness could produce deepfake images, audio, and video meant to cause personal harm or further political propaganda.

Johansson, in her letter, said she looked forward to transparency and “the passage of appropriate legislation” to protect individuals’ rights.

Contentions in the 2023 strike by SAG-AFTRA (Screen Actors Guild-American Federation of Television and Radio Artists), for instance, included a tug-of-war with movie and television studios that hoped to leverage GenAI for future uses such as creating background characters based on the likenesses of actors -- who would be paid just once for perpetual use of their images. The strike ended after the union won concessions that included terms on the use of AI and digital replication.


Not Just a Concern for Celebs

Sourcing for GenAI and ownership of the content it produces can be murky, compounded by differing interpretations of existing laws and of potential laws in the works.

One point of contention is the application of the right of publicity, which is meant to guard against misappropriation of elements of personal identity such as name, voice, and likeness for commercial use. “The right of publicity, many decades ago, was described as a haystack in a hurricane, and it hasn’t gotten any better,” says Matt Savare, partner and chair of commercial contracts with law firm Lowenstein Sandler. “Especially in the US, there’s no federal right of publicity statute.”

This means states may have varied interpretations of the right, Savare says. “Some states recognize the right of publicity by common law, court cases. Some recognize it by statute. Some recognize it by both. Some don’t recognize it at all.”

That is even before AI gets added to the conversation. “It’s a very confusing, complicated and nuanced area of the law,” he says. “It’s probably one of the more complicated areas that I get asked about quite a bit.”


Regulating a Path Forward

Though the intensity of arguments raised about GenAI is palpable, there seems to be an awareness that the technology and its use are not about to disappear. In a statement shared with InformationWeek, SAG-AFTRA offered this comment on the issue Johansson raised about OpenAI and Sky: “We share in her concerns and fully support her right to have clarity and transparency regarding the voice used in developing the Chat GPT-4o appliance ‘Sky.’”

The statement also expressed that the union is “strongly championing federal legislation” that would protect the voices and likenesses of its members and the public from unauthorized digital replication: “We are pleased that OpenAI has responded to these concerns and paused their use of ‘Sky,’ and we look forward to working with them and other industry stakeholders to enshrine transparent and resilient protections for all of us.”

Joe Jones, director of research and insights at the International Association of Privacy Professionals (IAPP), says this topic becomes messy because of the varied parts that converge on intellectual property and copyright. “If voices are being trained from audiovisual content -- films, movies, songs -- you’ve got so many different tiers of copyright and rights owners,” he says. “The script, the recorder, the person who’s filming it, the producer. So, you’ve got a web of different proprietary rights there.”

The lack of cohesive federal law on rights, privacy, and how GenAI works makes for a quagmire that might deepen as AI continues to evolve. Companies that seek to augment AI with voice might run afoul of state-level regulations in the process. “Some states in the US treat voice as a biometric data point,” Jones says. Moreover, organizations might not want to presume that every data source they have access to is fair game for their AI to work with. “You’ve got privacy issues around the reuse of data for secondary purposes,” he says. “That voice data could have been collected for one purpose and then years later used to train an algorithm. That’s not always straightforward whether you can rely on data for secondary uses.”

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

