(The Hill) — Actors on strike amid contract negotiations with production studios are fighting for protections from artificial intelligence (AI), but the concerns they’re highlighting raise risks beyond the scope of movies and television.
Advancements in AI that can replicate performers’ voices, appearances and movements raise critical concerns about individuals’ control over their own likenesses — and how lifelike replicas are used to generate profit or spread disinformation.
Clark Gregg, a Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) union member known for his role as Phil Coulson in the Marvel Cinematic Universe, told a House panel that he was recently sent inappropriate content appearing to show himself doing “acrobatic pornography.”
Joking that his AI likeness had abs he would “kill for,” Gregg said the content was otherwise lifelike and “terrifying.”
“That’s disturbing to have out there with a daughter who is online, but it’s just an example of [it]. This is where my business transcends into your business,” Gregg said during the House Energy and Commerce Committee hearing on AI and data privacy last week.
“If they can make me appear to do something I would never do, it’s very dangerous that they could make you, the Speaker — if we ever get one — the president, say things, especially in really tense moments as we are going through right now with what’s going on in the Middle East,” he added.
How actors got involved in AI
SAG-AFTRA, which represents about 160,000 actors and other media professionals, has been on strike since July while in contract negotiations with the Alliance of Motion Picture and Television Producers (AMPTP).
Along with wage increases and boosts in compensation for streaming, the union is seeking protections from AI, citing the risk that performers’ likenesses could be duplicated and reused without consent.
Gregg said the “futuristic comic book tech” his character in Marvel had access to has “already become a reality.”
“Our voices, images and performances are presently available on the internet, both legally and illegally, because we do not have data and privacy and security protections across the nation,” he said.
“AI models can ingest, reproduce and modify them at will without consideration, compensation or consent for the artist. It’s a violation of privacy rights but it’s also a violation of our ability to compete fairly in the marketplace,” Gregg added.
One risk he pointed to is that background performers could be digitally replicated and have their likenesses reused by studios without further compensation.
The Hill reached out to AMPTP for comment.
In July, an AMPTP spokesman denied to The Verge claims that the studios had proposed replicating background performers so their likenesses could be reused without consent or compensation.
The spokesman said the proposal would permit a company to use a digital replica of a background actor only in the motion picture for which the actor was employed; any other use would require the actor’s consent and further bargaining.
Beyond how studios may use replicas, examples of celebrity deepfakes have already circulated online.
Earlier this month, Tom Hanks warned the public in an Instagram post of an AI version of him promoting a dental plan. He said he has “nothing to do with it.”
CBS anchor Gayle King similarly warned her followers about a manipulated version of her appearing to advertise her weight loss “secret.” In an Instagram post, she said she had “NOTHING to do with this company.”
As part of their contract negotiations, actors are pushing for protections that would focus on giving performers consideration, compensation and consent rights over how AI is used, Gregg said.
A similar battle emerged when the Writers Guild of America (WGA) went on strike. In late September, the WGA and the AMPTP reached an agreement that included AI protections after a 148-day strike.
Michelle Miller, the co-founder and former co-executive director of Coworker.org, an early-stage worker organizing group, said the debate actors are having over AI is resonating with the public because of similar concerns raised by workers across industries in recent years.
“Many, many workers … have faced various incursions of bits and pieces of software showing up in their working lives that they don’t have control over,” Miller said.
“They don’t know how much information it’s taking from them. And so it’s really resonant when there’s a group of workers who have the collective power to stand up and say, ‘Wait a minute, we want to have a conversation about this,’” Miller added.
Pushes by unions, though, won’t protect the majority of working Americans, who aren’t union members.
Roughly 10 percent of wage and salary workers were members of unions in 2022, marking the lowest union membership on record, according to annual data released by the Bureau of Labor Statistics in January.
What can Congress do?
To protect nonunion workers, the federal government could step in to set national guidelines. Guardrails on AI, especially as it relates to deepfake replicas, could also be critical in mitigating the spread of disinformation.
Congress has been mulling ways to set AI guardrails in place in a number of areas, in addition to creating an overarching framework for regulation.
Expert witnesses at Wednesday’s hearing also urged lawmakers to push forward with a comprehensive data privacy bill, which they said would help protect against some of the risks posed by how AI systems, as well as other tech entities, collect and use Americans’ data.
The House committee advanced a privacy bill, the American Data Privacy and Protection Act, last year with bipartisan support, but the measure never received a House floor vote or consideration in the Senate.
Last week, a bipartisan bill was introduced in the Senate that aimed to address AI concerns around name, image and likeness. The Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act would hold individuals or companies liable for producing or hosting unauthorized digital replicas of individuals in a performance.
Although the bill carves out exceptions from liability for First Amendment-protected uses, such as replicas in news, documentaries or satire, experts highlighted free speech concerns that could arise from the draft proposal.
Miller said there is a difference between an “individual goofing around on the internet” and a “full-on corporate entity or shadowy entity creating something to establish a pattern of disinformation or profit off somebody’s own creative [intellectual property].”
The draft legislation doesn’t seem to differentiate between the two, she said.
“I get very worried when we start litigating individual behavior on the internet in that way,” Miller said.
Chris Callison-Burch, an engineering professor at the University of Pennsylvania, said that in addition to raising free speech concerns, the bill poses practical challenges for AI developers because it doesn’t provide sufficient guidance on how to build systems that avoid infringing the law.
“If I were an AI system developer, I cannot imagine how I could have a system that would have to check against every image to say, ‘Is this sufficiently close to a person that could run afoul of this law,’” Callison-Burch said.
“So that concerns me because I feel like it has a potential to reduce innovation in this area,” he said.