Why Are Striking Actors Being Recruited by Meta and AI Companies?

Meta and other AI companies hired actors during the Hollywood Writers Guild of America and SAG-AFTRA strikes to train AI with their expressions, movements, and voice, paying as little as $300 to use the data collected “in perpetuity.”

A larger-than-usual number of out-of-work actors have been looking at the increasing number of AI job postings that offer acting work that doesn’t cross the picket line. What are these jobs exactly? According to MIT Technology Review, “trainers,” who are actors covered in data points, teach AI to appear more human with their expressions, voice, and movements.


There have been previous attempts to generate humans or human-like avatars with facial recognition, other biometric analysis, or generative AI models trained on data scraped from the internet, including private surveillance camera footage that was shared or sold without the knowledge of the people being filmed (though many likely consented when they mindlessly agreed to some terms and conditions). But the need for higher-quality data made it impossible not to implement a better plan.

When it comes to facial recognition, other biometric analysis, or generative AI models that aim to generate human or human-like avatars, human faces, movements, and voices serve as the data.

While Realeyes, a London-based emotion AI company, and Meta, whose involvement participants were unaware of until they arrived on site, stated that they would not reproduce any individual likenesses for future projects, no one is sure what the data will be used for.

“This is fully a research-based project,” the job posting said. The job promised $150 per hour, with a minimum of two hours of work, and emphasized that “your individual likeness will not be used for any commercial purposes.” Since this is not scab work, many background actors looking for work during the strikes agreed to participate in the project.

James Dean smoking a cigarette on the set of 'Giant'

But the question remains: What are these actors’ faces and performances being used for, if not for TV shows or movies?

The broad nature of what actors are signing up for makes it impossible to know the implications. What makes the waters even muddier is that actors were asked to sign away certain rights “in perpetuity” for technologies and use cases that may not exist yet.

However, actors don’t know what rights they are truly giving away. The meaning of “likeness” changes as new technology emerges. When MIT Technology Review asked one of the actors participating in the study how Realeyes defines “likeness,” the actor said the company uses the term broadly, with no universally agreed-upon definition.

In the agreement obtained by MIT Technology Review, the rights of Meta and the parties acting on the company’s behalf include:

  • Asserting certain rights to the participants’ identities (“identifying or recognizing you … creating a unique template of your face and/or voice … and/or protecting against impersonation and identity misuse”)
  • Allowing other researchers to conduct future research, using the study data however they see fit (“conducting future research studies and activities … in collaboration with third party researchers, who may further use the Study Data beyond the control of Meta”)
  • Creating derivative works from the study data for any kind of use at any time (“using, distributing, reproducing, publicly performing, publicly displaying, disclosing, and modifying or otherwise creating derivative works from the Study Data, worldwide, irrevocably and in perpetuity, and in all formats and media existing now or in the future”)

The only limit on use was that the companies would “not use Study Data to develop machine learning models that generate your specific face or voice in any Meta product.”

Generative AI has been plaguing social media and entertainment for a while now. From deepfakes of past US presidents singing and dancing to Ice Spice’s biggest hit song, to the de-aging of Harrison Ford, Tom Hanks, and Robin Wright to portray younger versions of themselves on screen, we have been using AI to make art without thinking about the repercussions to come. Unfortunately, much of this deep-learning AI is trained on data that was mined without consent.

Earlier this month, Tom Hanks warned on Instagram that a video showing him promoting a dental plan was not actually him.

When it comes to non-A-list celebrities, AI is ruthless.

Background actors are being asked to undergo digital body scans on set so their likenesses can be recreated by studios and used without payment. There are no rules or regulations on how an actor’s likeness can be used in the future, and their fear is that there will be no place for them in the industry going forward. Multiple voice actors have already reported hearing their voices in video games they were not hired for.

The truth is that there will always be people participating in the studies that power generative AI. What we can do as filmmakers, at any point in our careers, is stay aware of how technology is influencing the industry and push for regulations on how it can be used at each stage of production.

While I am not suggesting that we abandon this technology altogether, there has to be an ethical limit. Actors deserve payment each time their recorded data is used. Think of it as residuals for artificial performances.

We must understand that AI is not evil but should have limitations in creating art. Studios are stuck in a vicious cycle of cost-cutting, but art shouldn’t be limited or stifled by technology. Instead, technology should be used to enhance art and performances.

Let us know what you think about the future of acting and AI in the comments below!

Author: Alyssa Miller
This article comes from No Film School and can be read on the original site.
