AI Cloning Remorse: Actors Regret Selling Their Faces and Voices as Digital Likenesses Appear in Scams, Propaganda

Actors who licensed their likenesses for AI avatars have expressed regret after seeing them misused in scams and propaganda, prompting responses from platforms like Synthesia.

Performers are finding their digital twins in unexpected and often unwelcome places online, sparking regret over deals signed with AI companies to license their likenesses.

Reports detail actors discovering their AI-generated avatars promoting questionable health cures, delivering political propaganda, or fronting other scam-like content. Many had agreed to contracts for sums as low as $1,000 or $5,000 without fully grasping the implications, or how little control they would retain.

Synthesia, a major UK-based AI video platform valued at $2.1 billion in January, features frequently in these accounts, leading the company to address the concerns with new actor programs and policies.

The human side of this technological wave is illustrated by performers like Adam Coy, a New York actor who received $1,000 for a one-year license with MCM, only for his AI counterpart to appear in doomsaying videos. His contract reportedly prohibited only pornographic use or association with alcohol and tobacco.

South Korean actor Simon Lee told AFP he was stunned and ashamed when his avatar promoted questionable health cures, with his contract apparently offering no recourse to remove the scam videos.

Actor Dan Dewhirst, known for roles in major productions, contracted with Synthesia in 2021; despite his agent and the Equity union attempting to negotiate better terms (which Synthesia reportedly refused), his avatar later appeared in Venezuelan political propaganda, according to an October 2024 Equity news post.

British actor Connor Yeates accepted around $5,000 from Synthesia for a three-year deal because, as he told AFP, he needed the money. His likeness was later used in propaganda supporting the leader of Burkina Faso's 2022 coup, a use Synthesia acknowledged violated its terms.

Contracts, Compensation, and Control

These incidents underscore the potential pitfalls lurking in the contracts actors sign for AI likeness licensing. Lawyer Alyssa Malchiodi, speaking to AFP, noted that actors often didn’t fully comprehend the terms they agreed to, sometimes signing agreements with clauses considered abusive. “One major red flag is the use of broad, perpetual and irrevocable language that gives the company full ownership or unrestricted rights to use a creator’s voice, image, and likeness across any medium,” Malchiodi warned.

While the upfront payment might seem appealing, and Coy conceded it is decent money for little work, the lack of ongoing control and the potential for reputational damage are proving problematic.

This situation contrasts sharply with protections being established in traditional media. Following industry strikes, the SAG-AFTRA 2023 TV/Theatrical agreement now mandates informed consent and fair compensation for using an actor’s “digital replica,” setting different rules for replicas created during employment versus those generated independently. Unions broadly advocate for principles often summarized as the “four pillars”: transparency, consent, compensation, and control, standards seemingly absent in some early AI avatar deals.

Synthesia’s Response and Market Growth

Synthesia, whose platform is reportedly used by 70% of Fortune 500 companies and which reached an estimated $100 million in annual recurring revenue in March 2025, has responded to the growing scrutiny.

In March 2025, the company launched a “talent experience program” and a $1 million equity fund for select actors whose avatars are widely used or featured in marketing. “These actors will be part of the program for up to four years, during which their equity awards will vest monthly,” Synthesia stated. The company says its guiding principles are Consent, Control, and Collaboration, requiring explicit consent for likeness use.

The company also detailed its multi-layered content moderation approach, involving checks at creation, automated tools, manual verification, and a Trust and Safety team, plus financial deterrents like account bans without refunds for misuse.
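Synthesia has not published implementation details of that pipeline, but the layered approach it describes can be pictured roughly as follows. The Python sketch below is purely illustrative: the policy list, risk scores, and every function name are invented for the example, and a real system would rely on trained classifiers and dedicated review tooling rather than toy heuristics.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationResult:
    allowed: bool
    reasons: List[str] = field(default_factory=list)
    needs_human_review: bool = False

# Hypothetical policy list; stands in for a much larger real-world ruleset.
BANNED_TOPICS = {"political advertising", "medical claims"}

def check_at_creation(script: str) -> ModerationResult:
    """Layer 1: synchronous rule checks when a video script is submitted."""
    hits = [t for t in BANNED_TOPICS if t in script.lower()]
    return ModerationResult(allowed=not hits, reasons=hits)

def automated_classifier(script: str) -> ModerationResult:
    """Layer 2: stand-in for an ML classifier scoring the script's risk."""
    score = 0.9 if "miracle cure" in script.lower() else 0.1  # toy heuristic
    return ModerationResult(
        allowed=score < 0.5,
        reasons=[f"risk score {score:.2f}"],
        needs_human_review=0.4 <= score < 0.5,
    )

def moderate(script: str) -> ModerationResult:
    """Run the layers in order; escalate anything rejected or uncertain."""
    for layer in (check_at_creation, automated_classifier):
        result = layer(script)
        if not result.allowed or result.needs_human_review:
            # Layer 3: route to a human Trust and Safety reviewer.
            result.needs_human_review = True
            return result
    return ModerationResult(allowed=True)

if __name__ == "__main__":
    print(moderate("Our miracle cure reverses aging overnight!"))

The design point the sketch tries to capture is that no single layer is trusted on its own: cheap rule checks run first, statistical scoring second, and anything ambiguous falls through to human review rather than being silently approved.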

Alexandru Voica, Synthesia’s head of corporate affairs, acknowledged past issues, stating, “Three years ago, a few videos slipped our content moderation partly because there was a gap in our enforcement for factually accurate but polarizing type content or videos with exaggerated claims or propaganda, for example.”

He added that the company now treats “harmful content as a serious security threat, while for many of these AI apps this type of content is a business model.”

Synthesia now prohibits stock avatar use in paid ads, broadcast media (without a separate release), and political or news-style content. Actors are also offered an opt-out from future video creation, which triggers deletion of their data, though this does not affect existing videos. Synthesia continues its technological development: in April 2025 it announced a deal with Shutterstock to license footage to improve avatar realism and received a strategic investment from Adobe that same month.

Wider Industry and Legal Responses

The ethical dilemmas surrounding AI-generated likenesses extend beyond visual avatars. While Synthesia focuses on enterprise applications like corporate training, the technology’s potential for misuse mirrors concerns seen elsewhere. AI voice cloning tools are rapidly advancing; OpenAI developed its highly realistic Voice Engine but held back a public release in April 2024 over misuse concerns, proposing safeguards like consent verification.

Microsoft Teams is integrating voice cloning for translation, emphasizing user consent. Meanwhile, companies like Runway AI are advancing realistic facial animation, facing similar questions about training data ethics.

Legislative bodies are also starting to react. California passed laws in September 2024 (AB 2602, AB 1836) to strengthen consent requirements for digital replicas, and federal proposals like the NO FAKES Act aim to establish a national standard. As the technology enabling digital twins becomes more accessible and convincing, the need for clear ethical guidelines, fair contracts, and robust legal frameworks governing their creation and use becomes increasingly apparent.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
