The use of AI avatars by Synthesia has sparked controversy after individuals discovered their likenesses were used in propaganda videos for authoritarian regimes, raising ethical concerns about digital representation.
Concerns Rise Over Synthesia’s AI-Powered Video Technology
London, UK – The capabilities of cutting-edge AI technology developed by Synthesia, a company boasting a valuation exceeding one billion dollars, have recently come under scrutiny following unsettling revelations about the use of digital avatars. Synthesia’s technology, which allows users to effortlessly create studio-quality videos using AI avatars, has attracted a diverse clientele, including renowned media outlets like Reuters and the global accounting firm Ernst & Young. However, the technology has drawn criticism following reports that it has also been used to produce propaganda for authoritarian regimes.
In a disconcerting development reported by The Guardian, several human models who posed for Synthesia have expressed shock and dismay upon discovering that their likenesses were employed in AI-generated propaganda clips. These clips were allegedly linked to regimes in countries such as China, Russia, and Venezuela. The human models, whose faces were used to train the AI, were reportedly unaware of the extent to which their digital representations could be utilised.
Mark Torres, a creative director from London who offered his likeness to Synthesia, shared his discontent with The Guardian. After stumbling upon one of the AI-generated propaganda videos featuring his likeness, Torres described a sense of vulnerability and violation, sentiments echoed by others caught in similar situations. Implicated without consent in contentious political narratives, individuals like Torres fear being associated with international incidents, such as coups, in countries they had never even heard of.
The unfolding controversy emerges against a backdrop of increasing legislative focus on AI technology usage. Notably, California has recently passed legislation rendering it illegal to use AI-generated replicas of an actor’s likeness or voice without explicit consent. This legislative push followed closely on the heels of the Screen Actors Guild and Writers Guild of America’s strike last year, a movement influenced heavily by concerns over generative AI impacting the creative industries.
Actor Dan Dewhirst, a former model for Synthesia, recounted his frustration and worry over the potential damage these digital videos could cause to his career. Dewhirst discovered that his likeness was utilised in an AI-generated Venezuelan propaganda piece. The revelation, he shared, has not only been detrimental to his professional prospects but has also adversely affected his mental health.
In its defence, Synthesia has pointed to the comprehensive terms of service presented to its collaborators. A company spokesperson emphasised that these terms are designed to make the actors and models fully cognizant of the platform’s capabilities and the existing safeguards aimed at preventing misuse. The spokesperson acknowledged that while the system may not be flawless, the company’s founders are dedicated to its continual refinement.
As the implications of AI technology’s reach become clearer, Synthesia’s situation accentuates the broader conversation about the ethical use of AI, especially as it pertains to individuals’ digital rights and personal integrity in a rapidly advancing technological landscape.
Source: Noah Wire Services