The renowned naturalist expresses deep concern about the unauthorised use of his voice through AI technology, sparking a debate on digital privacy and intellectual property rights.

David Attenborough, the revered British naturalist and broadcaster, has expressed significant unease regarding the unauthorised use of his voice through artificial intelligence technology. Known globally for his distinctive narration of natural history documentaries, Attenborough found himself at the centre of a digital ethics debate when it was revealed that AI systems had been using a voice eerily similar to his across various platforms.

The controversy began when the BBC included an AI-generated voice nearly indistinguishable from Attenborough’s in a segment tied to his latest series, “Asia.” The series, a seven-part exploration of the continent’s natural splendour, began airing on 3 November on BBC One and iPlayer, produced by BBC Studios Natural History Unit in collaboration with BBC America, France Télévisions, and ZDF. During a broadcast, the BBC played a clip of Attenborough’s real voice followed by an AI-generated version, asking viewers to tell the two apart. The similarity was striking, raising concerns about the ethical use of voice-cloning technology.

Further compounding the issue is the discovery of Attenborough’s AI-generated voice on YouTube channels such as The Intellectualist, where it was used to narrate content on politically sensitive topics, including the Ukraine-Russia conflict and former US President Donald Trump. Attenborough, aged 98, voiced his dismay in an interview with the BBC, stating, “Having spent a lifetime trying to speak what I believe to be the truth, I am profoundly disturbed to find these days my identity is being stolen by others and greatly object to them using it to say whatever they wish.”

The creator behind the AI-generated voice responded in kind, using the cloned voice itself to issue a tongue-in-cheek denial of any connection to the broadcaster. “I am not David Attenborough,” the creator insisted, adding that while the voices may sound alike, they belong to distinct individuals.

The situation reflects a broader concern over AI’s capacity to replicate human voices without consent, and it is not an isolated incident. Earlier this year, actress Scarlett Johansson criticised OpenAI for using a voice resembling hers in its ChatGPT product without her approval. Although Johansson had declined when OpenAI approached her, the company released the voice, known as “Sky,” anyway, withdrawing it only after public backlash and the threat of legal action.

This type of AI-driven likeness cloning has prompted discussions regarding intellectual property and privacy rights. In the United States, these discussions have catalysed legislative efforts, such as the proposed No Fakes Act. This bipartisan bill seeks to hold accountable those who create and distribute unauthorised digital clones of individuals’ voices or likenesses.

This incident highlights the complexities that emerge when the rapid advancements in technology intersect with issues of personal likeness and privacy. For Attenborough, the involuntary use of his iconic voice speaks to a broader conversation about the responsible development and application of artificial intelligence in an era where personal identities can be easily replicated. As the debate continues, it brings into sharp focus the need for clear guidelines and ethical considerations within the digital landscape.

Source: Noah Wire Services
