Aleksandra Pedraszewska of ElevenLabs addresses the need for external oversight in AI ethics at a recent conference amid growing concerns about consumer safety.

In a world where generative artificial intelligence (AI) is rapidly becoming an integral part of technological advancements, the debate surrounding the ethical implications and safety measures has taken centre stage. At the recent TechCrunch Disrupt conference in San Francisco, Aleksandra Pedraszewska, the Head of Safety at ElevenLabs, a London-based AI firm, addressed these pressing concerns, emphasising the need for external oversight in managing ethical challenges inherent to AI technology.

Pedraszewska, who has been with ElevenLabs since February, argued that leaving ethical decisions solely in the hands of AI companies is problematic because of their inherent commercial interests. Instead, she advocated the involvement of specialised organisations equipped to tackle ethical issues without the influence of business motivations. “The main goal behind the AI safety function of ElevenLabs is adopting all the solutions that are already easily available and can be deployed straight away,” she remarked, underscoring the importance of utilising existing content moderation tools and collaborating with academic researchers to better understand policy and ethical concerns.

Her comments come at a time when AI companies are facing heightened scrutiny over their role in ensuring consumer safety. A recent, highly publicised case involved Character.AI, whose chatbot was alleged to have contributed to the suicide of a teenage boy named Sewell Setzer. The boy’s mother claimed that inadequate safety measures allowed her son to develop an unhealthy relationship with the chatbot, leading to his tragic withdrawal from family life. While Character.AI declined to comment on the ongoing legal proceedings, the company expressed its heartbreak over the incident.

The AI sector has also been shaken in recent months by the resignations of key figures who cited concerns about safety and ethical considerations. OpenAI has seen the sudden departure of high-ranking officials including Mira Murati, the chief technology officer, as well as Bob McGrew, Barret Zoph, and Ilya Sutskever, who were instrumental in addressing AI’s existential threats. Google experienced a similar shift with the resignation of the team leader responsible for reviewing new AI products’ compliance with the company’s responsible AI development guidelines.

Despite the challenges facing the industry, ElevenLabs has continued to make strides, achieving unicorn status earlier this year after securing an $80 million investment from prominent backers, including Sequoia Capital. The firm is known for technology that replicates human voices for text-to-speech audio applications. In August, ElevenLabs expanded its operations by opening a new office on Wardour Street, Soho, with plans to double the team to 40 members and eventually grow to 100 within the next few years. The company has also shown a commitment to ethical business practices by ensuring fair compensation for voice actors whose data is used to train its models, investing over $1 million for this purpose since the company’s inception.

As discussions about the regulation and ethical management of artificial intelligence continue, stakeholders from industry, academia, and regulatory bodies are engaged in a complex dialogue. This dialogue is aimed at ensuring technological progress is balanced with societal concerns, marking a pivotal moment for the future directions of AI development and integration.

Source: Noah Wire Services