Founder of ElevenLabs AI Proposes Solution for Combating Audio Deepfakes

1. Companies like ElevenLabs use AI to generate convincing audio clips but face challenges in curbing deepfake abuse.
2. Lawmakers and experts are concerned about dangerous misuse of AI voice technology, such as supercharged phone scams and offensive deepfakes.
3. ElevenLabs' CEO believes in the technology's positive potential, such as helping patients with neurodegenerative diseases communicate, and the company is working on digitally watermarking synthetic voices to differentiate real from fake.

AI voice technology companies like ElevenLabs face the challenge of curbing deepfakes while still promoting innovation. The company uses AI to generate realistic audio clips, including text-to-speech voiceovers and voice cloning. Despite the technology's potential benefits for patients with neurodegenerative diseases and for cross-cultural communication, there are concerns about its misuse for scams and offensive deepfakes.

Lawmakers are worried about the dangerous potential for abuse of AI voice technology, especially in scenarios like impersonation phone scams. Last year, 4chan users exploited ElevenLabs' technology to create deepfakes of celebrities spreading hateful content. However, ElevenLabs CEO Mati Staniszewski believes the technology can be used for positive purposes, such as helping patients communicate and facilitating cross-cultural communication in cities like New York.

To prevent fraud and misuse of AI-generated voices, Staniszewski suggests digitally watermarking synthetic voices so they can be reliably distinguished from real ones. ElevenLabs is developing this technology and has partnered with other AI companies, including OpenAI, Anthropic, Google, and Meta, to combat deepfakes. Through this cooperation, ElevenLabs aims to capitalize on the potential of AI voice technology while mitigating its risks.
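For illustration only, here is a minimal sketch of how audio watermarking can work in principle: a faint pseudo-random signal keyed by a secret seed is mixed into the synthetic audio, and a detector later checks for that keyed pattern by correlation. This is a simplified, hypothetical Python example (the function names, parameters, and seed are assumptions), not ElevenLabs' actual scheme, which would need to survive compression, re-recording, and editing.

```python
import numpy as np

def _keyed_mark(n: int, seed: int) -> np.ndarray:
    """Pseudo-random watermark signal derived from a secret seed, normalized to unit RMS."""
    rng = np.random.default_rng(seed)
    mark = rng.standard_normal(n)
    return mark / np.sqrt(np.mean(mark ** 2))

def embed_watermark(audio: np.ndarray, seed: int, strength: float = 0.002) -> np.ndarray:
    """Mix a faint keyed noise pattern into audio samples assumed to lie in [-1, 1]."""
    mark = _keyed_mark(len(audio), seed)
    return np.clip(audio + strength * mark, -1.0, 1.0)

def watermark_score(audio: np.ndarray, seed: int) -> float:
    """Normalized correlation with the keyed mark; near zero for unmarked audio."""
    mark = _keyed_mark(len(audio), seed)
    return float(np.dot(audio, mark) / (np.linalg.norm(audio) * np.linalg.norm(mark) + 1e-12))

if __name__ == "__main__":
    sample_rate, secret_seed = 16_000, 42
    t = np.arange(5 * sample_rate) / sample_rate
    clean = 0.1 * np.sin(2 * np.pi * 220 * t)      # stand-in for a synthetic voice clip
    marked = embed_watermark(clean, secret_seed)
    print(f"score without watermark: {watermark_score(clean, secret_seed):+.4f}")
    print(f"score with watermark:    {watermark_score(marked, secret_seed):+.4f}")
```

In this toy setup, the correlation score stays near zero for unmarked audio and rises well above chance for marked audio; a practical detector would calibrate its decision threshold on a large set of unwatermarked recordings.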
