NEW REPORT: Leading AI voice-cloning tools ‘easily manipulated’ to produce convincing election disinformation in the voices of President Biden, Donald Trump and others in 80% of test cases
- Researchers at the Center for Countering Digital Hate (CCDH) tested six leading generative AI audio tools that can replicate the voices of President Biden, Donald Trump, Vice President Harris and other key political figures
- AI tools complied with researchers’ prompts to produce false statements mimicking the voices of high-profile political figures in 193 of the 240 test runs (~80%)
- One platform, Invideo AI, not only produced the specific false statements requested, but also auto-generated speeches filled with disinformation
- “Guardrails for these tools are so severely lacking – and the level of skill needed to use them is now so low – that these platforms can be easily manipulated by virtually anyone to produce dangerous political misinformation,” said Imran Ahmed, CEO of CCDH
- Elections will take place in the US, UK, India, Mexico and the European Union in 2024, against a backdrop of a 697% increase in reports of AI-enabled misinformation
WASHINGTON, D.C., 05/31/2024 – A new report from the Center for Countering Digital Hate highlights the alarming threat that AI voice cloning tools pose to democracy and upcoming elections worldwide.
Researchers examined six popular AI voice cloning tools – ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed – to determine their potential for generating disinformation using the voices of high-profile leaders and candidates for office.
The report features a lineup of national politicians set to face elections in 2024, including former President Donald Trump, President Joe Biden, Vice President Kamala Harris, UK Prime Minister Rishi Sunak, French President Emmanuel Macron, and others.
The tools were tested a total of 240 times with specified false statements, and in 80% of cases they created convincing voice clones. None of the AI voice cloning tools had sufficient safety measures to prevent the cloning of politicians’ voices for the production of election disinformation.
Speechify and PlayHT performed the worst, generating convincing voice clips for every statement and every politician in the study: a 100% failure rate.
Examples of misinformation generated included:
- Donald Trump warning people not to vote because of a bomb threat
- President Emmanuel Macron ‘confessing’ to the misuse of campaign funds
- President Biden claiming to have manipulated election results
Invideo AI, one of the tools examined, not only generated specific statements in politicians’ voices but also automatically produced new audio content containing false and misleading information.
While all but one tool purported to have safeguards to prevent misuse and the creation of disinformation, CCDH’s new report finds these measures are ineffective and easily circumvented by users.
This year, an unprecedented 2 billion voters are set to go to the polls across 50 countries and the European Union, including the US, UK, India and Mexico, amid an alarming rise in AI-enabled misinformation.
The OECD AI Incidents Monitor reports a 697% increase in misinformation incidents related to AI voice generators between March 2023 and March 2024. There have been several high-profile incidents of AI-generated misinformation produced with the intention of misleading voters, including:
- In January 2024, voters in New Hampshire received ‘robocalls’ featuring an AI-generated voice clone of President Biden discouraging them from going to the polls
- In October 2023, two AI-generated recordings featuring a voice clone of Keir Starmer, the leader of the UK’s Labour Party, spread on social media. One was a ‘fake audio recording’ of Starmer verbally abusing members of staff, and another purported to show him criticizing the city of Liverpool
CCDH has called for:
- AI companies to introduce responsible safeguards to prevent users from generating and sharing deceptive, false, or misleading content about geopolitical events, public figures and candidates, and elections globally
- Social media companies to implement swift, efficient, and human-driven ‘break glass’ measures to detect and prevent the spread of fake voice clone audio
- Existing election laws to be leveraged and updated to safeguard against AI-generated harm
Imran Ahmed, CEO and founder of the Center for Countering Digital Hate, said:
“AI tools radically reduce the skill, money and time needed to produce disinformation in the voices of the world’s most recognizable and influential political leaders that could prove devastating to our democracy and elections. By making these tools freely available with the flimsiest guardrails imaginable, irresponsible AI companies threaten to undermine the integrity of elections across the world at a stroke – all so they can steal a march in the race to profit from these new technologies.
“Disinformation this convincing unleashed on social media platforms – whose track record of protecting democracy is abysmal – is a recipe for disaster. This voice-cloning technology can and inevitably will be weaponized by bad actors to mislead voters and subvert the democratic process. It is simply a matter of time before Russian, Chinese, Iranian and domestic anti-democratic forces sow chaos in our elections.
“Hyperbolic AI companies often claim to be creating and guarding the future, but they can’t see past their own greed. It is vital that in the crucial months ahead they address the threat of AI election disinformation and institute standardized guardrails before the worst happens.”