Tech companies pledge to fight election-related deepfakes as policymakers step up pressure.
Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe and IBM signed an accord signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs and Stability AI and social media platforms X (formerly Twitter), TikTok and Snap, joined in signing the agreement, along with chipmaker Arm and security firms McAfee and Trend Micro.
The signatories said they will use methods to detect and label misleading political deepfakes when they are created and distributed on their platforms, share best practices with one another and deliver “rapid and proportionate responses” when deepfakes begin to spread. The companies added that they will pay attention to context in responding to deepfakes, aiming to “[safeguard] educational, documentary, artistic, satirical and political expression” while maintaining transparency with users about their policies on misleading election content.
The agreement’s measures are voluntary, making it essentially toothless and, some critics might say, little more than virtue signaling. But the ballyhoo shows the tech sector’s wariness of landing in regulatory crosshairs over elections in a year when 49% of the world’s population will head to the polls in national elections.
“There is no way the technology sector can protect elections on its own from this new type of election abuse,” said Brad Smith, Microsoft’s vice chair and president, in a press release. “As we look to the future, it seems to those of us at Microsoft that we will also need new forms of multi-stakeholder action… It is abundantly clear that election protection [will require] that we all work together.”
No federal law in the US prohibits deepfakes, election-related or otherwise. However, 10 states have enacted laws criminalizing them, with Minnesota being the first to target deepfakes used in political campaigns.
Elsewhere, federal agencies have taken what enforcement measures they can to combat the spread of deepfakes.
This week, the FTC announced that it is seeking to amend an existing rule prohibiting the impersonation of businesses or government agencies so that it covers all consumers, including politicians. And the FCC moved to make AI-voiced robocalls illegal by reinterpreting an existing rule that bans artificial and prerecorded voice message spam.
In the European Union, the bloc’s AI Act would require all AI-generated content to be clearly labeled as such. The EU is also using its Digital Services Act to force the tech industry to clamp down on deepfakes in various forms.
Deepfakes continue to proliferate, meanwhile. According to data from Clarity, a deepfake detection firm, the number of deepfakes created has increased 900% year over year.
Last month, AI robocalls imitating US President Joe Biden’s voice tried to discourage people from voting in the New Hampshire primary. And in November, just days before Slovakia’s elections, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.
In a recent poll from YouGov, 85% of Americans said they were very or somewhat concerned about the spread of deceptive video and audio deepfakes. A separate survey by the Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will increase the spread of false and misleading information during the 2024 US election cycle.