Tech giants sign voluntary pledge to combat election-related deepfakes


Tech companies are pledging to combat election-related deepfakes as policymakers step up the pressure.

Today at the Munich Security Conference, vendors including Microsoft, Meta, Google, Amazon, Adobe and IBM signed an agreement signaling their intention to adopt a common framework for responding to AI-generated deepfakes intended to mislead voters. Thirteen other companies, including AI startups OpenAI, Anthropic, Inflection AI, ElevenLabs and Stability AI and social media platforms X (formerly Twitter), TikTok and Snap, joined in signing the deal, alongside chipmaker Arm and security companies McAfee and Trend Micro.

The undersigned said they would use methods to detect and label misleading political deepfakes when they are created and distributed on their platforms, sharing best practices with each other and providing “rapid and proportionate responses” when deepfakes begin to spread. The companies added that they would pay particular attention to context in their responses to deepfakes, aiming to “[safeguard] educational, documentary, artistic, satirical and political expression” while maintaining transparency with users about their policies regarding misleading election content.

The deal is essentially toothless and, some critics will say, amounts to little more than virtue signaling: its measures are voluntary. But the fanfare shows a wariness on the tech sector’s part of regulatory scrutiny when it comes to elections, in a year when 49% of the world’s population will head to the polls in national elections.

“There is no way the technology industry can protect elections alone from this new type of election abuse,” Brad Smith, vice chair and president of Microsoft, said in a press release. “As we look to the future, it seems to those of us who work at Microsoft that we will also need new forms of multi-stakeholder action… It is abundantly clear that protecting elections [will require] that we all work together.”

In the United States, no federal law prohibits deepfakes, whether election-related or otherwise. But ten states across the country have passed laws criminalizing them, with Minnesota being the first to target deepfakes used in political campaigns.

Elsewhere, federal agencies are taking what enforcement action they can to combat the spread of deepfakes.

This week, the FTC announced that it is seeking to modify an existing rule that prohibits impersonation of businesses or government agencies so that it covers all consumers, including politicians. And the FCC moved to make AI-voiced robocalls illegal by reinterpreting an existing rule that prohibits artificial, pre-recorded voice message spam.

In the European Union, the bloc’s AI Act would require all AI-generated content to be clearly labeled as such. The EU is also using its Digital Services Act to force the tech industry to combat deepfakes in their various forms.

Meanwhile, deepfakes continue to proliferate. According to data from Clarity, a deepfake detection company, the number of deepfakes created has increased by 900% year over year.

Last month, AI robocalls imitating U.S. President Joe Biden’s voice attempted to discourage people from voting in New Hampshire’s primary election. And in November, just days before elections in Slovakia, AI-generated audio recordings impersonated a liberal candidate discussing plans to raise beer prices and rig the election.

In a recent YouGov poll, 85% of Americans said they were very or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from the Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe AI tools will increase the spread of false and misleading information during the 2024 U.S. election cycle.
