Google, Meta, OpenAI and other companies sign technology deal to combat AI election interference globally

A group of 20 technology companies announced Friday that they have agreed to work together to prevent misleading artificial intelligence content from interfering with elections around the world this year.

The rapid growth of generative artificial intelligence (AI), which can produce text, images and video in seconds in response to prompts, has heightened fears that the technology could be used to sway major elections this year, when more than half of the world’s population is set to go to the polls.

Signatories to the technology deal, announced at the Munich Security Conference, include companies that build the generative AI models used to create such content, among them OpenAI, Microsoft and Adobe. Other signatories include social media platforms that will face the challenge of keeping harmful content off their sites, such as Meta Platforms, TikTok and X, formerly known as Twitter.

The agreement includes commitments to collaborate on developing tools to detect misleading AI-generated images, video and audio, to run public awareness campaigns educating voters about deceptive content, and to take action against such content on their services.

Technology to identify AI-generated content or certify its origin could include watermarking or metadata embedding, the companies said.
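By way of illustration, the metadata-embedding approach the companies mention amounts to attaching a machine-readable provenance note to a media file that platforms can later read and surface. The sketch below (Python with the Pillow library; the "ai_provenance" key and both function names are hypothetical and not part of any standard or any signatory's tooling) shows the general idea for a PNG image:

```python
# Illustrative sketch only: embed and read back a provenance note in PNG
# metadata. Real provenance systems are cryptographically signed and far
# more robust than a plain text chunk like this.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def tag_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy a PNG image, adding a hypothetical 'ai_provenance' text chunk."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_provenance", f"synthetic; generated by {generator}")
    image.save(dst_path, pnginfo=metadata)  # text chunks travel with the file


def read_provenance(path: str) -> str | None:
    """Return the embedded provenance note from a PNG, if present."""
    return Image.open(path).text.get("ai_provenance")
```

A plain metadata tag like this is easily stripped, which is one reason the companies also point to watermarking and shared detection tools rather than metadata alone.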

The agreement does not specify a timetable for meeting the commitments or how each company will implement them.

“I think the value of this (agreement) is how many companies are signing up to it,” said Nick Clegg, president of global affairs at Meta Platforms.

“It’s fine if individual platforms develop new policies for detection, provenance, labeling, watermarking, and so on, but unless there is a broader commitment to do this in an interoperable and shared way, we will find ourselves stuck with a hodgepodge of different commitments,” Clegg said.

Generative AI is already being used to influence politics and even convince people not to vote.

In January, a robocall using fake audio of U.S. President Joe Biden was broadcast to New Hampshire voters, urging them to stay home during the state’s presidential primary election.

Despite the popularity of text-generation tools like OpenAI’s ChatGPT, the tech companies will focus on preventing the harmful effects of AI-generated photos, videos and audio, in part because people tend to be more skeptical of text, Adobe’s chief trust officer, Dana Rao, said in an interview.

“There is an emotional connection between audio, video and images,” he said. “Your brain is wired to believe this kind of media.”

© Thomson Reuters 2024

