Anthropic takes steps to prevent election misinformation


On the eve of the 2024 US presidential election, Anthropic, the well-financed AI startup, is testing technology to detect when users of its GenAI chatbot ask questions about political topics and to redirect those users to “authoritative” sources of voting information.

Called Prompt Shield, the technology, which relies on a combination of AI detection models and rules, displays a pop-up if a US-based user asks Claude, Anthropic’s chatbot, for information about voting. The pop-up offers to redirect the user to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date and accurate voting information.

Anthropic says that Prompt Shield was necessitated by Claude’s shortcomings in political and election information. Claude is not trained frequently enough to provide real-time information on specific elections, Anthropic acknowledges, and so it is prone to hallucinating, that is, inventing facts, about these elections.

“We’ve had a ‘prompt shield’ in place since we launched Claude – it flags a number of different types of harm, based on our acceptable use policy,” a spokesperson told TechCrunch by email. “We’ll launch our election-specific prompt shield in the coming weeks and intend to monitor its use and limitations… We’ve spoken with a variety of stakeholders, including policymakers, other companies, civil society and nongovernmental agencies, and election-specific consultants [in developing this].”

This appears to be a limited test at this time. Claude didn’t present the pop-up when I asked it how to vote in the upcoming election, but instead spat out a generic voting guide. Anthropic says it’s tweaking Prompt Shield as it prepares to roll it out to more users.

Anthropic, which prohibits the use of its tools in political campaigns and lobbying, is the latest GenAI vendor to implement policies and technologies to try to prevent election interference.

The timing is no coincidence. This year, more voters than ever before will go to the polls globally, as at least 64 countries, together representing approximately 49% of the world’s population, are expected to hold national elections.

In January, OpenAI announced it would ban people from using ChatGPT, its viral AI-powered chatbot, to create bots that impersonate real candidates or governments, misrepresent how voting works, or discourage people from voting. Like Anthropic, OpenAI currently does not allow users to build applications with its tools for political campaigning or lobbying purposes – a policy the company reiterated last month.

In a technical approach similar to Prompt Shield, OpenAI also uses detection systems to direct ChatGPT users who ask logistical questions about voting to a nonpartisan website, CanIVote.org, operated by the National Association of Secretaries of State.

In the United States, Congress has yet to pass legislation regulating the AI industry’s role in politics, despite some bipartisan support. Meanwhile, more than a third of US states have passed or introduced bills to combat deepfakes in political campaigns.

In the absence of legislation, some platforms – under pressure from watchdogs and regulators – are taking steps to prevent GenAI from being misused to mislead or manipulate voters.

Google said last September that it would require political ads using GenAI on YouTube and its other platforms, such as Google Search, to be accompanied by a prominent disclosure if the images or sounds were synthetically altered. Meta has also banned political campaigns from using GenAI tools – including its own – to advertise on its properties.
