Ahead of the 2024 US presidential election, well-funded AI startup Anthropic is testing a technology to detect when users of its GenAI chatbot ask about political issues and redirect those users to “valid” polling information sources.
The technology, called Prompt Shield, relies on a combination of AI detection models and rules to display a pop-up when a user of Claude, US-based Anthropic’s chatbot, asks for voting information. The pop-up redirects the user to TurboVote, a resource from the nonpartisan organization Democracy Works, where they can find up-to-date, accurate voting information.
Anthropic says Prompt Shield was necessitated by Claude’s shortcomings in the area of political and election-related information. Claude isn’t trained frequently enough to provide real-time information about specific elections, Anthropic acknowledges, and so is prone to hallucinating — that is, inventing facts — about those elections.
“We’ve had ‘prompt shield’ in place since we launched Claude — it flags a number of different types of harms, based on our acceptable use policy,” a spokesperson told TechCrunch via email. “We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations… We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies and election-specific consultants [in developing this].”
It’s apparently a limited trial right now. Claude didn’t present the popup when I asked about how to vote in the upcoming election, instead spitting out a general voting guide. Anthropic claims to be improving Prompt Shield as it prepares to roll it out to more users.
Anthropic, which prohibits the use of its tools in political campaigns and lobbying, is the latest GenAI vendor to implement policies and technologies to try to prevent election interference.
The timing is no accident. This year, globally, more voters than ever in history will head to the polls as at least 64 countries representing a combined population of around 49% of the world’s people are set to hold national elections.
In January, OpenAI said that it would ban people from using ChatGPT, its viral AI-powered chatbot, to create bots that impersonate real candidates or governments, misrepresent how voting works or discourage people from voting. Like Anthropic, OpenAI doesn’t allow users to build apps using its tools for political campaigning or lobbying purposes — a policy the company reiterated last month.
In a technical approach similar to Prompt Shield, OpenAI also uses detection systems to steer ChatGPT users who ask logistical questions about voting to a nonpartisan website, CanIVote.org, maintained by the National Association of Secretaries of State.
In the US, Congress has yet to pass legislation seeking to regulate the AI industry’s role in politics despite some bipartisan support. Meanwhile, more than a third of US states have passed or introduced bills to address deepfakes in political campaigns as federal legislation lags.
In lieu of legislation, some platforms — under pressure from watchdogs and regulators — are taking steps to stop GenAI from being abused to mislead or manipulate voters.
Google said last September that it would require political ads using GenAI on YouTube and its other platforms, such as Google Search, to be accompanied by prominent disclosure if the images or sounds were synthetically altered. Meta also banned political campaigns from using GenAI tools — including its own — in ads on its properties.