Integrating Election Data into Generative AI Tools
Our data can provide generative AI users with reliable voting information while safeguarding them from potential harms
Generative AI tools can strengthen the elections information environment if they answer users’ voting questions responsibly. However, if this technology lacks sufficient safeguards, it can spread election-related misinformation with unprecedented speed and scale.
Our expertise at the intersection of data, consumer-facing technology, and election administration gives us unique insight into the information voters are likely to seek out and how to best support them in their voting journey while mitigating these risks. Democracy Works has developed a framework of general best practices to guide generative AI’s responses to election-related prompts, a detailed list of risks that AI poses to elections, and extensive examples of how AI developers should approach specific questions about voting and elections. If a generative AI product cannot yet integrate API data, we recommend responding to election questions by linking directly to TurboVote, our tool that provides voting and election information and helps users register, check their registration status, and vote.
Generative AI tools create content based on information they have been trained on in the past, but elections and voting data evolves over time. Because these tools are not built to process data in real time, they may present users with out-of-date or entirely fabricated information that could confuse or disenfranchise them. To operate responsibly, generative AI tools should instead disseminate authoritative, live data on voting and elections to their users. The Democracy Works Elections API hosts this data, providing reliable voting guidance for all levels of elections in the US.
The data that powers the Elections API covers thousands of elections each year, includes general information on voting and elections processes, and is compiled and continuously updated by a team of expert election researchers at Democracy Works. The API has the breadth and depth of data required to accurately respond to the election-related questions users are likely to ask generative AI tools, including queries about election dates, registration, and voting methods.
Our full manual with guidelines and best practices for integrating election data into generative AI tools is available for download.
What can go wrong when a chatbot responds to questions or prompts on voting and elections without using reliable data?
To protect voters and elections, it is critical that generative AI tools use Elections API data in their outputs and respond to elections prompts in thoroughly tested and vetted ways. Otherwise, harm from generative AI responses to these prompts can result from a bad faith actor intentionally using a prompt that generates inaccurate, misleading, or incomplete information, or from an ordinary user unknowingly doing so.
-
Disseminating incorrect, incomplete, misleading, or confusing information to voters that prevents them from successfully voting in elections they are eligible for.
Examples: If a chatbot lists a registration or mail-in ballot deadline as later than it actually is, it could disenfranchise voters who were planning to register or vote according to the false deadline but miss the real deadline.
If a user asks what they need to do to vote, and the chatbot fails to mention Voter ID laws, it could contribute to that user not being able to vote if they assume the lack of reference to ID means they do not need an ID to vote, even if their state requires ID.
-
Disseminating incomplete, misleading, biased, or even confusing information to voters that makes it seem harder or less important to vote than it is, inadvertently discouraging voters who initially wanted to vote from following through on it.
Example: Biased, subjective language that states or implies an election is unimportant, or that incorrectly represents the type of election or what’s on the ballot could dissuade voters from turning out to the extent they would have if they’d been presented with an accurate and neutral description of the election. For example, a chatbot saying it’s “not a big deal” to have missed voting in a local election but that it’s extremely important to participate in federal elections could sway a user away from voting in future local elections.
-
Giving contradictory, vague, or opaque information that frustrates and confuses voters, making the voting process harder instead of easier.
Example: A chatbot answering an election or voting question with the exact text of state legislation, or without any other contextualization, explanation, or links, makes it harder for voters to understand the information or how they can act now.
-
Producing misinformation or creating content that can be used as part of a larger, orchestrated disinformation campaign against a particular candidate, party, issue, or election overall.
Examples of such content are voice files that list inaccurate election information (can be used for robocalls), fake news articles about October surprises, and blog posts about election conspiracy theories.
-
Creating fake videos or images that involve elected officials, election processes, or candidates.
Example: Producing a deepfake video of an election official tampering with a ballot counting machine or destroying ballots could incite claims of a fraudulent election and could falsely bring valid election results into question.
-
Making users believe that elections are actively fraudulent or rigged, or generally eroding their faith in their security, efficacy or impact by pulling the election information environment.
Example: Linking to biased third party sources or sources with misinformation, or creating new misleading or biased content (articles, blog posts, essays) makes it harder to find authoritative election information on the internet and thus harder to understand and trust elections and their results. Giving biased, subjective, or contradictory answers about the role and impact of elections also erodes users’ trust in them overall.
-
Manipulating the political views or voting outcomes of users through conversation with a generative AI chatbot.
Example: A generative AI tool discussing political positions with users can create an echo chamber if they’re reinforcing or amplifying the user’s existing views, contributing to political polarization. Generative AI tools could also target specific demographic groups of users to try to change their political views, accidentally or maliciously. Sometimes this could occur via responses to users’ questions; sometimes this happens when a generative AI tool brings up elections or politics itself. A generative AI tool could also be used to create propaganda.
How can the Democracy Works Elections API prevent these harms from occurring?
This framework demonstrates how generative AI tools should generally handle election-related questions. See our full manual for developers for our guidance on how to respond to specific election questions and how to integrate trustworthy voting guidance and elections data from the Democracy Works Election API into generative AI tools.
-
Our data — and all election data in general, including deadlines and state rules — updates regularly, so the highly sensitive data your model is trained on and might reproduce in its answers can quickly become obsolete. Call our API directly to answer live questions, or point to our site and/or authoritative state sources. Caching API responses for extended periods of time and returning them to users could lead to your tool disseminating false, out-of-date information.
-
With misinformation and disinformation rampant online, it’s critical to build trust with users through transparency and attribution. That’s why Democracy Works always links to authoritative state sites so users know where our data and research is sourced from, and that these sources are trustworthy. Cite authoritative sources in all responses related to elections or voting in any way.
If your tool is responding with information from the Elections API, we recommend you include the canonical Democracy Works page link that the API will return so that users can see where the data in the tool’s response is coming from. Developers should test that all sources cited in responses are highly relevant, authoritative, correctly formatted, and not hallucinated. Link directly and correctly to state Secretary of State sites or state election portals and always acknowledge state election offices as the most up-to-date source of information.
-
Even seemingly straightforward questions about elections, such as, “At what age can you vote in the US?” have complicated answers and exceptions. For example, we’ve seen some AI chatbots answer this question by saying you can vote only at 18 in the U.S., but some states allow 17-year-olds to vote in certain elections under certain circumstances. Overly general answers can disenfranchise or confuse voters. AI chatbots should always acknowledge nuances and exceptions, even if they cannot fully enumerate them, and then point to authoritative sources that can answer the user’s questions in full, such as state election websites.
-
Generative AI tools can cause harm accidentally or through intentional misuse. It’s important to build guardrails against these risks, outside of just fine-tuning the strength and veracity of a tool’s responses. For example, decline to answer or end a conversation whenever a user prompts a generative AI tool with intentionally, discernibly discriminatory or malicious prompts, or when there is no authoritative data available in the Democracy Works Election API or otherwise to respond with.
Ensure conversational tools do not bring up politics or elections organically in a biased or manipulative way. Watermark any content produced through your generative AI platform (e.g. photos, videos, blog posts) and provide a way for users to flag erroneous or misleading responses.
-
Prompts that impact or implicate elections are far broader than ones that ask about candidate positions or voting logistics, and any red-teaming, testing, and safeguards you implement should acknowledge this.
For example, users asking a generative AI tool to “write a newsletter from the point of view of [politician],” questioning what constitutes election fraud, asking for voter registration deadlines, or broaching election or political topics without asking explicit questions should all be treated sensitively ad has having potential to contribute to misinformation and the other election-related harms listed above.
-
The frameworks, examples, and best practices we have provided are starting points for building generative AI tools that discuss elections responsibly. Test your tool thoroughly beyond our examples.
-
Do not use generative AI tools to paraphrase or summarize information received from the API, in case they do so in a misleading or incorrect way. We also don’t recommend returning raw data without contextualization from the API unless it directly answers the user’s original prompt or question. Instead, use the tool and the API data together in responses, with the API data providing facts and sources for the response.
-
The best Elections API query needed to responsibly answer a user’s prompt will almost always depend on the user’s location, particularly their state. Even general information on how to register to vote, how to vote, or how elections are run can vary significantly by state; as such, in response to almost any election-related question, we recommend asking users at least for their state as a first step.
-
The type of prompts users submit to your tool will evolve, particularly as election seasons ramp up. We recommend building evaluations to track the performance of your election safety guardrails, monitoring them over time, and adjusting your safety measures in real time in response to real behavior.
Join Us
Democracy Works is committed to providing our partners with a reliable, comprehensive, and transparent data source as advances in generative AI change the ways people seek and distribute information. Download the full version of our developer guide to access detailed examples of election prompt responses, a framework for evaluating model behavior, and more.
This guidance was developed by Sahana Srinivasan, product fellow at Democracy Works from 2023-2024. Her position was made possible by the support of Schmidt Futures.