
Bringing AI to the Ballot Box: What Election Officials Need to Know

How can artificial intelligence transform election administration while preserving public trust? This spring, the McCourt School of Public Policy partnered with The Elections Group and Discourse Labs to tackle this critical question at a groundbreaking workshop.

The day-long convening brought together election officials, researchers, technology experts and civil society leaders to chart a responsible path forward for AI in election administration. Participants focused particularly on how AI could revolutionize voter-centric communications—from streamlining information delivery to enhancing accessibility.

The discussions revealed both promising opportunities for public service innovation and legitimate concerns about maintaining institutional trust in our democratic processes. Workshop participants developed a comprehensive set of findings and actionable recommendations that could shape the future of election technology.

Expert Insights from Georgetown’s Leading Researchers

To unpack the workshop’s key insights, we spoke with two McCourt School experts who are at the forefront of this intersection between technology and democracy:

Ioannis Ziogas

Ioannis Ziogas is an Assistant Teaching Professor at the McCourt School of Public Policy, an Assistant Research Professor at the Massive Data Institute, and Associate Director of the Data Science for Public Policy program. His work bridges the gap between cutting-edge data science and real-world policy challenges.

Lia Merivaki

Lia Merivaki is an Associate Teaching Professor at the McCourt School of Public Policy and Associate Research Professor at the Massive Data Institute, where she focuses on the practical applications of technology in democratic governance, particularly election integrity and voter confidence.

Together, they address five essential questions about AI’s role in election administration—and what it means for voters, officials and democracy itself.

Q1

How is AI currently being used in election administration, and are there particular jurisdictions that are leading in adoption?

Ioannis: When we talk about AI in elections, we need to clarify that it is not a single technology but a family of approaches, from predictive analytics to natural language processing to generative AI. In practice, election officials are already using generative AI routinely for communication purposes such as drafting social media posts and shaping public-facing messages. These efforts aim to increase trust in the election process and make information more accessible. Some offices have even experimented with using generative AI to design infographics, though this can be tricky due to hallucinations or inaccuracies. More recently, local election officials have been exploring AI to streamline staff training and operations, or to summarize complex legal documents.

Our work focuses on alerting election officials to the limitations of generative AI, such as model drift and bias propagation. A key distinction we emphasize in our research is between AI as a backend administrative tool (which voters may never see) and AI as a direct interface with the public (where voter trust and transparency become central). We believe that generative AI tools can be used in both contexts, provided that there is awareness of the challenges and limitations.
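
As a concrete illustration of the backend pattern, here is a minimal sketch of generative AI as a drafting assistant, assuming the OpenAI Python client (other providers work similarly; the model name and prompt are illustrative only). The essential design choice is that the output goes to a human editor for fact-checking, never directly to voters.

```python
# Hypothetical sketch: generative AI as a backend drafting tool.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a short, plain-language social media post reminding voters "
    "that early voting begins Monday. Do not invent dates, addresses "
    "or deadlines; leave placeholders for staff to fill in."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
# Human-in-the-loop: a staff member verifies every fact before posting.
print("DRAFT FOR STAFF REVIEW (not for publication):")
print(draft)
```

Keeping a human editor between the model and the public is what separates the lower-risk backend use from the higher-stakes direct interface described above.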

Lia: Election officials have been familiar with AI for quite some time, primarily through efforts to understand how to mitigate AI-generated misinformation. A leader in this space has been Arizona Secretary of State Adrian Fontes, who conducted a first-of-its-kind deepfake-detection tabletop exercise in preparation for the 2024 election cycle.

We’ve had conversations with election officials in California, New Mexico, North Carolina, Florida, Maryland and others whom we call early adopters, with many more being “AI curious.”

Q2

Security is always a big concern when it comes to the use of AI. What risks does bringing AI into election administration introduce, and conversely, how can AI help detect and prevent election interference and voter fraud?

Ioannis: From my perspective, the core security challenge is not only technical but also about privacy and trust. AI systems, by design, rely on large volumes of data. In election contexts, this often includes sensitive voter information. Even when anonymized, the use of personal data raises concerns about surveillance, profiling, or accidental disclosure. Another risk relates to delegating sensitive tasks to AI, which can render election systems vulnerable to adversarial attacks or hidden biases baked into the models. 

At the same time, AI can support security: machine learning can detect coordinated online influence campaigns, identify anomalous traffic to election websites, or flag irregularities that warrant further human review. In short, I view AI as both a potential shield and a potential vulnerability, which is why careful governance and transparency are essential. That is why I believe it is critical to pair AI adoption with clear safeguards, training and guidance, so that officials can use these tools confidently and responsibly.
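
To make the “shield” side concrete, here is a minimal sketch of the kind of anomaly detection described above, flagging unusual spikes in election-website traffic for human review. The hourly request counts are invented for illustration; a real office would pull them from its server logs and tune the threshold carefully.

```python
# Hypothetical sketch: flag anomalous hours of election-website traffic.
# The counts below are invented; real data would come from server logs.
from statistics import mean, stdev

hourly_requests = [1210, 1180, 1255, 1190, 1230, 1175, 9800, 1220]

mu = mean(hourly_requests)
sigma = stdev(hourly_requests)

for hour, count in enumerate(hourly_requests):
    z = (count - mu) / sigma  # how far this hour sits from typical traffic
    if abs(z) > 2:  # simple threshold; production systems tune this
        # The tool only flags; a human analyst decides what it means.
        print(f"Hour {hour}: {count} requests (z = {z:.1f}) -- flag for review")
```

Note that the script only surfaces an irregularity; consistent with the point above, the judgment about whether it reflects an attack, a news event or a logging error stays with a person.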

Lia: A potential risk we are trying to mitigate is the impact on voter trust of relying on AI for important administrative tasks. For instance, voters who call or email their election official expecting to speak with a person, but instead interact with a chatbot, may feel disappointed and, in turn, distrust both the information and the election official. There is also some evidence that voters do not trust information generated with AI, particularly when its use is disclosed.

As for detecting and preventing irregularities, over-reliance on AI can be problematic and can lead to disenfranchisement. To illustrate, AI can help identify individuals in voter records whose information is missing, which would seemingly make the process of maintaining accurate lists more efficient. The election office can then send a letter asking these individuals to verify their citizenship and update their information. This seems like a sound practice; however, it violates federal law, and it risks making eligible voters feel intimidated or having their eligibility challenged by bad actors. The reality is that maintaining voter records is a highly complex process, and data entry errors are very common. Deploying AI models to substitute for existing practices in election administration such as voter list maintenance, with the goal of detecting whether non-citizens have registered or whether deceased voters remain on the rolls, can harm voters and undermine trust.

Q3

What are the biggest barriers to AI adoption in election administration – technical, financial, or political?

Lia: There are significant skills and knowledge gaps among election officials when it comes to utilizing technology generally, and we see such gaps with AI adoption, which is not surprising. Aside from technical barriers, election offices are under-resourced, especially at the local jurisdiction level. We observe that policies around AI adoption in public administration generally, and election administration specifically, are sparse at the moment.

While the election community has invested substantial resources in safeguarding election infrastructure against the threats of AI, we are not yet seeing a proportional effort to educate and prepare election officials on how to use AI to improve elections. To better understand the landscape of AI adoption and how best to support the election community, we hosted an exploratory workshop at McCourt in April 2025, in collaboration with The Elections Group and Discourse Labs. The workshop brought together election officials, industry, civil society leaders and other practitioners to discuss how election officials are using AI tools, what technical barriers exist and how to move forward with designing policies for the ethical and responsible use of AI in election administration. Through this workshop, we identified a list of priorities that require close collaboration among the election community, academia, civil society and industry to ensure that AI is adopted responsibly, ethically and efficiently, without negatively affecting the voter experience.

Ioannis: I would highlight that barriers are not just about resources but also about institutional design. Election officials often work in environments of high political scrutiny but low budgets and limited technical staff. Introducing AI tools into that context requires financial investment and clear guidance on how to evaluate these systems: what counts as success, how to measure error rates and how to align tools with federal and state regulations. Beyond that, there is a cultural barrier. Many election officials are understandably cautious; they’ve spent the past decade defending democracy against disinformation and cyber threats, so embracing new technologies requires trust and confidence that AI will not introduce new risks. That is why partnerships with universities and nonpartisan civil-society groups are critical: they provide a space to pilot ideas, build capacity, and translate research into practice.

Our two priorities are to help narrow the skills gap and to build frameworks for ethical and responsible AI use. At McCourt, we’re collaborating with Arizona State University’s Mechanics of Democracy Lab, which is developing training materials and custom AI products for election officials. Drawing on our background in AI and elections, we aim to provide election officials with a practical resource that maps out both the risks and the potential of these tools, and that helps them identify ideal use cases where AI can enhance efficiency without compromising trust or voter experience.

Q4

Looking ahead, what emerging AI technologies could transform election administration in the next 5-10 years?

Lia: It’s hard to predict, really. At the moment we are seeing high interest from vendors and election officials in integrating AI into elections. Concerns about security and privacy will undoubtedly shape the discussion about what AI can do for election infrastructure. We may see a permissive approach to using AI technologies to communicate with voters, produce training materials and translate election materials into languages other than English, among other uses. That said, elections are run by humans, and maintaining public trust relies on keeping “humans in the (elections) loop.” This, coupled with ongoing debates about how AI should or should not be regulated, may result in more guardrails and restrictions over time.

Ioannis: One promising direction is multimodal AI: systems that process text, audio and images together. For election officials, this could mean automatically generating plain-language guides, sign-language translations, or sample audio ballots to improve accessibility. But these same tools can amplify risks if their limitations are not understood. For that reason, any adoption will need to be coupled with auditing, transparency and education for election staff, so they view AI as a supportive tool rather than a replacement platform or a black box.
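
As a small, hedged example of the accessibility direction mentioned above, the sketch below generates a sample audio ballot from plain text using the open-source pyttsx3 text-to-speech library (one tool among many; the contest text is invented).

```python
# Hypothetical sketch: produce a sample audio ballot with pyttsx3.
# The contest text is invented; real ballot language would come from
# the certified ballot, and staff would review the audio before release.
import pyttsx3

sample_ballot = (
    "Sample ballot. Contest one: City Council, vote for one. "
    "Candidate A. Candidate B. Candidate C."
)

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # slower speech aids comprehension
engine.save_to_file(sample_ballot, "sample_ballot.mp3")
engine.runAndWait()  # blocks until the audio file is written
```

Even a simple pipeline like this would need the auditing and staff education described above, since a mispronounced name or a dropped contest is exactly the kind of limitation that must be caught before publication.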

Q5

What guidelines or regulatory frameworks are needed to govern AI use in elections?

Ioannis: We urgently need a baseline framework that establishes what is permissible, what requires disclosure and what is off-limits. Today, election officials are experimenting with AI in a largely unregulated space, and they are eager for guidance. A responsible framework should include at least three elements: (a) transparency, so that voters know when AI-generated materials are used in communications; (b) accountability, so that humans retain final authority, with AI serving only as a support; and (c) auditing, so that independent experts can test and evaluate these tools for accuracy, bias and security.
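
Those three elements lend themselves to a simple operational sketch. The function and field names below are hypothetical, but they show how a disclosure label, a required human sign-off and an audit log could be wired into a publication workflow.

```python
# Hypothetical sketch: disclosure, human sign-off and an audit trail
# for AI-assisted election communications. All names are illustrative.
import datetime
import json

def publish(draft: str, ai_generated: bool, approved_by: str | None) -> str:
    # Accountability: a named human must sign off before anything goes out.
    if approved_by is None:
        raise ValueError("Content requires human approval before publication")
    # Transparency: label AI-assisted materials so voters know.
    text = draft
    if ai_generated:
        text += "\n[Drafted with AI assistance; reviewed by election staff.]"
    # Auditing: keep a record that independent reviewers can inspect later.
    with open("publication_log.jsonl", "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.datetime.now().isoformat(),
            "ai_generated": ai_generated,
            "approved_by": approved_by,
        }) + "\n")
    return text

print(publish("Polls are open 7 a.m. to 8 p.m. on Election Day.",
              ai_generated=True, approved_by="deputy_clerk"))
```

None of this substitutes for regulation, but it suggests that the three elements are practical to implement once a framework names them.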
