With the advent of generative artificial intelligence, significant cybersecurity concerns have been raised about its use by foreign actors to disrupt American democracy. While disinformation has been a concern in previous American elections, it is feared that generative AI may supercharge the ability of bad actors to threaten every part of the electoral process. This article outlines the mounting pressures on election officials to perform their duties safely and securely in this rapidly changing environment, from the casting of the first ballot to the tabulation of results. A growing number of foreign actors are leveraging generative AI capabilities to carry out hack-and-leak operations, dox politicians, create deepfakes, and spread falsehoods via social media. AI-enabled translation services, account-creation tools, and data-aggregation methods can further amplify foreign adversaries' ability to maximize their impact at scale. While officials did not observe the malicious use of generative AI in the November 2023 elections, there is a real risk that these tools will be misused as they become increasingly available to the public. In protecting against generative AI cyberthreats, many traditional security best practices can be reapplied, such as increasing the resilience of networks to phishing attacks and communicating consistently and transparently with the public about the known capabilities of foreign adversaries in this space.