Large Language Models (LLMs) and generative AI are still maturing technologies, and immature technologies like these can perform quite well in defined, non-critical roles. However, they are generally not advanced enough to handle critical tasks such as flying aircraft, driving cars, or providing health care.
Elections — at least in democracies like ours — are critical functions, so any deployment of AI in election administration tasks should be undertaken with extreme caution. To the extent it is used, officials should only direct it toward closely defined, limited tasks and institute substantial safeguards.
The Importance of Transparency
The core function of election officials is to ensure free and fair elections, and data quality is critical for arriving at valid election results. Yet today's AI can hallucinate false information and reinforce existing biases in its results, or even create new ones. The potential consequences for elections would be anti-democratic, thwarting the will of the people.
To maintain the public’s trust and perhaps to retrieve some of it that has been lost, the fundamental principle election officials should adhere to at all stages of the development and deployment of AI tools is transparency. To paraphrase Supreme Court Justice Louis Brandeis, sunlight is the best disinfectant.
Clear documentation and traceability at all stages are critical for both transparency and human oversight. Officials must be able to ensure their AI systems operate fairly and securely, and should clearly understand when to suspend the use of any AI systems.
Likewise, constituents should be confident that all their questions and concerns can be openly addressed and satisfactorily answered. Election officials must be transparent about the purposes they have used AI for, the methodology they have followed, and the results they have gotten. They should plan for the full disclosure of their AI policy and be prepared to discuss their decision-making with voters, oversight organizations, and the public at large. In some communities, a collaborative decision-making process itself may be advisable.
Keep in mind the importance of simplicity, both to minimize the possibility of performance failures and underperformance and to maintain the quality of voter and other data inputs. If something isn't explainable, it's unlikely to be transparent, which means it's also unlikely to inspire public trust.
In addition, election officials must take robust cybersecurity measures to safeguard their AI-powered and other systems.
Cybersecurity for Elections
Elections encompass a wide range of activities, including voter registration, ballot casting, vote tallying, results submission and official certification. Their infrastructure includes registration databases and associated IT systems, systems to manage counting, auditing and display of results and post-election reporting, voting systems and their infrastructure, data storage and polling places.
These processes and infrastructures can be vulnerable at all stages to security threats such as malware. To mitigate these, the US Cybersecurity and Infrastructure Security Agency (CISA) recommends measures including:
- Enforcement of strong cybersecurity protocols, such as multi-factor authentication (MFA), phishing-resistant Fast Identity Online (FIDO) authentication, and endpoint detection and response software.
- Adoption of email authentication security protocols such as Domain-based Message Authentication, Reporting and Conformance (DMARC), Sender Policy Framework (SPF), and DomainKeys Identified Mail (DKIM) to better guard against email spoofing.
- Hardening personal and organizational social media accounts by applying the strongest security and privacy controls possible, deactivating or deleting profiles no longer in use, and removing any personally identifiable information (PII) from social media profiles.
- Adoption of zero trust security principles to prevent unauthorized access to data and services, making access control enforcement as specific and detailed as possible.
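To make the email authentication recommendation above concrete, the sketch below shows how an office might check whether a domain's published DMARC record actually enforces a policy against spoofed mail. This is a minimal, illustrative example: the record strings and the `example.gov` addresses are hypothetical, and in practice the record would be retrieved via a DNS TXT lookup of `_dmarc.<domain>` rather than hard-coded.

```python
def dmarc_policy(txt_record: str):
    """Return the DMARC policy (the p= tag) from a TXT record, or None
    if the string is not a DMARC record at all."""
    if not txt_record.lower().startswith("v=dmarc1"):
        return None
    # DMARC records are semicolon-separated key=value tags.
    for tag in txt_record.split(";"):
        key, _, value = tag.strip().partition("=")
        if key.lower() == "p":
            return value.lower()
    return None

# Illustrative records (hypothetical domain and addresses):
enforcing = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
monitoring = "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.gov"

print(dmarc_policy(enforcing))   # "reject": spoofed mail is refused
print(dmarc_policy(monitoring))  # "none": report-only, no enforcement
```

A policy of `p=none` only collects reports; moving to `p=quarantine` or `p=reject` is what actually blocks spoofed messages, which is why a simple check like this can be a useful periodic audit.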
Election Officials Should Proceed With Caution
In the near term, oversight and regulation of AI deployment in elections are only in their infancy in the US. President Biden's 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence requires federal regulators to develop guidelines for AI use by critical infrastructure owners and operators, including operators of election infrastructure, by late spring of this year.
However, the extent to which these initiatives will cover elections is unclear, and these measures might not be ready in time for the 2024 election cycle. This makes it difficult for election administrators to proceed with certainty, given the risks to accuracy and public trust.
The Cybersecurity and Infrastructure Security Agency (CISA) has announced a roadmap for secure and resilient AI deployment and recommendations to mitigate AI-powered threats. While this provides a helpful starting point, election officials should proceed with caution, maintaining appropriate oversight and transparency to ensure the validity of all election results. A live election, the foundation of democracy, is not the place for field testing.
— Ed Watal is the founder and principal of Intellibus, an INC 5000 Top 100 Software firm based in Reston, Virginia. He regularly serves as a board advisor to the world's largest financial institutions. C-level executives rely on him for IT strategy and architecture due to his business acumen and deep IT knowledge. One of Ed's key projects is BigParser (an Ethical AI Platform and a Data Commons for the World). He has also built and sold several tech and AI startups. Prior to becoming an entrepreneur, he worked in some of the largest global financial institutions, including RBS, Deutsche Bank, and Citigroup. He is the author of numerous articles and one of the defining books on cloud fundamentals, ‘Cloud Basics.’ Ed has substantial teaching experience and has served as a lecturer for universities globally, including NYU and Stanford. He has been featured on Fox News, QR Calgary Radio, and Medical Device News.