Key Findings
The article describes a range of AI-related challenges, including more sophisticated phishing communications, more advanced malware, and other scams, while noting that AI can also help detect these threats. Other AI risks include convincing impersonations, deepfakes, and spoofed official websites, which could mislead voters into believing false information came from an authoritative source. In addition, many election officials worry that AI-generated misinformation about elections could exacerbate the problem of excessive or frivolous Freedom of Information Act (FOIA) and other open-records requests, disrupting and burdening election offices.
Key Recommendations
The article recommends the following:
- Build more security and resilience into election systems.
- Provide local election offices with more technical support to protect election infrastructure.
- Develop state cyber navigator programs to identify and resolve cybersecurity vulnerabilities.
- Offer targeted assistance from the Cybersecurity and Infrastructure Security Agency (CISA).
- Focus resources on defending AI already used in elections.
- Invest in AI to protect election infrastructure.
- Seek tech company investment in free and low-cost tools that increase election security, confidence, and transparency.
- Authenticate election office communications and help the public spot impersonations (see the sketch after this list).
- Move all election websites to .gov domains.
- Verify accounts and amplify truthful information.
- Implement methods to ensure that open records requests are authentic.
- Explore how to authenticate sensitive election materials.
- Take extra steps to verify election-related content.
- Give election workers AI-specific training and resources.
- Remove data that could be used to personalize AI-generated communications.
- Help election workers identify AI-generated content.
- Push back against false narratives.
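As a rough illustration of the recommendations to authenticate election office communications and move election websites to .gov domains, the sketch below checks two signals on a sender's domain: whether it is a .gov domain, and whether it publishes a DMARC policy that lets receiving mail servers reject spoofed messages. This is a minimal sketch under stated assumptions, not a method from the article; the dnspython dependency, the example.gov domain, and the function names are illustrative.

```python
# Minimal illustrative sketch: two signals a mail filter or help desk could
# check before trusting an election-related sender. Assumes the third-party
# dnspython package ("pip install dnspython") and a hypothetical election
# office domain, example.gov.

import dns.exception
import dns.resolver


def is_gov_domain(sender_domain: str) -> bool:
    """Return True if the sender's domain sits under the .gov top-level domain."""
    return sender_domain.lower().rstrip(".").endswith(".gov")


def publishes_dmarc(sender_domain: str) -> bool:
    """Return True if the domain publishes a DMARC TXT record, which lets
    receiving mail servers reject messages that spoof the domain."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{sender_domain}", "TXT")
    except dns.exception.DNSException:
        return False
    return any("v=DMARC1" in record.to_text() for record in answers)


if __name__ == "__main__":
    domain = "example.gov"  # hypothetical; substitute the actual sender's domain
    print(f"{domain}: .gov? {is_gov_domain(domain)}  DMARC? {publishes_dmarc(domain)}")
```

Checks like these address only the sender's domain; the article's broader point is to pair such technical verification with public guidance on how to recognize official election channels.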