"Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections due to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos, and their potential for targeted personalization. As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought. These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but are instead dismantling, the “integrity” or “trust and safety” teams that counter these threats. Policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban the use of AI to impersonate real people or organizations, and require watermarking or other provenance tools so that people can differentiate between AI-generated and authentic content." By David Evan Harris and Aaron Shull of the Centre for International Governance Innovation (CIGI).
Election Security Practices
-
As generative artificial intelligence continues to advance, its dual nature becomes increasingly apparent. Tools like OpenAI's ChatGPT and Anthropic's Claude, along with image generators such as Midjourney, are revolutionizing productivity and creativity. However, they are also being exploited for scams and abuse, with deepfakes standing out among the threats. These AI-generated audiovisual manipulations range from impersonating politicians to creating nonconsensual pornographic content featuring celebrities. 😱🤖

A recent Microsoft survey underscores global concerns, revealing that 71 percent of respondents across 17 countries are worried about AI-driven scams. Deepfakes and instances of sexual or online abuse follow closely, each concerning 69 percent of those surveyed. AI hallucinations, where chatbots like ChatGPT provide inaccurate information, rank next, while data privacy issues related to AI models trained without user consent round out the top concerns. 🌐🔍

Despite the immense economic potential of AI, estimated to reach $300-$550 billion in 2024, these findings highlight critical societal risks if these technologies are misused. With major political events like the upcoming U.S. presidential election, the threat of AI-generated misinformation looms large over social media platforms. 🇺🇸📱

Speaking at a recent hearing on deepfakes and AI in elections, Ben Colman, CEO of Reality Defender, acknowledged AI's benefits while warning of its profound dangers: "The impact of deepfakes on democracy and society cannot be overstated. We must remain vigilant against their malicious use, which threatens not just democracy, but the fabric of our nation." 🚨🗳️

As we navigate this complex landscape, it's imperative to harness AI responsibly, ensuring it serves humanity's best interests while safeguarding against its misuse. #AI #Deepfakes #EthicsInTech #DigitalTrust #ArtificialIntelligence #CyberSecurity #PrivacyProtection

Source: Visual Capitalist & Statista
-
The article discusses a recent development in Georgia's election system, where election-denying MAGA supporters have gained control of the State Elections Board. Their influence has led to the implementation of a controversial ballot-counting measure aimed at disrupting future elections. This measure is perceived as a strategic move to hinder the counting and certification of election results, particularly in the event of a Democratic victory. The ultimate goal, as suggested by the article, is to create delays that could potentially allow the election outcome to be decided by the House of Representatives, a scenario that could favor the former president.

The article highlights concerns about the integrity of Georgia's election process, pointing out that these changes are part of a broader effort to undermine the electoral system. The author argues that the hand-counting measure, while framed as a way to improve accuracy, is in reality a tactic to create confusion and chaos in the vote-counting process. This could lead to significant delays, casting doubt on the legitimacy of the election results and potentially paving the way for further manipulation.

Overall, the piece portrays these developments as a calculated attempt by election-deniers to exert control over Georgia's elections, with the potential to disrupt the democratic process. The author expresses alarm at the implications of these actions, warning that they could severely undermine public confidence in the electoral system and the peaceful transition of power.
-
Security of EVMs - Design Principles

The security of electronic voting machines (#EVMs) is an important matter for our #democracy. Given that elections are held once in five years, what happens in the polling booth becomes of critical importance. In a democracy, citizens are the most important stakeholder and, given the size and complexity of our polity, voting is the only way they can express themselves. The foundational institutions set up to discharge specialised functions such as conducting elections and arbitrating disputes need to be mindful of being accountable to citizens and err on the side of transparency, rather than dismissing objections out of hand. The principle to keep in mind is security by design, not security through obscurity.

Tasked with recording each vote of the citizen, the machine logic has to be verifiable and secure against different kinds of attacks. This would include the circuitry used, the mechanical components surrounding the CPU, and the hardening of the device against attacks that seek to change the link between a button press and what gets recorded in the memory of the device. EVMs may be standalone devices, but it would be presumptuous to say that they are incapable of being manipulated. This is why a secondary technological element, the VVPAT (voter-verifiable paper audit trail), has been introduced as a feedback loop to the voter. It stands to reason that the counting process must take into account the results given out by these two different systems, including the paper trail.

With any system, the security design does not stop at the device itself but extends to the surrounding #processes. The randomisation of different candidates' names is one control. Another is the sealing of the machines after they are tested before deployment. A third is how the machines are stored, and the CPU logs that ensure votes recorded on polling day remain unchanged until counting. The processes around maintaining a solid inventory, repairing malfunctioning devices, and accounting for the spares used and discarded would also be part of the SOP. All of this needs to be in the public domain to ensure that there is no unnecessary mystery.

The third vital element is that of #people. The designers of the machine should be people of repute, willing to publicly defend the architecture they have used and to keep it up to date against all known attacks. This group also includes the technicians certified to repair the machines and the lakhs of trained government officials who operate the EVMs, including their storage, retrieval and counting. The polling agents of parties must have an adequate understanding of the process and the technology. The activity in each polling booth (though not the polling kiosk itself) must be recorded on #CCTV and archived for resolving any disputes, in addition to real-time monitoring to reduce booth capture. I hope these principles will be in evidence to further enhance #electoral #trust.
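The cross-check between the two systems can be pictured with a minimal sketch. This is an illustration only, not the Election Commission's actual counting procedure; the candidate labels and tallies are invented for the example:

```python
# Illustrative reconciliation of a booth's electronic count against its
# VVPAT paper-slip tally. Not the official counting procedure; candidate
# labels and numbers below are invented for the example.

def reconcile(evm_count: dict, vvpat_count: dict) -> list:
    """Return the candidates whose EVM and VVPAT tallies disagree."""
    candidates = set(evm_count) | set(vvpat_count)
    return sorted(c for c in candidates
                  if evm_count.get(c, 0) != vvpat_count.get(c, 0))

evm   = {"A": 412, "B": 389, "C": 57}   # electronic tally
vvpat = {"A": 412, "B": 388, "C": 57}   # paper-slip tally (one slip short)

mismatches = reconcile(evm, vvpat)
print(mismatches)  # any mismatch would flag the booth for scrutiny
```

The point of the sketch is the design principle: because the electronic tally and the paper trail are produced by independent mechanisms, any divergence between them is itself a detectable signal, rather than a silent failure.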
-
5 Ways AI is Impacting the U.S. Election

As AI advances, its influence on democracy grows, reshaping campaign strategies, voter outreach, and information dissemination. Embodying the duality of technology, AI has the potential to enhance democracy but introduces risks, particularly around misinformation and privacy.

1. CREATOR AND DESTROYER OF MISINFORMATION
A primary concern is AI's role in fueling misinformation. Deepfakes and AI-generated text spread rapidly, polarizing the public and eroding trust. Recently, Grok, an AI chatbot on X, misled voters with outdated polling hours and took 10 days to issue a correction. This incident underscored the risk of unmonitored AI delivering misinformation directly, rather than misinformation merely being created by users. Yet AI also combats misinformation: increasingly, it is deployed to detect the very deepfake images it can generate. Platforms like Facebook now integrate AI fact-checking, providing real-time verification essential for journalists and voters alike.

2. THE ILLUSION OF CONNECTION: CHATBOTS IN POLITICS
Chatbots and AI-powered communication tools allow campaigns to automate interactions, responding to voter inquiries on social media or through direct messaging at scale. The risk is that these interactions create an illusion of accessibility and engagement, giving voters the sense that they are interacting directly with a candidate when they are not. There are obvious philosophical implications to reducing politics to superficial interactions rather than meaningful civic engagement.

3. PRECISION OR MANIPULATION? MICRO-TARGETING
AI's data analysis powers micro-targeting in campaigns, analyzing demographic data, social media trends, and voting patterns. This lets campaigns tailor messages (for instance, climate change for younger voters, Social Security for older ones), which is critical in swing states. However, this precision raises ethical concerns around manipulation and privacy, even as it enhances campaign alignment with voter priorities.

4. DETECTING FOREIGN INFLUENCE
AI is the primary tool to counter foreign influence online. FBI Director Christopher Wray noted that generative AI tools lower the barrier for foreign actors to create and distribute disinformation, like deepfakes. The FBI is collaborating with social media platforms to detect interference and deactivate bot accounts aimed at swaying public opinion.

5. SECURING THE VOTE
Cybersecurity has been bolstered, with AI now detecting anomalies that could indicate hacking. The Department of Homeland Security is using advanced AI-driven monitoring in states like Pennsylvania and Arizona, responding to rising threats to election databases. In Georgia, AI has helped identify potential voter suppression patterns, especially in marginalized communities.

This split between progress and concern underscores the need for scrutiny. Political philosophers, AI companies, and legislators alike must ensure AI's democratic potential is balanced against the risks it poses.
-
The Trump administration has initiated the dismantling of crucial federal defenses against foreign interference in U.S. elections, raising significant concerns:

- Closure of the FBI's Foreign Influence Task Force
- Reduction of over 100 positions at the U.S. Cybersecurity and Infrastructure Security Agency (CISA)
- Absence of federal partners at the National Association of Secretaries of State winter meeting

States cannot address this issue independently. Pennsylvania's Republican Secretary of the Commonwealth, Al Schmidt, is on record stating: "It is foolish and inefficient to think that states should each pursue this on their own."

Why this matters

Foreign meddling in U.S. elections is not hypothetical but a documented fact. The Senate Intelligence Committee's bipartisan report revealed that Russian operatives targeted election systems in all 50 states in 2016. The Department of Justice confirmed similar attempts by Iran, China, and Russia in recent elections. There is no reason to believe they will stop.

What can we reasonably expect to happen as a consequence?

1. Heightened vulnerability: State election systems will lack federal backing against sophisticated foreign actors
2. Fragmented defenses: States may adopt inconsistent security measures without unified federal support
3. Loss of expertise: Disruption of years of institutional knowledge and security partnerships
4. Public trust: Visible security measures are crucial to maintaining trust in election integrity

The crucial question is: What do we stand to gain by weakening these protections, and at what expense to our democratic processes?

CISA has played a vital role in providing essential services to states, including vulnerability assessments, security evaluations, and Election Day crisis readiness. These services have bolstered election infrastructure nationwide, irrespective of political affiliations.
#ElectionSecurity #CyberSecurity #VoterProtection #DemocracyMatters #NationalSecurity #CISA #CriticalInfrastructure https://lnkd.in/eqiJRn36
-
Two days after Moldova’s elections, the scenarios we outlined are unfolding. By analysing the pre-election InfoOps, we concluded that Russian and Russia-aligned threat actors would prepare Moldova’s information space for two outcomes: delegitimising the vote or attempting to break the pro-EU majority in parliament. Less than 48 hours after the results were announced, we recorded that threat actors produced over 6,000 posts across multiple media and social media. We already saw spikes in the predicted narratives, witnessed intensified coordination from Russian political figures down to small Moldovan Telegram channels, recorded activity of a bot network, and detected a deepfake. This is exactly why intelligence on InfoOps matters beyond election day. The playbook doesn’t stop at the ballot box; it evolves in real time to shape the country’s security environment and political trajectory. Last week Andrei Tiut and I shared this perspective with POLITICO ahead of the elections. The piece gives a good overview of what was expected, and today we can already see those predictions materializing. The lesson? Watch closely. The post-election phase is where information operations often get sharper, not weaker. https://lnkd.in/edwXR3bi
-
The next great threat to elections may never look like a deepfake at all. We’ve spent years warning about manipulated videos and synthetic audio, and those threats are indeed real. But a new body of research shows the more profound danger is something less obvious: AI systems that don’t just mimic people. They can persuade them.

According to recent studies highlighted by “MIT Technology Review” (December 2025), brief conversations with AI chatbots shifted voter attitudes by up to 10 percentage points in some national elections, and when optimized for persuasion, by as much as 25 points. But here’s the part that stopped me in my tracks: for under $1 million, an actor could (hypothetically) generate a personalized AI conversation for every registered voter in the United States. The math is straightforward enough: roughly 174 million voters × a short, 2,700-token exchange per person × current API pricing. In other words, mass one-to-one political persuasion (once the domain of large campaigns or nation-states) has become inexpensive, scalable, feasible, and nearly invisible.

For someone who spent years in national security watching adversaries learn how to fracture democratic trust, the implications are clear: the contest we face extends beyond securing networks. It is about protecting the mind’s ability to think freely in an information environment engineered for persuasion and manipulation.

And one question we cannot ignore: what guardrails will we build to ensure open debate isn’t overtaken by whoever automates persuasion first?

#MindSovereignty #CognitiveSecurity #NationalSecurity #DigitalTrust #InformationThreats
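The back-of-the-envelope figure can be checked directly. A minimal sketch, where the voter count and tokens-per-conversation come from the post and the per-token price is an assumed illustrative rate, not any provider's quoted pricing:

```python
# Rough cost of one personalized AI conversation per registered U.S. voter.
# VOTERS and TOKENS_PER_CHAT are taken from the post; the price per million
# tokens is an assumed illustrative rate, not actual API pricing.

VOTERS = 174_000_000             # registered U.S. voters (from the post)
TOKENS_PER_CHAT = 2_700          # short personalized exchange (from the post)
PRICE_PER_MILLION_TOKENS = 2.00  # assumed blended API rate, USD

total_tokens = VOTERS * TOKENS_PER_CHAT
cost_usd = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

print(f"{total_tokens:,} tokens -> ${cost_usd:,.0f}")
```

At the assumed $2 per million tokens this comes to roughly $940,000, which is why the "under $1 million" framing is plausible; even a several-fold higher rate would keep the campaign within reach of a single well-funded actor.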
-
As we get closer to November 5th, the risk of nation-state #cyberattacks to disrupt the US #elections remains very real. Yesterday, the Trump campaign stated that some of its internal communications had been hacked by Iran and cited a Microsoft threat intel report. Among the files stolen was a 271-page dossier on Trump running mate JD Vance shared by the hackers with POLITICO. The Microsoft report doesn't reference the Trump campaign but does note that the Islamic Revolutionary Guard intelligence unit sent a spear-phishing email to a high-ranking official of a presidential campaign from a compromised email account of a former senior advisor in June. The #phishing email contained a fake forward with a hyperlink that directs traffic through an actor-controlled domain before redirecting to the listed domain. The human attack vector remains a key point of weakness and in this age of LLMs, spear phishing can be highly customized to the victim. Founders need to think about how we can protect collaboration across email, text messages, Slack, Zoom, and other platforms. If you're working on something new, please reach out. I'm actively spending time in this space. Microsoft report on Iran election interference: https://lnkd.in/gYiWqvAk. Politico article: https://lnkd.in/gxiuvBtC.
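The "fake forward" pattern described above, where a link's visible text points one place and the underlying href routes somewhere else, lends itself to a simple defensive check. A minimal sketch of that one heuristic (the domains are invented placeholders, and real mail filters combine many more signals than this):

```python
# Flag a hyperlink whose visible text shows one domain while the underlying
# href points at a different host -- one small heuristic among the many a
# real phishing filter would combine. The domains below are invented.

from urllib.parse import urlparse

def link_domain_mismatch(display_text: str, href: str) -> bool:
    """True if the display text looks like a URL on a different host than href."""
    shown_url = display_text if "://" in display_text else "https://" + display_text
    shown = urlparse(shown_url).hostname
    actual = urlparse(href).hostname
    return bool(shown and actual and shown != actual)

# Visible text claims a login page; the href routes through another domain.
print(link_domain_mismatch(
    "login.example.org",
    "https://redirector.attacker-controlled.example/fwd"))  # -> True
```

A check like this catches only the crudest lure; the Microsoft report's point stands that LLM-customized spear phishing shifts the burden from spotting bad links to verifying the sender and the request itself.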
-
What is being done to make sure the European elections remain free and fair? ❔ It's very important to answer this question, especially as they take place on 6-9 June, just a few weeks away.

With the support of EU institutions, national election authorities work to safeguard elections from potential attempts at information manipulation, cyberattacks, data breaches and hybrid threats 🤜 🤛 . Authorities responsible for ensuring election integrity regularly meet in the European Commission-led European cooperation network on elections. The European Union Agency for Cybersecurity (ENISA) and CERT-EU are bodies supporting cooperation among EU countries to prepare for potential cyber threats. In addition, the European Data Protection Board coordinates cooperation among national authorities to ensure a high level of data protection during the European elections.

At the European Parliament level 🛡️ Since 2018, the European Parliament’s services have implemented specific structures and procedures to counter and prevent #disinformation. Especially now, to pre-empt or "prebunk" disinformation narratives, my team has created a section on the 2024 #EuropeanElection website on how to secure 'Free and fair elections' 💻 . We want to make sure that Europeans are exposed to factual and trustworthy information before potentially facing manipulated narratives. We also want to empower citizens to recognise the signs of disinformation and to give them some tools to tackle it 👁️🗨️ 💬 .

🔗 That's why at this link europa.eu/!y7j4rk you will be able to find out:
- Who is in charge of what
- Links to the electoral authorities in charge in your country
- What disinformation is and why countering it is so important
- EU resources to learn about the EU's work against disinformation
- Links to the main European fact-checker networks

More communication products to raise awareness of common tactics and techniques used to deceive people are yet to come. Connect, stay tuned and #UseYourVote!