AI disinformation didn’t upend 2024 elections, but the threat is very real

Yet, despite the dire warnings, AI-driven manipulation failed to upend elections in 2024. While there were alarming examples — including deepfake robocalls impersonating US President Joe Biden to suppress voter turnout and AI-generated hoaxes targeting Taiwan's elections — most incidents were quickly debunked, and their actual influence on electoral outcomes remained limited.

But that does not mean AI-driven deception is overhyped. The cybersecurity community, in particular, should see this moment not as a sign of resilience but as a warning that adversaries are still refining their tactics. The next phase of AI disinformation won't just target voters. It will target organizations, supply chains, and critical infrastructure, where the potential for damage is even greater. In short, the real AI disinformation crisis hasn't arrived yet, but when it does, the consequences will extend far beyond elections.

How AI has changed the disinformation threat model

AI is transforming disinformation operations into a scalable, low-cost cyber weapon, and adversaries are integrating these capabilities directly into their network attack strategies. What began as a tool for manipulating elections has rapidly evolved into an enabler for cybercriminals, intelligence agencies, and state-sponsored hacking groups.