Malign actors are highly likely to use Artificial Intelligence (AI) tools to create more persuasive and misleading content to stoke political and societal divisions within the US – especially ahead of the 2024 presidential election.

This assessment was issued to clients of Dragonfly’s Security Intelligence & Analysis Service (SIAS) on 22 August 2023.

  • State and non-state actors are highly likely to leverage Artificial Intelligence (AI) tools in disinformation operations around the US presidential election due in November 2024
  • The US government has blamed Iran and Russia-linked groups for disinformation and influence operations around recent election cycles
  • Generative AI (GenAI) tools, such as voice-cloning software, will probably make it easier for hostile actors to create more convincing content to manufacture scandals and spread conspiracy theories

Both Iran and Russia-linked groups, in particular, are likely to attempt to influence voter opinions and the outcome of the election, as well as undermine public confidence in the legitimacy of the vote. This is based on declassified US intelligence reports, including the Annual Threat Assessment published in February 2023 and reports on the 2020 presidential election.

AI and related technologies, including video manipulation tools, have lowered the capability threshold for both state and non-state actors to create and disseminate disinformation. US officials have recently alluded to the threat that these technologies pose. The FBI director, for example, said in July that ‘AI is going to enable threat actors to develop increasingly powerful, sophisticated, customisable and scalable capabilities’. In our assessment, AI will probably also better enable such actors to engineer high-profile political scandals and propagate conspiracy theories.

Information operations to surge ahead of election

Both state and non-state actors are probably highly intent on conducting disinformation campaigns ahead of the election. The US has previously blamed the governments of China, Iran and Russia (and to a lesser extent Cuba and Venezuela as well as Lebanese group Hezbollah) for information and influence operations around past elections. Preparations by at least some of them to conduct such operations ahead of the upcoming poll are probably already underway. Russian groups have previously ‘scanned’ US electoral systems and created ‘bot armies’ well in advance of past election cycles, according to US intelligence reports.

We assess that such states, as well as domestic and non-state actors, will almost certainly attempt to leverage advancements in GenAI technologies to mount more effective information operations ahead of the US election. This is because the 2024 election, regardless of the outcome, will probably bear directly on these actors’ interests. Hostile states probably also view the US election as one of the first major polls globally in which they can capitalise on and experiment with these novel tools.

AI enhancing threat actors’ capabilities

Advancements in GenAI will probably make it easier for malign actors to create more convincing and plausible content as part of their disinformation campaigns. For example, the digital investigations firm Graphika last year identified a pro-China operation using videos of fake news anchors that were ‘almost certainly’ created by AI; these videos criticised the US’ handling of gun violence and promoted US-China cooperation. Based on our assessment and our monitoring of AI-related incidents over the past year, such technologies will most likely enable actors to:

  • Generate images and audio from input text
  • Manipulate authentic images and video
  • Produce deep fake videos
  • Create convincing and misleading text using large language models (LLMs), such as GPT-4
  • Clone the voices of real people

Hostile actors are already using GenAI technologies to create adverse content with economic, social and political impacts. For example, an AI-generated image of a fake explosion at the Pentagon, which circulated widely on social media in May, triggered a 0.26% fall in the US stock market, according to press outlets. And in June, Republican candidate Ron DeSantis’s campaign released a video using AI-generated images that seemingly attempted to undermine the credibility of former president and potential Republican nominee Donald Trump.

Exploiting topics like LGBTQ+ rights and immigration

Based on our monitoring of open sources and reporting around previous disinformation campaigns, we anticipate that actors such as Russia and Iran will probably focus their disinformation on contentious topics, including:

  • Divisive narratives and hate speech around gun and abortion rights, immigration and LGBTQ+ rights
  • AI-generated content purporting to show presidential candidates, particularly Trump and President Biden, making false or controversial statements, such as those relating to Trump’s indictments
  • Claims of voter fraud alongside altered images of voting sites or ballots

We also anticipate that efforts to moderate or take down AI-generated, misleading content will be mostly ineffective. Social media companies in particular are likely to struggle to do so, given the probable surge in such content online ahead of the election, as well as recent lay-offs and reduced content moderation standards at social media firms. Candidates and campaign officials will probably also seek to publicly disprove and distance themselves from any such content.

AI driving likelihood of major security incidents

Disinformation campaigns are highly likely to increase the potential for major political scandals around the poll. This has already been a particular issue in past US elections due to foreign interference efforts, notably by Russia. After the 2016 election, US intelligence confirmed that apparently Russia-backed groups amplified conspiracy theories and circulated material obtained through hack-and-leak operations targeting the emails of officials associated with the Democratic candidate, Hillary Clinton.

Hostile actors, and even political figures, are likely to use and spread AI-generated content to bolster any claims of election fraud around the poll, particularly as such content would probably make those claims more plausible to the public and their respective voter bases. In our assessment, this would pose a particular risk, with knock-on security impacts in the form of political protests and isolated acts of extremism, if Trump were to run and lose the election. Allegations of voter fraud were seemingly pivotal in the decision of Trump supporters to storm the Capitol building in January 2021.

Image: Supporters of former US President Donald Trump during his event at Windham High School, New Hampshire, United States, on 8 August 2023. Photo by Scott Eisen via Getty Images.