Digital repression, disinformation and targeted cyber threats are very likely to intensify around elections in 2024.

This assessment was issued to clients of Dragonfly’s Security Intelligence & Analysis Service (SIAS) on 12 December 2023.

  • At least 50 countries will hold national elections next year, including Bangladesh, India, Indonesia, Taiwan and the US
  • The US will be vulnerable to online disinformation and influence operations around the presidential election in November 2024, in our analysis

More than 50 countries are due to hold national elections in 2024, including Bangladesh, India, Indonesia, Pakistan, Taiwan and the US. We anticipate that several states will ramp up their online monitoring and harassment of their citizens, and limit internet and social media access. The authorities in many countries will also struggle to deal with surges of disinformation and foreign digital interference efforts (in part augmented by AI technologies) around their own elections.

Corporate security teams are very likely to focus on elections next year, for both operational and strategic purposes. That is not only because of the digital threats outlined above, but also because of the potential for elevated physical security risks (including protests, unrest, extremism and terrorism). In our assessment, those risks would be particularly significant in Bangladesh, Pakistan and the US. For ease of reference, the graphic below lists the countries due to hold national elections in 2024.

A busy election year

At least 50 countries will hold national elections (general, legislative/parliamentary or presidential) in 2024, starting with general elections in Bangladesh on 7 January. Bloomberg Economics in November calculated that 41% of the world’s population will have the opportunity to vote for new leaders in 2024. The true figure is almost certainly higher, as that calculation covered only 40 of the countries holding national polls next year.

Disinformation campaigns highly likely to surge

Online disinformation is highly likely to surge around elections. We assess that this will be particularly prominent in Bangladesh, Georgia, India, Moldova, Pakistan, Senegal, Taiwan, the US and Venezuela. Non-state actors seeking to push their ideologies or to undermine electoral processes or governments will probably be the main perpetrators. Disrupting and undermining elections will also be a priority for several foreign states – primarily China, Iran and Russia:

  • China will almost certainly intensify disinformation campaigns ahead of presidential and legislative elections in Taiwan on 13 January
  • China, Iran and Russia (but also Cuba, Hezbollah and Venezuela) are very likely to disseminate online disinformation to try to undermine the integrity of the US presidential election in November
  • Russia, its state media and online proxies are also highly likely to push online mis- and disinformation (e.g., over support for Ukraine) around elections in Finland, Georgia and Moldova, as well as around European Parliament elections in 2024

Homegrown political and identity-based disinformation is also very likely in countries like Bangladesh, India, Pakistan, Senegal and the US. In India, previous election cycles have typically seen a surge in divisive and misleading content on controversial topics, such as candidates, religion and tensions with Pakistan. An investigation by the AFP news agency in September found a ‘sustained campaign of disinformation by unknown actors’ praising the Bangladeshi government’s policies ahead of elections there in January; the content was published by leading media agencies in China and elsewhere in Asia.

Coordinated social media posts and bot networks are probably key sources of disinformation. An investigation by media organisations in February 2023 revealed a prolific disinformation-for-hire industry; an Israeli firm cited in the investigation claimed to control thousands of fake social media profiles. Meta in November reportedly said it had taken down nearly 4,800 fake Facebook and Instagram accounts originating in China that aimed to impersonate Americans and spread polarising content. Meta also said Chinese and Russian interference networks were ‘building audiences’ ahead of elections next year.

AI to augment disinformation risks

Generative AI tools are likely to enable the creation of more plausible and convincing disinformation. Foreign states and domestic threat actors will probably experiment with GenAI tools that can produce or alter images, video or audio of political candidates. Political candidates in Argentina shared fabricated AI-generated content aimed at degrading the reputation of their competitors ahead of a presidential election in November. An online repository of AI-related incidents records several recent controversies related to the upcoming 2024 elections:

  • A deepfake audio recording circulated in October that purported to depict the leader of the main UK opposition party swearing
  • Microsoft in September revealed findings of ‘China-affiliated actors’ leveraging AI-generated images focusing on politically divisive topics, such as gun violence, and ‘denigrating US political figures and symbols’
  • The campaign of US Republican presidential candidate Ron DeSantis released a video on X (formerly Twitter) in June containing fake images of Donald Trump hugging his former medical advisor; the latter was widely criticised for his policies on combating Covid-19

In our Strategic Outlook 2024, we noted that it is plausible that developments in GenAI tools will give fringe actors outsized means to influence audiences and win votes. In our view, this would be especially advantageous for groups on the far left or far right as they seek to amplify their own narratives, such as anti-immigration or anarchist messaging. Bloomberg reported in July that a far-right German political party distributed AI-generated images ‘of angry immigrants’, though it did specify that the photographs were not real.

AI developers do not appear to have adequate safeguards in place to curb the creation of such content around elections. For example, the UK-based disinformation research company Logically found in July 2023 that three text-to-image AI models accepted more than 85% of prompts seeking to generate fake evidence about elections; text inputs referring to a ‘stolen election’ led these models to create images depicting people stuffing ballot boxes. There are also signs that social media firms are struggling to identify and take down such content online.

Disinformation unlikely to prompt tangible risks

We doubt that disinformation campaigns alone will lead to significant security risks around most elections next year. Online disinformation has been a tried-and-tested tactic of countries such as Iran and Russia over the past several years, but it has generally fallen short of causing unrest or public disorder. Even countries such as Moldova and Taiwan, where disinformation is likely to be rife, appear resilient enough to combat it; their societies do not seem so polarised as to allow these campaigns to succeed.

The US is an exception. Online disinformation is very likely to raise the potential for major political scandals and unrest around the presidential election in November. This would probably resemble the controversy around the 2016 presidential election, when Russia-backed groups amplified conspiracy theories and hacked and leaked emails of officials associated with a presidential candidate. Hostile actors, and even political figures, are also likely to spread AI-generated content to bolster claims of election fraud around the poll; we assess that this would be a particular risk if Donald Trump were to run and lose the election.

Interfering in Moldova, Taiwan and US elections

We assess there is a high foreign interference risk around elections in Moldova, Taiwan and the US. In the US, this will mainly stem from Iran and Russia. An unclassified US intelligence report issued in March 2021 noted there were ‘some successful compromises of state and local government networks’ prior to US federal elections in 2020, but, ‘unlike in 2016’, the US ‘did not see persistent Russian cyber efforts to gain access to election infrastructure’.

We anticipate that China and Russia will be more willing than usual to disrupt election processes in Taiwan and Moldova respectively, as a way to destabilise and undermine the authorities there. The Moldovan authorities last month banned a pro-Russia party – which they allege received Russian money to ‘buy’ voters – from taking part in local elections.

Internet restrictions to enhance digital repression strategies

Curbing social media and internet access, alongside censorship, is an established tactic for controlling the digital information space. We assess there is a high likelihood that the authorities in Pakistan, Russia, Senegal and Venezuela will seek to do so around their elections. Pakistan and Senegal would probably throttle or cut access to mobile internet services in an attempt to curb any opposition protests and unrest around their respective elections; there have been periods of unrest in both countries this year over the arrests of prominent opposition leaders.

Many other states due to hold elections next year also have the capability to restrict access to the internet and social media. But we assess they would probably only do so reactively (such as in response to major protests), and that any restrictions would be brief and localised. Bangladesh, India and Sri Lanka fall into this category. Iran already frequently throttles or curbs internet access to assert control over its digital information space and prevent protests; however, legislative polls such as that due on 1 March tend to be much less contentious than presidential ones.

Countries with authoritarian tendencies will probably intensify online surveillance and the compromise of personal devices to monitor and harass citizens ahead of their elections. These include Algeria, Bangladesh, Belarus, India, Mexico and Pakistan. This risk is already severe or critical in Iran, Russia and Venezuela, as reflected in our Personal Cyber Risk levels, which measure exposure to personal cyber risks, including data and device compromise and surveillance.

AI advancements are likely to help states surveil and control their citizens, particularly by enhancing surveillance, censorship and social manipulation, though these trends are unlikely to fully emerge in 2024. According to data gathered by the V-Dem Institute, a research organisation, the propensity for digital repression has increased in more than two-thirds of countries globally in the past five years, with the greatest change in El Salvador, Nicaragua and Tunisia.

The map below shows the countries where the risk of digital repression is likely to be highest around elections. This is largely based on our Personal Cyber Risk levels and the likelihood of each country enforcing internet or social media access restrictions (as well as censorship) around their scheduled polls.

Personnel in commercial sectors are unlikely to face an elevated risk of such digital targeting around upcoming elections. Those most at risk remain journalists and activists – particularly those working on human rights – as well as NGO workers and opposition politicians. According to Reuters, Apple in October warned several prominent Indian opposition leaders and journalists that their iPhones ‘may’ have been targeted by ‘state-sponsored attackers’. Press freedom groups earlier this year alleged that the Mexican government has continued to use spyware to infect the devices of human rights activists.

Image: Indian Prime Minister Narendra Modi arrives at a Bharatiya Janata Party (BJP) campaign meeting ahead of the Telangana state elections at Lal Bahadur Stadium in Hyderabad on 7 November 2023. Photo by Noah Seelam/AFP via Getty Images.