
Artificial Intelligence

The near-term impact of AI on disinformation

Author(s): Tommy Shaffer Shane

Citation: Shaffer Shane, T. (2024). The near-term impact of AI on disinformation. The Centre for Long-Term Resilience: London, UK.

Date: May 16th 2024

It is rightly concerning to many around the world that AI-enabled disinformation could represent one of the greatest global risks we face, whether that’s through the intuitively alarming risk of deepfakes, or more subtle operational uplifts in state actors’ capabilities.

However, others have argued that such concerns are “overblown” and lacking in evidence.

This is creating a confusing picture, which could undermine policymakers’ attempts to monitor the changing nature of this threat and to put appropriate mitigations in place.

In our report, we aim to provide a more nuanced picture of the threat that moves beyond broad, overarching claims. We assess the impact of the latest generation of AI on five types of threat actor (from the most sophisticated states to lone individuals) and on several types of activity (such as audience analysis and the creation of content) over the next 1-2 years. We do so by synthesising a range of sources and interviews with experts.

We establish 10 key findings and offer recommendations for the UK Government to act on them, which we summarise below and explore more deeply in our report.

Key Findings

  1. AI will likely lead to uplifts for multiple disinformation threat actors and threat capabilities, with the biggest uplifts for low-resourced actors rather than highly capable states.
  2. As AI reduces the cost of content production, it will likely make the business of disinformation more cost-effective, leading to a greater number of actors operating in more diverse contexts, such as finance.
  3. For low-resourced actors, AI offers a significant uplift: the ability to create cheap multimedia content, such as videos and cartoons, for the first time, and to experiment with new messaging and techniques.
  4. For high-resourced actors (e.g. states), there will be some uplifts in their ability to create content at lower cost, and some new tools such as deepfakes, but this only adds a few new tools to an already very large toolbox.
  5. AI will enable new disinformation techniques, such as audio deepfakes and chatbots, but due to their novelty their impact remains unproven.
  6. It is likely that AI will aid threat actors’ ability to understand their audiences and how social media platforms moderate content (i.e. the ‘attack surface’), because they can post cheap content to test how audiences and companies respond.
  7. It is likely that disseminating content to desired audiences will remain a key bottleneck for threat actors, though it is possible that threat actors will address this bottleneck with AI-driven bots.
  8. It is a realistic possibility that AI will increase the persuasiveness of content with hyper-tailoring for precise audiences, and even for specific individuals, though the evidence for this is still emerging.
  9. It is likely that AI will enhance personalised harassment of public figures for political goals, which will highly likely disproportionately target women.
  10. AI will likely further undermine public confidence in information and democracy, a ‘social fissure’ that can be exploited by all threat actors to achieve their goals.

Key Recommendations

We recommend that the UK Government take the following steps to mitigate the likely uplifts we have identified.

1. Target threat actors where they are most vulnerable

  • Develop mitigations that restrict threat actors’ dissemination channels, such as their use of advertising technologies (e.g. Facebook ads), as this is a limited resource where threat actors can be ‘choked’. We provide suggestions for what mitigations may be most successful.
  • Lead the coordination of international standards to prevent the misuse of commercial AI tools, which are the cheapest and often most attractive type of AI tool for threat actors. It is therefore urgent that standards for commercial AI tools are developed, e.g. to prevent the generation of malicious deepfakes of politicians during elections.

2. Support defensive actors to tackle disinformation

  • Convene UK businesses that detect and disrupt disinformation operations to nurture this market. DSIT should build on its success with the Safety Tech Innovation Network and identify opportunities to support this market’s growth (e.g. with challenge funds), and facilitate threat insights being shared with government and social media companies.
  • Convene the full spectrum of defensive actors (those able to mitigate the disinformation threat) and coordinate mitigations when a new threat is identified (e.g. a new model with highly persuasive capabilities). Defensive actors are likely to include AI companies, social media platforms, counter-disinformation services and Government departments. This coordination function should identify appropriate mitigations so that DSIT has a full picture of threat mitigation.

3. Build capacity to monitor how this threat continues to evolve

  • Create a horizon-scanning function in the Department for Science, Innovation and Technology (DSIT) to identify new AI capabilities that could provide uplift to threat actors. This function should monitor events such as product announcements (e.g. OpenAI’s upcoming ‘VoiceEngine’) and incidents (e.g. the use of deepfakes by foreign states to target UK audiences), and assess their impact on threat actors (as done in this report).
  • Commission further research to understand how AI is likely to alter the disinformation threat in future, particularly on:
    • the persuasiveness of AI models, as this represents a potential significant uplift for which there is a growing evidence base;
    • the effect of LLMs on ‘disinformation-for-hire’ businesses, particularly through lowering costs and therefore the barrier to entry to this market;
    • the UK public’s perception of AI’s impact on democracy and the public sphere, including on their confidence in their own and others’ ability to make informed judgements.

If you’re interested in discussing this work further, please reach out to the author, Tommy Shaffer Shane, at tommy@longtermresilience.org.
