
Biosecurity

The near-term impact of AI on biological misuse

Author(s): Sophie Rose*, Richard Moulange, James Smith, Cassidy Nelson†

Date: July 26th 2024

Various broad claims have been made about the impact of AI on biological misuse risk. Some argue that AI is likely to provide threat actors with significant uplift, while others claim this threat is overhyped. There is currently a limited empirical evidence base which assesses only a subset of the theorised pathways by which AI may provide uplift to threat actors.

It is important that we find a path forward for clarifying this debate. Doing so is necessary not only to facilitate proportionate policy decision-making that adequately accounts for the benefits and risks of AI, but also to allow governments and other stakeholders to accurately place biological risks from AI within the broader biosecurity risk landscape and prioritise them accordingly.

However, building this evidence base—including through research efforts such as human uplift studies—is resource-intensive. We therefore need a framework with which to conceptualise biological risks from AI and prioritise which uplift hypotheses to test.

As such, this report aims to improve understanding of the potential impact of AI on biological misuse risk in two ways:

1) Providing a framework to estimate how AI may provide uplift: Readers can adopt this framework to reason about uplift under their own assumptions, make forecasts about uplift, structure intelligence-gathering or policy development, and empirically test model capabilities (e.g. through evals) or uplift (e.g. through uplift studies). We focus on biological uplift, but in principle the framework generalises to other risks.

2) Generating hypotheses about how much uplift AI provides: We generate hypotheses about the uplift that given AI capabilities may offer to different categories of threat actors using our framework. We do this to facilitate an initial assessment of where to focus efforts to further understand uplift. These hypotheses can be subsequently prioritised by others with access to intelligence signals and based on their policy priorities, which can then be tested by those with the requisite resources and model access.

Specifically, this report draws on public-facing evidence and expert consultation to assess the impact of AI foundation models on biological misuse by examining:

1) Uplift currently provided by different levels of large language model access and the impact of fine-tuning; and

2) Uplift provided by the realisation of five forecasted trends in AI capabilities and development within the next two years.

In providing this work, we hope to progress the field to the next stage of risk assessments that will help governments, policymakers and other defensive actors further understand the biological threat landscape.

* Author contributed to this report until June 2024, prior to commencing a new role that precluded further involvement.

† For enquiries regarding this report, please contact CLTR’s Biosecurity Policy Unit at biosecurity@longtermresilience.org.
