Various broad claims have been made about the impact of AI on biological misuse risk. Some argue that AI is likely to provide threat actors with significant uplift, while others claim this threat is overhyped. The current empirical evidence base is limited, assessing only a subset of the theorised pathways by which AI may provide uplift to threat actors.
It is important that we find a path forward for clarifying this debate. This is necessary not only to facilitate proportionate policy decision-making that adequately accounts for both the benefits and risks of AI, but also to allow governments and other stakeholders to accurately place biological risks from AI within the broader biosecurity risk landscape and prioritise them accordingly.
However, building this evidence base—including through research efforts such as human uplift studies—is resource-intensive. We therefore need a framework with which to conceptualise biological risks from AI and prioritise which uplift hypotheses to test.
As such, this report aims to improve understanding of the potential impact of AI on biological misuse risk in two ways:
1) Providing a framework to estimate how AI may provide uplift: Readers can adopt this framework to reason about uplift under their own assumptions, make forecasts about uplift, structure intelligence-gathering or policy development, and empirically test model capabilities (e.g. through evals) or uplift (e.g. through uplift studies). We focus on biological uplift, but in principle the framework generalises to other risks.
2) Generating hypotheses about how much uplift AI provides: Using our framework, we generate hypotheses about the uplift that particular AI capabilities may offer to different categories of threat actors. We do this to facilitate an initial assessment of where to focus efforts to further understand uplift. These hypotheses can subsequently be prioritised by others, informed by intelligence signals and their policy priorities, and then tested by those with the requisite resources and model access.
Specifically, this report draws on public-facing evidence and expert consultation to assess the impact of AI foundation models on biological misuse by examining:
1) Uplift currently provided by different levels of large language model access and the impact of fine-tuning; and
2) Uplift provided by the realisation of five forecasted trends in AI capabilities and development within the next two years.
In providing this work, we hope to progress the field to the next stage of risk assessments that will help governments, policymakers and other defensive actors further understand the biological threat landscape.
* Author contributed to this report until June 2024, prior to commencing a new role that precluded further involvement.
† For enquiries regarding this report, please contact CLTR’s Biosecurity Policy Unit at biosecurity@longtermresilience.org.