
Report launch: examining risks at the intersection of AI and bio

By Cassidy Nelson and Sophie Rose, Centre for Long-Term Resilience

Read the full report here:

AI-Facilitated Biological Weapon Development
Download PDF • 390KB

Artificial intelligence and biotechnology are converging in a way that could catalyse immense progress in areas ranging from personalised medicine to sustainable agriculture—as well as pose substantial risks. This convergence could produce new capabilities that threaten national security, including some that may lower barriers to the misuse of biological agents.

Without a calibrated understanding, risks at the intersection of AI and biosecurity may be overstated or, alternatively, go unrecognised and therefore underappreciated. In the face of rapid innovation, there is an imperative to monitor, measure, and mitigate these risks.

We’re excited to highlight a new publication by the Centre for Long-Term Resilience analysing the potential risks at the intersection of AI and bio. This report covers:

The role of AI in accelerating the threat of biological weapons

We explore how AI-enabled tools (particularly those with specialised life sciences capabilities) affect individual steps of the biological weapon development process, from the formation of malicious intent to a deliberate release event.

Our goal was to make a more specific case for where and how AI-enabled tools may contribute to misuse risk. We believe this is an essential starting point for (i) identifying potential intervention points and (ii) aiding the development and evaluation of different risk mitigation strategies.

Furthering understanding of AI-enabled biological tools

In this report, we share our approach to subcategorising “AI-enabled biological tools”, including what they are capable of, how mature those capabilities are, and which use cases might be at high risk of misuse.

Our goal with this work was to enable more precise risk assessments and to identify priority capabilities for monitoring. These insights enhance our ability to anticipate concerning capabilities and to develop targeted governance mechanisms without stifling innovation.
