
Artificial Intelligence

Biosecurity

Understanding risks at the intersection of AI and bio

Artificial intelligence and biotechnology are converging in a way that could catalyse immense progress in areas ranging from personalised medicine to sustainable agriculture, but this convergence also carries substantial risks.

Author(s): Cassidy Nelson and Sophie Rose

Date: October 18th 2023

This convergence could create new capabilities that threaten national security, including some that may lower barriers to the misuse of biological agents.

Without a calibrated understanding, AI-biosecurity threats risk being overstated or, alternatively, going unrecognised and therefore underappreciated. In the face of rapid innovation, there is an imperative to monitor, measure, and mitigate these risks.

We’re excited to highlight a new publication by the Centre for Long-Term Resilience analysing the potential risks at the intersection of AI and bio. This report covers:

The role of AI in accelerating the threat of biological weapons

We explore how AI-enabled tools (particularly those with specialised life sciences capabilities) affect individual steps of the biological weapon development process, from the formation of malicious intent to a deliberate release event.

Our goal was to make a more specific case for where and how AI-enabled tools may contribute to misuse risk. We believe this is an essential starting point for (i) identifying potential intervention points and (ii) aiding the development and evaluation of different risk mitigation strategies.

Furthering understanding of AI-enabled biological tools

In this report, we share our approach to subcategorising “AI-enabled biological tools”, including what they are capable of, how mature those capabilities are, and which use cases might be at high risk of misuse.

Our goal with this work was to facilitate more precise risk assessments and identify priority capabilities for monitoring. These insights enhance our ability to anticipate concerning capabilities and develop targeted governance mechanisms, without stifling innovation.

Related Reports

Artificial Intelligence

AI incident reporting: Addressing a gap in the UK’s regulation of AI

Tommy Shaffer Shane



June 26th 2024

Artificial Intelligence

The near-term impact of AI on disinformation

Tommy Shaffer Shane



May 16th 2024