How the UK Government should address the misuse risk from AI-enabled biological tools


Authors: James Smith, Sophie Rose, Richard Moulange, Cassidy Nelson

Date: March 27th 2024

Introduction

Advances in AI-enabled biological tools (BTs) are catalysing life sciences research. BTs are narrow but highly capable AI tools, trained on biological data using machine learning techniques, that are important for many aspects of scientific research and development. While beneficial in many cases, BTs could be misused to assist actors at multiple steps in the development of a bioweapon, and the risks they pose are increasing rapidly.

Efforts to address biological misuse risks posed by frontier models have been initiated; however, as narrow, specialist AI tools, BTs will likely require a different set of interventions.

Given the potentially serious risks posed by BTs, it is imperative that the UK Government takes action. However, as BTs are used widely for beneficial purposes, it is important that any actions to reduce risk do not unduly hinder innovation.

What should the UK do to address the risks posed by BTs?

(1) The UK’s AI Safety Institute (AISI) should conduct risk assessments through a structured, periodic review of scientific literature and expert engagement to monitor risks from BTs

Risk assessment is critical for any BT governance regime: it ensures that the government has up-to-date information on AI safety and AI developments, enables determination of which standards and requirements should apply to different models, and helps to prioritise research into risk mitigation.

We do not think evaluations, an important component of the governance of frontier models, are currently a practical method for assessing BT risk. We instead recommend that the Department for Science, Innovation and Technology (DSIT), through AISI, leads the development and conduct of a risk assessment spanning the broad range of available BTs. This risk assessment process should draw on the scientific literature (including grey and professional literature) and expert engagement, and should be repeated regularly.

(2) AISI and UK Research and Innovation (UKRI) should fund and conduct research into technical safeguards for the safety and security of BTs

AI models can be trained and deployed with restrictions and controls (‘technical safeguards’) that prevent them from exhibiting undesirable behaviours. For example, it may be possible for BTs to refuse to output harmful biological sequences, to reject inputs relating to pathogens with high pandemic potential, or to restrict certain capabilities to authorised users. Research is needed to develop such safeguards and to test their effectiveness for different BTs. If successful, technical safeguards could be a valuable tool for reducing risk.

To ensure research into technical safeguards is undertaken, we recommend that AISI explicitly includes BTs in its foundational AI safety research function, and that UKRI invests in research and innovation for technical approaches to safety and security of BTs. This will ensure that as risks are identified and better understood, there are a variety of options available to mitigate them.

(3) DSIT should create responsible development guidelines for BT developers

Guidelines for the responsible development of BTs are needed to help prevent risky models from being released without appropriate mitigations. We recommend that DSIT leads the creation of responsible development guidelines for BTs as part of its broader work on voluntary commitments for AI development. Voluntary commitments to comply with the guidelines should initially be sought from BT developers while other mechanisms to encourage compliance are investigated.

Why are these the best near-term actions for the UK Government to take?

(i) The risks and potential mitigation options for BTs are not yet sufficiently well understood to create an appropriately scoped regulatory framework.

(ii) These recommendations will build an understanding of these risks (Recommendation 1) and mitigations (Recommendations 2 and 3), allowing us to monitor and manage the challenges of today whilst contributing to the infrastructure we’re likely to need in the future.

If you’re interested in discussing this work further, please reach out to CLTR’s Biosecurity Policy Unit at biosecurity@longtermresilience.org.
