Summary Report
Life sciences research and industries are undergoing a rapid transformation due to advancements in artificial intelligence (AI). Beyond the ongoing spotlight on ‘frontier’ AI models, AI-enabled biological tools (BTs) are driving substantial progress. BTs span a variety of types, from experimental simulation to protein design tools, and offer a diverse and expanding range of capabilities that enable beneficial research and innovation.
However, alongside these benefits, it has been hypothesised that BT capability improvements may increasingly enable harm: in the absence of safeguards and security measures, the ability to design and experiment with biological agents could be repurposed by malicious actors for weaponisation and misuse.
Whether BTs pose significant misuse risks above baseline has not been fully established, and proportionate mitigation strategies should be considered only in combination with rigorous risk assessment. Mitigating potential misuse risk from BTs therefore requires the ability to comprehensively assess these tools' capabilities and evaluate their potentially dangerous applications.
This report summarises a methodological framework for risk assessment of BTs developed by The Centre for Long-Term Resilience in December 2023. We present an adaptable approach with example criteria and highlight limitations that can be addressed in future work.
* This author's contributions ceased in June 2024 upon their commencing a new role that precluded further involvement.
† For enquiries regarding this report, please reach out to CLTR’s Biosecurity Policy Unit at biosecurity@longtermresilience.org.