
Our Work

Much of our current work consists of workshops, briefings and reports for the UK Government. Please see below for an update on our policy work from December 2023 to February 2024, and some examples of our past work.

Artificial Intelligence

  • Strategy retreat: We hosted an AI unit strategy retreat in December, to refresh the high-level strategy for CLTR’s AI policy work, and start planning our priorities for 2024.

  • Roundtable: We worked with the Department for Science, Innovation and Technology (DSIT) to convene a roundtable with the Secretary of State and a small group of academic and civil society experts, focused on providing feedback on the UK’s updated approach to AI regulation in advance of publication.

  • Blog post: We published a short response to the aforementioned update on the UK’s approach to AI regulation on our blog, outlining some of our views on the need for faster action on regulating highly capable AI systems, and eight concrete recommendations for the UK Government in driving this forwards.

  • DSIT advice: We are continuing to advise policy teams across DSIT on several aspects of frontier AI regulation.

  • Policy briefing: We have been working on a policy briefing on applying best practice risk governance processes to frontier AI companies, in collaboration with the CLTR Risk Management Unit.

  • Feedback: We provided feedback on several “chronic risk assessments” related to AI being developed by the Cabinet Office.

  • Crisis preparation: We have been working with DSIT on developing crisis preparation processes in relation to AI capability discoveries.

  • Other ongoing projects: Analysis of some specific areas of misuse risk and their implications for open source policies; developing policy proposals related to incident reporting; and exploring policies that would support broader resilience to advances in AI, beyond mitigating specific risks.

      Past work:


Biosecurity


  • Hiring: In January, we opened applications for an additional Biosecurity Policy Advisor position, the full description of which can be found here. Applications closed on March 8, 2024.

  • Strategy retreat: We hosted a Biosecurity unit strategy retreat in early January, to refresh the high-level strategy for CLTR’s biosecurity policy work, and start planning our priorities for 2024.

  • AI:Bio work: We continue to carry out work focused on assessing and mitigating biological risks at the intersection of AI and the life sciences (or AI:Bio). Over the last few months, this has included the development of a capability-based risk assessment for narrow, specialised biological tools and additional work focused specifically on the potential biological risks posed by Frontier large language models (or LLMs). You can see some of our previous, public-facing work on AI:Bio risks here.

  • Workshop: We ran a workshop on overcoming challenges with synthetic nucleic acid screening implementation at the request of DSIT, and our public report on this work is now available.

  • Support to FAS: We contributed to the Federation of American Scientists (FAS) policy sprint on AI:Bio risks.

  • Cabinet Office feedback: We provided feedback on several “chronic risk assessments” related to biosecurity issues being developed by the Cabinet Office.

  • RAND workshop: Our Senior Biosecurity Policy Advisor, Sophie Rose, attended RAND’s workshop on Frontier AI and biosecurity in Washington DC in early February.

      Past work:

Risk Management

  • Resilience Framework response: We published a response to the Government’s Resilience Framework Implementation Update, recognising solid progress made over the past year and making five concrete recommendations for further development.

  • OECD input: We provided input to the OECD High Level Risk Forum’s draft framework for managing critical emerging risks, and attended the Forum’s plenary meeting in early February.

  • Policy brief: We started work on a policy brief on corporate governance and risk management recommendations for frontier AI developers.

  • DSIT workshops: We supported the AI unit in facilitating two workshops for DSIT on crisis response to AI risk events.

  • Cabinet Office feedback: We submitted the first three of a series of commentaries to the Cabinet Office on draft chronic risk assessments, which will involve expert input from both our AI and Biosecurity Units.

  • IfG feedback: We provided feedback to the Institute for Government’s Whitehall Monitor and Commission on the Centre of Government.

      Past work:


     For more details about our work, please contact us at



