
Our Work

Much of our current work consists of workshops, briefings and reports for the UK Government. Please see below for an update on our policy work from March to May 2024, along with some examples of our past work.

Artificial Intelligence

  • Disinformation policy brief: We published an assessment of the impact of AI on the disinformation threat, providing ten key findings and three recommendations to the UK Government.

  • Risk governance policy brief: We completed a policy brief for the Department for Science, Innovation and Technology (DSIT) on risk governance recommendations for frontier AI developers. It aims both to inform future risk management regulation for AI companies and to improve practices within the companies themselves. We’re in the final stages of edits and will publish soon.

  • Evidence submission: We submitted written evidence to the House of Commons Public Accounts Committee’s inquiry, 'Use of artificial intelligence in government', making recommendations on clarifying roles and implementing incident reporting to manage risks and incidents.

  • Ongoing policy advice to DSIT: We are continuing to advise policy teams across DSIT on several aspects of frontier AI regulation.

  • Other ongoing projects: We are finalising drafts of reports on specific areas of misuse risk and their implications for open-source policies; developing policy proposals related to incident reporting; and exploring policies that would support broader resilience to advances in AI, beyond mitigating specific risks.

Biosecurity

  • Hiring: We are in the final stages of our hiring round for a Biosecurity Policy Adviser position. The full description can be found here.

  • Policy paper: We published a report detailing how the UK Government should address the misuse risk from AI-enabled biological tools.

  • Position statement: We posted an outline of why we recommend risk assessments over evaluations for AI-enabled biological tools.

  • Report feedback: We provided comments and feedback on the International Scientific Report on the Safety of Advanced AI (Interim Report) ahead of the AI Seoul Summit.

  • House of Lords Oral Evidence: In addition to written evidence and ongoing engagement, we provided oral evidence to the House of Lords Science and Technology Committee’s Engineering Biology inquiry.

  • Roundtable: We contributed to an AI Safety in Focus Roundtable event with Secretary of State Michelle Donelan and Minister Bhatti ahead of the AI Seoul Summit.


Risk Management

  • Policy brief: We completed a policy brief for the Department for Science, Innovation and Technology (DSIT) on risk governance recommendations for frontier AI developers.

  • Cabinet Office feedback: We’ve submitted six commentaries to the Cabinet Office on draft chronic risk assessments, drawing on expert input from both our AI and Biosecurity Units.

  • Op-ed: We published an op-ed on resilience in the New Statesman in March.

  • Letter: We published a letter in the Financial Times in March responding to the Institute for Government (IfG)’s Centre of Government Commission report.

  • Meetings with Labour: We met with Labour representatives to brief them on a recommended government risk management structure (adopting a ‘three lines’-based model).


General

For more details about our work, please contact us at info@longtermresilience.org.

 
