Striving for a safe and flourishing world

An independent think tank with a mission to transform global resilience to extreme risks

Our Vision

Our vision is a safe and flourishing world with high resilience to extreme risks, such as those from pandemics and emerging technologies.

Our Mission

Our mission is to transform global resilience to extreme risks, both in the UK and internationally. Our core focus areas are AI risk, biological risk and government risk management.

We help governments and other institutions transform resilience to extreme risks by:

Helping decision-makers and the wider public to understand extreme risks.
Providing expert advice and red-teaming on policy decisions.
Convening cross-sector conversations and workshops related to extreme risks.
Developing and advocating for policy recommendations and effective risk management frameworks and systems.
Providing an exchange for specialist knowledge, including by facilitating expert placements into government.

Our Latest Work

Artificial Intelligence

Risks stemming from the misapplication or unintended behaviour of AI systems in critical domains, and the broader impacts of AI on the economy and society.

Transforming risk governance at frontier AI companies


This report explores how aspects of best-practice risk governance – particularly the Three Lines Model (3LoD), which separates risk ownership, oversight and …

Jul 19, 2024


AI incident reporting: Addressing a gap in the UK’s regulation of AI


AI has a history of failing in unanticipated ways, with over 10,000 safety incidents recorded by news outlets in deployed AI systems since …

Jun 26, 2024


Biosecurity

Risks arising from natural pandemics, laboratory leaks, bioweapons, and ‘dual-use’ research — advancements with the potential for both beneficial and harmful applications.

Biological Tools and the EU AI Act


In this report, we examine the definitions of general-purpose and systemic risk as classified in the EU AI Act and discuss how these …

Jan 8, 2025


CLTR’s Response to the National Institute of Standards and Technology’s Safety Considerations for Chemical and/or Biological AI Models


The Centre for Long-Term Resilience recently contributed to the National Institute of Standards and Technology's Safety Considerations for Chemical and/or Biological AI Models.

Dec 27, 2024


Risk Management

High-impact threats with global reach have the potential to cause widespread devastation to lives and economies.

Transforming risk governance at frontier AI companies


This report explores how aspects of best-practice risk governance – particularly the Three Lines Model (3LoD), which separates risk ownership, oversight and …

Jul 19, 2024


Update to the Integrated Review inquiry – submission of evidence


Submission of evidence to the House of Commons Foreign Affairs Committee's inquiry into updating the UK's Integrated Review of Security, Defence, Development and …

Dec 31, 2022
