Topic/Area: Artificial Intelligence

Mitigating extreme risks from AI through sound policymaking

Introduction

AI systems could pose a number of large-scale, extreme risks to society. These include severe misuse, such as in bioweapon development or disinformation campaigns; societal harms, such as power concentration or threats to democracy; and key aspects of society becoming increasingly controlled by insufficiently trustworthy AI systems.

We work with the UK Government and the wider AI policy community to develop and implement best-practice governance recommendations that protect against these risks while enabling the benefits of AI.

Jess Whittlestone speaking at UK Artificial Intelligence Policy summit
Panel at Artificial Intelligence policy summit

Current focus areas

  • Supporting the development of frontier AI regulation
  • Research on open-source AI and misuse risks
  • Applying best-practice risk management and governance to AI companies
  • Mitigating chronic and societal AI risks and building broader societal resilience
  • UK Government coordination in response to AI risks and incidents

Featured Work

Artificial Intelligence

Transforming risk governance at frontier AI companies


This report explores how aspects of best-practice risk governance – particularly the Three Lines Model (3LoD), which separates risk ownership, oversight and…

Jul 19, 2024

Download

AI incident reporting: Addressing a gap in the UK’s regulation of AI


AI has a history of failing in unanticipated ways, with over 10,000 safety incidents recorded by news outlets in deployed AI systems since…

Jun 26, 2024

Download

The near-term impact of AI on disinformation


It is rightly concerning to many around the world that AI-enabled disinformation could represent one of the greatest global risks we face, whether…

May 16, 2024

Download

The UK is heading in the right direction on AI regulation, but must move faster


This week marks an important milestone in the UK Government’s journey to regulating AI, with the first official update on the UK’s approach…

Feb 7, 2024

What we want to see

  • Delivery of well-considered frontier AI legislation by the end of 2026
  • Implementation of best practice risk management by AI companies
  • A better understanding of AI’s risks within the UK Government and civil society
  • Launch of a government-led AI incident reporting regime
  • A coordinated approach to mitigating the misuse of open-source AI

10,000+

reported safety incidents in deployed AI systems

1.8 billion

monthly visits to ChatGPT

$200 billion

forecasted investment in AI by 2025

Our future plans

Help the UK Government deliver frontier AI legislation

Build a better understanding of risks from AI

Implement effective risk management in AI companies and the UK Government